Compare commits


141 Commits

Author SHA1 Message Date
nepyope
a72db65f62 working trained policy 2025-04-15 15:41:24 +02:00
nepyope
2488efa8bd add hopejr env 2025-04-13 14:30:29 +02:00
nepyope
5d9b4fcae9 hopejr_real 2025-04-13 14:28:05 +02:00
nepyope
c70054c0b9 added hopejr yaml 2025-04-13 14:10:27 +02:00
nepyope
62b1896304 rip this branch 2025-04-12 10:35:00 +02:00
Sebastian-Contrari
0306e18640 fix teleop 2025-04-11 16:28:06 +02:00
Sebastian-Contrari
54b685053e test scs 2025-04-10 17:14:40 +02:00
Sebastian-Contrari
5b7e25ed18 push new exoskeleton code 2025-04-10 16:21:34 +02:00
Sebastian-Contrari
f6e862d421 hopejr teleop code 2025-04-02 16:19:15 +02:00
Pepijn
91f549b2ce use float32 instead of int 2025-03-18 16:23:33 +01:00
nepyope
f945641de9 added SM8512BL 2025-01-14 14:19:38 +01:00
nepyope
5bd41a3dca added feetech failsafe comment 2025-01-09 18:14:01 +01:00
Remi Cadene
f996a13f70 Add test3 test4 2024-12-29 13:30:17 +01:00
Remi Cadene
743ebfa7c1 Cremaillaire HF 2024-12-24 11:33:55 +01:00
Remi Cadene
2c45660d77 WIP 2024-11-27 13:59:37 +01:00
Remi Cadene
9dd4414c6e fix autocalib moss 2024-10-26 10:43:52 +02:00
Remi
07e8716315 Add FeetechMotorsBus, SO-100, Moss-v1 (#419)
Co-authored-by: jess-moss <jess.moss@huggingface.co>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-10-25 11:23:55 +02:00
Arsen Ohanyan
114870d703 Fix link (#482)
Co-authored-by: Remi <remi.cadene@huggingface.co>
2024-10-23 16:24:06 +02:00
Bastian Krohg
2efee45ef1 Update 9_use_aloha.md, missing comma (#479) 2024-10-23 16:13:26 +02:00
Boris Zimka
c351e1fff9 Fix gymnasium version as pre-1.0.0 (#471)
Co-authored-by: Remi <re.cadene@gmail.com>
Co-authored-by: Remi <remi.cadene@huggingface.co>
2024-10-18 10:23:27 +02:00
Alexander Soare
cd0fc261c0 Make say(blocking=True) work for Linux (#460) 2024-10-17 15:22:21 +01:00
Remi
77478d50e5 Refactor record with add_frame (#468)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-10-16 20:51:35 +02:00
Remi
97b1feb0b3 Add policy/act_aloha_real.yaml + env/act_real.yaml (#429)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-10-10 17:12:45 +02:00
Eugene Mironov
c29e70e5a1 Fix issue with wrong using index instead of camera_index in opencv (#466) 2024-10-09 11:35:19 +02:00
Remi
d5b669634a Fix nightly by updating .cache in dockerignore (#464) 2024-10-07 11:35:35 +02:00
Simon Alibert
1a343c3591 Add support for Stretch (hello-robot) (#409)
Co-authored-by: Remi <remi.cadene@huggingface.co>
Co-authored-by: Remi Cadene <re.cadene@gmail.com>
2024-10-04 18:56:42 +02:00
Remi
26f97cfd17 Enable CI for robot devices with mocked versions (#398)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-10-03 17:05:23 +02:00
Simon Alibert
72f402d44b Fix dataset card (#453) 2024-09-25 16:56:05 +02:00
Alexander Soare
92573486a8 Don't use async envs by default (#448) 2024-09-20 15:22:52 +02:00
Simon Alibert
c712d68f6a Fix nightlies (#443) 2024-09-18 14:51:45 +02:00
Dana Aubakirova
f431a08efa small fix: assertion error message in envs/utils.py (#426)
Co-authored-by: Remi <re.cadene@gmail.com>
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
Co-authored-by: Remi <remi.cadene@huggingface.co>
2024-09-12 18:03:34 +02:00
Remi
beaa427504 Fix slow camera fps with Aloha (#433) 2024-09-12 14:20:24 +02:00
Mishig
a88dd602d9 [Vizualization] Better error message (#430)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-09-12 10:46:48 +02:00
Mishig
6c0324f467 [Vizualization] Fix video layout (#431) 2024-09-12 10:06:29 +02:00
Alexander Soare
a60d27b132 Raise ValueError if horizon is incompatible with downsampling (#422) 2024-09-09 17:22:46 +01:00
Mishig
9c463661c1 [Vizualization] Better UI on small screens (like in smartphones) (#423) 2024-09-09 15:39:40 +02:00
Mishig
4255655618 [Vizualization] Show user error if videos codec is not supported (#424) 2024-09-09 15:38:41 +02:00
Joe Clinton
f17d9a2ba1 Bug: Fix VQ-Bet not working when n_action_pred_token=1 (#420)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-09-09 09:41:13 +01:00
Remi
9ff829a3a1 Add comments for Aloha (#417)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-09-06 21:07:52 +02:00
Mishig
d6516f0e03 [Visualization tool] Fix when dim state != dim action (#415) 2024-09-06 17:07:26 +02:00
Jack Vial
b0b8612eff fix(calibrate): fix calibrate arms option type. should be str not int (#418)
Co-authored-by: Remi <remi.cadene@huggingface.co>
2024-09-06 14:44:31 +02:00
Mishig
1072a055db [Visualization tool] Fix videos sync (#416) 2024-09-06 10:16:08 +02:00
Remi
9c9f5cac90 Add IntelRealSenseCamera (#410)
Co-authored-by: Simon Alibert <simon.alibert@huggingface.co>
Co-authored-by: shantanuparab-tr <shantanu@trossenrobotics.com>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-09-05 23:59:41 +02:00
Simon Alibert
9d0c6fe419 Fix nightlies & untrack json files from git lfs (#414) 2024-09-05 15:07:43 +02:00
Simon Alibert
54ac25cfc9 Revert "hotfix"
This reverts commit 150a292795.
2024-09-05 12:54:53 +02:00
Remi Cadene
150a292795 hotfix 2024-09-04 22:03:33 +02:00
Remi
429a463aff Control aloha robot natively (#316)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-09-04 19:28:05 +02:00
Jack Vial
27ba2951d1 fix(tdmpc): Add missing save_freq to tdmpc policy config (#404)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-09-02 19:04:41 +01:00
Jack Vial
b2896d38f5 fix(act): n_vae_encoder_layers config parameter wasn't being used (#400) 2024-09-02 18:29:27 +01:00
Kenneth Gerald Hamilton
c0da806232 repair mailto link (#397) 2024-09-01 00:11:39 +02:00
Mishig
114e09f570 rm EpisodeSampler from viz (#389) 2024-08-30 10:53:55 +02:00
Simon Alibert
04a995e7d1 Fix safe_action (#395) 2024-08-30 10:36:05 +02:00
Michel Aractingi
4806336816 Add the possibility to visualize language instructions in visualize_dataset_html.py (#388)
Co-authored-by: Mishig <dmishig@gmail.com>
2024-08-28 11:50:31 +02:00
Remi
1ce418e4a1 Add koch bimanual (#385) 2024-08-28 00:53:31 +02:00
Michel Aractingi
eb4c505cff Support for converting OpenX datasets from RLDS format to LeRobotDataset (#354)
Signed-off-by: youliangtan <tan_you_liang@hotmail.com>
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
Co-authored-by: youliangtan <tan_you_liang@hotmail.com>
Co-authored-by: Remi <re.cadene@gmail.com>
2024-08-27 09:07:00 +02:00
Mishig
aad59e6b6b Fix videos in visualize_dataset are not in sync (#382) 2024-08-26 17:38:48 +02:00
Alexander Soare
9ce98bb93c Add safety limits on relative action target (#373) 2024-08-26 14:30:18 +01:00
Alexander Soare
97086cdcdf Make gripper_open_degree a config param (#379) 2024-08-26 12:28:16 +01:00
Alexander Soare
9c7649f140 Make sure init_hydra_config does not require any keys (#376) 2024-08-23 12:27:08 +01:00
Zhuoheng Li
a2592a5563 Provide more information to the user (#358)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
Co-authored-by: Remi <re.cadene@gmail.com>
2024-08-23 11:00:35 +01:00
ellacroix
b5ad79a7d3 Fix typo in tutorial (#371) 2024-08-21 14:14:01 +02:00
Remi
996468bcce Update README.md 2024-08-20 16:45:57 +02:00
Remi
f98200297d Slightly improve tutorial and README (#370)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-08-20 16:41:39 +02:00
NielsRogge
86bbd16d43 Improve discoverability on the hub (#325)
Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-08-19 15:16:46 +02:00
Alexander Soare
0f6e0f6d74 Fix input dim (#365) 2024-08-19 11:42:32 +01:00
Remi
fc3e545e03 Update README.md 2024-08-19 11:14:10 +02:00
Simon Alibert
b98ea415c1 Add dataset cards (#363) 2024-08-16 10:08:44 +02:00
Remi
bbe9057225 Improve control robot ; Add process to configure motor indices (#326)
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
Co-authored-by: jess-moss <jess.moss@dextrousrobotics.com>
Co-authored-by: Marina Barannikov <marina.barannikov@huggingface.co>
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-08-15 18:11:33 +02:00
Alexander Soare
8c4643687c fix bug in example 2 (#361) 2024-08-15 13:59:47 +01:00
Julien Perez
fab037f78d feat for the GPU poors : Add GPU availability check in evaluate_pretr… (#359)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-08-13 16:03:05 +01:00
Simon Alibert
03d647269e Fix CI builds (#357) 2024-08-12 17:57:03 +02:00
Remi
2252b42337 Add visualize_dataset_html with http.server (#188) 2024-08-08 20:19:06 +03:00
Adrien
bc6384bb80 fix ci (#351)
Signed-off-by: Adrien <adrien@huggingface.co>
2024-08-05 16:12:26 +02:00
resolver101757
8df7e63d61 Update README for cross-platform installation compatibility (#347) 2024-07-30 00:48:41 +02:00
Halvard Bariller
7a3cb1ad34 Adjust the timestamps' description in Diffusion Policy (#343)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-07-26 12:47:03 +01:00
Alexander Soare
f8a6574698 Add online training with TD-MPC as proof of concept (#338) 2024-07-25 11:16:38 +01:00
Alexander Soare
abbb1d2367 Make sure policies don't mutate the batch (#323) 2024-07-22 20:38:33 +01:00
Simon Alibert
0b21210d72 Convert datasets to av1 encoding (#302) 2024-07-22 20:08:59 +02:00
Simon Alibert
461d5472d3 Fix visualize_image_transforms (#333) 2024-07-18 22:26:00 +02:00
Simon Alibert
c75ea789a8 Detect secrets in pre-commit (#332) 2024-07-18 19:39:15 +02:00
Simon Alibert
ee200e86cb Ensure no upper bound constraints on dependencies (#327) 2024-07-18 12:07:15 +02:00
Simon Alibert
8865e19c12 Fix datasets missing versions (#318) 2024-07-16 23:02:31 +02:00
Alexander Soare
5f5efe7cb9 Improve error message when attempting to overwrite a training output folder (#322) 2024-07-16 16:50:31 +01:00
Alexander Soare
c0101f0948 Fix ACT temporal ensembling (#319) 2024-07-16 10:27:21 +01:00
Remi
5e54e39795 Add real robot devices and scripts to control real robot (#288)
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
2024-07-15 17:43:10 +02:00
Remi
5ffcb48a9a Add available list of raw repo ids (#312) 2024-07-13 11:30:50 +02:00
Alexander Soare
471eab3d7e Make ACT compatible with "observation.environment_state" (#314) 2024-07-11 13:12:22 +01:00
Seungjae Lee
64425d5e00 Bug fix: fix error when setting select_target_actions_indices in vqbet (#310)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-07-10 17:56:11 +01:00
Simon Alibert
e410e5d711 Improve video benchmark (#282)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
Co-authored-by: Remi <re.cadene@gmail.com>
2024-07-09 20:20:25 +02:00
Alexander Soare
cc2f6e7404 Train diffusion pusht_keypoints (#307)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-07-09 12:35:50 +01:00
Alexander Soare
a4d77b99f0 Include observation.environment_state with keypoints in PushT dataset (#303)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-07-09 08:27:40 +01:00
Alexander Soare
7bd5ab16d1 Fix generation of dataset test artifact (#306) 2024-07-05 11:02:26 +01:00
Simon Alibert
74362ac453 Add VQ-BeT copyrights (#299) 2024-07-04 13:02:31 +02:00
Simon Alibert
964f9e86d6 Cleanup config defaults (#300) 2024-07-04 11:53:29 +02:00
Nur Muhammad "Mahi" Shafiullah
7a5fc76b9f Added new credits and citations (#301) 2024-07-03 20:57:47 +02:00
Alexander Soare
342f429f1c Add test to make sure policy dataclass configs match yaml configs (#292) 2024-06-26 09:09:40 +01:00
Seungjae Lee
7d1542cae1 Add VQ-BeT (#166) 2024-06-26 08:55:02 +01:00
Alexander Soare
9aa4cdb976 Checkpoint on final step of training even when it doesn't coincide with save_freq. (#284) 2024-06-20 08:27:01 +01:00
Simon Alibert
2abef3bef9 Enable video_reader backend (#220)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-06-19 17:15:25 +02:00
Thomas Wolf
48951662f2 Bug fix: missing attention mask in VAE encoder in ACT policy (#279)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-06-19 12:07:21 +01:00
Thomas Wolf
56199fb76f Update readme to detail the lerobot dataset format (#275)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-06-18 13:40:03 +01:00
Thomas Wolf
11f1cb5dc9 Bug fix: fix setting different learning rates between backbone and main model in ACT policy (#280) 2024-06-18 13:31:35 +01:00
Jihoon Oh
b72d574891 fix Unet global_cond_dim to use state dim, not action dim (#278) 2024-06-17 15:17:28 +01:00
Alexander Soare
15dd682714 Add multi-image support to diffusion policy (#218) 2024-06-17 08:11:20 +01:00
Marina Barannikov
e28fa2344c added visualization for min and max transforms (#271)
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
2024-06-17 09:09:57 +02:00
Simon Alibert
a92d79fff2 Fix nightlies (#273) 2024-06-14 17:11:19 +01:00
Thomas Wolf
125bd93e29 Improve push_dataset_to_hub API + Add unit tests (#231)
Co-authored-by: Remi <re.cadene@gmail.com>
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-06-13 15:18:02 +02:00
Marina Barannikov
c38f535c9f FIx make_dataset to match transforms config (#264)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-06-12 19:45:42 +02:00
Marina Barannikov
ff8f6aa6cd Add data augmentation in LeRobotDataset (#234)
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
Co-authored-by: Remi Cadene <re.cadene@gmail.com>
2024-06-11 19:20:55 +02:00
Ikko Eltociear Ashimine
1cf050d412 chore: update 4_train_policy_with_script.md (#257)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-06-11 08:24:39 +01:00
Wael Karkoub
54c9776bde Improves Type Annotations (#252) 2024-06-10 19:09:48 +01:00
Luc Georges
a06598678c feat(ci): add trufflehog secrets detection (#254) 2024-06-10 14:25:43 +02:00
Thomas Lips
055a6f60c6 add root argument to the dataset visualizer to visualize local datasets (#249) 2024-06-10 10:44:32 +02:00
Simon Alibert
e54d6ea1eb Make display_sys_info.py install-agnostic (#253) 2024-06-07 15:02:17 +02:00
Alexander Soare
1eb4bfe2e4 Fix videos_dir documentation (#247) 2024-06-05 08:25:20 +01:00
Alexander Soare
21f222fa1d Add out_dir option to eval (#244) 2024-06-04 21:01:53 +02:00
amandip7
33362dbd17 Adding parameter dataloading_s to console logs and wandb for tracking… (#243)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-06-04 17:02:05 +01:00
Ruijie
b0d954c6e1 Fix bug in normalize to avoid divide by zero (#239)
Co-authored-by: rj <rj@teleopstrio-razer.lan>
Co-authored-by: Remi <re.cadene@gmail.com>
2024-06-04 12:21:28 +02:00
Simon Alibert
bd3111f28b Fix visualize_dataset.py --help (#241) 2024-06-03 16:35:16 +02:00
Alexander Soare
cf15cba5fc Remove redundant slicing operation in Diffusion Policy (#240) 2024-06-03 13:04:24 +01:00
jganitzer
042e193995 Typo in examples\4_train_policy_with_script.md (#235) 2024-05-31 18:14:14 +01:00
Remi
d585c73f9f Add real-world support for ACT on Aloha/Aloha2 (#228)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-05-31 15:31:02 +02:00
Radek Osmulski
504d2aaf48 add EpisodeAwareSampler (#217)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-05-31 13:43:47 +01:00
Radek Osmulski
83f4f7f7e8 Add precision param to format_big_number (#232) 2024-05-31 10:19:01 +02:00
Alexander Soare
633115d861 Fix chaining in MultiLerobotDataset (#233) 2024-05-31 09:03:28 +01:00
Alexander Soare
57fb5fe8a6 Improve documentation on VAE encoder inputs (#215) 2024-05-30 19:16:44 +02:00
Alexander Soare
0b51a335bc Add a test for MultiLeRobotDataset making sure it produces all frames. (#230)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-05-30 17:46:25 +01:00
Alexander Soare
111cd58f8a Add MultiLerobotDataset for training with multiple LeRobotDatasets (#229) 2024-05-30 16:12:21 +01:00
Remi
265b0ec44d Refactor env to add key word arguments from config yaml (#223)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2024-05-30 13:45:22 +02:00
Remi
2c2e4e14ed Add aloha_dora_format.py (#201)
Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>
2024-05-30 11:26:39 +02:00
Simon Alibert
13310681b1 Enable cuda for end-to-end tests (#222) 2024-05-29 23:02:23 +02:00
Alexander Soare
3d625ae6d3 Handle crop_shape=None in Diffusion Policy (#219) 2024-05-28 18:27:33 +01:00
Alexander Soare
e3b9f1c19b Add resume training (#205)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-05-28 12:04:23 +01:00
Simon Alibert
7ec76ee235 Fix nightly builds (#216) 2024-05-28 10:43:34 +02:00
Radek Osmulski
3b86050ab0 throw an error if config.do_maks_loss and action_is_pad not provided in batch (#213)
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
2024-05-27 09:06:26 +01:00
Alexander Soare
6d39b73399 Adds a tutorial section on how to use arbitrary configuration files (#206) 2024-05-24 12:39:11 +01:00
Simon Alibert
aca424a481 Add dev docker image (#189)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-05-23 14:39:14 +02:00
Simon Alibert
35c1ce7a66 Fix install issues (#191) 2024-05-23 14:25:18 +02:00
Alexander Soare
e67da1d7a6 Add tutorials for using the training script and (#196)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-05-21 16:47:49 +01:00
Alexander Soare
b6c216b590 Add Automatic Mixed Precision option for training and evaluation. (#199) 2024-05-20 18:57:54 +01:00
Alexander Soare
2b270d085b Disable online training (#202)
Co-authored-by: Remi <re.cadene@gmail.com>
2024-05-20 18:27:54 +01:00
841 changed files with 42380 additions and 5482 deletions


@@ -0,0 +1,68 @@
{
"homing_offset": [
2048,
3072,
3072,
-1024,
-1024,
2048,
-2048,
2048,
-2048
],
"drive_mode": [
1,
1,
1,
0,
0,
1,
0,
1,
0
],
"start_pos": [
2015,
3058,
3061,
1071,
1071,
2035,
2152,
2029,
2499
],
"end_pos": [
-1008,
-1963,
-1966,
2141,
2143,
-971,
3043,
-1077,
3144
],
"calib_mode": [
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"LINEAR"
],
"motor_names": [
"waist",
"shoulder",
"shoulder_shadow",
"elbow",
"elbow_shadow",
"forearm_roll",
"wrist_angle",
"wrist_rotate",
"gripper"
]
}


@@ -0,0 +1,68 @@
{
"homing_offset": [
2048,
3072,
3072,
-1024,
-1024,
2048,
-2048,
2048,
-1024
],
"drive_mode": [
1,
1,
1,
0,
0,
1,
0,
1,
0
],
"start_pos": [
2035,
3024,
3019,
979,
981,
1982,
2166,
2124,
1968
],
"end_pos": [
-990,
-2017,
-2015,
2078,
2076,
-1030,
3117,
-1016,
2556
],
"calib_mode": [
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"LINEAR"
],
"motor_names": [
"waist",
"shoulder",
"shoulder_shadow",
"elbow",
"elbow_shadow",
"forearm_roll",
"wrist_angle",
"wrist_rotate",
"gripper"
]
}


@@ -0,0 +1,68 @@
{
"homing_offset": [
2048,
3072,
3072,
-1024,
-1024,
2048,
-2048,
2048,
-2048
],
"drive_mode": [
1,
1,
1,
0,
0,
1,
0,
1,
0
],
"start_pos": [
2056,
2895,
2896,
1191,
1190,
2018,
2051,
2056,
2509
],
"end_pos": [
-1040,
-2004,
-2006,
2126,
2127,
-1010,
3050,
-1117,
3143
],
"calib_mode": [
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"LINEAR"
],
"motor_names": [
"waist",
"shoulder",
"shoulder_shadow",
"elbow",
"elbow_shadow",
"forearm_roll",
"wrist_angle",
"wrist_rotate",
"gripper"
]
}


@@ -0,0 +1,68 @@
{
"homing_offset": [
2048,
3072,
3072,
-1024,
-1024,
2048,
-2048,
2048,
-2048
],
"drive_mode": [
1,
1,
1,
0,
0,
1,
0,
1,
0
],
"start_pos": [
2068,
3034,
3030,
1038,
1041,
1991,
1948,
2090,
1985
],
"end_pos": [
-1025,
-2014,
-2015,
2058,
2060,
-955,
3091,
-940,
2576
],
"calib_mode": [
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"DEGREE",
"LINEAR"
],
"motor_names": [
"waist",
"shoulder",
"shoulder_shadow",
"elbow",
"elbow_shadow",
"forearm_roll",
"wrist_angle",
"wrist_rotate",
"gripper"
]
}


@@ -65,7 +65,6 @@ htmlcov/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
@@ -73,6 +72,11 @@ coverage.xml
.hypothesis/
.pytest_cache/
# Ignore .cache except calibration
.cache/*
!.cache/calibration/
!.cache/calibration/**
# Translations
*.mo
*.pot

.gitattributes vendored (2 changed lines)

@@ -3,4 +3,4 @@
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.json filter=lfs diff=lfs merge=lfs -text
*.json !text !filter !merge !diff


@@ -10,31 +10,26 @@ on:
env:
PYTHON_VERSION: "3.10"
# CI_SLACK_CHANNEL: ${{ secrets.CI_DOCKER_CHANNEL }}
jobs:
latest-cpu:
name: CPU
runs-on: ubuntu-latest
runs-on:
group: aws-general-8-plus
steps:
- name: Cleanup disk
- name: Install Git LFS
run: |
sudo df -h
# sudo ls -l /usr/local/lib/
# sudo ls -l /usr/share/
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo df -h
sudo apt-get update
sudo apt-get install git-lfs
git lfs install
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Check out code
uses: actions/checkout@v4
with:
lfs: true
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -51,52 +46,25 @@ jobs:
tags: huggingface/lerobot-cpu
build-args: PYTHON_VERSION=${{ env.PYTHON_VERSION }}
# - name: Post to a Slack channel
# id: slack
# #uses: slackapi/slack-github-action@v1.25.0
# uses: slackapi/slack-github-action@6c661ce58804a1a20f6dc5fbee7f0381b469e001
# with:
# # Slack channel id, channel name, or user id to post message.
# # See also: https://api.slack.com/methods/chat.postMessage#channels
# channel-id: ${{ env.CI_SLACK_CHANNEL }}
# # For posting a rich message using Block Kit
# payload: |
# {
# "text": "lerobot-cpu Docker Image build result: ${{ job.status }}\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}",
# "blocks": [
# {
# "type": "section",
# "text": {
# "type": "mrkdwn",
# "text": "lerobot-cpu Docker Image build result: ${{ job.status }}\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}"
# }
# }
# ]
# }
# env:
# SLACK_BOT_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
latest-cuda:
name: GPU
runs-on: ubuntu-latest
runs-on:
group: aws-general-8-plus
steps:
- name: Cleanup disk
- name: Install Git LFS
run: |
sudo df -h
# sudo ls -l /usr/local/lib/
# sudo ls -l /usr/share/
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo df -h
sudo apt-get update
sudo apt-get install git-lfs
git lfs install
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Check out code
uses: actions/checkout@v4
with:
lfs: true
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -113,27 +81,29 @@ jobs:
tags: huggingface/lerobot-gpu
build-args: PYTHON_VERSION=${{ env.PYTHON_VERSION }}
# - name: Post to a Slack channel
# id: slack
# #uses: slackapi/slack-github-action@v1.25.0
# uses: slackapi/slack-github-action@6c661ce58804a1a20f6dc5fbee7f0381b469e001
# with:
# # Slack channel id, channel name, or user id to post message.
# # See also: https://api.slack.com/methods/chat.postMessage#channels
# channel-id: ${{ env.CI_SLACK_CHANNEL }}
# # For posting a rich message using Block Kit
# payload: |
# {
# "text": "lerobot-gpu Docker Image build result: ${{ job.status }}\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}",
# "blocks": [
# {
# "type": "section",
# "text": {
# "type": "mrkdwn",
# "text": "lerobot-gpu Docker Image build result: ${{ job.status }}\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}"
# }
# }
# ]
# }
# env:
# SLACK_BOT_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
latest-cuda-dev:
name: GPU Dev
runs-on:
group: aws-general-8-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Check out code
uses: actions/checkout@v4
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU dev
uses: docker/build-push-action@v5
with:
context: .
file: ./docker/lerobot-gpu-dev/Dockerfile
push: true
tags: huggingface/lerobot-gpu:dev
build-args: PYTHON_VERSION=${{ env.PYTHON_VERSION }}


@@ -16,7 +16,8 @@ jobs:
name: CPU
strategy:
fail-fast: false
runs-on: ubuntu-latest
runs-on:
group: aws-general-8-plus
container:
image: huggingface/lerobot-cpu:latest
options: --shm-size "16gb"
@@ -43,7 +44,8 @@ jobs:
name: GPU
strategy:
fail-fast: false
runs-on: [single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu"
@@ -70,6 +72,8 @@ jobs:
# files: ./coverage.xml
# verbose: true
- name: Tests end-to-end
env:
DEVICE: cuda
run: make test-end-to-end
# - name: Generate Report


@@ -54,3 +54,31 @@ jobs:
- name: Poetry check
run: poetry check
poetry_relax:
name: Poetry relax
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v3
- name: Install poetry
run: pipx install poetry
- name: Install poetry-relax
run: poetry self add poetry-relax
- name: Poetry relax
id: poetry_relax
run: |
output=$(poetry relax --check 2>&1)
if echo "$output" | grep -q "Proposing updates"; then
echo "$output"
echo ""
echo "Some dependencies have caret '^' version requirement added by poetry by default."
echo "Please replace them with '>='. You can do this by hand or use poetry-relax to do this."
exit 1
else
echo "$output"
fi


@@ -42,26 +42,14 @@ jobs:
build_modified_dockerfiles:
name: Build modified Docker images
needs: get_changed_files
runs-on: ubuntu-latest
runs-on:
group: aws-general-8-plus
if: ${{ needs.get_changed_files.outputs.matrix }} != ''
strategy:
fail-fast: false
matrix:
docker-file: ${{ fromJson(needs.get_changed_files.outputs.matrix) }}
steps:
- name: Cleanup disk
run: |
sudo df -h
# sudo ls -l /usr/local/lib/
# sudo ls -l /usr/share/
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo du -sh /usr/local/lib/
sudo du -sh /usr/share/
sudo df -h
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3


@@ -10,6 +10,8 @@ on:
- "examples/**"
- ".github/**"
- "poetry.lock"
- "Makefile"
- ".cache/**"
push:
branches:
- main
@@ -19,6 +21,8 @@ on:
- "examples/**"
- ".github/**"
- "poetry.lock"
- "Makefile"
- ".cache/**"
jobs:
pytest:
@@ -32,14 +36,18 @@ jobs:
with:
lfs: true # Ensure LFS files are pulled
- name: Install EGL
run: sudo apt-get update && sudo apt-get install -y libegl1-mesa-dev
- name: Install apt dependencies
# portaudio19-dev is needed to install pyaudio
run: |
sudo apt-get update && \
sudo apt-get install -y libegl1-mesa-dev ffmpeg portaudio19-dev
- name: Install poetry
run: |
pipx install poetry && poetry config virtualenvs.in-project true
echo "${{ github.workspace }}/.venv/bin" >> $GITHUB_PATH
# TODO(rcadene, aliberts): python 3.12 seems to be used in the tests, not python 3.10
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
@@ -58,7 +66,6 @@ jobs:
-W ignore::UserWarning:gymnasium.utils.env_checker:247 \
&& rm -rf tests/outputs outputs
pytest-minimal:
name: Pytest (minimal install)
runs-on: ubuntu-latest
@@ -70,11 +77,15 @@ jobs:
with:
lfs: true # Ensure LFS files are pulled
- name: Install apt dependencies
run: sudo apt-get update && sudo apt-get install -y ffmpeg
- name: Install poetry
run: |
pipx install poetry && poetry config virtualenvs.in-project true
echo "${{ github.workspace }}/.venv/bin" >> $GITHUB_PATH
# TODO(rcadene, aliberts): python 3.12 seems to be used in the tests, not python 3.10
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
@@ -104,8 +115,11 @@ jobs:
with:
lfs: true # Ensure LFS files are pulled
- name: Install EGL
run: sudo apt-get update && sudo apt-get install -y libegl1-mesa-dev
- name: Install apt dependencies
# portaudio19-dev is needed to install pyaudio
run: |
sudo apt-get update && \
sudo apt-get install -y libegl1-mesa-dev portaudio19-dev
- name: Install poetry
run: |

.github/workflows/trufflehog.yml vendored (new file, 20 lines)

@@ -0,0 +1,20 @@
on:
push:
name: Secret Leaks
permissions:
contents: read
jobs:
trufflehog:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Secret Scanning
uses: trufflesecurity/trufflehog@main
with:
extra_args: --only-verified

.gitignore vendored (36 changed lines)

@@ -2,12 +2,17 @@
logs
tmp
wandb
# Data
data
outputs
.vscode
rl
# Apple
.DS_Store
# VS Code
.vscode
# HPC
nautilus/*.yaml
*.key
@@ -61,7 +66,6 @@ htmlcov/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
@@ -69,6 +73,11 @@ coverage.xml
.hypothesis/
.pytest_cache/
# Ignore .cache except calibration
.cache/*
!.cache/calibration/
!.cache/calibration/**
# Translations
*.mo
*.pot
@@ -90,6 +99,7 @@ instance/
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
@@ -102,13 +112,6 @@ ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
@@ -119,6 +122,13 @@ celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
venv/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
@@ -136,3 +146,9 @@ dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/


@@ -14,11 +14,11 @@ repos:
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/asottile/pyupgrade
rev: v3.15.2
rev: v3.16.0
hooks:
- id: pyupgrade
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.3
rev: v0.5.2
hooks:
- id: ruff
args: [--fix]
@@ -31,3 +31,7 @@ repos:
args:
- "--check"
- "--no-update"
- repo: https://github.com/gitleaks/gitleaks
rev: v8.18.4
hooks:
- id: gitleaks


@@ -20,7 +20,7 @@ Some of the ways you can contribute to 🤗 LeRobot:
* Contributing to the examples or to the documentation.
* Submitting issues related to bugs or desired new features.
Following the guides below, feel free to open issues and PRs and to coordinate your efforts with the community on our [Discord Channel](https://discord.gg/VjFz58wn3R). For specific inquiries, reach out to [Remi Cadene](remi.cadene@huggingface.co).
Following the guides below, feel free to open issues and PRs and to coordinate your efforts with the community on our [Discord Channel](https://discord.gg/VjFz58wn3R). For specific inquiries, reach out to [Remi Cadene](mailto:remi.cadene@huggingface.co).
If you are not sure how to contribute or want to know the next features we are working on, have a look at this project page: [LeRobot TODO](https://github.com/orgs/huggingface/projects/46)

Makefile (126 changed lines)

@@ -5,11 +5,12 @@ PYTHON_PATH := $(shell which python)
# If Poetry is installed, redefine PYTHON_PATH to use the Poetry-managed Python
POETRY_CHECK := $(shell command -v poetry)
ifneq ($(POETRY_CHECK),)
PYTHON_PATH := $(shell poetry run which python)
PYTHON_PATH := $(shell poetry run which python)
endif
export PATH := $(dir $(PYTHON_PATH)):$(PATH)
DEVICE ?= cpu
build-cpu:
docker build -t lerobot:latest -f docker/lerobot-cpu/Dockerfile .
@@ -18,61 +19,100 @@ build-gpu:
docker build -t lerobot:latest -f docker/lerobot-gpu/Dockerfile .
test-end-to-end:
${MAKE} test-act-ete-train
${MAKE} test-act-ete-eval
${MAKE} test-diffusion-ete-train
${MAKE} test-diffusion-ete-eval
${MAKE} test-tdmpc-ete-train
${MAKE} test-tdmpc-ete-eval
${MAKE} test-default-ete-eval
${MAKE} DEVICE=$(DEVICE) test-act-ete-train
${MAKE} DEVICE=$(DEVICE) test-act-ete-eval
${MAKE} DEVICE=$(DEVICE) test-act-ete-train-amp
${MAKE} DEVICE=$(DEVICE) test-act-ete-eval-amp
${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-train
${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-eval
${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-train
${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-train-with-online
${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-eval
${MAKE} DEVICE=$(DEVICE) test-default-ete-eval
${MAKE} DEVICE=$(DEVICE) test-act-pusht-tutorial
test-act-ete-train:
python lerobot/scripts/train.py \
policy=act \
policy.dim_model=64 \
env=aloha \
wandb.enable=False \
training.offline_steps=2 \
training.online_steps=0 \
eval.n_episodes=1 \
eval.batch_size=1 \
device=cpu \
training.save_model=true \
device=$(DEVICE) \
training.save_checkpoint=true \
training.save_freq=2 \
policy.n_action_steps=20 \
policy.chunk_size=20 \
training.batch_size=2 \
training.image_transforms.enable=true \
hydra.run.dir=tests/outputs/act/
test-act-ete-eval:
python lerobot/scripts/eval.py \
-p tests/outputs/act/checkpoints/000002 \
-p tests/outputs/act/checkpoints/000002/pretrained_model \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=8 \
device=cpu \
device=$(DEVICE) \
test-act-ete-train-amp:
python lerobot/scripts/train.py \
policy=act \
policy.dim_model=64 \
env=aloha \
wandb.enable=False \
training.offline_steps=2 \
training.online_steps=0 \
eval.n_episodes=1 \
eval.batch_size=1 \
device=$(DEVICE) \
training.save_checkpoint=true \
training.save_freq=2 \
policy.n_action_steps=20 \
policy.chunk_size=20 \
training.batch_size=2 \
hydra.run.dir=tests/outputs/act_amp/ \
training.image_transforms.enable=true \
use_amp=true
test-act-ete-eval-amp:
python lerobot/scripts/eval.py \
-p tests/outputs/act_amp/checkpoints/000002/pretrained_model \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=8 \
device=$(DEVICE) \
use_amp=true
test-diffusion-ete-train:
python lerobot/scripts/train.py \
policy=diffusion \
policy.down_dims=\[64,128,256\] \
policy.diffusion_step_embed_dim=32 \
policy.num_inference_steps=10 \
env=pusht \
wandb.enable=False \
training.offline_steps=2 \
training.online_steps=0 \
eval.n_episodes=1 \
eval.batch_size=1 \
device=cpu \
training.save_model=true \
device=$(DEVICE) \
training.save_checkpoint=true \
training.save_freq=2 \
training.batch_size=2 \
training.image_transforms.enable=true \
hydra.run.dir=tests/outputs/diffusion/
test-diffusion-ete-eval:
python lerobot/scripts/eval.py \
-p tests/outputs/diffusion/checkpoints/000002 \
-p tests/outputs/diffusion/checkpoints/000002/pretrained_model \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=8 \
device=cpu \
device=$(DEVICE) \
test-tdmpc-ete-train:
python lerobot/scripts/train.py \
@@ -82,24 +122,46 @@ test-tdmpc-ete-train:
dataset_repo_id=lerobot/xarm_lift_medium \
wandb.enable=False \
training.offline_steps=2 \
training.online_steps=2 \
training.online_steps=0 \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=2 \
device=cpu \
training.save_model=true \
device=$(DEVICE) \
training.save_checkpoint=true \
training.save_freq=2 \
training.batch_size=2 \
training.image_transforms.enable=true \
hydra.run.dir=tests/outputs/tdmpc/
test-tdmpc-ete-train-with-online:
python lerobot/scripts/train.py \
env=pusht \
env.gym.obs_type=environment_state_agent_pos \
policy=tdmpc_pusht_keypoints \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=10 \
device=$(DEVICE) \
training.offline_steps=2 \
training.online_steps=20 \
training.save_checkpoint=false \
training.save_freq=10 \
training.batch_size=2 \
training.online_rollout_n_episodes=2 \
training.online_rollout_batch_size=2 \
training.online_steps_between_rollouts=10 \
training.online_buffer_capacity=15 \
eval.use_async_envs=true \
hydra.run.dir=tests/outputs/tdmpc_online/
test-tdmpc-ete-eval:
python lerobot/scripts/eval.py \
-p tests/outputs/tdmpc/checkpoints/000002 \
-p tests/outputs/tdmpc/checkpoints/000002/pretrained_model \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=8 \
device=cpu \
device=$(DEVICE) \
test-default-ete-eval:
python lerobot/scripts/eval.py \
@@ -107,4 +169,22 @@ test-default-ete-eval:
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=8 \
device=cpu \
device=$(DEVICE) \
test-act-pusht-tutorial:
cp examples/advanced/1_train_act_pusht/act_pusht.yaml lerobot/configs/policy/created_by_Makefile.yaml
python lerobot/scripts/train.py \
policy=created_by_Makefile.yaml \
env=pusht \
wandb.enable=False \
training.offline_steps=2 \
eval.n_episodes=1 \
eval.batch_size=1 \
env.episode_length=2 \
device=$(DEVICE) \
training.save_model=true \
training.save_freq=2 \
training.batch_size=2 \
training.image_transforms.enable=true \
hydra.run.dir=tests/outputs/act_pusht/
rm lerobot/configs/policy/created_by_Makefile.yaml

README.md (192 changed lines)

@@ -22,8 +22,22 @@
</div>
<h2 align="center">
<p><a href="https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md">New robot in town: SO-100</a></p>
</h2>
<div align="center">
<img src="media/so100/leader_follower.webp?raw=true" alt="SO-100 leader and follower arms" title="SO-100 leader and follower arms" width="50%">
<p>We just added a new tutorial on how to build a more affordable robot, at the price of $110 per arm!</p>
<p>Teach it new skills by showing it a few moves with just a laptop.</p>
<p>Then watch your homemade robot act autonomously 🤯</p>
<p>Follow the link to the <a href="https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md">full tutorial for SO-100</a>.</p>
</div>
<br/>
<h3 align="center">
<p>State-of-the-art Machine Learning for real-world robotics</p>
<p>LeRobot: State-of-the-art AI for real-world robotics</p>
</h3>
---
@@ -58,25 +72,32 @@
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
## Installation
Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git && cd lerobot
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```
Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
conda create -y -n lerobot python=3.10
conda activate lerobot
```
Install 🤗 LeRobot:
```bash
pip install .
pip install -e .
```
> **NOTE:** Depending on your platform, if you encounter any build errors during this step,
you may need to install `cmake` and `build-essential` to build some of our dependencies.
On Linux: `sudo apt-get install cmake build-essential`
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
@@ -84,7 +105,7 @@ For simulations, 🤗 LeRobot comes with gymnasium environments that can be inst
For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
pip install ".[aloha, pusht]"
pip install -e ".[aloha, pusht]"
```
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
@@ -99,6 +120,7 @@ wandb login
```
.
├── examples # contains demonstration examples, start here to learn about LeRobot
| └── advanced # contains even more examples for those who have mastered the basics
├── lerobot
| ├── configs # contains hydra yaml files with all options that you can override in the command line
| | ├── default.yaml # selected by default, it loads pusht environment and diffusion policy
@@ -108,10 +130,12 @@ wandb login
| | ├── datasets # various datasets of human demonstrations: aloha, pusht, xarm
| | ├── envs # various sim environments: aloha, pusht, xarm
| | ├── policies # various policies: act, diffusion, tdmpc
| | ├── robot_devices # various real devices: dynamixel motors, opencv cameras, koch robots
| | └── utils # various utilities
| └── scripts # contains functions to execute via command line
| ├── eval.py # load policy and evaluate it on an environment
| ├── train.py # train a policy via imitation learning and/or reinforcement learning
| ├── control_robot.py # teleoperate a real robot, record data, run a policy
| ├── push_dataset_to_hub.py # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
| └── visualize_dataset.py # load a dataset and render its demonstrations
├── outputs # contains results of scripts execution: logs, videos, model checkpoints
@@ -122,13 +146,21 @@ wandb login
Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class, which automatically downloads data from the Hugging Face hub.
You can also locally visualize episodes from a dataset by executing our script from the command line:
You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
```bash
python lerobot/scripts/visualize_dataset.py \
--repo-id lerobot/pusht \
--episode-index 0
```
or from a dataset in a local folder with the root `DATA_DIR` environment variable (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
```bash
DATA_DIR='./my_local_data_dir' python lerobot/scripts/visualize_dataset.py \
--repo-id lerobot/pusht \
--episode-index 0
```
It will open `rerun.io` and display the camera streams, robot states and actions, like this:
https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144
@@ -136,6 +168,53 @@ https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-f
Our script can also visualize datasets stored on a distant server. See `python lerobot/scripts/visualize_dataset.py --help` for more instructions.
### The `LeRobotDataset` format
A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or a local folder simply with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")` and can be indexed into like any Hugging Face and PyTorch dataset. For instance `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors ready to be fed to a model.
A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
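For instance, a minimal sketch of this usage (a hedged illustration; the import path and key names follow example 1 and may differ for other datasets):
```python
# Hedged sketch: load a dataset with a temporal window of observations.
# The import path and key names below are assumptions based on example 1.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# For each indexed frame, also fetch the frames 1s, 0.5s and 0.2s before it.
delta_timestamps = {"observation.image": [-1.0, -0.5, -0.2, 0.0]}

dataset = LeRobotDataset("lerobot/pusht", delta_timestamps=delta_timestamps)

item = dataset[0]
# "observation.image" now has an extra leading time dimension:
# shape (4, c, h, w) instead of (c, h, w).
print(item["observation.image"].shape)
```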
Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extendable to other types of sensory inputs as long as they can be represented by a tensor.
Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset but not the main aspects:
```
dataset attributes:
├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
│ ├ observation.images.cam_high (VideoFrame):
│ │ VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
│ ├ observation.state (list of float32): positions of the arm joints (for instance)
│ ... (more observations)
│ ├ action (list of float32): goal positions of the arm joints (for instance)
│ ├ episode_index (int64): index of the episode for this sample
│ ├ frame_index (int64): index of the frame for this sample in the episode; starts at 0 for each episode
│ ├ timestamp (float32): timestamp in the episode
│ ├ next.done (bool): indicates the end of an episode; True for the last frame in each episode
│ └ index (int64): general index in the whole dataset
├ episode_data_index: contains 2 tensors with the start and end indices of each episode
│ ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,) starts with 0
│ └ to: (1D int64 tensor): last frame index for each episode — shape (num episodes,)
├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
│ ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
│ ...
├ info: a dictionary of metadata on the dataset
│ ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
│ ├ fps (float): frame per second the dataset is recorded/synchronized to
│ ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
│ └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
└ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```
A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
- hf_dataset stored using Hugging Face datasets library serialization to parquet
- videos are stored in mp4 format to save space or png files
- episode_data_index saved using `safetensor` tensor serialization format
- stats saved using `safetensor` tensor serialization format
- info are saved using JSON
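As a rough, hedged sketch of how these parts could be inspected from a local copy of a dataset (the `meta_data` folder name and file names below are assumptions for illustration):
```python
# Hedged sketch: peek at the serialized metadata of a locally downloaded dataset.
# The directory layout and file names are assumptions for illustration only.
import json
from pathlib import Path

from safetensors.torch import load_file

meta = Path("my_local_data_dir/lerobot/aloha_static_coffee/meta_data")

stats = load_file(meta / "stats.safetensors")                       # per-feature max/mean/min/std tensors
episode_index = load_file(meta / "episode_data_index.safetensors")  # {"from": ..., "to": ...} tensors
info = json.loads((meta / "info.json").read_text())                 # fps, video flag, codebase_version, ...

print(info.get("fps"), episode_index["from"][:3])
```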
Datasets can be uploaded/downloaded from the Hugging Face hub seamlessly. To work on a local dataset, you can set the `DATA_DIR` environment variable to your root dataset folder as illustrated in the above section on dataset visualization.
### Evaluate a pretrained policy
Check out [example 2](./examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from Hugging Face hub, and run an evaluation on its corresponding environment.
@@ -149,18 +228,19 @@ python lerobot/scripts/eval.py \
```
Note: After training your own policy, you can re-evaluate the checkpoints with:
```bash
python lerobot/scripts/eval.py \
-p PATH/TO/TRAIN/OUTPUT/FOLDER
python lerobot/scripts/eval.py -p {OUTPUT_DIR}/checkpoints/last/pretrained_model
```
See `python lerobot/scripts/eval.py --help` for more instructions.
### Train your own policy
Check out [example 3](./examples/3_train_policy.py) that illustrates how to start training a model.
Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in Python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from the command line.
In general, you can use our training script to easily train any policy. Here is an example of training the ACT policy on trajectories collected by humans on the Aloha simulation environment for the insertion task:
```bash
python lerobot/scripts/train.py \
policy=act \
@@ -174,17 +254,49 @@ The experiment directory is automatically generated and will show up in yellow i
hydra.run.dir=your/new/experiment/dir
```
In the experiment directory there will be a folder called `checkpoints` which will have the following structure:
```bash
checkpoints
├── 000250 # checkpoint_dir for training step 250
│ ├── pretrained_model # Hugging Face pretrained model dir
│ │ ├── config.json # Hugging Face pretrained model config
│ │ ├── config.yaml # consolidated Hydra config
│ │ ├── model.safetensors # model weights
│ │ └── README.md # Hugging Face model card
│ └── training_state.pth # optimizer/scheduler/rng state and training step
```
To resume training from a checkpoint, you can add these to the `train.py` python command:
```bash
hydra.run.dir=your/original/experiment/dir resume=true
```
It will load the pretrained model, optimizer and scheduler states for training. For more information please see our tutorial on training resumption [here](https://github.com/huggingface/lerobot/blob/main/examples/5_resume_training.md).
To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding:
```bash
wandb.enable=true
```
A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser:
A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for an explanation of some commonly used metrics in the logs.
![](media/wandb.png)
Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. After training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.
Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.
#### Reproduce state-of-the-art (SOTA)
We have organized our configuration files (found under [`lerobot/configs`](./lerobot/configs)) such that they reproduce SOTA results from a given model variant in their respective original works. Simply running:
```bash
python lerobot/scripts/train.py policy=diffusion env=pusht
```
reproduces SOTA results for Diffusion Policy on the PushT task.
Pretrained policies, along with reproduction details, can be found under the "Models" section of https://huggingface.co/lerobot.
## Contribute
@@ -197,13 +309,13 @@ To add a dataset to the hub, you need to login using a write-access token, which
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Then move your dataset folder in `data` directory (e.g. `data/aloha_static_pingpong_test`), and push your dataset to the hub with:
Then point to your raw dataset folder (e.g. `data/aloha_static_pingpong_test_raw`), and push your dataset to the hub with:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
--data-dir data \
--dataset-id aloha_static_pingpong_test \
--raw-format aloha_hdf5 \
--community-id lerobot
--raw-dir data/aloha_static_pingpong_test_raw \
--out-dir data \
--repo-id lerobot/aloha_static_pingpong_test \
--raw-format aloha_hdf5
```
See `python lerobot/scripts/push_dataset_to_hub.py --help` for more instructions.
@@ -215,14 +327,14 @@ If your dataset format is not supported, implement your own in `lerobot/common/d
Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).
You first need to find the checkpoint located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). It should contain:
You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `config.yaml`: A consolidated Hydra training configuration containing the policy, environment, and dataset configs. The policy configuration should match `config.json` exactly. The environment config is useful for anyone who wants to evaluate your policy. The dataset config just serves as a paper trail for reproducibility.
To upload these to the hub, run the following:
```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/checkpoint/dir
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```
See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
@@ -255,7 +367,7 @@ with profile(
## Citation
If you want, you can cite this work with:
```
```bibtex
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
@@ -263,3 +375,45 @@ If you want, you can cite this work with:
year = {2024}
}
```
Additionally, if you are using any of the particular policy architectures, pretrained models, or datasets, it is recommended to cite the original authors of the work as they appear below:
- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)
```bibtex
@article{chi2024diffusionpolicy,
author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
journal = {The International Journal of Robotics Research},
year = {2024},
}
```
- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)
```bibtex
@article{zhao2023learning,
title={Learning fine-grained bimanual manipulation with low-cost hardware},
author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
journal={arXiv preprint arXiv:2304.13705},
year={2023}
}
```
- [TDMPC](https://www.nicklashansen.com/td-mpc/)
```bibtex
@inproceedings{Hansen2022tdmpc,
title={Temporal Difference Learning for Model Predictive Control},
author={Nicklas Hansen and Xiaolong Wang and Hao Su},
booktitle={ICML},
year={2022}
}
```
- [VQ-BeT](https://sjlee.cc/vq-bet/)
```bibtex
@article{lee2024behavior,
title={Behavior generation with latent actions},
author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
journal={arXiv preprint arXiv:2403.03181},
year={2024}
}
```

benchmarks/video/README.md (new file, 271 lines)

@@ -0,0 +1,271 @@
# Video benchmark
## Questions
What is the optimal trade-off between:
- minimizing loading time with random access,
- minimizing memory space on disk,
- maximizing success rate of policies,
- compatibility across devices/platforms for decoding videos (e.g. video players, web browsers).
How to encode videos?
- Which video codec (`-vcodec`) to use? h264, h265, AV1?
- What pixel format to use (`-pix_fmt`)? `yuv444p` or `yuv420p`?
- How much compression (`-crf`)? No compression with `0`, intermediate compression with `25` or extreme with `50+`?
- Which frequency to choose for key frames (`-g`)? A key frame every `10` frames?
How to decode videos?
- Which `decoder`? `torchvision`, `torchaudio`, `ffmpegio`, `decord`, or `nvc`?
- Which scenarios to use for requesting timestamps during the benchmark (`timestamps_mode`)?
## Variables
**Image content & size**
We don't expect the same optimal settings for a dataset of images from a simulation, or from the real world in an apartment, or in a factory, or outdoors, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
For these reasons, we run this benchmark on four representative datasets:
- `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
- `aliberts/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
- `aliberts/kitchen`: (1080 x 1920 pixels) real-world indoor, fixed camera.
Note: The datasets used for this benchmark need to be image datasets, not video datasets.
**Data augmentations**
We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).
### Encoding parameters
| parameter | values |
|-------------|--------------------------------------------------------------|
| **vcodec** | `libx264`, `libx265`, `libsvtav1` |
| **pix_fmt** | `yuv444p`, `yuv420p` |
| **g** | `1`, `2`, `3`, `4`, `5`, `6`, `10`, `15`, `20`, `40`, `None` |
| **crf** | `0`, `5`, `10`, `15`, `20`, `25`, `30`, `40`, `50`, `None` |
Note that the `crf` value may be interpreted differently by different video codecs. In other words, the same value used with one codec doesn't necessarily translate into the same compression level with another codec. In fact, the default value (`None`) isn't the same across video codecs. Importantly, the same is true for many other ffmpeg arguments, like `g`, which specifies the frequency of the key frames.
For a comprehensive list and documentation of these parameters, see the ffmpeg documentation depending on the video codec used:
- h264: https://trac.ffmpeg.org/wiki/Encode/H.264
- h265: https://trac.ffmpeg.org/wiki/Encode/H.265
- AV1: https://trac.ffmpeg.org/wiki/Encode/AV1
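To make these parameters concrete, here is a minimal sketch (not the benchmark's own encoder) of how they map onto an ffmpeg command line, assuming ffmpeg is on your `PATH` and the episode frames are saved as `frame_000000.png`, `frame_000001.png`, and so on.
```python
# Minimal sketch: map the encoding parameters above onto an ffmpeg command line.
import subprocess
from pathlib import Path

def encode_episode(imgs_dir: Path, video_path: Path, fps: int,
                   vcodec: str = "libx264", pix_fmt: str = "yuv444p",
                   g: int | None = 2, crf: int | None = None) -> None:
    cmd = ["ffmpeg", "-y", "-framerate", str(fps),
           "-i", str(imgs_dir / "frame_%06d.png"),
           "-vcodec", vcodec, "-pix_fmt", pix_fmt]
    if g is not None:
        cmd += ["-g", str(g)]      # a key frame every `g` frames
    if crf is not None:
        cmd += ["-crf", str(crf)]  # constant rate factor (scale is codec-dependent)
    cmd += [str(video_path)]
    subprocess.run(cmd, check=True)

# Example: encode_episode(Path("imgs"), Path("episode.mp4"), fps=30,
#                         vcodec="libsvtav1", pix_fmt="yuv420p", g=2, crf=30)
```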
### Decoding parameters
**Decoder**
We tested two video decoding backends from torchvision:
- `pyav` (default)
- `video_reader` (requires building torchvision from source)
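As an illustration of what a decoding backend has to provide, here is a hedged sketch using torchvision's public `VideoReader` API. It is not the exact code used in the benchmark, which relies on `decode_video_frames_torchvision` from `lerobot`.
```python
# Hedged sketch: given a list of timestamps, return decoded frames close to those times.
import torch
import torchvision

def read_frames(video_path: str, timestamps: list[float], backend: str = "pyav") -> torch.Tensor:
    torchvision.set_video_backend(backend)  # "pyav" or "video_reader" (needs a source build)
    reader = torchvision.io.VideoReader(video_path, "video")
    frames = []
    for ts in timestamps:
        reader.seek(ts)           # jumps to the nearest key frame, then decodes forward
        frame = next(reader)      # dict with "data" (uint8 image tensor) and "pts" (seconds)
        frames.append(frame["data"].float() / 255)
    return torch.stack(frames)
```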
**Requested timestamps**
Given the way video decoding works, once a keyframe has been loaded, the decoding of subsequent frames is fast.
This of course is affected by the `-g` parameter during encoding, which specifies the frequency of the keyframes. Given our typical use cases in robotics policies which might request a few timestamps in different random places, we want to replicate these use cases with the following scenarios:
- `1_frame`: 1 frame,
- `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
- `6_frames`: 6 consecutive frames (e.g. `[t + i / fps for i in range(6)]`)
Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.
Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
- `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g `[t, t + 5 / fps]`),
However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames`, since all 6 frames between `t` and `t + 5 / fps` will be decoded.
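For illustration, the scenarios above translate into the following requested timestamps around a random time `t` (in seconds), for a video recorded at `fps`:
```python
# Illustration of the timestamps each scenario requests, mirroring the definitions above.
def scenario_timestamps(mode: str, t: float, fps: int) -> list[float]:
    if mode == "1_frame":
        return [t]
    if mode == "2_frames":
        return [t, t + 1 / fps]
    if mode == "2_frames_4_space":
        return [t, t + 5 / fps]   # 4 frames of spacing in between
    if mode == "6_frames":
        return [t + i / fps for i in range(6)]
    raise ValueError(mode)
```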
## Metrics
**Data compression ratio (lower is better)**
`video_images_size_ratio` is the ratio of the memory space on disk taken by the encoded video over the memory space taken by the original images. For instance, `video_images_size_ratio=25%` means that the video takes 4 times less memory space on disk compared to the original images.
**Loading time ratio (lower is better)**
`video_images_load_time_ratio` is the ratio of the time it takes to decode frames from the video at given timestamps over the time it takes to load the exact same original images. Lower is better. For instance, `video_images_load_time_ratio=200%` means that decoding from video is 2 times slower than loading the original images.
**Average Mean Square Error (lower is better)**
`avg_mse` is the average mean squared error between each decoded frame and its corresponding original image, averaged over all requested timestamps and normalized by the number of pixels in the image so that values are comparable across different image sizes.
**Average Peak Signal to Noise Ratio (higher is better)**
`avg_psnr` measures the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Higher PSNR indicates better quality.
**Average Structural Similarity Index Measure (higher is better)**
`avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.
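As a sketch, these per-frame quality metrics can be computed with scikit-image, given the decoded frame and its original image as float arrays in `[0, 1]` with shape `(C, H, W)` (the same calls are used in the benchmark script):
```python
# Per-frame quality metrics for one decoded frame against its original image.
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def frame_quality(original: np.ndarray, decoded: np.ndarray) -> dict:
    return {
        "mse": mean_squared_error(original, decoded),
        "psnr": peak_signal_noise_ratio(original, decoded, data_range=1.0),
        "ssim": structural_similarity(original, decoded, data_range=1.0, channel_axis=0),
    }
```
`avg_mse`, `avg_psnr` and `avg_ssim` are then simple means of these per-frame values over all requested timestamps and samples.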
One aspect that can't be measured with these metrics is the compatibility of the encoding across platforms, in particular in web browsers, for visualization purposes.
h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling format (`pix_fmt`) might affect compatibility:
- `yuv420p` is more widely supported across various platforms, including web browsers.
- `yuv444p` offers higher color fidelity but might not be supported as broadly.
<!-- **Loss of a pretrained policy (lower is better)** (not available)
`loss_pretrained` is the result of evaluating with the selected encoding/decoding settings a policy pretrained on original images. It is easier to understand than `avg_l2_error`.
**Success rate after retraining (higher is better)** (not available)
`success_rate` is the result of training and evaluating a policy with the selected encoding/decoding settings. It is the most difficult metric to get but also the very best. -->
## How the benchmark works
The benchmark evaluates both encoding and decoding of video frames on the first episode of each dataset.
**Encoding:** for each `vcodec` and `pix_fmt` pair, we use default values for `g` and `crf` and change a single one of them (either `g` or `crf`) to one of the specified values (we don't test every combination of those, as this would be computationally too heavy).
This gives a unique set of encoding parameters which is used to encode the episode.
**Decoding:** Then, for each of those unique encodings, we iterate through every combination of the decoding parameters `backend` and `timestamps_mode`. For each of them, we record the metrics of a number of samples (given by `--num-samples`). This is parallelized for efficiency and the number of processes can be controlled with `--num-workers`. Ideally, it's best to have a `--num-samples` that is divisible by `--num-workers`.
Intermediate results are saved for each `vcodec` and `pix_fmt` combination in CSV tables.
These are then all concatenated into a single table ready for analysis.
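Schematically, the sweep described above can be sketched as follows (mirroring, not reproducing, the script's logic): start from a base encoding and vary a single parameter at a time instead of testing the full cross-product.
```python
# Sketch of the sweep: vary a single parameter (`g` or `crf`) at a time from a base encoding.
base_encoding = {"vcodec": "libx264", "pix_fmt": "yuv444p", "g": 2, "crf": None}

def encoding_configs(vcodecs, pix_fmts, g_values, crf_values):
    for vcodec in vcodecs:
        for pix_fmt in pix_fmts:
            for key, values in {"g": g_values, "crf": crf_values}.items():
                for value in values:
                    cfg = dict(base_encoding, vcodec=vcodec, pix_fmt=pix_fmt)
                    cfg[key] = value
                    yield cfg

# Each yielded config encodes the first episode once; every encoding is then decoded with
# every (backend, timestamps_mode) pair.
```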
## Caveats
We tried to measure the most impactful parameters for both encoding and decoding. However, for computational reasons we can't test out every combination.
Additional encoding parameters exist that are not included in this benchmark. In particular:
- `-preset`, which selects an encoding preset, i.e. a collection of options that trades encoding speed against compression ratio. When left unspecified, it defaults to `medium` for libx264 and libx265 and to `8` for libsvtav1.
- `-tune`, which optimizes the encoding for certain aspects (e.g. film quality, fast decoding, etc.).
See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.
Similarly on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
- `torchaudio`
- `ffmpegio`
- `decord`
- `nvc`
Note as well that, since we are mostly interested in decoding performance (and because encoding is done only once before uploading a dataset), we did not measure encoding times nor report any metrics regarding encoding.
However, besides the need to build ffmpeg from source, encoding did not pose any issue and did not take a significant amount of time during this benchmark.
## Install
Building ffmpeg from source is required to include libx265 and libaom/libsvtav1 (av1) video codecs ([compilation guide](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu)).
**Note:** While you still need to build torchvision with a conda-installed `ffmpeg<4.3` to use the `video_reader` decoder (as described in [#220](https://github.com/huggingface/lerobot/pull/220)), you also need another ffmpeg version, custom-built with all the video codecs, for encoding. For the script to use that version, prepend the command above with `PATH="$HOME/bin:$PATH"`, which is where ffmpeg should be built.
## Adding a video decoder
Right now, we're only benchmarking the two video decoders available with torchvision: `pyav` and `video_reader`.
You can easily add a new decoder to benchmark by adding it to this function in the script:
```diff
def decode_video_frames(
video_path: str,
timestamps: list[float],
tolerance_s: float,
backend: str,
) -> torch.Tensor:
if backend in ["pyav", "video_reader"]:
return decode_video_frames_torchvision(
video_path, timestamps, tolerance_s, backend
)
+ elif backend == "your_decoder":
+ return your_decoder_function(
+ video_path, timestamps, tolerance_s, backend
+ )
else:
raise NotImplementedError(backend)
```
## Example
For a quick run, you can try these parameters:
```bash
python benchmark/video/run_video_benchmark.py \
--output-dir outputs/video_benchmark \
--repo-ids \
lerobot/pusht_image \
aliberts/aloha_mobile_shrimp_image \
--vcodec libx264 libx265 \
--pix-fmt yuv444p yuv420p \
--g 2 20 None \
--crf 10 40 None \
--timestamps-modes 1_frame 2_frames \
--backends pyav video_reader \
--num-samples 5 \
--num-workers 5 \
--save-frames 0
```
## Results
### Reproduce
We ran the benchmark with the following parameters:
```bash
# h264 and h265 encodings
python benchmark/video/run_video_benchmark.py \
--output-dir outputs/video_benchmark \
--repo-ids \
lerobot/pusht_image \
aliberts/aloha_mobile_shrimp_image \
aliberts/paris_street \
aliberts/kitchen \
--vcodec libx264 libx265 \
--pix-fmt yuv444p yuv420p \
--g 1 2 3 4 5 6 10 15 20 40 None \
--crf 0 5 10 15 20 25 30 40 50 None \
--timestamps-modes 1_frame 2_frames 6_frames \
--backends pyav video_reader \
--num-samples 50 \
--num-workers 5 \
--save-frames 1
# av1 encoding (only compatible with yuv420p and pyav decoder)
python benchmark/video/run_video_benchmark.py \
--output-dir outputs/video_benchmark \
--repo-ids \
lerobot/pusht_image \
aliberts/aloha_mobile_shrimp_image \
aliberts/paris_street \
aliberts/kitchen \
--vcodec libsvtav1 \
--pix-fmt yuv420p \
--g 1 2 3 4 5 6 10 15 20 40 None \
--crf 0 5 10 15 20 25 30 40 50 None \
--timestamps-modes 1_frame 2_frames 6_frames \
--backends pyav \
--num-samples 50 \
--num-workers 5 \
--save-frames 1
```
The full results are available [here](https://docs.google.com/spreadsheets/d/1OYJB43Qu8fC26k_OyoMFgGBBKfQRCi4BIuYitQnq3sw/edit?usp=sharing)
### Parameters selected for LeRobotDataset
Considering these results, we chose what we think is the best set of encoding parameters:
- vcodec: `libsvtav1`
- pix-fmt: `yuv420p`
- g: `2`
- crf: `30`
Since we're using av1 encoding, we're choosing the `pyav` decoder as `video_reader` does not support it (and `pyav` doesn't require a custom build of `torchvision`).
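For reference, encoding an image folder with the selected settings can be done with the same helper the benchmark script uses (`encode_video_frames` from `lerobot.common.datasets.video_utils`); the paths and fps below are placeholders.
```python
# Encode an image folder with the settings selected for LeRobotDataset (placeholder paths/fps).
from pathlib import Path
from lerobot.common.datasets.video_utils import encode_video_frames

encode_video_frames(
    imgs_dir=Path("outputs/images/my_episode"),     # frame_000000.png, frame_000001.png, ...
    video_path=Path("outputs/videos/my_episode.mp4"),
    fps=30,
    vcodec="libsvtav1",
    pix_fmt="yuv420p",
    g=2,
    crf=30,
    overwrite=True,
)
```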
### Summary
These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_frames` and `backend=pyav`:
| video_images_size_ratio | vcodec | pix_fmt | | | |
|------------------------------------|------------|---------|-----------|-----------|-----------|
| | libx264 | | libx265 | | libsvtav1 |
| repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | **16.97%** | 17.58% | 18.57% | 18.86% | 22.06% |
| aliberts/aloha_mobile_shrimp_image | 2.14% | 2.11% | 1.38% | **1.37%** | 5.59% |
| aliberts/paris_street | 2.12% | 2.13% | **1.54%** | **1.54%** | 4.43% |
| aliberts/kitchen | 1.40% | 1.39% | **1.00%** | **1.00%** | 2.52% |
| video_images_load_time_ratio | vcodec | pix_fmt | | | |
|------------------------------------|---------|---------|----------|---------|-----------|
| | libx264 | | libx265 | | libsvtav1 |
| repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | 6.45 | 5.19 | **1.90** | 2.12 | 2.47 |
| aliberts/aloha_mobile_shrimp_image | 11.80 | 7.92 | 0.71 | 0.85 | **0.48** |
| aliberts/paris_street | 2.21 | 2.05 | 0.36 | 0.49 | **0.30** |
| aliberts/kitchen | 1.46 | 1.46 | 0.28 | 0.51 | **0.26** |
| | | vcodec | pix_fmt | | | |
|------------------------------------|----------|----------|--------------|----------|-----------|--------------|
| | | libx264 | | libx265 | | libsvtav1 |
| repo_id | metric | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | avg_mse | 2.90E-04 | **2.03E-04** | 3.13E-04 | 2.29E-04 | 2.19E-04 |
| | avg_psnr | 35.44 | 37.07 | 35.49 | **37.30** | 37.20 |
| | avg_ssim | 98.28% | **98.85%** | 98.31% | 98.84% | 98.72% |
| aliberts/aloha_mobile_shrimp_image | avg_mse | 2.76E-04 | 2.59E-04 | 3.17E-04 | 3.06E-04 | **1.30E-04** |
| | avg_psnr | 35.91 | 36.21 | 35.88 | 36.09 | **40.17** |
| | avg_ssim | 95.19% | 95.18% | 95.00% | 95.05% | **97.73%** |
| aliberts/paris_street | avg_mse | 6.89E-04 | 6.70E-04 | 4.03E-03 | 4.02E-03 | **3.09E-04** |
| | avg_psnr | 33.48 | 33.68 | 32.05 | 32.15 | **35.40** |
| | avg_ssim | 93.76% | 93.75% | 89.46% | 89.46% | **95.46%** |
| aliberts/kitchen | avg_mse | 2.50E-04 | 2.24E-04 | 4.28E-04 | 4.18E-04 | **1.53E-04** |
| | avg_psnr | 36.73 | 37.33 | 36.56 | 36.75 | **39.12** |
| | avg_ssim | 95.47% | 95.58% | 95.52% | 95.53% | **96.82%** |


@@ -0,0 +1,90 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Capture video feed from a camera as raw images."""
import argparse
import datetime as dt
from pathlib import Path
import cv2
def display_and_save_video_stream(output_dir: Path, fps: int, width: int, height: int):
now = dt.datetime.now()
capture_dir = output_dir / f"{now:%Y-%m-%d}" / f"{now:%H-%M-%S}"
if not capture_dir.exists():
capture_dir.mkdir(parents=True, exist_ok=True)
# Opens the default webcam
cap = cv2.VideoCapture(0)
if not cap.isOpened():
print("Error: Could not open video stream.")
return
cap.set(cv2.CAP_PROP_FPS, fps)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
frame_index = 0
while True:
ret, frame = cap.read()
if not ret:
print("Error: Could not read frame.")
break
cv2.imshow("Video Stream", frame)
cv2.imwrite(str(capture_dir / f"frame_{frame_index:06d}.png"), frame)
frame_index += 1
# Break the loop on 'q' key press
if cv2.waitKey(1) & 0xFF == ord("q"):
break
# Release the capture and destroy all windows
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--output-dir",
type=Path,
default=Path("outputs/cam_capture/"),
help="Directory where the capture images are written. A subfolder named with the current date & time will be created inside it for each capture.",
)
parser.add_argument(
"--fps",
type=int,
default=30,
help="Frames Per Second of the capture.",
)
parser.add_argument(
"--width",
type=int,
default=1280,
help="Width of the captured images.",
)
parser.add_argument(
"--height",
type=int,
default=720,
help="Height of the captured images.",
)
args = parser.parse_args()
display_and_save_video_stream(**vars(args))


@@ -0,0 +1,490 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Assess the performance of video decoding in various configurations.
This script will benchmark different video encoding and decoding parameters.
See the provided README.md or run `python benchmark/video/run_video_benchmark.py --help` for usage info.
"""
import argparse
import datetime as dt
import random
import shutil
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
import einops
import numpy as np
import pandas as pd
import PIL
import torch
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
from tqdm import tqdm
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.datasets.video_utils import (
decode_video_frames_torchvision,
encode_video_frames,
)
from lerobot.common.utils.benchmark import TimeBenchmark
BASE_ENCODING = OrderedDict(
[
("vcodec", "libx264"),
("pix_fmt", "yuv444p"),
("g", 2),
("crf", None),
# TODO(aliberts): Add fastdecode
# ("fastdecode", 0),
]
)
# TODO(rcadene, aliberts): move to `utils.py` folder when we want to refactor
def parse_int_or_none(value) -> int | None:
if value.lower() == "none":
return None
try:
return int(value)
except ValueError as e:
raise argparse.ArgumentTypeError(f"Invalid int or None: {value}") from e
def check_datasets_formats(repo_ids: list) -> None:
for repo_id in repo_ids:
dataset = LeRobotDataset(repo_id)
if dataset.video:
raise ValueError(
f"Use only image dataset for running this benchmark. Video dataset provided: {repo_id}"
)
def get_directory_size(directory: Path) -> int:
total_size = 0
for item in directory.rglob("*"):
if item.is_file():
total_size += item.stat().st_size
return total_size
def load_original_frames(imgs_dir: Path, timestamps: list[float], fps: int) -> torch.Tensor:
frames = []
for ts in timestamps:
idx = int(ts * fps)
frame = PIL.Image.open(imgs_dir / f"frame_{idx:06d}.png")
frame = torch.from_numpy(np.array(frame))
frame = frame.type(torch.float32) / 255
frame = einops.rearrange(frame, "h w c -> c h w")
frames.append(frame)
return torch.stack(frames)
def save_decoded_frames(
imgs_dir: Path, save_dir: Path, frames: torch.Tensor, timestamps: list[float], fps: int
) -> None:
if save_dir.exists() and len(list(save_dir.glob("frame_*.png"))) == len(timestamps):
return
save_dir.mkdir(parents=True, exist_ok=True)
for i, ts in enumerate(timestamps):
idx = int(ts * fps)
frame_hwc = (frames[i].permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()
PIL.Image.fromarray(frame_hwc).save(save_dir / f"frame_{idx:06d}_decoded.png")
shutil.copyfile(imgs_dir / f"frame_{idx:06d}.png", save_dir / f"frame_{idx:06d}_original.png")
def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None:
ep_num_images = dataset.episode_data_index["to"][0].item()
if imgs_dir.exists() and len(list(imgs_dir.glob("frame_*.png"))) == ep_num_images:
return
imgs_dir.mkdir(parents=True, exist_ok=True)
hf_dataset = dataset.hf_dataset.with_format(None)
# We only save images from the first camera
img_keys = [key for key in hf_dataset.features if key.startswith("observation.image")]
imgs_dataset = hf_dataset.select_columns(img_keys[0])
for i, item in enumerate(
tqdm(imgs_dataset, desc=f"saving {dataset.repo_id} first episode images", leave=False)
):
img = item[img_keys[0]]
img.save(str(imgs_dir / f"frame_{i:06d}.png"), quality=100)
if i >= ep_num_images - 1:
break
def sample_timestamps(timestamps_mode: str, ep_num_images: int, fps: int) -> list[float]:
# Start at 5 to allow for 2_frames_4_space and 6_frames
idx = random.randint(5, ep_num_images - 1)
match timestamps_mode:
case "1_frame":
frame_indexes = [idx]
case "2_frames":
frame_indexes = [idx - 1, idx]
case "2_frames_4_space":
frame_indexes = [idx - 5, idx]
case "6_frames":
frame_indexes = [idx - i for i in range(6)][::-1]
case _:
raise ValueError(timestamps_mode)
return [idx / fps for idx in frame_indexes]
def decode_video_frames(
video_path: str,
timestamps: list[float],
tolerance_s: float,
backend: str,
) -> torch.Tensor:
if backend in ["pyav", "video_reader"]:
return decode_video_frames_torchvision(video_path, timestamps, tolerance_s, backend)
else:
raise NotImplementedError(backend)
def benchmark_decoding(
imgs_dir: Path,
video_path: Path,
timestamps_mode: str,
backend: str,
ep_num_images: int,
fps: int,
num_samples: int = 50,
num_workers: int = 4,
save_frames: bool = False,
) -> dict:
def process_sample(sample: int):
time_benchmark = TimeBenchmark()
timestamps = sample_timestamps(timestamps_mode, ep_num_images, fps)
num_frames = len(timestamps)
result = {
"psnr_values": [],
"ssim_values": [],
"mse_values": [],
}
with time_benchmark:
frames = decode_video_frames(video_path, timestamps=timestamps, tolerance_s=5e-1, backend=backend)
result["load_time_video_ms"] = time_benchmark.result_ms / num_frames
with time_benchmark:
original_frames = load_original_frames(imgs_dir, timestamps, fps)
result["load_time_images_ms"] = time_benchmark.result_ms / num_frames
frames_np, original_frames_np = frames.numpy(), original_frames.numpy()
for i in range(num_frames):
result["mse_values"].append(mean_squared_error(original_frames_np[i], frames_np[i]))
result["psnr_values"].append(
peak_signal_noise_ratio(original_frames_np[i], frames_np[i], data_range=1.0)
)
result["ssim_values"].append(
structural_similarity(original_frames_np[i], frames_np[i], data_range=1.0, channel_axis=0)
)
if save_frames and sample == 0:
save_dir = video_path.with_suffix("") / f"{timestamps_mode}_{backend}"
save_decoded_frames(imgs_dir, save_dir, frames, timestamps, fps)
return result
load_times_video_ms = []
load_times_images_ms = []
mse_values = []
psnr_values = []
ssim_values = []
# A sample is a single set of decoded frames specified by timestamps_mode (e.g. a single frame, 2 frames, etc.).
# For each sample, we record metrics (loading time and quality metrics) which are then averaged over all samples.
# As these samples are independent, we run them in parallel threads to speed up the benchmark.
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = [executor.submit(process_sample, i) for i in range(num_samples)]
for future in tqdm(as_completed(futures), total=num_samples, desc="samples", leave=False):
result = future.result()
load_times_video_ms.append(result["load_time_video_ms"])
load_times_images_ms.append(result["load_time_images_ms"])
psnr_values.extend(result["psnr_values"])
ssim_values.extend(result["ssim_values"])
mse_values.extend(result["mse_values"])
avg_load_time_video_ms = float(np.array(load_times_video_ms).mean())
avg_load_time_images_ms = float(np.array(load_times_images_ms).mean())
video_images_load_time_ratio = avg_load_time_video_ms / avg_load_time_images_ms
return {
"avg_load_time_video_ms": avg_load_time_video_ms,
"avg_load_time_images_ms": avg_load_time_images_ms,
"video_images_load_time_ratio": video_images_load_time_ratio,
"avg_mse": float(np.mean(mse_values)),
"avg_psnr": float(np.mean(psnr_values)),
"avg_ssim": float(np.mean(ssim_values)),
}
def benchmark_encoding_decoding(
dataset: LeRobotDataset,
video_path: Path,
imgs_dir: Path,
encoding_cfg: dict,
decoding_cfg: dict,
num_samples: int,
num_workers: int,
save_frames: bool,
overwrite: bool = False,
seed: int = 1337,
) -> list[dict]:
fps = dataset.fps
if overwrite or not video_path.is_file():
tqdm.write(f"encoding {video_path}")
encode_video_frames(
imgs_dir=imgs_dir,
video_path=video_path,
fps=fps,
vcodec=encoding_cfg["vcodec"],
pix_fmt=encoding_cfg["pix_fmt"],
g=encoding_cfg.get("g"),
crf=encoding_cfg.get("crf"),
# fast_decode=encoding_cfg.get("fastdecode"),
overwrite=True,
)
ep_num_images = dataset.episode_data_index["to"][0].item()
width, height = tuple(dataset[0][dataset.camera_keys[0]].shape[-2:])
num_pixels = width * height
video_size_bytes = video_path.stat().st_size
images_size_bytes = get_directory_size(imgs_dir)
video_images_size_ratio = video_size_bytes / images_size_bytes
random.seed(seed)
benchmark_table = []
for timestamps_mode in tqdm(
decoding_cfg["timestamps_modes"], desc="decodings (timestamps_modes)", leave=False
):
for backend in tqdm(decoding_cfg["backends"], desc="decodings (backends)", leave=False):
benchmark_row = benchmark_decoding(
imgs_dir,
video_path,
timestamps_mode,
backend,
ep_num_images,
fps,
num_samples,
num_workers,
save_frames,
)
benchmark_row.update(
**{
"repo_id": dataset.repo_id,
"resolution": f"{width} x {height}",
"num_pixels": num_pixels,
"video_size_bytes": video_size_bytes,
"images_size_bytes": images_size_bytes,
"video_images_size_ratio": video_images_size_ratio,
"timestamps_mode": timestamps_mode,
"backend": backend,
},
**encoding_cfg,
)
benchmark_table.append(benchmark_row)
return benchmark_table
def main(
output_dir: Path,
repo_ids: list[str],
vcodec: list[str],
pix_fmt: list[str],
g: list[int],
crf: list[int],
# fastdecode: list[int],
timestamps_modes: list[str],
backends: list[str],
num_samples: int,
num_workers: int,
save_frames: bool,
):
check_datasets_formats(repo_ids)
encoding_benchmarks = {
"g": g,
"crf": crf,
# "fastdecode": fastdecode,
}
decoding_benchmarks = {
"timestamps_modes": timestamps_modes,
"backends": backends,
}
headers = ["repo_id", "resolution", "num_pixels"]
headers += list(BASE_ENCODING.keys())
headers += [
"timestamps_mode",
"backend",
"video_size_bytes",
"images_size_bytes",
"video_images_size_ratio",
"avg_load_time_video_ms",
"avg_load_time_images_ms",
"video_images_load_time_ratio",
"avg_mse",
"avg_psnr",
"avg_ssim",
]
file_paths = []
for video_codec in tqdm(vcodec, desc="encodings (vcodec)"):
for pixel_format in tqdm(pix_fmt, desc="encodings (pix_fmt)", leave=False):
benchmark_table = []
for repo_id in tqdm(repo_ids, desc="encodings (datasets)", leave=False):
dataset = LeRobotDataset(repo_id)
imgs_dir = output_dir / "images" / dataset.repo_id.replace("/", "_")
# We only use the first episode
save_first_episode(imgs_dir, dataset)
for key, values in tqdm(encoding_benchmarks.items(), desc="encodings (g, crf)", leave=False):
for value in tqdm(values, desc=f"encodings ({key})", leave=False):
encoding_cfg = BASE_ENCODING.copy()
encoding_cfg["vcodec"] = video_codec
encoding_cfg["pix_fmt"] = pixel_format
encoding_cfg[key] = value
args_path = Path("_".join(str(value) for value in encoding_cfg.values()))
video_path = output_dir / "videos" / args_path / f"{repo_id.replace('/', '_')}.mp4"
benchmark_table += benchmark_encoding_decoding(
dataset,
video_path,
imgs_dir,
encoding_cfg,
decoding_benchmarks,
num_samples,
num_workers,
save_frames,
)
# Save intermediate results
benchmark_df = pd.DataFrame(benchmark_table, columns=headers)
now = dt.datetime.now()
csv_path = (
output_dir
/ f"{now:%Y-%m-%d}_{now:%H-%M-%S}_{video_codec}_{pixel_format}_{num_samples}-samples.csv"
)
benchmark_df.to_csv(csv_path, header=True, index=False)
file_paths.append(csv_path)
del benchmark_df
# Concatenate all results
df_list = [pd.read_csv(csv_path) for csv_path in file_paths]
concatenated_df = pd.concat(df_list, ignore_index=True)
concatenated_path = output_dir / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_all_{num_samples}-samples.csv"
concatenated_df.to_csv(concatenated_path, header=True, index=False)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--output-dir",
type=Path,
default=Path("outputs/video_benchmark"),
help="Directory where the video benchmark outputs are written.",
)
parser.add_argument(
"--repo-ids",
type=str,
nargs="*",
default=[
"lerobot/pusht_image",
"aliberts/aloha_mobile_shrimp_image",
"aliberts/paris_street",
"aliberts/kitchen",
],
help="Datasets repo-ids to test against. First episodes only are used. Must be images.",
)
parser.add_argument(
"--vcodec",
type=str,
nargs="*",
default=["libx264", "libx265", "libsvtav1"],
help="Video codecs to be tested",
)
parser.add_argument(
"--pix-fmt",
type=str,
nargs="*",
default=["yuv444p", "yuv420p"],
help="Pixel formats (chroma subsampling) to be tested",
)
parser.add_argument(
"--g",
type=parse_int_or_none,
nargs="*",
default=[1, 2, 3, 4, 5, 6, 10, 15, 20, 40, 100, None],
help="Group of pictures sizes to be tested.",
)
parser.add_argument(
"--crf",
type=parse_int_or_none,
nargs="*",
default=[0, 5, 10, 15, 20, 25, 30, 40, 50, None],
help="Constant rate factors to be tested.",
)
# parser.add_argument(
# "--fastdecode",
# type=int,
# nargs="*",
# default=[0, 1],
# help="Use the fastdecode tuning option. 0 disables it. "
# "For libx264 and libx265, only 1 is possible. "
# "For libsvtav1, 1, 2 or 3 are possible values with a higher number meaning a faster decoding optimization",
# )
parser.add_argument(
"--timestamps-modes",
type=str,
nargs="*",
default=[
"1_frame",
"2_frames",
"2_frames_4_space",
"6_frames",
],
help="Timestamps scenarios to be tested.",
)
parser.add_argument(
"--backends",
type=str,
nargs="*",
default=["pyav", "video_reader"],
help="Torchvision decoding backend to be tested.",
)
parser.add_argument(
"--num-samples",
type=int,
default=50,
help="Number of samples for each encoding x decoding config.",
)
parser.add_argument(
"--num-workers",
type=int,
default=10,
help="Number of processes for parallelized sample processing.",
)
parser.add_argument(
"--save-frames",
type=int,
default=0,
help="Whether to save decoded frames or not. Enter a non-zero number for true.",
)
args = parser.parse_args()
main(**vars(args))


@@ -8,7 +8,8 @@ ARG DEBIAN_FRONTEND=noninteractive
# Install apt dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential cmake \
libglib2.0-0 libgl1-mesa-glx libegl1-mesa \
libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
speech-dispatcher \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Create virtual environment
@@ -21,7 +22,7 @@ RUN echo "source /opt/venv/bin/activate" >> /root/.bashrc
COPY . /lerobot
WORKDIR /lerobot
RUN pip install --upgrade --no-cache-dir pip
RUN pip install --no-cache-dir ".[test, aloha, xarm, pusht]" \
RUN pip install --no-cache-dir ".[test, aloha, xarm, pusht, dynamixel]" \
--extra-index-url https://download.pytorch.org/whl/cpu
# Set EGL as the rendering backend for MuJoCo


@@ -0,0 +1,68 @@
FROM nvidia/cuda:12.2.2-devel-ubuntu22.04
# Configure image
ARG PYTHON_VERSION=3.10
ARG DEBIAN_FRONTEND=noninteractive
# Install apt dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential cmake \
git git-lfs openssh-client \
nano vim less util-linux tree \
htop atop nvtop \
sed gawk grep curl wget zip unzip \
tcpdump sysstat screen tmux \
libglib2.0-0 libgl1-mesa-glx libegl1-mesa \
speech-dispatcher \
python${PYTHON_VERSION} python${PYTHON_VERSION}-venv \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Install ffmpeg build dependencies. See:
# https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
# TODO(aliberts): create image to build dependencies from source instead
RUN apt-get update && apt-get install -y --no-install-recommends \
autoconf automake yasm \
libass-dev \
libfreetype6-dev \
libgnutls28-dev \
libunistring-dev \
libmp3lame-dev \
libtool \
libvorbis-dev \
meson \
ninja-build \
pkg-config \
texinfo \
yasm \
zlib1g-dev \
nasm \
libx264-dev \
libx265-dev libnuma-dev \
libvpx-dev \
libfdk-aac-dev \
libopus-dev \
libsvtav1-dev libsvtav1enc-dev libsvtav1dec-dev \
libdav1d-dev
# Install gh cli tool
RUN (type -p wget >/dev/null || (apt update && apt-get install wget -y)) \
&& mkdir -p -m 755 /etc/apt/keyrings \
&& wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
&& chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& apt update \
&& apt install gh -y \
&& apt clean && rm -rf /var/lib/apt/lists/*
# Setup `python`
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install poetry
RUN curl -sSL https://install.python-poetry.org | python -
ENV PATH="/root/.local/bin:$PATH"
RUN echo 'if [ "$HOME" != "/root" ]; then ln -sf /root/.local/bin/poetry $HOME/.local/bin/poetry; fi' >> /root/.bashrc
RUN poetry config virtualenvs.create false
RUN poetry config virtualenvs.in-project true
# Set EGL as the rendering backend for MuJoCo
ENV MUJOCO_GL="egl"


@@ -4,18 +4,16 @@ FROM nvidia/cuda:12.4.1-base-ubuntu22.04
ARG PYTHON_VERSION=3.10
ARG DEBIAN_FRONTEND=noninteractive
# Install apt dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential cmake \
git git-lfs openssh-client \
nano vim ffmpeg \
htop atop nvtop \
sed gawk grep curl wget \
tcpdump sysstat screen \
libglib2.0-0 libgl1-mesa-glx libegl1-mesa \
python${PYTHON_VERSION} python${PYTHON_VERSION}-venv \
libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
speech-dispatcher \
python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Create virtual environment
RUN ln -s /usr/bin/python${PYTHON_VERSION} /usr/bin/python
RUN python -m venv /opt/venv
@@ -23,11 +21,10 @@ ENV PATH="/opt/venv/bin:$PATH"
RUN echo "source /opt/venv/bin/activate" >> /root/.bashrc
# Install LeRobot
RUN git lfs install
RUN git clone https://github.com/huggingface/lerobot.git
COPY . /lerobot
WORKDIR /lerobot
RUN pip install --upgrade --no-cache-dir pip
RUN pip install --no-cache-dir ".[test, aloha, xarm, pusht]"
RUN pip install --no-cache-dir ".[test, aloha, xarm, pusht, dynamixel]"
# Set EGL as the rendering backend for MuJoCo
ENV MUJOCO_GL="egl"

BIN
examples.zip Normal file

Binary file not shown.

examples/10_use_so100.md Normal file

@@ -0,0 +1,280 @@
This tutorial explains how to use [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot.
## Source the parts
Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials with links to source the parts, instructions to 3D print the parts, and advice if it's your first time printing or if you don't already own a 3D printer.
**Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
## Install LeRobot
On your computer:
1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```
2. Restart shell or `source ~/.bashrc`
3. Create and activate a fresh conda environment for lerobot
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```
4. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```
5. Install LeRobot with dependencies for the feetech motors:
```bash
cd ~/lerobot && pip install -e ".[feetech]"
```
For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```
## Configure the motors
Follow step 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I), which illustrates the use of our scripts below.
**Find USB ports associated to your arms**
To find the correct ports for each arm, run the utility script twice:
```bash
python lerobot/scripts/find_motors_bus_port.py
```
Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
[...Disconnect leader arm and press Enter...]
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
Reconnect the usb cable.
```
Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
[...Disconnect follower arm and press Enter...]
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the usb cable.
```
Troubleshooting: On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
**Configure your motors**
Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1
```
Note: These motors are currently limited. They can only take values between 0 and 4096, which corresponds to a full turn, and they can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) or +2048 steps (180 degrees clockwise) before reaching the end of the range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift of up to ±2048 steps (±180 degrees).
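To build intuition for these numbers, here is a small illustrative conversion between motor steps and degrees (the helper functions are made up for this example):
```python
# Illustrative only: converting between Feetech motor steps and degrees.
# A full turn is 4096 steps, so 1 step = 360 / 4096 ≈ 0.088 degrees and 2048 steps = 180 degrees.
STEPS_PER_TURN = 4096

def steps_to_degrees(steps: int) -> float:
    return steps * 360 / STEPS_PER_TURN

def degrees_to_steps(degrees: float) -> int:
    return round(degrees * STEPS_PER_TURN / 360)

print(steps_to_degrees(2048))  # 180.0
print(degrees_to_steps(-90))   # -1024
```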
Then unplug the first motor, plug in the second motor, and set its ID to 2.
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 2
```
Repeat the process for the remaining motors up to ID 6. Do the same for the 6 motors of the leader arm.
**Remove the gears of the 6 leader motors**
Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). You need to remove the gears from the motors of the leader arm. As a result, only the motors' position encoders are used, which reduces friction and makes the leader arm easier to operate.
**Add motor horn to the motors**
Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). For SO-100, you need to align the holes on the motor horn with the motor spline at approximately the 1:30, 4:30, 7:30 and 10:30 positions.
Try to avoid rotating the motor while doing so, to keep the position 2048 set during configuration. This is especially tricky for the leader motors, as they are more sensitive without the gears, but it's ok if they end up slightly rotated.
## Assemble the arms
Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
## Calibrate
Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one SO-100 robot to work on another.
**Auto-calibration of follower arm**
Follow step 5 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the auto-calibration of the follower arm. You first need to manually move your follower arm to this initial position:
<div style="text-align:center;">
<img src="../media/so100/follower_initial.webp?raw=true" alt="SO-100 follower arm initial position" title="SO-100 follower arm initial position" width="50%">
</div>
Then run this script to launch auto-calibration:
```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/so100.yaml \
--robot-overrides '~cameras' --arms main_follower
```
Note: You can't run auto-calibration for the leader arm, since we removed the gears. Thus, you will need to manually calibrate the leader arm. It's less precise than auto-calibration, but precision is not as critical for the leader arm.
**Manual calibration of leader arm**
Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
| <img src="../media/so100/leader_zero.webp?raw=true" alt="SO-100 leader arm zero position" title="SO-100 leader arm zero position" style="width:100%;"> | <img src="../media/so100/leader_rotated.webp?raw=true" alt="SO-100 leader arm rotated position" title="SO-100 leader arm rotated position" style="width:100%;"> | <img src="../media/so100/leader_rest.webp?raw=true" alt="SO-100 leader arm rest position" title="SO-100 leader arm rest position" style="width:100%;"> |
Run this script to launch manual calibration:
```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/so100.yaml \
--robot-overrides '~cameras' --arms main_leader
```
## Teleoperate
**Simple teleop**
Then you are ready to teleoperate your robot! Run this simple script (it won't connect and display the cameras):
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml \
--robot-overrides '~cameras' \
--display-cameras 0
```
**Teleop with displaying cameras**
Follow [this guide to setup your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml
```
## Record a dataset
Once you're familiar with teleoperation, you can record your first dataset with SO-100.
If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face repository name in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
Record 2 episodes and upload your dataset to the hub:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/so100_test \
--tags so100 tutorial \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 2 \
--push-to-hub 1
```
## Visualize a dataset
If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
```bash
echo ${HF_USER}/so100_test
```
If you didn't upload your dataset to the hub (i.e. used `--push-to-hub 0`), you can also visualize it locally with:
```bash
python lerobot/scripts/visualize_dataset_html.py \
--root data \
--repo-id ${HF_USER}/so100_test
```
## Replay an episode
Now try to replay the first episode on your robot:
```bash
DATA_DIR=data python lerobot/scripts/control_robot.py replay \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/so100_test \
--episode 0
```
## Train a policy
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
DATA_DIR=data python lerobot/scripts/train.py \
dataset_repo_id=${HF_USER}/so100_test \
policy=act_so100_real \
env=so100_real \
hydra.run.dir=outputs/train/act_so100_test \
hydra.job.name=act_so100_test \
device=cuda \
wandb.enable=true
```
Let's explain it:
1. We provided the dataset as argument with `dataset_repo_id=${HF_USER}/so100_test`.
2. We provided the policy with `policy=act_so100_real`. This loads configurations from [`lerobot/configs/policy/act_so100_real.yaml`](../lerobot/configs/policy/act_so100_real.yaml). Importantly, this policy uses 2 cameras as input `laptop`, `phone`.
3. We provided an environment as argument with `env=so100_real`. This loads configurations from [`lerobot/configs/env/so100_real.yaml`](../lerobot/configs/env/so100_real.yaml).
4. We provided `device=cuda` since we are training on a Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be in sync.
Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
## Evaluate your policy
You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/eval_act_so100_test \
--tags so100 tutorial eval \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 10 \
-p outputs/train/act_so100_test/checkpoints/last/pretrained_model
```
As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/act_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_so100_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_so100_test`).
## More
Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.
If you have any question or need help, please reach out on Discord in the channel [`#so100-arm`](https://discord.com/channels/1216765309076115607/1237741463832363039).

examples/11_use_moss.md Normal file

@@ -0,0 +1,280 @@
This tutorial explains how to use [Moss v1](https://github.com/jess-moss/moss-robot-arms) with LeRobot.
## Source the parts
Follow this [README](https://github.com/jess-moss/moss-robot-arms). It contains the bill of materials with links to source the parts, instructions to 3D print the parts, and advice if it's your first time printing or if you don't already own a 3D printer.
**Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
## Install LeRobot
On your computer:
1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```
2. Restart shell or `source ~/.bashrc`
3. Create and activate a fresh conda environment for lerobot
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```
4. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```
5. Install LeRobot with dependencies for the feetech motors:
```bash
cd ~/lerobot && pip install -e ".[feetech]"
```
For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```
## Configure the motors
Follow step 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic), which illustrates the use of our scripts below.
**Find USB ports associated to your arms**
To find the correct ports for each arm, run the utility script twice:
```bash
python lerobot/scripts/find_motors_bus_port.py
```
Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
[...Disconnect leader arm and press Enter...]
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
Reconnect the usb cable.
```
Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
[...Disconnect follower arm and press Enter...]
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the usb cable.
```
Troubleshooting: On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
**Configure your motors**
Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1
```
Note: These motors are currently limited. They can only take values between 0 and 4096, which corresponds to a full turn, and they can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) or +2048 steps (180 degrees clockwise) before reaching the end of the range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift of up to ±2048 steps (±180 degrees).
Then unplug the first motor, plug in the second motor, and set its ID to 2.
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 2
```
Repeat the process for the remaining motors up to ID 6. Do the same for the 6 motors of the leader arm.
**Remove the gears of the 6 leader motors**
Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). You need to remove the gears from the motors of the leader arm. As a result, only the motors' position encoders are used, which reduces friction and makes the leader arm easier to operate.
**Add motor horn to the motors**
Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). For Moss v1, you need to align the holes on the motor horn with the motor spline at approximately the 3, 6, 9 and 12 o'clock positions.
Try to avoid rotating the motor while doing so, to keep the position 2048 set during configuration. This is especially tricky for the leader motors, as they are more sensitive without the gears, but it's ok if they end up slightly rotated.
## Assemble the arms
Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
## Calibrate
Next, you'll need to calibrate your Moss v1 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one Moss v1 robot to work on another.
**Auto-calibration of follower arm**
Follow step 5 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the auto-calibration of the follower arm. You first need to manually move your follower arm to this initial position:
<div style="text-align:center;">
<img src="../media/moss/follower_initial.webp?raw=true" alt="Moss v1 follower arm initial position" title="Moss v1 follower arm initial position" width="50%">
</div>
Then run this script to launch auto-calibration:
```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/moss.yaml \
--robot-overrides '~cameras' --arms main_follower
```
Note: You can't run auto-calibration for the leader arm, since we removed the gears. Thus, you will need to manually calibrate the leader arm. It's less precise than auto-calibration, but precision is not as critical for the leader arm.
**Manual calibration of leader arm**
Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
| <img src="../media/moss/leader_zero.webp?raw=true" alt="Moss v1 leader arm zero position" title="Moss v1 leader arm zero position" style="width:100%;"> | <img src="../media/moss/leader_rotated.webp?raw=true" alt="Moss v1 leader arm rotated position" title="Moss v1 leader arm rotated position" style="width:100%;"> | <img src="../media/moss/leader_rest.webp?raw=true" alt="Moss v1 leader arm rest position" title="Moss v1 leader arm rest position" style="width:100%;"> |
Run this script to launch manual calibration:
```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/moss.yaml \
--robot-overrides '~cameras' --arms main_leader
```
## Teleoperate
**Simple teleop**
Then you are ready to teleoperate your robot! Run this simple script (it won't connect and display the cameras):
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/moss.yaml \
--robot-overrides '~cameras' \
--display-cameras 0
```
**Teleop with displaying cameras**
Follow [this guide to setup your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/moss.yaml
```
## Record a dataset
Once you're familiar with teleoperation, you can record your first dataset with Moss v1.
If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face repository name in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
Record 2 episodes and upload your dataset to the hub:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/moss.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/moss_test \
--tags moss tutorial \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 2 \
--push-to-hub 1
```
## Visualize a dataset
If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
```bash
echo ${HF_USER}/moss_test
```
If you didn't upload it (i.e. you used `--push-to-hub 0`), you can also visualize the dataset locally with:
```bash
python lerobot/scripts/visualize_dataset_html.py \
--root data \
--repo-id ${HF_USER}/moss_test
```
## Replay an episode
Now try to replay the first episode on your robot:
```bash
DATA_DIR=data python lerobot/scripts/control_robot.py replay \
--robot-path lerobot/configs/robot/moss.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/moss_test \
--episode 0
```
## Train a policy
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
DATA_DIR=data python lerobot/scripts/train.py \
dataset_repo_id=${HF_USER}/moss_test \
policy=act_moss_real \
env=moss_real \
hydra.run.dir=outputs/train/act_moss_test \
hydra.job.name=act_moss_test \
device=cuda \
wandb.enable=true
```
Let's explain it:
1. We provided the dataset as argument with `dataset_repo_id=${HF_USER}/moss_test`.
2. We provided the policy with `policy=act_moss_real`. This loads configurations from [`lerobot/configs/policy/act_moss_real.yaml`](../lerobot/configs/policy/act_moss_real.yaml). Importantly, this policy uses 2 cameras as input: `laptop` and `phone`.
3. We provided an environment as argument with `env=moss_real`. This loads configurations from [`lerobot/configs/env/moss_real.yaml`](../lerobot/configs/env/moss_real.yaml).
4. We provided `device=cuda` since we are training on an Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be in sync.
Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
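If you want to point the `-p` argument below at the hub instead of a local folder, you can optionally push your checkpoint to a model repository. This is a minimal sketch, assuming a recent `huggingface_hub` version that ships the `huggingface-cli upload` command; the repository name is just an example matching the run name:
```bash
# Sketch: upload the last checkpoint folder to a hub model repo.
huggingface-cli upload ${HF_USER}/act_moss_test \
  outputs/train/act_moss_test/checkpoints/last/pretrained_model
```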
## Evaluate your policy
You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/moss.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/eval_act_moss_test \
--tags moss tutorial eval \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 10 \
-p outputs/train/act_moss_test/checkpoints/last/pretrained_model
```
As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/act_moss_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_moss_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_moss_test`).
## More
Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.
If you have any questions or need help, please reach out on Discord in the channel [`#moss-arm`](https://discord.com/channels/1216765309076115607/1275374638985252925).


@@ -18,8 +18,6 @@ from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
output_directory = Path("outputs/eval/example_pusht_diffusion")
output_directory.mkdir(parents=True, exist_ok=True)
device = torch.device("cuda")
# Download the diffusion policy for pusht environment
pretrained_policy_path = Path(snapshot_download("lerobot/diffusion_pusht"))
# OR uncomment the following to evaluate a policy from the local outputs/train folder.
@@ -27,6 +25,17 @@ pretrained_policy_path = Path(snapshot_download("lerobot/diffusion_pusht"))
policy = DiffusionPolicy.from_pretrained(pretrained_policy_path)
policy.eval()
# Check if GPU is available
if torch.cuda.is_available():
device = torch.device("cuda")
print("GPU is available. Device set to:", device)
else:
device = torch.device("cpu")
print(f"GPU is not available. Device set to: {device}. Inference will be slower than on GPU.")
# Decrease the number of reverse-diffusion steps (trades off a bit of quality for 10x speed)
policy.diffusion.num_inference_steps = 10
policy.to(device)
# Initialize evaluation environment to render two observation types:


@@ -0,0 +1,213 @@
This tutorial will explain the training script, how to use it, and particularly the use of Hydra to configure everything needed for the training run.
## The training script
LeRobot offers a training script at [`lerobot/scripts/train.py`](../../lerobot/scripts/train.py). At a high level it does the following:
- Loads a Hydra configuration file for the following steps (more on Hydra in a moment).
- Makes a simulation environment.
- Makes a dataset corresponding to that simulation environment.
- Makes a policy.
- Runs a standard training loop with forward pass, backward pass, optimization step, and occasional logging, evaluation (of the policy on the environment), and checkpointing.
## Basics of how we use Hydra
Explaining the ins and outs of [Hydra](https://hydra.cc/docs/intro/) is beyond the scope of this document, but here we'll share the main points you need to know.
First, `lerobot/configs` has a directory structure like this:
```
.
├── default.yaml
├── env
│   ├── aloha.yaml
│   ├── pusht.yaml
│   └── xarm.yaml
└── policy
    ├── act.yaml
    ├── diffusion.yaml
    └── tdmpc.yaml
```
**_For brevity, in the rest of this document we'll drop the leading `lerobot/configs` path. So `default.yaml` really refers to `lerobot/configs/default.yaml`._**
When you run the training script with
```python
python lerobot/scripts/train.py
```
Hydra is set up to read `default.yaml` (via the `@hydra.main` decorator). If you take a look at `@hydra.main`'s arguments you will see `config_path="../configs", config_name="default"`. At the top of `default.yaml` is a `defaults` section which looks like this:
```yaml
defaults:
- _self_
- env: pusht
- policy: diffusion
```
This logic tells Hydra to incorporate configuration parameters from `env/pusht.yaml` and `policy/diffusion.yaml`. _Note: Be aware of the order, as any configuration parameters with the same name will be overridden. Thus, `default.yaml` is overridden by `env/pusht.yaml`, which is overridden by `policy/diffusion.yaml`_.
Then, `default.yaml` also contains common configuration parameters such as `device: cuda` or `use_amp: false` (for enabling fp16 training). Some other parameters are set to `???` which indicates that they are expected to be set in additional yaml files. For instance, `training.offline_steps: ???` in `default.yaml` is set to `200000` in `diffusion.yaml`.
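For illustration, here is a minimal sketch of that placeholder pattern (the exact keys and values in your local `default.yaml` and `policy/diffusion.yaml` may differ):
```yaml
# default.yaml (sketch)
training:
  offline_steps: ???  # placeholder: expected to be set by another yaml file
---
# policy/diffusion.yaml (sketch)
training:
  offline_steps: 200000  # fills in the placeholder above
```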
Thanks to this `defaults` section in `default.yaml`, if you want to train Diffusion Policy with PushT, you really only need to run:
```bash
python lerobot/scripts/train.py
```
However, you can be more explicit and launch the exact same Diffusion Policy training on PushT with:
```bash
python lerobot/scripts/train.py policy=diffusion env=pusht
```
This way of overriding defaults via the CLI is especially useful when you want to change the policy and/or environment. For instance, you can train ACT on the default Aloha environment with:
```bash
python lerobot/scripts/train.py policy=act env=aloha
```
There are two things to note here:
- Config overrides are passed as `param_name=param_value`.
- Here we have overridden the defaults section. `policy=act` tells Hydra to use `policy/act.yaml`, and `env=aloha` tells Hydra to use `env/aloha.yaml`.
_As an aside: we've set up all of our configurations so that they reproduce state-of-the-art results from papers in the literature._
## Overriding configuration parameters in the CLI
Now let's say that we want to train on a different task in the Aloha environment. If you look in `env/aloha.yaml` you will see something like:
```yaml
# lerobot/configs/env/aloha.yaml
env:
task: AlohaInsertion-v0
```
And if you look in `policy/act.yaml` you will see something like:
```yaml
# lerobot/configs/policy/act.yaml
dataset_repo_id: lerobot/aloha_sim_insertion_human
```
But our Aloha environment actually supports a cube transfer task as well. To train for this task, you could manually modify the two yaml configuration files respectively.
First, we'd need to switch to using the cube transfer task for the ALOHA environment.
```diff
# lerobot/configs/env/aloha.yaml
env:
- task: AlohaInsertion-v0
+ task: AlohaTransferCube-v0
```
Then, we'd also need to switch to using the cube transfer dataset.
```diff
# lerobot/configs/policy/act.yaml
-dataset_repo_id: lerobot/aloha_sim_insertion_human
+dataset_repo_id: lerobot/aloha_sim_transfer_cube_human
```
Then, you'd be able to run:
```bash
python lerobot/scripts/train.py policy=act env=aloha
```
and you'd be training and evaluating on the cube transfer task.
An alternative approach to editing the yaml configuration files, would be to override the defaults via the command line:
```bash
python lerobot/scripts/train.py \
policy=act \
dataset_repo_id=lerobot/aloha_sim_transfer_cube_human \
env=aloha \
env.task=AlohaTransferCube-v0
```
There's something new here. Notice the `.` delimiter used to traverse the configuration hierarchy. _But be aware that the `defaults` section is an exception. As you saw above, we didn't need to write `defaults.policy=act` in the CLI. `policy=act` was enough._
Putting all that knowledge together, here's the command that was used to train https://huggingface.co/lerobot/act_aloha_sim_transfer_cube_human.
```bash
python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/act_aloha_sim_transfer_cube_human \
device=cuda \
env=aloha \
env.task=AlohaTransferCube-v0 \
dataset_repo_id=lerobot/aloha_sim_transfer_cube_human \
policy=act \
training.eval_freq=10000 \
training.log_freq=250 \
training.offline_steps=100000 \
training.save_model=true \
training.save_freq=25000 \
eval.n_episodes=50 \
eval.batch_size=50 \
wandb.enable=false
```
There's one new thing here: `hydra.run.dir=outputs/train/act_aloha_sim_transfer_cube_human`, which specifies where to save the training output.
## Using a configuration file not in `lerobot/configs`
Above we discussed how our training script is set up such that Hydra looks for `default.yaml` in `lerobot/configs`. But if you have a configuration file elsewhere in your filesystem, you may use:
```bash
python lerobot/scripts/train.py --config-dir PARENT/PATH --config-name FILE_NAME_WITHOUT_EXTENSION
```
Note: here we use regular syntax for providing CLI arguments to a Python script, not Hydra's `param_name=param_value` syntax.
As a concrete example, this becomes particularly handy when you have a folder with training outputs, and would like to re-run the training. For example, say you previously ran the training script with one of the earlier commands and have `outputs/train/my_experiment/checkpoints/pretrained_model/config.yaml`. This `config.yaml` file will have the full set of configuration parameters within it. To run the training with the same configuration again, do:
```bash
python lerobot/scripts/train.py --config-dir outputs/train/my_experiment/checkpoints/last/pretrained_model --config-name config
```
Note that you may still use the regular syntax for config parameter overrides (eg: by adding `training.offline_steps=200000`).
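For example, putting the two together (reusing the hypothetical `my_experiment` run directory from above):
```bash
python lerobot/scripts/train.py \
  --config-dir outputs/train/my_experiment/checkpoints/last/pretrained_model \
  --config-name config \
  training.offline_steps=200000
```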
## Typical logs and metrics
When you start the training process, you will first see your full configuration printed in the terminal. You can check it to make sure that you configured it correctly and that your config is not overridden by other files. The final configuration will also be saved with the checkpoint.
After that, you will see a training log like this one:
```
INFO 2024-08-14 13:35:12 ts/train.py:192 step:0 smpl:64 ep:1 epch:0.00 loss:1.112 grdn:15.387 lr:2.0e-07 updt_s:1.738 data_s:4.774
```
or an evaluation log like:
```
INFO 2024-08-14 13:38:45 ts/train.py:226 step:100 smpl:6K ep:52 epch:0.25 ∑rwrd:20.693 success:0.0% eval_s:120.266
```
These logs will also be saved in wandb if `wandb.enable` is set to `true`. Here are the meanings of some abbreviations:
- `smpl`: number of samples seen during training.
- `ep`: number of episodes seen during training. An episode contains multiple samples in a complete manipulation task.
- `epch`: number of times all unique samples have been seen (epochs).
- `grdn`: gradient norm.
- `∑rwrd`: the sum of rewards within each evaluation episode, averaged over episodes.
- `success`: average success rate of eval episodes. Reward and success are usually different, except in the sparse-reward setting where reward=1 only when the task is completed successfully.
- `eval_s`: time to evaluate the policy in the environment, in seconds.
- `updt_s`: time to update the network parameters, in seconds.
- `data_s`: time to load a batch of data, in seconds.
Some metrics are useful for initial performance profiling. For example, if you find via the `nvidia-smi` command that GPU utilization is low and `data_s` is sometimes too high, you may need to increase the batch size or the number of dataloading workers to accelerate dataloading. We also recommend the [PyTorch profiler](https://github.com/huggingface/lerobot?tab=readme-ov-file#improve-your-code-with-profiling) for detailed performance probing.
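As a sketch of such an adjustment (assuming your `default.yaml` exposes `training.batch_size` and `training.num_workers`; check the actual key names in your config):
```bash
# Sketch: raise the batch size and the number of dataloader workers if data_s dominates the step time.
python lerobot/scripts/train.py \
  policy=diffusion \
  env=pusht \
  training.batch_size=64 \
  training.num_workers=8
```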
---
So far we've seen how to train Diffusion Policy for PushT and ACT for ALOHA. Now, what if we want to train ACT for PushT? Well, there are aspects of the ACT configuration that are specific to the ALOHA environments, and these happen to be incompatible with PushT. Therefore, trying to run the following will almost certainly raise an exception of sorts (eg: feature dimension mismatch):
```bash
python lerobot/scripts/train.py policy=act env=pusht dataset_repo_id=lerobot/pusht
```
Please, head on over to our [advanced tutorial on adapting policy configuration to various environments](./advanced/train_act_pusht/train_act_pusht.md) to learn more.
Or in the meantime, happy coding! 🤗


@@ -0,0 +1,37 @@
This tutorial explains how to resume a training run that you've started with the training script. If you don't know how our training script and configuration system works, please read [4_train_policy_with_script.md](./4_train_policy_with_script.md) first.
## Basic training resumption
Let's consider the example of training ACT for one of the ALOHA tasks. Here's a command that can achieve that:
```bash
python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/run_resumption \
policy=act \
dataset_repo_id=lerobot/aloha_sim_transfer_cube_human \
env=aloha \
env.task=AlohaTransferCube-v0 \
training.log_freq=25 \
training.save_checkpoint=true \
training.save_freq=100
```
Here we're using the default dataset and environment for ACT, and we've taken care to set up the log frequency and checkpointing frequency to low numbers so we can test resumption. You should be able to see some logging and have a first checkpoint within 1 minute. Please interrupt the training after the first checkpoint.
To resume, all that we have to do is run the training script, providing the run directory, and the resume option:
```bash
python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/run_resumption \
resume=true
```
You should see from the logging that your training picks up from where it left off.
Note that with `resume=true`, the configuration file from the last checkpoint in the training output directory is loaded. So it doesn't matter that we haven't provided all the other configuration parameters from our previous command (although there may be warnings to notify you that your command has a different configuration than the checkpoint).
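If you also want to extend the run beyond its original length, you could combine `resume=true` with an override, for example (a sketch; `training.offline_steps` here is assumed to be larger than the value used in the original run):
```bash
python lerobot/scripts/train.py \
  hydra.run.dir=outputs/train/run_resumption \
  resume=true \
  training.offline_steps=200000
```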
---
Now you should know how to resume your training run in case it gets interrupted or you want to extend a finished training run.
Happy coding! 🤗


@@ -0,0 +1,52 @@
"""
This script demonstrates how to use torchvision's image transformation with LeRobotDataset for data
augmentation purposes. The transformations are passed to the dataset as an argument upon creation, and
transforms are applied to the observation images before they are returned in the dataset's __getitem__.
"""
from pathlib import Path
from torchvision.transforms import ToPILImage, v2
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
dataset_repo_id = "lerobot/aloha_static_tape"
# Create a LeRobotDataset with no transformations
dataset = LeRobotDataset(dataset_repo_id)
# This is equivalent to `dataset = LeRobotDataset(dataset_repo_id, image_transforms=None)`
# Get the index of the first observation in the first episode
first_idx = dataset.episode_data_index["from"][0].item()
# Get the frame corresponding to the first camera
frame = dataset[first_idx][dataset.camera_keys[0]]
# Define the transformations
transforms = v2.Compose(
[
v2.ColorJitter(brightness=(0.5, 1.5)),
v2.ColorJitter(contrast=(0.5, 1.5)),
v2.RandomAdjustSharpness(sharpness_factor=2, p=1),
]
)
# Create another LeRobotDataset with the defined transformations
transformed_dataset = LeRobotDataset(dataset_repo_id, image_transforms=transforms)
# Get a frame from the transformed dataset
transformed_frame = transformed_dataset[first_idx][transformed_dataset.camera_keys[0]]
# Create a directory to store output images
output_dir = Path("outputs/image_transforms")
output_dir.mkdir(parents=True, exist_ok=True)
# Save the original frame
to_pil = ToPILImage()
to_pil(frame).save(output_dir / "original_frame.png", quality=100)
print(f"Original frame saved to {output_dir / 'original_frame.png'}.")
# Save the transformed frame
to_pil(transformed_frame).save(output_dir / "transformed_frame.png", quality=100)
print(f"Transformed frame saved to {output_dir / 'transformed_frame.png'}.")

File diff suppressed because it is too large.

examples/8_use_stretch.md Normal file

@@ -0,0 +1,158 @@
This tutorial explains how to use [Stretch 3](https://hello-robot.com/stretch-3-product) with LeRobot.
## Setup
Familiarize yourself with Stretch by following its [tutorials](https://docs.hello-robot.com/0.3/getting_started/hello_robot/) (recommended).
To use LeRobot on Stretch, 3 options are available:
- [tethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#tethered-setup)
- [untethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#untethered-setup)
- ssh directly into Stretch (you will first need to install and configure openssh-server on Stretch using one of the two setups above; see the sketch below)
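A minimal sketch of the openssh-server setup for the third option, assuming Stretch's standard Ubuntu install with apt:
```bash
# On Stretch (Ubuntu), from a tethered or untethered session:
sudo apt update && sudo apt install -y openssh-server
# Make sure the ssh service is enabled and running.
sudo systemctl enable --now ssh
```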
## Install LeRobot
On Stretch's CLI, follow these steps:
1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```
2. Comment out these lines in `~/.profile` (they can mess up the paths used by conda, and `~/.local/bin` should already be in your PATH):
```
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
PATH="$HOME/.local/bin:$PATH"
fi
```
3. Restart shell or `source ~/.bashrc`
4. Create and activate a fresh conda environment for lerobot
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```
5. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```
6. Install LeRobot with stretch dependencies:
```bash
cd ~/lerobot && pip install -e ".[stretch]"
```
> **Note:** If you get this message, you can ignore it: `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.`
For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```
7. Run a [system check](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#system-check) to make sure your robot is ready:
```bash
stretch_system_check.py
```
> **Note:** You may need to free the "robot process" after booting Stretch by running `stretch_free_robot_process.py`. For more info, see Stretch's [doc](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#turning-off-gamepad-teleoperation).
You should get something like this:
```bash
For use with S T R E T C H (R) from Hello Robot Inc.
---------------------------------------------------------------------
Model = Stretch 3
Tool = DexWrist 3 w/ Gripper
Serial Number = stretch-se3-3054
---- Checking Hardware ----
[Pass] Comms are ready
[Pass] Actuators are ready
[Warn] Sensors not ready (IMU AZ = -10.19 out of range -10.1 to -9.5)
[Pass] Battery voltage is 13.6 V
---- Checking Software ----
[Pass] Ubuntu 22.04 is ready
[Pass] All APT pkgs are setup correctly
[Pass] Firmware is up-to-date
[Pass] Python pkgs are up-to-date
[Pass] ROS2 Humble is ready
```
## Teleoperate, record a dataset and run a policy
**Calibrate (Optional)**
Before operating Stretch, you need to [home](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#homing) it first. Be mindful about giving Stretch some space as this procedure will move the robot's arm and gripper. Now run this command:
```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/stretch.yaml
```
This is equivalent to running `stretch_robot_home.py`
> **Note:** If you run any of the LeRobot scripts below and Stretch is not properly homed, it will automatically home/calibrate first.
**Teleoperate**
Before trying teleoperation, you need to activate the gamepad controller by pressing the middle button. For more info, see Stretch's [doc](https://docs.hello-robot.com/0.3/getting_started/hello_robot/#gamepad-teleoperation).
Now try out teleoperation (see above documentation to learn about the gamepad controls):
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/stretch.yaml
```
This is essentially the same as running `stretch_gamepad_teleop.py`
**Record a dataset**
Once you're familiar with the gamepad controls and after a bit of practice, you can try to record your first dataset with Stretch.
If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face repository name in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
Record one episode:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/stretch.yaml \
--fps 20 \
--root data \
--repo-id ${HF_USER}/stretch_test \
--tags stretch tutorial \
--warmup-time-s 3 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 1 \
--push-to-hub 0
```
> **Note:** If you're using ssh to connect to Stretch and run this script, you won't be able to visualize its camera feeds (though they will still be recorded). To see the camera streams, use a [tethered](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#tethered-setup) or [untethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#untethered-setup).
**Replay an episode**
Now try to replay this episode (make sure the robot's initial position is the same):
```bash
python lerobot/scripts/control_robot.py replay \
--robot-path lerobot/configs/robot/stretch.yaml \
--fps 20 \
--root data \
--repo-id ${HF_USER}/stretch_test \
--episode 0
```
Follow [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) to train a policy on your data and run inference on your robot. You will need to adapt the code for Stretch.
> TODO(rcadene, aliberts): Add already setup environment and policy yaml configuration files
If you need help, please reach out on Discord in the channel `#stretch3-mobile-arm`.

examples/9_use_aloha.md Normal file

@@ -0,0 +1,179 @@
This tutorial explains how to use [Aloha and Aloha 2 stationary](https://www.trossenrobotics.com/aloha-stationary) with LeRobot.
## Setup
Follow the [documentation from Trossen Robotics](https://docs.trossenrobotics.com/aloha_docs/getting_started/stationary/hardware_setup.html) for setting up the hardware and plugging the 4 arms and 4 cameras to your computer.
## Install LeRobot
On your computer:
1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```
2. Restart shell or `source ~/.bashrc`
3. Create and activate a fresh conda environment for lerobot
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```
4. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```
5. Install LeRobot with dependencies for the Aloha motors (dynamixel) and cameras (intelrealsense):
```bash
cd ~/lerobot && pip install -e ".[dynamixel, intelrealsense]"
```
For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```
## Teleoperate
**/!\ FOR SAFETY, READ THIS /!\**
Teleoperation consists in manually operating the leader arms to move the follower arms. Importantly:
1. Make sure your leader arms are in the same position as the follower arms, so that the follower arms don't move too fast to match the leader arms,
2. Our code assumes that your robot has been assembled following Trossen Robotics instructions. This allows us to skip calibration, as we use the pre-defined calibration files in `.cache/calibration/aloha_default`. If you replace a motor, make sure you follow the exact instructions from Trossen Robotics.
By running the following code, you can start your first **SAFE** teleoperation:
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/aloha.yaml \
--robot-overrides max_relative_target=5
```
By adding `--robot-overrides max_relative_target=5`, we override the default value for `max_relative_target` defined in `lerobot/configs/robot/aloha.yaml`. It is expected to be `5` to limit the magnitude of the movement for more safety, but the teleoperation won't be smooth. When you feel confident, you can disable this limit by adding `--robot-overrides max_relative_target=null` to the command line:
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/aloha.yaml \
--robot-overrides max_relative_target=null
```
## Record a dataset
Once you're familiar with teleoperation, you can record your first dataset with Aloha.
If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face repository name in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
Record 2 episodes and upload your dataset to the hub:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/aloha.yaml \
--robot-overrides max_relative_target=null \
--fps 30 \
--root data \
--repo-id ${HF_USER}/aloha_test \
--tags aloha tutorial \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 2 \
--push-to-hub 1
```
## Visualize a dataset
If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
```bash
echo ${HF_USER}/aloha_test
```
If you didn't upload it (i.e. you used `--push-to-hub 0`), you can also visualize the dataset locally with:
```bash
python lerobot/scripts/visualize_dataset_html.py \
--root data \
--repo-id ${HF_USER}/aloha_test
```
## Replay an episode
**/!\ FOR SAFETY, READ THIS /!\**
Replay consists in automatically replaying the sequence of actions (i.e. goal positions for your motors) recorded in a given dataset episode. Make sure the current initial position of your robot is similar to the one in your episode, so that your follower arms don't move too fast to go to the first goal positions. For safety, you might want to add `--robot-overrides max_relative_target=5` to your command line as explained above.
Now try to replay the first episode on your robot:
```bash
python lerobot/scripts/control_robot.py replay \
--robot-path lerobot/configs/robot/aloha.yaml \
--robot-overrides max_relative_target=null \
--fps 30 \
--root data \
--repo-id ${HF_USER}/aloha_test \
--episode 0
```
## Train a policy
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
DATA_DIR=data python lerobot/scripts/train.py \
dataset_repo_id=${HF_USER}/aloha_test \
policy=act_aloha_real \
env=aloha_real \
hydra.run.dir=outputs/train/act_aloha_test \
hydra.job.name=act_aloha_test \
device=cuda \
wandb.enable=true
```
Let's explain it:
1. We provided the dataset as argument with `dataset_repo_id=${HF_USER}/aloha_test`.
2. We provided the policy with `policy=act_aloha_real`. This loads configurations from [`lerobot/configs/policy/act_aloha_real.yaml`](../lerobot/configs/policy/act_aloha_real.yaml). Importantly, this policy uses 4 cameras as input `cam_right_wrist`, `cam_left_wrist`, `cam_high`, and `cam_low`.
3. We provided an environment as argument with `env=aloha_real`. This loads configurations from [`lerobot/configs/env/aloha_real.yaml`](../lerobot/configs/env/aloha_real.yaml). Note: this yaml defines 18 dimensions for the `state_dim` and `action_dim`, corresponding to 18 motors, not the 14 motors used in previous Aloha work. This is because we include the `shoulder_shadow` and `elbow_shadow` motors for simplicity.
4. We provided `device=cuda` since we are training on an Nvidia GPU.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be in sync.
Training should take several hours. You will find checkpoints in `outputs/train/act_aloha_test/checkpoints`.
## Evaluate your policy
You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/aloha.yaml \
--robot-overrides max_relative_target=null \
--fps 30 \
--root data \
--repo-id ${HF_USER}/eval_act_aloha_test \
--tags aloha tutorial eval \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 10 \
--num-image-writer-processes 1 \
-p outputs/train/act_aloha_test/checkpoints/last/pretrained_model
```
As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/act_aloha_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_aloha_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_aloha_test`).
3. We use `--num-image-writer-processes 1` instead of the default value (`0`). On our computer, using a dedicated process to write images from the 4 cameras to disk allows us to reach a constant 30 fps during inference. Feel free to explore different values for `--num-image-writer-processes`.
## More
Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth explanation.
If you have any questions or need help, please reach out on Discord in the channel `#aloha-arm`.


@@ -0,0 +1,87 @@
# @package _global_
# Change the seed to match what PushT eval uses
# (to avoid evaluating on seeds used for generating the training data).
seed: 100000
# Change the dataset repository to the PushT one.
dataset_repo_id: lerobot/pusht
override_dataset_stats:
observation.image:
# stats from imagenet, since we use a pretrained vision model
mean: [[[0.485]], [[0.456]], [[0.406]]] # (c,1,1)
std: [[[0.229]], [[0.224]], [[0.225]]] # (c,1,1)
training:
offline_steps: 80000
online_steps: 0
eval_freq: 10000
save_freq: 100000
log_freq: 250
save_model: true
batch_size: 8
lr: 1e-5
lr_backbone: 1e-5
weight_decay: 1e-4
grad_clip_norm: 10
online_steps_between_rollouts: 1
delta_timestamps:
action: "[i / ${fps} for i in range(${policy.chunk_size})]"
eval:
n_episodes: 50
batch_size: 50
# See `configuration_act.py` for more details.
policy:
name: act
# Input / output structure.
n_obs_steps: 1
chunk_size: 100 # chunk_size
n_action_steps: 100
input_shapes:
observation.image: [3, 96, 96]
observation.state: ["${env.state_dim}"]
output_shapes:
action: ["${env.action_dim}"]
# Normalization / Unnormalization
input_normalization_modes:
observation.image: mean_std
# Use min_max normalization just because it's more standard.
observation.state: min_max
output_normalization_modes:
# Use min_max normalization just because it's more standard.
action: min_max
# Architecture.
# Vision backbone.
vision_backbone: resnet18
pretrained_backbone_weights: ResNet18_Weights.IMAGENET1K_V1
replace_final_stride_with_dilation: false
# Transformer layers.
pre_norm: false
dim_model: 512
n_heads: 8
dim_feedforward: 3200
feedforward_activation: relu
n_encoder_layers: 4
# Note: Although the original ACT implementation has 7 for `n_decoder_layers`, there is a bug in the code
# that means only the first layer is used. Here we match the original implementation by setting this to 1.
# See this issue https://github.com/tonyzhaozh/act/issues/25#issue-2258740521.
n_decoder_layers: 1
# VAE.
use_vae: true
latent_dim: 32
n_vae_encoder_layers: 4
# Inference.
temporal_ensemble_coeff: null
# Training and loss computation.
dropout: 0.1
kl_weight: 10.0


@@ -0,0 +1,70 @@
In this tutorial we will learn how to adapt a policy configuration to be compatible with a new environment and dataset. As a concrete example, we will adapt the default configuration for ACT to be compatible with the PushT environment and dataset.
If you haven't already read our tutorial on the [training script and configuration tooling](../4_train_policy_with_script.md) please do so prior to tackling this tutorial.
Let's get started!
Suppose we want to train ACT for PushT. Well, there are aspects of the ACT configuration that are specific to the ALOHA environments, and these happen to be incompatible with PushT. Therefore, trying to run the following will almost certainly raise an exception of sorts (eg: feature dimension mismatch):
```bash
python lerobot/scripts/train.py policy=act env=pusht dataset_repo_id=lerobot/pusht
```
We need to adapt the parameters of the ACT policy configuration to the PushT environment. The most important ones are the image keys.
ALOHA's datasets and environments typically use a variable number of cameras. In `lerobot/configs/policy/act.yaml` you may notice two relevant sections. Here we show you the minimal diff needed to adjust to PushT:
```diff
override_dataset_stats:
- observation.images.top:
+ observation.image:
# stats from imagenet, since we use a pretrained vision model
mean: [[[0.485]], [[0.456]], [[0.406]]] # (c,1,1)
std: [[[0.229]], [[0.224]], [[0.225]]] # (c,1,1)
policy:
input_shapes:
- observation.images.top: [3, 480, 640]
+ observation.image: [3, 96, 96]
observation.state: ["${env.state_dim}"]
output_shapes:
action: ["${env.action_dim}"]
input_normalization_modes:
- observation.images.top: mean_std
+ observation.image: mean_std
observation.state: min_max
output_normalization_modes:
action: min_max
```
Here we've accounted for the following:
- PushT uses "observation.image" for its image key.
- PushT provides smaller images.
_Side note: technically we could override these via the CLI, but with many changes it gets a bit messy, and we also have a bit of a challenge in that we're using `.` in our observation keys which is treated by Hydra as a hierarchical separator_.
For your convenience, we provide [`act_pusht.yaml`](./act_pusht.yaml) in this directory. It contains the diff above, plus some other (optional) ones that are explained within. Please copy it into `lerobot/configs/policy` with:
```bash
cp examples/advanced/1_train_act_pusht/act_pusht.yaml lerobot/configs/policy/act_pusht.yaml
```
(remember from a [previous tutorial](../4_train_policy_with_script.md) that Hydra will look in the `lerobot/configs` directory). Now try running the following.
<!-- Note to contributor: are you changing this command? Note that it's tested in `Makefile`, so change it there too! -->
```bash
python lerobot/scripts/train.py policy=act_pusht env=pusht
```
Notice that this is much the same as the command that failed at the start of the tutorial, only:
- Now we are using `policy=act_pusht` to point to our new configuration file.
- We can drop `dataset_repo_id=lerobot/pusht` as the change is incorporated in our new configuration file.
Hurrah! You're now training ACT for the PushT environment.
---
The bottom line of this tutorial is that when training policies for different environments and datasets you will need to understand what parts of the policy configuration are specific to those and make changes accordingly.
Happy coding! 🤗

examples/hopejr/BOM Normal file

@@ -0,0 +1 @@
Bearings
- https://amzn.eu/d/8Xz7m4C
- https://amzn.eu/d/1xOo8re
- https://amzn.eu/d/9LXO205 (17x)
- https://amzn.eu/d/eKGj9gf (2x)

Bike Components
- https://amzn.eu/d/cNiQi0O (1x)

Accessories
- https://amzn.eu/d/ipjCq1R (1x)
- https://amzn.eu/d/0ZMzC3G (1x)

Screws
- https://amzn.eu/d/dzNhSkJ
- https://amzn.eu/d/41AhVIU
- https://amzn.eu/d/8G91txy
- https://amzn.eu/d/9xu0pLa
- https://amzn.eu/d/c5xaClV
- https://amzn.eu/d/7kudpAo
- https://amzn.eu/d/2BEgJFc
- https://amzn.eu/d/4q9RNby
- https://amzn.eu/d/4RE2lPV
- https://amzn.eu/d/63YU0l1

Inserts
- https://amzn.eu/d/7fjOtOC

examples/hopejr/README.md Normal file

@@ -0,0 +1,624 @@
# Using the [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot
## Table of Contents
- [A. Source the parts](#a-source-the-parts)
- [B. Install LeRobot](#b-install-lerobot)
- [C. Configure the Motors](#c-configure-the-motors)
- [D. Step-by-Step Assembly Instructions](#d-step-by-step-assembly-instructions)
- [E. Calibrate](#e-calibrate)
- [F. Teleoperate](#f-teleoperate)
- [G. Record a dataset](#g-record-a-dataset)
- [H. Visualize a dataset](#h-visualize-a-dataset)
- [I. Replay an episode](#i-replay-an-episode)
- [J. Train a policy](#j-train-a-policy)
- [K. Evaluate your policy](#k-evaluate-your-policy)
- [L. More Information](#l-more-information)
## A. Source the parts
Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
and advice if it's your first time printing or if you don't own a 3D printer.
Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
## B. Install LeRobot
> [!TIP]
> We use the Command Prompt (cmd) quite a lot. If you are not comfortable using the cmd or want to brush up using the command line you can have a look here: [Command line crash course](https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Environment_setup/Command_line)
On your computer:
#### 1. [Install Miniconda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
#### 2. Restart shell
Copy paste this in your shell: `source ~/.bashrc`, or for Mac: `source ~/.bash_profile`, or `source ~/.zshrc` if you're using zsh.
#### 3. Create and activate a fresh conda environment for lerobot
<details>
<summary><strong>Video install instructions</strong></summary>
<video src="https://github.com/user-attachments/assets/17172d3b-3b64-4b80-9cf1-b2b7c5cbd236"></video>
</details>
```bash
conda create -y -n lerobot python=3.10
```
Then activate your conda environment (do this each time you open a shell to use lerobot!):
```bash
conda activate lerobot
```
#### 4. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```
#### 5. Install ffmpeg in your environment:
When using `miniconda`, install `ffmpeg` in your environment:
```bash
conda install ffmpeg -c conda-forge
```
#### 6. Install LeRobot with dependencies for the feetech motors:
```bash
cd ~/lerobot && pip install -e ".[feetech]"
```
Great :hugs:! You are now done installing LeRobot and we can begin assembling the SO100 arms :robot:.
Every time you now want to use LeRobot you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
## C. Configure the motors
> [!NOTE]
> Throughout this tutorial you will find videos on how to do the steps, the full video tutorial can be found here: [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I).
### 1. Find the USB ports associated to each arm
Designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them and write on each motor whether it's for the follower (`F`) or the leader (`L`), along with its ID from 1 to 6 (F1...F6 and L1...L6).
#### a. Run the script to find port
<details>
<summary><strong>Video finding port</strong></summary>
<video src="https://github.com/user-attachments/assets/4a21a14d-2046-4805-93c4-ee97a30ba33f"></video>
<video src="https://github.com/user-attachments/assets/1cc3aecf-c16d-4ff9-aec7-8c175afbbce2"></video>
</details>
To find the port for each bus servo adapter, run the utility script:
```bash
python lerobot/scripts/find_motors_bus_port.py
```
#### b. Example outputs
Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your MotorsBus and press Enter when done.
[...Disconnect leader arm and press Enter...]
The port of this MotorsBus is /dev/tty.usbmodem575E0031751
Reconnect the usb cable.
```
Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the usb cable from your MotorsBus and press Enter when done.
[...Disconnect follower arm and press Enter...]
The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the usb cable.
```
#### c. Troubleshooting
On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
#### d. Update config file
IMPORTANTLY: Now that you have your ports, update the **port** default values of [`So100RobotConfig`](../lerobot/common/robot_devices/robots/configs.py). You will find something like:
```python
@RobotConfig.register_subclass("so100")
@dataclass
class So100RobotConfig(ManipulatorRobotConfig):
calibration_dir: str = ".cache/calibration/so100"
# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: int | None = None
leader_arms: dict[str, MotorsBusConfig] = field(
default_factory=lambda: {
"main": FeetechMotorsBusConfig(
port="/dev/tty.usbmodem58760431091", <-- UPDATE HERE
motors={
# name: (index, model)
"shoulder_pan": [1, "sts3215"],
"shoulder_lift": [2, "sts3215"],
"elbow_flex": [3, "sts3215"],
"wrist_flex": [4, "sts3215"],
"wrist_roll": [5, "sts3215"],
"gripper": [6, "sts3215"],
},
),
}
)
follower_arms: dict[str, MotorsBusConfig] = field(
default_factory=lambda: {
"main": FeetechMotorsBusConfig(
port="/dev/tty.usbmodem585A0076891", <-- UPDATE HERE
motors={
# name: (index, model)
"shoulder_pan": [1, "sts3215"],
"shoulder_lift": [2, "sts3215"],
"elbow_flex": [3, "sts3215"],
"wrist_flex": [4, "sts3215"],
"wrist_roll": [5, "sts3215"],
"gripper": [6, "sts3215"],
},
),
}
)
```
### 2. Assembling the Base
Let's begin with assembling the follower arm base
#### a. Set IDs for all 12 motors
<details>
<summary><strong>Video configuring motor</strong></summary>
<video src="https://github.com/user-attachments/assets/ef9b3317-2e11-4858-b9d3-f0a02fb48ecf"></video>
<video src="https://github.com/user-attachments/assets/f36b5ed5-c803-4ebe-8947-b39278776a0d"></video>
</details>
Plug in your first motor F1 and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate. Replace the text after `--port` with the corresponding follower control board port and run this command in cmd:
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1
```
> [!NOTE]
> These motors are currently limited. They can only take values between 0 and 4096, which corresponds to a full turn. They can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) or +2048 steps (180 degrees clockwise) to reach either end of the range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift of up to ±2048 steps (±180 degrees).
Then unplug your motor and plug the second motor and set its ID to 2.
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 2
```
Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.
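If you prefer, a small shell loop can reduce the repetition (a sketch for a bash shell; the port is the same placeholder as above, and you still need to plug in only the target motor before each iteration):
```bash
# Sketch: configure motors with IDs 3 to 6 one by one.
for ID in 3 4 5 6; do
  read -p "Plug in only motor F${ID}, then press Enter..."
  python lerobot/scripts/configure_motor.py \
    --port /dev/tty.usbmodem58760432961 \
    --brand feetech \
    --model sts3215 \
    --baudrate 1000000 \
    --ID ${ID}
done
```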
#### b. Remove the gears of the 6 leader motors
<details>
<summary><strong>Video removing gears</strong></summary>
<video src="https://github.com/user-attachments/assets/0c95b88c-5b85-413d-ba19-aee2f864f2a7"></video>
</details>
Follow the video for removing gears. You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
## D. Step-by-Step Assembly Instructions
**Step 1: Clean Parts**
- Remove all support material from the 3D-printed parts.
---
### Additional Guidance
<details>
<summary><strong>Video assembling arms</strong></summary>
<video src="https://github.com/user-attachments/assets/488a39de-0189-4461-9de3-05b015f90cca"></video>
</details>
**Note:**
This video provides visual guidance for assembling the arms, but it doesn't specify when or how to do the wiring. Inserting the cables beforehand is much easier than doing it afterward. The first arm may take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
---
### First Motor
**Step 2: Insert Wires**
- Insert two wires into the first motor.
<img src="../media/tutorial/img1.jpg" style="height:300px;">
**Step 3: Install in Base**
- Place the first motor into the base.
<img src="../media/tutorial/img2.jpg" style="height:300px;">
**Step 4: Secure Motor**
- Fasten the motor with 4 screws: two from the bottom and two from the top.
**Step 5: Attach Motor Holder**
- Slide over the first motor holder and fasten it using two screws (one on each side).
<img src="../media/tutorial/img4.jpg" style="height:300px;">
**Step 6: Attach Motor Horns**
- Install both motor horns, securing the top horn with a screw. Try not to move the motor position when attaching the motor horn, especially for the leader arms, where we removed the gears.
<img src="../media/tutorial/img5.jpg" style="height:300px;">
<details>
<summary><strong>Video adding motor horn</strong></summary>
<video src="https://github.com/user-attachments/assets/ef3391a4-ad05-4100-b2bd-1699bf86c969"></video>
</details>
**Step 7: Attach Shoulder Part**
- Route one wire to the back of the robot and the other to the left, or towards you in the photo (see photo).
- Attach the shoulder part.
<img src="../media/tutorial/img6.jpg" style="height:300px;">
**Step 8: Secure Shoulder**
- Tighten the shoulder part with 4 screws on top and 4 on the bottom
*(access bottom holes by turning the shoulder).*
---
### Second Motor Assembly
**Step 9: Install Motor 2**
- Slide the second motor in from the top and link the wire from motor 1 to motor 2.
<img src="../media/tutorial/img8.jpg" style="height:300px;">
**Step 10: Attach Shoulder Holder**
- Add the shoulder motor holder.
- Ensure the wire from motor 1 to motor 2 goes behind the holder while the other wire is routed upward (see photo).
- This part can be tight to assemble; you can use a workbench as in the image, or a similar setup, to push the part around the motor.
<div style="display: flex;">
<img src="../media/tutorial/img9.jpg" style="height:250px;">
<img src="../media/tutorial/img10.jpg" style="height:250px;">
<img src="../media/tutorial/img12.jpg" style="height:250px;">
</div>
**Step 11: Secure Motor 2**
- Fasten the second motor with 4 screws.
**Step 12: Attach Motor Horn**
- Attach both motor horns to motor 2, again using the horn screw.
**Step 13: Attach Base**
- Install the base attachment using 2 screws.
<img src="../media/tutorial/img11.jpg" style="height:300px;">
**Step 14: Attach Upper Arm**
- Attach the upper arm with 4 screws on each side.
<img src="../media/tutorial/img13.jpg" style="height:300px;">
---
### Third Motor Assembly
**Step 15: Install Motor 3**
- Route the motor cable from motor 2 through the cable holder to motor 3, then secure motor 3 with 4 screws.
**Step 16: Attach Motor Horn**
- Attach both motor horns to motor 3 and secure one again with a horn screw.
<img src="../media/tutorial/img14.jpg" style="height:300px;">
**Step 17: Attach Forearm**
- Connect the forearm to motor 3 using 4 screws on each side.
<img src="../media/tutorial/img15.jpg" style="height:300px;">
---
### Fourth Motor Assembly
**Step 18: Install Motor 4**
- Slide in motor 4, attach the cable from motor 3, and secure the cable in its holder with a screw.
<div style="display: flex;">
<img src="../media/tutorial/img16.jpg" style="height:300px;">
<img src="../media/tutorial/img19.jpg" style="height:300px;">
</div>
**Step 19: Attach Motor Holder 4**
- Install the fourth motor holder (a tight fit). Ensure one wire is routed upward and the wire from motor 3 is routed downward (see photo).
<img src="../media/tutorial/img17.jpg" style="height:300px;">
**Step 20: Secure Motor 4 & Attach Horn**
- Fasten motor 4 with 4 screws and attach its motor horns, using a horn screw for one of them.
<img src="../media/tutorial/img18.jpg" style="height:300px;">
---
### Wrist Assembly
**Step 21: Install Motor 5**
- Insert motor 5 into the wrist holder and secure it with 2 front screws.
<img src="../media/tutorial/img20.jpg" style="height:300px;">
**Step 22: Attach Wrist**
- Connect the wire from motor 4 to motor 5, and already insert the other wire for the gripper.
- Secure the wrist to motor 4 using 4 screws on both sides.
<img src="../media/tutorial/img22.jpg" style="height:300px;">
**Step 23: Attach Wrist Horn**
- Install only one motor horn on the wrist motor and secure it with a horn screw.
<img src="../media/tutorial/img23.jpg" style="height:300px;">
---
### Follower Configuration
**Step 24: Attach Gripper**
- Attach the gripper to motor 5.
<img src="../media/tutorial/img24.jpg" style="height:300px;">
**Step 25: Install Gripper Motor**
- Insert the gripper motor, connect the motor wire from motor 5 to motor 6, and secure it with 3 screws on each side.
<img src="../media/tutorial/img25.jpg" style="height:300px;">
**Step 26: Attach Gripper Horn & Claw**
- Attach the motor horns and again use a horn screw.
- Install the gripper claw and secure it with 4 screws on both sides.
<img src="../media/tutorial/img26.jpg" style="height:300px;">
**Step 27: Mount Controller**
- Attach the motor controller on the back.
<div style="display: flex;">
<img src="../media/tutorial/img27.jpg" style="height:300px;">
<img src="../media/tutorial/img28.jpg" style="height:300px;">
</div>
*Assembly complete, proceed to leader arm assembly.*
---
### Leader Configuration
For the leader configuration, perform **Steps 1 to 23**. Make sure that you removed the gears from the motors.
**Step 24: Attach Leader Holder**
- Mount the leader holder onto the wrist and secure it with a screw.
<img src="../media/tutorial/img29.jpg" style="height:300px;">
**Step 25: Attach Handle**
- Attach the handle to motor 5 using 4 screws.
<img src="../media/tutorial/img30.jpg" style="height:300px;">
**Step 26: Install Gripper Motor**
- Insert the gripper motor, secure it with 3 screws on each side, attach a motor horn using a horn screw, and connect the motor wire.
<img src="../media/tutorial/img31.jpg" style="height:300px;">
**Step 27: Attach Trigger**
- Attach the follower trigger with 4 screws.
<img src="../media/tutorial/img32.jpg" style="height:300px;">
**Step 28: Mount Controller**
- Attach the motor controller on the back.
<div style="display: flex;">
<img src="../media/tutorial/img27.jpg" style="height:300px;">
<img src="../media/tutorial/img28.jpg" style="height:300px;">
</div>
*Assembly complete, proceed to calibration.*
## E. Calibrate
Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one SO-100 robot to work on another.
#### a. Manual calibration of follower arm
> [!IMPORTANT]
> Contrary to step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates auto-calibration, we will actually do a manual calibration of the follower arm for now.
You will need to move the follower arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| <img src="../media/so100/follower_zero.webp?raw=true" alt="SO-100 follower arm zero position" title="SO-100 follower arm zero position" style="width:100%;"> | <img src="../media/so100/follower_rotated.webp?raw=true" alt="SO-100 follower arm rotated position" title="SO-100 follower arm rotated position" style="width:100%;"> | <img src="../media/so100/follower_rest.webp?raw=true" alt="SO-100 follower arm rest position" title="SO-100 follower arm rest position" style="width:100%;"> |
Make sure both arms are connected and run this script to launch manual calibration:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=calibrate \
--control.arms='["main_follower"]'
```
#### b. Manual calibration of leader arm
Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
| ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| <img src="../media/so100/leader_zero.webp?raw=true" alt="SO-100 leader arm zero position" title="SO-100 leader arm zero position" style="width:100%;"> | <img src="../media/so100/leader_rotated.webp?raw=true" alt="SO-100 leader arm rotated position" title="SO-100 leader arm rotated position" style="width:100%;"> | <img src="../media/so100/leader_rest.webp?raw=true" alt="SO-100 leader arm rest position" title="SO-100 leader arm rest position" style="width:100%;"> |
Run this script to launch manual calibration:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=calibrate \
--control.arms='["main_leader"]'
```
## F. Teleoperate
**Simple teleop**
Then you are ready to teleoperate your robot! Run this simple script (it won't connect to or display the cameras):
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=teleoperate
```
#### a. Teleop while displaying cameras
Follow [this guide to set up your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). You will then be able to display the cameras on your computer while you are teleoperating by running the command below. This is useful for checking your setup before recording your first dataset.
> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=teleoperate \
--control.display_data=true
```
## G. Record a dataset
Once you're familiar with teleoperation, you can record your first dataset with SO-100.
If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face username in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
Record 2 episodes and upload your dataset to the hub:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=record \
--control.fps=30 \
--control.single_task="Grasp a lego block and put it in the bin." \
--control.repo_id=${HF_USER}/so100_test \
--control.tags='["so100","tutorial"]' \
--control.warmup_time_s=5 \
--control.episode_time_s=30 \
--control.reset_time_s=30 \
--control.num_episodes=2 \
--control.push_to_hub=true
```
Note: You can resume recording by adding `--control.resume=true`.
## H. Visualize a dataset
If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, which is given by:
```bash
echo ${HF_USER}/so100_test
```
If you didn't upload the dataset (i.e. you used `--control.push_to_hub=false`), you can still visualize it locally with the command below (a window will open in your browser at `http://127.0.0.1:9090` with the visualization tool):
```bash
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/so100_test \
--local-files-only 1
```
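You can also inspect the recorded dataset programmatically. Below is a minimal sketch; it assumes the `LeRobotDataset` class from `lerobot.common.datasets.lerobot_dataset`, and the exact constructor arguments may differ between LeRobot versions, so treat it as a starting point rather than a reference:
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Replace the repo id with your own, i.e. the value of ${HF_USER}/so100_test.
dataset = LeRobotDataset("HF_USER/so100_test")

print(len(dataset))       # total number of recorded frames
print(dataset[0].keys())  # observation and action keys of the first frame
```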
## I. Replay an episode
Now try to replay the first episode on your robot:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=replay \
--control.fps=30 \
--control.repo_id=${HF_USER}/so100_test \
--control.episode=0
```
## J. Train a policy
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/so100_test \
--policy.type=act \
--output_dir=outputs/train/act_so100_test \
--job_name=act_so100_test \
--policy.device=cuda \
--wandb.enable=true
```
Let's break this command down:
1. We provided the dataset as an argument with `--dataset.repo_id=${HF_USER}/so100_test`.
2. We provided the policy with `--policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy automatically adapts to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
3. We provided `--policy.device=cuda` since we are training on an Nvidia GPU, but you could use `--policy.device=mps` to train on Apple silicon.
4. We provided `--wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
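If you want to sanity-check a checkpoint outside of the robot loop, the sketch below should work; it assumes the ACT policy class in `lerobot/common/policies/act/modeling_act.py` exposes the standard Hugging Face `from_pretrained` loader, so verify against your installed version:
```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Load the last checkpoint written by the training run above.
checkpoint = "outputs/train/act_so100_test/checkpoints/last/pretrained_model"
policy = ACTPolicy.from_pretrained(checkpoint)
policy.eval()

# Rough sanity check: count the trainable parameters.
print(sum(p.numel() for p in policy.parameters() if p.requires_grad))
```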
To resume training from a checkpoint, here is an example command that resumes from the `last` checkpoint of the `act_so100_test` run:
```bash
python lerobot/scripts/train.py \
--config_path=outputs/train/act_so100_test/checkpoints/last/pretrained_model/train_config.json \
--resume=true
```
## K. Evaluate your policy
You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=record \
--control.fps=30 \
--control.single_task="Grasp a lego block and put it in the bin." \
--control.repo_id=${HF_USER}/eval_act_so100_test \
--control.tags='["tutorial"]' \
--control.warmup_time_s=5 \
--control.episode_time_s=30 \
--control.reset_time_s=30 \
--control.num_episodes=10 \
--control.push_to_hub=true \
--control.policy.path=outputs/train/act_so100_test/checkpoints/last/pretrained_model
```
As you can see, it's almost the same command previously used to record your training dataset. Two things changed:
1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so100_test`).
2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so100_test`).
## L. More Information
See this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth guide to controlling real robots with LeRobot.
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb) in the channel [`#so100-arm`](https://discord.com/channels/1216765309076115607/1237741463832363039).

examples/hopejr/agugu.py (new file, 45 lines)

@@ -0,0 +1,45 @@
import time
import numpy as np
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus, CalibrationMode
# Helper conversions between degrees-per-second and the Feetech raw speed
# format (sign-magnitude: bit 15 holds the sign, the lower 15 bits the magnitude).
def degps_to_raw(degps: float) -> int:
steps_per_deg = 4096.0 / 360.0
speed_in_steps = abs(degps) * steps_per_deg
speed_int = int(round(speed_in_steps))
if speed_int > 0x7FFF:
speed_int = 0x7FFF
if degps < 0:
return speed_int | 0x8000
return speed_int & 0x7FFF
def raw_to_degps(raw_speed: int) -> float:
steps_per_deg = 4096.0 / 360.0
magnitude = raw_speed & 0x7FFF
degps = magnitude / steps_per_deg
if raw_speed & 0x8000:
degps = -degps
return degps
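# Illustrative sanity check (added for this write-up, not part of the original
# script): at 4096 steps per revolution, 90 deg/s is exactly 1024 raw steps/s,
# and negative speeds set bit 15.
assert degps_to_raw(90.0) == 1024
assert degps_to_raw(-90.0) == 0x8400
assert abs(raw_to_degps(0x8400) + 90.0) < 1e-6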
def main():
# Instantiate the bus for a single motor on port /dev/ttyACM0.
arm_bus = FeetechMotorsBus(
port="/dev/ttyACM0",
motors={"wrist_pitch": [1, "scs0009"]},
protocol_version=1,
group_sync_read=False, # using individual read calls
)
arm_bus.connect()
# Read the current raw motor position.
# Note that "Present_Position" is in the raw units.
current_raw = arm_bus.read("Present_Position", ["wrist_pitch"])[0]
print("Current raw position:", current_raw)
arm_bus.write("Goal_Position", 1000)
arm_bus.disconnect()
exit()
if __name__ == "__main__":
main()

examples/hopejr/asd.png (new binary file, 148 KiB, not shown)


@@ -0,0 +1,46 @@
import serial
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from collections import deque
# Adjust this to match your actual serial port and baud rate
SERIAL_PORT = '/dev/ttyACM0' # or COM3 on Windows
BAUD_RATE = 115200
# Set up serial connection
ser = serial.Serial(SERIAL_PORT, BAUD_RATE)
# Buffers for real-time plot
buffer_len = 200
val1_buffer = deque([0]*buffer_len, maxlen=buffer_len)
val2_buffer = deque([0]*buffer_len, maxlen=buffer_len)
# Setup the plot
fig, ax = plt.subplots()
line1, = ax.plot([], [], label='Sensor 0')
line2, = ax.plot([], [], label='Sensor 1')
ax.set_ylim(0, 4096)
ax.set_xlim(0, buffer_len)
ax.legend()
def update(frame):
while ser.in_waiting:
line = ser.readline().decode('utf-8').strip()
parts = line.split()
if len(parts) >= 2:
try:
val1 = int(parts[0])
val2 = int(parts[1])
val1_buffer.append(val1)
val2_buffer.append(val2)
except ValueError:
pass # skip malformed lines
line1.set_ydata(val1_buffer)
line1.set_xdata(range(len(val1_buffer)))
line2.set_ydata(val2_buffer)
line2.set_xdata(range(len(val2_buffer)))
return line1, line2
ani = animation.FuncAnimation(fig, update, interval=50)
plt.show()


@@ -0,0 +1,64 @@
import serial
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from collections import deque
# Config
SERIAL_PORT = '/dev/ttyACM1' # Change as needed
BAUD_RATE = 115200
BUFFER_LEN = 200
# Sensor names in order
sensor_names = [
"wrist_roll",
"wrist_pitch",
"wrist_yaw",
"elbow_flex",
"shoulder_roll",
"shoulder_yaw",
"shoulder_pitch"
]
# Initialize buffers
sensor_data = {
name: deque([0]*BUFFER_LEN, maxlen=BUFFER_LEN)
for name in sensor_names
}
# Setup plot
fig, axes = plt.subplots(len(sensor_names), 1, figsize=(8, 12), sharex=True)
fig.tight_layout(pad=3.0)
lines = {}
for i, name in enumerate(sensor_names):
axes[i].set_title(name)
axes[i].set_xlim(0, BUFFER_LEN)
axes[i].set_ylim(0, 4096)
line, = axes[i].plot([], [], label=name)
axes[i].legend()
lines[name] = line
# Connect to serial
ser = serial.Serial(SERIAL_PORT, BAUD_RATE)
# Update function
def update(frame):
while ser.in_waiting:
line = ser.readline().decode().strip()
parts = line.split()
if len(parts) != 7:
continue
try:
values = list(map(int, parts))
except ValueError:
continue
for i, name in enumerate(sensor_names):
sensor_data[name].append(values[i])
for name in sensor_names:
x = range(len(sensor_data[name]))
lines[name].set_data(x, sensor_data[name])
return lines.values()
# Animate
ani = animation.FuncAnimation(fig, update, interval=50, blit=False)
plt.show()


@@ -0,0 +1,161 @@
import serial
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from collections import deque
# Adjust this to match your actual serial port and baud rate
SERIAL_PORT = '/dev/ttyACM0' # or COM3 on Windows
BAUD_RATE = 115200
# Set up serial connection
ser = serial.Serial(SERIAL_PORT, BAUD_RATE)
# How many data points to keep in the scrolling buffer
buffer_len = 200
# -------------------------------------------------------------------
# 1) Sensor buffers for existing sensors + new wrist_pitch, wrist_yaw
# -------------------------------------------------------------------
sensor_buffers = {
'wrist_roll': {
'val1': deque([0]*buffer_len, maxlen=buffer_len),
'val2': deque([0]*buffer_len, maxlen=buffer_len)
},
'elbow_pitch': {
'val1': deque([0]*buffer_len, maxlen=buffer_len),
'val2': deque([0]*buffer_len, maxlen=buffer_len)
},
'shoulder_pitch': {
'val1': deque([0]*buffer_len, maxlen=buffer_len),
'val2': deque([0]*buffer_len, maxlen=buffer_len)
},
'shoulder_yaw': {
'val1': deque([0]*buffer_len, maxlen=buffer_len),
'val2': deque([0]*buffer_len, maxlen=buffer_len)
},
'shoulder_roll': {
'val1': deque([0]*buffer_len, maxlen=buffer_len),
'val2': deque([0]*buffer_len, maxlen=buffer_len)
},
# --- New single-valued sensors ---
'wrist_pitch': {
'val1': deque([0]*buffer_len, maxlen=buffer_len) # Only one line
},
'wrist_yaw': {
'val1': deque([0]*buffer_len, maxlen=buffer_len) # Only one line
},
}
# -------------------------------------------------------------------
# 2) Figure with 7 subplots (was 5). We keep the original 5 + 2 new.
# -------------------------------------------------------------------
fig, axes = plt.subplots(7, 1, figsize=(8, 14), sharex=True)
fig.tight_layout(pad=3.0)
# We'll store line references in a dict so we can update them in update().
lines = {}
# -------------------------------------------------------------------
# 3) Define each subplot, including new ones at the end.
# -------------------------------------------------------------------
subplot_info = [
('wrist_roll', 'Wrist Roll (2,3)', axes[0]),
('elbow_pitch', 'Elbow Pitch (0,1)', axes[1]),
('shoulder_pitch', 'Shoulder Pitch (10,11)', axes[2]),
('shoulder_yaw', 'Shoulder Yaw (12,13)', axes[3]),
('shoulder_roll', 'Shoulder Roll (14,15)', axes[4]),
('wrist_pitch', 'Wrist Pitch (0)', axes[5]), # new
('wrist_yaw', 'Wrist Yaw (1)', axes[6]), # new
]
# Set up each subplot
for (sensor_name, label, ax) in subplot_info:
ax.set_title(label)
ax.set_xlim(0, buffer_len)
ax.set_ylim(0, 4096) # adjust if needed
# For existing sensors, plot 2 lines (val1, val2)
# For the new single-line sensors, plot just 1 line
if sensor_name in ['wrist_pitch', 'wrist_yaw']:
# Single-valued
line, = ax.plot([], [], label=f"{sensor_name}")
lines[sensor_name] = line
else:
# Pair of values
line1, = ax.plot([], [], label=f"{sensor_name} - val1")
line2, = ax.plot([], [], label=f"{sensor_name} - val2")
lines[sensor_name] = [line1, line2]
ax.legend()
def update(frame):
# Read all available lines from the serial buffer
while ser.in_waiting:
raw_line = ser.readline().decode('utf-8').strip()
parts = raw_line.split()
# We expect at least 16 values if all sensors are present
if len(parts) < 16:
continue
try:
values = list(map(int, parts))
except ValueError:
# If there's a parsing error, skip this line
continue
# Original code: extract the relevant values and append to the correct buffer
sensor_buffers['elbow_pitch']['val1'].append(values[13])
sensor_buffers['elbow_pitch']['val2'].append(values[13])
sensor_buffers['wrist_roll']['val1'].append(values[3])
sensor_buffers['wrist_roll']['val2'].append(values[3])
sensor_buffers['shoulder_pitch']['val1'].append(values[14])
sensor_buffers['shoulder_pitch']['val2'].append(values[14])
sensor_buffers['shoulder_yaw']['val1'].append(values[8])
sensor_buffers['shoulder_yaw']['val2'].append(values[8])
sensor_buffers['shoulder_roll']['val1'].append(values[10])
sensor_buffers['shoulder_roll']['val2'].append(values[10])
# -------------------------------------------------------------------
# 4) New code: also read wrist_pitch (index 0) and wrist_yaw (index 1)
# -------------------------------------------------------------------
sensor_buffers['wrist_yaw']['val1'].append(values[0])
sensor_buffers['wrist_pitch']['val1'].append(values[1])
# Update each line's data in each subplot
all_lines = []
for (sensor_name, _, ax) in subplot_info:
# x-values are just the index range of the buffer for val1
x_data = range(len(sensor_buffers[sensor_name]['val1']))
# If this sensor has two lines
if isinstance(lines[sensor_name], list):
# First line
lines[sensor_name][0].set_data(
x_data,
sensor_buffers[sensor_name]['val1']
)
# Second line
lines[sensor_name][1].set_data(
x_data,
sensor_buffers[sensor_name]['val2']
)
all_lines.extend(lines[sensor_name])
else:
# Single line only (wrist_pitch, wrist_yaw)
lines[sensor_name].set_data(
x_data,
sensor_buffers[sensor_name]['val1']
)
all_lines.append(lines[sensor_name])
return all_lines
# Create the animation
ani = animation.FuncAnimation(fig, update, interval=50, blit=False)
plt.show()

examples/hopejr/follower.py (new file, 186 lines)

@@ -0,0 +1,186 @@
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
import yaml
class HopeJuniorRobot:
def __init__(self):
self.arm_port = "/dev/tty.usbserial-140"
self.hand_port = "/dev/tty.usbmodem58760436961"
self.arm_bus = FeetechMotorsBus(
port = self.arm_port,
motors={
# "motor1": (1, "sts3250"),
# "motor2": (2, "sts3250"),
# "motor3": (3, "sts3250"),
#"shoulder_pitch": [1, "sts3215"],
"shoulder_pitch": [1, "sm8512bl"],
"shoulder_yaw": [2, "sts3250"], # TODO: sts3250
"shoulder_roll": [3, "sts3250"], # TODO: sts3250
"elbow_flex": [4, "sts3250"],
"wrist_roll": [5, "sts3215"],
"wrist_yaw": [6, "sts3215"],
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
self.hand_bus = FeetechMotorsBus(
port=self.hand_port,
motors = {
# Thumb
"thumb_basel_rotation": [1, "scs0009"],
"thumb_mcp": [3, "scs0009"],
"thumb_pip": [4, "scs0009"],
"thumb_dip": [13, "scs0009"],
# Index
"index_thumb_side": [5, "scs0009"],
"index_pinky_side": [6, "scs0009"],
"index_flexor": [16, "scs0009"],
# Middle
"middle_thumb_side": [8, "scs0009"],
"middle_pinky_side": [9, "scs0009"],
"middle_flexor": [2, "scs0009"],
# Ring
"ring_thumb_side": [11, "scs0009"],
"ring_pinky_side": [12, "scs0009"],
"ring_flexor": [7, "scs0009"],
# Pinky
"pinky_thumb_side": [14, "scs0009"],
"pinky_pinky_side": [15, "scs0009"],
"pinky_flexor": [10, "scs0009"],
},
protocol_version=1,#1
group_sync_read=False,
)
self.arm_calib_dict = self.get_arm_calibration()
self.hand_calib_dict = self.get_hand_calibration()
def apply_arm_config(self, config_file):
with open(config_file, "r") as file:
config = yaml.safe_load(file)
for param, value in config.get("robot", {}).get("arm_bus", {}).items():
self.arm_bus.write(param, value)
def apply_hand_config(self, config_file):
with open(config_file, "r") as file:
config = yaml.safe_load(file)
# Apply each hand_bus parameter from the YAML config to the hand bus.
for param, value in config.get("robot", {}).get("hand_bus", {}).items():
self.hand_bus.write(param, value)
def get_hand_calibration(self):
homing_offset = [0] * len(self.hand_bus.motor_names)
drive_mode = [0] * len(self.hand_bus.motor_names)
start_pos = [
750, # thumb_basel_rotation
100, # thumb_mcp
700, # thumb_pip
100, # thumb_dip
800, # index_thumb_side
950, # index_pinky_side
0, # index_flexor
250, # middle_thumb_side
850, # middle_pinky_side
0, # middle_flexor
850, # ring_thumb_side
900, # ring_pinky_side
0, # ring_flexor
00, # pinky_thumb_side
950, # pinky_pinky_side
0, # pinky_flexor
]
end_pos = [
start_pos[0] - 550, # thumb_basel_rotation
start_pos[1] + 400, # thumb_mcp
start_pos[2] + 300, # thumb_pip
start_pos[3] + 200, # thumb_dip
start_pos[4] - 700, # index_thumb_side
start_pos[5] - 300, # index_pinky_side
start_pos[6] + 600, # index_flexor
start_pos[7] + 700, # middle_thumb_side
start_pos[8] - 400, # middle_pinky_side
start_pos[9] + 600, # middle_flexor
start_pos[10] - 600, # ring_thumb_side
start_pos[11] - 400, # ring_pinky_side
start_pos[12] + 600, # ring_flexor
start_pos[13] + 400, # pinky_thumb_side
start_pos[14] - 450, # pinky_pinky_side
start_pos[15] + 600, # pinky_flexor
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.hand_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.hand_bus.motor_names,
}
return calib_dict
def get_arm_calibration(self):
homing_offset = [0] * len(self.arm_bus.motor_names)
drive_mode = [0] * len(self.arm_bus.motor_names)
start_pos = [
1800, # shoulder_up
2800, # shoulder_forward
1800, # shoulder_roll
1200, # bend_elbow
700, # wrist_roll
1850, # wrist_yaw
1700, # wrist_pitch
]
end_pos = [
2800, # shoulder_up
3150, # shoulder_forward
400, #shoulder_roll
2300, # bend_elbow
2300, # wrist_roll
2150, # wrist_yaw
2300, # wrist_pitch
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.arm_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.arm_bus.motor_names,
}
return calib_dict
def connect_arm(self):
self.arm_bus.connect()
def connect_hand(self):
self.hand_bus.connect()

examples/hopejr/leader.py (new file, 730 lines)

@@ -0,0 +1,730 @@
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
import serial
import threading
import time
from typing import Callable
import pickle
import cv2
import numpy as np
from collections import deque
import json
import os
LOWER_BOUND_LINEAR = -100
UPPER_BOUND_LINEAR = 200
class HomonculusArm:
def __init__(self, serial_port: str = "/dev/ttyACM1", baud_rate: int = 115200):
self.serial_port = serial_port
self.baud_rate = baud_rate
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
# Number of past values to keep in memory
self.buffer_size = 10
# Initialize a buffer (deque) for each joint
self.joint_buffer = {
"wrist_roll": deque(maxlen=self.buffer_size),
"wrist_pitch": deque(maxlen=self.buffer_size),
"wrist_yaw": deque(maxlen=self.buffer_size),
"elbow_flex": deque(maxlen=self.buffer_size),
"shoulder_roll": deque(maxlen=self.buffer_size),
"shoulder_yaw": deque(maxlen=self.buffer_size),
"shoulder_pitch": deque(maxlen=self.buffer_size),
}
# Start the reading thread
self.thread = threading.Thread(target=self.async_read, daemon=True)
self.thread.start()
# Last read dictionary
self.last_d = {
"wrist_roll": 100,
"wrist_pitch": 100,
"wrist_yaw": 100,
"elbow_flex": 100,
"shoulder_roll": 100,
"shoulder_yaw": 100,
"shoulder_pitch": 100,
}
self.calibration = None
# For adaptive EMA, we store a "previous smoothed" state per joint
self.adaptive_ema_state = {
"wrist_roll": None,
"wrist_pitch": None,
"wrist_yaw": None,
"elbow_flex": None,
"shoulder_roll": None,
"shoulder_yaw": None,
"shoulder_pitch": None,
}
self.kalman_state = {
joint: {"x": None, "P": None} for joint in self.joint_buffer.keys()
}
@property
def joint_names(self):
return list(self.last_d.keys())
def read(self, motor_names: list[str] | None = None):
"""
Return the most recent (single) values from self.last_d,
optionally applying calibration.
"""
if motor_names is None:
motor_names = self.joint_names
# Get raw (last) values
values = np.array([self.last_d[k] for k in motor_names])
#print(motor_names)
print(values)
# Apply calibration if available
if self.calibration is not None:
values = self.apply_calibration(values, motor_names)
print(values)
return values
def read_running_average(self, motor_names: list[str] | None = None, linearize=False):
"""
Return the AVERAGE of the most recent self.buffer_size (or fewer, if not enough data) readings
for each joint, optionally applying calibration.
"""
if motor_names is None:
motor_names = self.joint_names
# Gather averaged readings from buffers
smoothed_vals = []
for name in motor_names:
buf = self.joint_buffer[name]
if len(buf) == 0:
# If no data has been read yet, fall back to last_d
smoothed_vals.append(self.last_d[name])
else:
# Otherwise, average over the existing buffer
smoothed_vals.append(np.mean(buf))
smoothed_vals = np.array(smoothed_vals, dtype=np.float32)
# Apply calibration if available
if self.calibration is not None:
if False:
for i, joint_name in enumerate(motor_names):
# Re-use the same raw_min / raw_max from the calibration
calib_idx = self.calibration["motor_names"].index(joint_name)
min_reading = self.calibration["start_pos"][calib_idx]
max_reading = self.calibration["end_pos"][calib_idx]
B_value = smoothed_vals[i]
print(joint_name)
if joint_name == "elbow_flex":
print('elbow')
try:
smoothed_vals[i] = int(min_reading+(max_reading - min_reading)*np.arcsin((B_value-min_reading)/(max_reading-min_reading))/(np.pi / 2))
except:
print('not working')
print(smoothed_vals)
print('not working')
smoothed_vals = self.apply_calibration(smoothed_vals, motor_names)
return smoothed_vals
def read_kalman_filter(
self,
Q: float = 1.0,
R: float = 100.0,
motor_names: list[str] | None = None
) -> np.ndarray:
"""
Return a Kalman-filtered reading for each requested joint.
We store a separate Kalman filter (x, P) per joint. For each new measurement Z:
1) Predict:
x_pred = x (assuming no motion model)
P_pred = P + Q
2) Update:
K = P_pred / (P_pred + R)
x = x_pred + K * (Z - x_pred)
P = (1 - K) * P_pred
:param Q: Process noise. Larger Q means the estimate can change more freely.
:param R: Measurement noise. Larger R means we trust our sensor less.
:param motor_names: If not specified, all joints are filtered.
:return: Kalman-filtered positions as a numpy array.
"""
if motor_names is None:
motor_names = self.joint_names
current_vals = np.array([self.last_d[name] for name in motor_names], dtype=np.float32)
filtered_vals = np.zeros_like(current_vals)
for i, name in enumerate(motor_names):
# Retrieve the filter state for this joint
x = self.kalman_state[name]["x"]
P = self.kalman_state[name]["P"]
Z = current_vals[i]
# If this is the first reading, initialize
if x is None or P is None:
x = Z
P = 1.0 # or some large initial uncertainty
# 1) Predict step
x_pred = x # no velocity model, so x_pred = x
P_pred = P + Q
# 2) Update step
K = P_pred / (P_pred + R) # Kalman gain
x_new = x_pred + K * (Z - x_pred) # new state estimate
P_new = (1 - K) * P_pred # new covariance
# Save back
self.kalman_state[name]["x"] = x_new
self.kalman_state[name]["P"] = P_new
filtered_vals[i] = x_new
if self.calibration is not None:
filtered_vals = self.apply_calibration(filtered_vals, motor_names)
return filtered_vals
def async_read(self):
"""
Continuously read from the serial buffer in its own thread,
store into `self.last_d` and also append to the rolling buffer (joint_buffer).
"""
while True:
if self.serial.in_waiting > 0:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
if len(vals) != 7:
continue
try:
vals = [int(val) for val in vals]#remove last digit
except ValueError:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
vals = [int(val) for val in vals]
d = {
"wrist_roll": vals[0],
"wrist_yaw": vals[1],
"wrist_pitch": vals[2],
"elbow_flex": vals[3],
"shoulder_roll": vals[4],
"shoulder_yaw": vals[5],
"shoulder_pitch": vals[6],
}
# Update the last_d dictionary
self.last_d = d
# Also push these new values into the rolling buffers
for joint_name, joint_val in d.items():
self.joint_buffer[joint_name].append(joint_val)
# Optional: short sleep to avoid busy-loop
# time.sleep(0.001)
def run_calibration(self, robot):
robot.arm_bus.write("Acceleration", 50)
n_joints = len(self.joint_names)
max_open_all = np.zeros(n_joints, dtype=np.float32)
min_open_all = np.zeros(n_joints, dtype=np.float32)
max_closed_all = np.zeros(n_joints, dtype=np.float32)
min_closed_all = np.zeros(n_joints, dtype=np.float32)
for i, jname in enumerate(self.joint_names):
print(f"\n--- Calibrating joint '{jname}' ---")
joint_idx = robot.arm_calib_dict["motor_names"].index(jname)
open_val = robot.arm_calib_dict["start_pos"][joint_idx]
print(f"Commanding {jname} to OPEN position {open_val}...")
robot.arm_bus.write("Goal_Position", [open_val], [jname])
input("Physically verify or adjust the joint. Press Enter when ready to capture...")
open_pos_list = []
for _ in range(100):
all_joints_vals = self.read() # read entire arm
open_pos_list.append(all_joints_vals[i]) # store only this joint
time.sleep(0.01)
# Convert to numpy and track min/max
open_array = np.array(open_pos_list, dtype=np.float32)
max_open_all[i] = open_array.max()
min_open_all[i] = open_array.min()
closed_val = robot.arm_calib_dict["end_pos"][joint_idx]
if jname == "elbow_flex":
closed_val = closed_val - 700
closed_val = robot.arm_calib_dict["end_pos"][joint_idx]
print(f"Commanding {jname} to CLOSED position {closed_val}...")
robot.arm_bus.write("Goal_Position", [closed_val], [jname])
input("Physically verify or adjust the joint. Press Enter when ready to capture...")
closed_pos_list = []
for _ in range(100):
all_joints_vals = self.read()
closed_pos_list.append(all_joints_vals[i])
time.sleep(0.01)
closed_array = np.array(closed_pos_list, dtype=np.float32)
# Some thresholding for closed positions
#closed_array[closed_array < 1000] = 60000
max_closed_all[i] = closed_array.max()
min_closed_all[i] = closed_array.min()
robot.arm_bus.write("Goal_Position", [int((closed_val+open_val)/2)], [jname])
open_pos = np.maximum(max_open_all, max_closed_all)
closed_pos = np.minimum(min_open_all, min_closed_all)
for i, jname in enumerate(self.joint_names):
if jname not in ["wrist_pitch", "shoulder_pitch"]:
# Swap open/closed for these joints
tmp_pos = open_pos[i]
open_pos[i] = closed_pos[i]
closed_pos[i] = tmp_pos
# Debug prints
print("\nFinal open/closed arrays after any swaps/inversions:")
print(f"open_pos={open_pos}")
print(f"closed_pos={closed_pos}")
homing_offset = [0] * n_joints
drive_mode = [0] * n_joints
calib_modes = [CalibrationMode.LINEAR.name] * n_joints
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": open_pos,
"end_pos": closed_pos,
"calib_mode": calib_modes,
"motor_names": self.joint_names,
}
file_path = "examples/hopejr/settings/arm_calib.pkl"
if not os.path.exists(file_path):
with open(file_path, "wb") as f:
pickle.dump(calib_dict, f)
print(f"Dictionary saved to {file_path}")
self.set_calibration(calib_dict)
def set_calibration(self, calibration: dict[str, list]):
self.calibration = calibration
def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
"""
Example calibration that linearly maps [start_pos, end_pos] to [0,100].
Extend or modify for your needs.
"""
if motor_names is None:
motor_names = self.joint_names
values = values.astype(np.float32)
for i, name in enumerate(motor_names):
calib_idx = self.calibration["motor_names"].index(name)
calib_mode = self.calibration["calib_mode"][calib_idx]
if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
start_pos = self.calibration["start_pos"][calib_idx]
end_pos = self.calibration["end_pos"][calib_idx]
# Rescale the present position to [0, 100]
values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
# Check boundaries
if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
# If you want to handle out-of-range differently:
# raise JointOutOfRangeError(msg)
msg = (
f"Wrong motor position range detected for {name}. "
f"Value = {values[i]} %, expected within [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}]"
)
print(msg)
return values
class HomonculusGlove:
def __init__(self, serial_port: str = "/dev/ttyACM1", baud_rate: int = 115200):
self.serial_port = serial_port
self.baud_rate = baud_rate
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
# Number of past values to keep in memory
self.buffer_size = 10
# Initialize a buffer (deque) for each joint
self.joint_buffer = {
"thumb_0": deque(maxlen=self.buffer_size),
"thumb_1": deque(maxlen=self.buffer_size),
"thumb_2": deque(maxlen=self.buffer_size),
"thumb_3": deque(maxlen=self.buffer_size),
"index_0": deque(maxlen=self.buffer_size),
"index_1": deque(maxlen=self.buffer_size),
"index_2": deque(maxlen=self.buffer_size),
"middle_0": deque(maxlen=self.buffer_size),
"middle_1": deque(maxlen=self.buffer_size),
"middle_2": deque(maxlen=self.buffer_size),
"ring_0": deque(maxlen=self.buffer_size),
"ring_1": deque(maxlen=self.buffer_size),
"ring_2": deque(maxlen=self.buffer_size),
"pinky_0": deque(maxlen=self.buffer_size),
"pinky_1": deque(maxlen=self.buffer_size),
"pinky_2": deque(maxlen=self.buffer_size),
"battery_voltage": deque(maxlen=self.buffer_size),
}
# Start the reading thread
self.thread = threading.Thread(target=self.async_read, daemon=True)
self.thread.start()
# Last read dictionary
self.last_d = {
"thumb_0": 100,
"thumb_1": 100,
"thumb_2": 100,
"thumb_3": 100,
"index_0": 100,
"index_1": 100,
"index_2": 100,
"middle_0": 100,
"middle_1": 100,
"middle_2": 100,
"ring_0": 100,
"ring_1": 100,
"ring_2": 100,
"pinky_0": 100,
"pinky_1": 100,
"pinky_2": 100,
"battery_voltage": 100,
}
self.calibration = None
@property
def joint_names(self):
return list(self.last_d.keys())
def read(self, motor_names: list[str] | None = None):
"""
Return the most recent (single) values from self.last_d,
optionally applying calibration.
"""
if motor_names is None:
motor_names = self.joint_names
# Get raw (last) values
values = np.array([self.last_d[k] for k in motor_names])
print(values)
# Apply calibration if available
if self.calibration is not None:
values = self.apply_calibration(values, motor_names)
print(values)
return values
def read_running_average(self, motor_names: list[str] | None = None, linearize=False):
"""
Return the AVERAGE of the most recent self.buffer_size (or fewer, if not enough data) readings
for each joint, optionally applying calibration.
"""
if motor_names is None:
motor_names = self.joint_names
# Gather averaged readings from buffers
smoothed_vals = []
for name in motor_names:
buf = self.joint_buffer[name]
if len(buf) == 0:
# If no data has been read yet, fall back to last_d
smoothed_vals.append(self.last_d[name])
else:
# Otherwise, average over the existing buffer
smoothed_vals.append(np.mean(buf))
smoothed_vals = np.array(smoothed_vals, dtype=np.float32)
# Apply calibration if available
if self.calibration is not None:
smoothed_vals = self.apply_calibration(smoothed_vals, motor_names)
return smoothed_vals
def async_read(self):
"""
Continuously read from the serial buffer in its own thread,
store into `self.last_d` and also append to the rolling buffer (joint_buffer).
"""
while True:
if self.serial.in_waiting > 0:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
if len(vals) != 17:
continue
vals = [int(val) for val in vals]
d = {
"thumb_0": vals[0],
"thumb_1": vals[1],
"thumb_2": vals[2],
"thumb_3": vals[3],
"index_0": vals[4],
"index_1": vals[5],
"index_2": vals[6],
"middle_0": vals[7],
"middle_1": vals[8],
"middle_2": vals[9],
"ring_0": vals[10],
"ring_1": vals[11],
"ring_2": vals[12],
"pinky_0": vals[13],
"pinky_1": vals[14],
"pinky_2": vals[15],
"battery_voltage": vals[16],
}
# Update the last_d dictionary
self.last_d = d
# Also push these new values into the rolling buffers
for joint_name, joint_val in d.items():
self.joint_buffer[joint_name].append(joint_val)
def run_calibration(self):
print("\nMove arm to open position")
input("Press Enter to continue...")
open_pos_list = []
for _ in range(100):
open_pos = self.read()
open_pos_list.append(open_pos)
time.sleep(0.01)
open_pos = np.array(open_pos_list)
max_open_pos = open_pos.max(axis=0)
min_open_pos = open_pos.min(axis=0)
print(f"{max_open_pos=}")
print(f"{min_open_pos=}")
print("\nMove arm to closed position")
input("Press Enter to continue...")
closed_pos_list = []
for _ in range(100):
closed_pos = self.read()
closed_pos_list.append(closed_pos)
time.sleep(0.01)
closed_pos = np.array(closed_pos_list)
max_closed_pos = closed_pos.max(axis=0)
closed_pos[closed_pos < 1000] = 60000
min_closed_pos = closed_pos.min(axis=0)
print(f"{max_closed_pos=}")
print(f"{min_closed_pos=}")
open_pos = np.array([max_open_pos, max_closed_pos]).max(axis=0)
closed_pos = np.array([min_open_pos, min_closed_pos]).min(axis=0)
# INVERTION
# INVERTION
# INVERTION
# INVERTION
# INVERTION
# INVERTION
# INVERTION
for i, jname in enumerate(self.joint_names):
if jname in [
"thumb_0",
"thumb_3",
"index_2",
"middle_2",
"ring_2",
"pinky_2",
"index_0",
]:
tmp_pos = open_pos[i]
open_pos[i] = closed_pos[i]
closed_pos[i] = tmp_pos
print()
print(f"{open_pos=}")
print(f"{closed_pos=}")
homing_offset = [0] * len(self.joint_names)
drive_mode = [0] * len(self.joint_names)
calib_modes = [CalibrationMode.LINEAR.name] * len(self.joint_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": open_pos,
"end_pos": closed_pos,
"calib_mode": calib_modes,
"motor_names": self.joint_names,
}
file_path = "examples/hopejr/settings/hand_calib.pkl"
if not os.path.exists(file_path):
with open(file_path, "wb") as f:
pickle.dump(calib_dict, f)
print(f"Dictionary saved to {file_path}")
# return calib_dict
self.set_calibration(calib_dict)
def set_calibration(self, calibration: dict[str, list]):
self.calibration = calibration
def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
"""Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
a "zero position" at 0 degree.
Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees, if the motor
rotate more than a half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
Joints values are original in [0, 2**32[ (unsigned int32). Each motor are expected to complete a full rotation
when given a goal position that is + or - their resolution. For instance, feetech xl330-m077 have a resolution of 4096, and
at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
in the centered nominal degree range ]-180, 180[.
"""
if motor_names is None:
motor_names = self.joint_names
# Convert from unsigned int32 original range [0, 2**32] to signed float32 range
values = values.astype(np.float32)
for i, name in enumerate(motor_names):
calib_idx = self.calibration["motor_names"].index(name)
calib_mode = self.calibration["calib_mode"][calib_idx]
if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
start_pos = self.calibration["start_pos"][calib_idx]
end_pos = self.calibration["end_pos"][calib_idx]
# Rescale the present position to a nominal range [0, 100] %,
# useful for joints with linear motions like Aloha gripper
values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
if name == "pinky_1" and (values[i] < LOWER_BOUND_LINEAR):
values[i] = end_pos
else:
msg = (
f"Wrong motor position range detected for {name}. "
f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
f"but present value is {values[i]} %. "
"This might be due to a cable connection issue creating an artificial jump in motor values. "
"You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
)
print(msg)
# raise JointOutOfRangeError(msg)
return values
# def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
# """Inverse of `apply_calibration`."""
# if motor_names is None:
# motor_names = self.motor_names
# for i, name in enumerate(motor_names):
# calib_idx = self.calibration["motor_names"].index(name)
# calib_mode = self.calibration["calib_mode"][calib_idx]
# if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
# start_pos = self.calibration["start_pos"][calib_idx]
# end_pos = self.calibration["end_pos"][calib_idx]
# # Convert from nominal lnear range of [0, 100] % to
# # actual motor range of values which can be arbitrary.
# values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
# values = np.round(values).astype(np.int32)
# return values
class EncoderReader:
def __init__(self, serial_port="/dev/ttyUSB1", baud_rate=115200):
self.serial_port = serial_port
self.baud_rate = baud_rate
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
# Start a background thread to continuously read from the serial port
self.thread = threading.Thread(target=self.async_read, daemon=True)
self.thread.start()
# Store the latest encoder reading in this dictionary
self.last_d = {"encoder": 500}
def async_read(self):
while True:
# Read one line from serial
line = self.serial.readline().decode("utf-8").strip()
if line:
try:
val = int(line) # Parse the incoming line as integer
self.last_d["encoder"] = val
except ValueError:
# If we couldn't parse it as an integer, just skip
pass
def read(self):
"""
Returns the last encoder value that was read.
"""
return self.last_d["encoder"]
class Tac_Man:
def __init__(self, serial_port="/dev/ttyUSB1", baud_rate=115200):
self.serial_port = serial_port
self.baud_rate = baud_rate
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
# Start a background thread to continuously read from the serial port
self.thread = threading.Thread(target=self.async_read, daemon=True)
self.thread.start()
# Store the latest encoder readings in this list
self.last_d = [0, 0, 0] # Default values for three readings
def async_read(self):
while True:
# Read one line from serial
line = self.serial.readline().decode("utf-8").strip()
if line:
try:
# Parse the incoming line as three comma-separated integers
values = [int(val) for val in line.split(",")]
if len(values) == 3: # Ensure we have exactly three values
self.last_d = values
except ValueError:
# If parsing fails, skip this line
pass
def read(self):
"""
Returns the last encoder values that were read as a list of three integers.
"""
return self.last_d

examples/hopejr/notes (new file, 111 lines)

@@ -0,0 +1,111 @@
test and test4
installed serial and opencv
after pip install -e .
pip install -e ".[feetech]"
robot.hand_bus.read("Present_Position")
array([ 349, 799, 1000, 1004, 508, 503, 673, 608, 791, 390, 552,
506, 600, 565, 428, 379], dtype=int32)
robot.hand_bus.write("Goal_Position",[349,799,500,500,508,503,673,608,791,390,552,506,600,565,428,379])
robot.arm_bus.write("Goal_Position", [1825, 2045, 2010, 2035, 1414, 1800, 1615])
robot.arm_bus.read("Present_Position")
robot.arm_bus.write("Goal_Position", [1500], ["elbow_flex"])
robot.arm_bus.write("Goal_Position", [2000], ["wrist_yaw"])
ranges: [600-2300, 1500-2300, 1300-2800, 1000-2500, 600-2800,400-1700, 1300-2300]
shoulder_up,
shoulder forward,
shoulder yaw,
elbow_flex
wrist_yaw,
wrist_pitch,
wrist_roll
COM18
C:/Users/Lenovo/AppData/Local/Programs/Python/Python310/python.exe c:/Users/Lenovo/Documents/HuggingFace/lerobot/examples/test4.py
wrist pitch is broken
so the wrist motor was broken
and we didn't know which one it was because
if the chain has an issue we don't know how to locate which motor is at fault (cables are hard to remove)
to calibrate:
python lerobot/scripts/configure_motor.py \
--port /dev/ttyACM1 \
--brand feetech \
--model sts3250 \
--baudrate 1000000 \
--ID 2
python lerobot/scripts/configure_motor.py \
--port /dev/ttyACM0 \
--brand feetech \
--model sm8512bl \
--baudrate 115200 \
--ID 1
python lerobot/scripts/configure_motor.py \
--port /dev/ttyACM1 \
--brand feetech \
--model scs0009 \
--baudrate 1000000 \
--ID 30
why are the motors beeping?
#interpolate between start and end pos
robot.arm_bus.write("Goal_Position", [int((i*interp+j*(1-interp))) for i, j in zip(arm_calibration["start_pos"], arm_calibration["end_pos"])])
control maj M to look for stuff
set calibration is useless
move the joints to that position too
/home/nepyope/Desktop/HuggingFace/lerobot/lerobot/common/robot_devices/motors/feetech.py
theres clearly some lag, and its probably because of an out of range issue
# hand_calibration = robot.get_hand_calibration()
# joint = input("Enter joint name: ")
# j1 = f"{joint}_pinky_side"
# j2 = f"{joint}_thumb_side"
# encoder = EncoderReader("/dev/ttyUSB0", 115200)
# start_angle1 = hand_calibration['start_pos'][hand_calibration['motor_names'].index(j1)]
# end_angle1 = hand_calibration['end_pos'][hand_calibration['motor_names'].index(j1)]
# start_angle2 = hand_calibration['start_pos'][hand_calibration['motor_names'].index(j2)]
# end_angle2 = hand_calibration['end_pos'][hand_calibration['motor_names'].index(j2)]
# # start_angle = shoulder_calibration['start_pos'][shoulder_calibration['motor_names'].index(joint)]
# # end_angle = shoulder_calibration['end_pos'][shoulder_calibration['motor_names'].index(joint)]
# while True:
# angle1 = int(start_angle1+(end_angle1-start_angle1)*encoder.read()/1000)
# angle2 = int(start_angle2+(end_angle2-start_angle2)*encoder.read()/1000)
# robot.hand_bus.write("Goal_Position",angle1, [j1])
# robot.hand_bus.write("Goal_Position",angle2, [j2])
# print(angle1, angle2)
# time.sleep(0.1)
# print(robot.hand_bus.find_motor_indices())
# exit()
maybe divide the 3.3 by 2 and use that as a reference
https://jlcpcb.com/partdetail/23831236-OPA340UA_UMW/C22365307
-90 is good for the op amp

examples/hopejr/read.cs (new file, 52 lines)

@@ -0,0 +1,52 @@
#include <Arduino.h>
// Define multiplexer input pins
#define S0 5
#define S1 6
#define S2 8
#define S3 7
#define SENSOR_INPUT 4
#define SENSOR_COUNT 16
int rawVals[SENSOR_COUNT];
void measureRawValues() {
for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
// Drive the multiplexer select lines for channel i (bits 0-2 are inverted before being applied).
digitalWrite(S0, (i & 0b1) ^ 0b1);
digitalWrite(S1, ((i >> 1) & 0b1) ^ 0b1);
digitalWrite(S2, ((i >> 2) & 0b1) ^ 0b1);
digitalWrite(S3, (i >> 3) & 0b1);
delay(1);
rawVals[i] = analogRead(SENSOR_INPUT);
}
}
void printRawValues() {
for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
Serial.print(rawVals[i]);
if (i < SENSOR_COUNT - 1) Serial.print(" ");
}
Serial.println();
}
void setup() {
Serial.begin(115200);
pinMode(S0, OUTPUT);
pinMode(S1, OUTPUT);
pinMode(S2, OUTPUT);
pinMode(S3, OUTPUT);
digitalWrite(S0, LOW);
digitalWrite(S1, LOW);
digitalWrite(S2, LOW);
digitalWrite(S3, LOW);
}
void loop() {
measureRawValues();
printRawValues();
delay(1);
}

@@ -0,0 +1,52 @@
(content identical to examples/hopejr/read.cs above)

@@ -0,0 +1,52 @@
(content identical to examples/hopejr/read.cs above)

(new binary image file, 75 KiB, not shown)

(new binary image file, 3.2 KiB, not shown)


@@ -0,0 +1,74 @@
import numpy as np
from PIL import Image, ImageSequence
def coalesce_gif(im):
"""
Attempt to coalesce frames so each one is a full image.
This handles many (though not all) partial-frame GIFs.
"""
# Convert mode to RGBA
im = im.convert("RGBA")
# Prepare an accumulator the same size as the base frame
base = Image.new("RGBA", im.size)
frames = []
# Go through each frame
for frame in ImageSequence.Iterator(im):
base.alpha_composite(frame.convert("RGBA"))
frames.append(base.copy())
return frames
def remove_white_make_black(arr, threshold=250):
"""
For each pixel in arr (H,W,3), if R,G,B >= threshold, set to black (0,0,0).
This effectively 'removes' white so it won't affect the sum.
"""
mask = (arr[..., 0] >= threshold) & \
(arr[..., 1] >= threshold) & \
(arr[..., 2] >= threshold)
arr[mask] = 0 # set to black
def main():
# Load the animated GIF
gif = Image.open("input.gif")
# Coalesce frames so each is full-size
frames = coalesce_gif(gif)
if not frames:
print("No frames found!")
return
# Convert first frame to RGB array, initialize sum array
w, h = frames[0].size
sum_array = np.zeros((h, w, 3), dtype=np.uint16) # 16-bit to avoid overflow
# For each frame:
for f in frames:
# Convert to RGB
rgb = f.convert("RGB")
arr = np.array(rgb, dtype=np.uint16) # shape (H, W, 3)
# Remove near-white by setting it to black
remove_white_make_black(arr, threshold=250)
# Add to sum_array, then clamp to 255
sum_array += arr
np.clip(sum_array, 0, 255, out=sum_array)
# Convert sum_array back to 8-bit
sum_array = sum_array.astype(np.uint8)
# Finally, any pixel that stayed black is presumably "empty," so we set it to white
black_mask = (sum_array[..., 0] == 0) & \
(sum_array[..., 1] == 0) & \
(sum_array[..., 2] == 0)
sum_array[black_mask] = [255, 255, 255]
# Create final Pillow image
final_img = Image.fromarray(sum_array, mode="RGB")
final_img.save("result.png")
print("Done! Wrote result.png.")
if __name__ == "__main__":
main()


@@ -0,0 +1,31 @@
import time
import serial
import numpy as np
import matplotlib.pyplot as plt
# Import the motor bus (adjust the import path as needed)
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus
def main():
bus = FeetechMotorsBus(
port="/dev/ttyACM0",
motors={
"leader": [1, "scs0009"],
"follower": [2, "scs0009"]
},
protocol_version=1,
group_sync_read=False
)
bus.connect()
print(bus.read("Present_Position", "leader"))
bus.write("Torque_Enable", 0, ["leader"])
bus.write("Torque_Enable", 1, ["follower"])
for i in range(10000000):
time.sleep(0.01)
pos = bus.read("Present_Position", "leader")
if pos[0] > 1 and pos[0] < 1022:
bus.write("Goal_Position", pos, ["follower"])
if __name__ == "__main__":
main()

Binary file not shown.


@@ -0,0 +1,15 @@
robot:
arm_bus:
Lock: 0
Torque_Limit: 1000
Protection_Current: 500
Over_Current_Protection_Time: 10
Max_Torque_Limit: 1000
Overload_Torque: 40 # Play around with this
Protection_Time: 1000 # When does it kick in?
Protective_Torque: 1
Maximum_Acceleration: 100
Torque_Enable: 1
Acceleration: 30
hand_bus:
Acceleration: 100

Binary file not shown.


@@ -0,0 +1,61 @@
import time
import numpy as np
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus
def main():
# Instantiate the bus for a single motor on port /dev/ttyACM0.
arm_bus = FeetechMotorsBus(
port="/dev/ttyACM0",
motors={"wrist_pitch": [1, "scs0009"]},
protocol_version=1,
group_sync_read=False, # using individual read calls
)
arm_bus.connect()
# Configure continuous rotation mode.
arm_bus.write("Min_Angle_Limit", 0)
arm_bus.write("Max_Angle_Limit", 1024)
# For model "scs0009", the raw reading runs from 0 to ~1022.
resolution_max = 1022 # use 1022 as the effective maximum raw value
# Read initial raw motor position.
prev_raw = arm_bus.read("Present_Position", ["wrist_pitch"])[0]
print("Initial raw position:", prev_raw)
# Command continuous rotation.
arm_bus.write("Goal_Position", 1024)
# Initialize loop counter.
loops_count = 0
target_effective = 1780
tolerance = 50 # stop when effective position is within ±50 of target
while True:
current_raw = arm_bus.read("Present_Position", ["wrist_pitch"])[0]
# Detect wrap-around: if the previous reading was near the top (>= 1020)
# and current reading is low (< 100), count that as one full loop.
if prev_raw >= 1020 and current_raw < 100:
loops_count += 1
print(f"Wrap detected! loops_count increased to {loops_count}")
# Compute the effective position.
effective_position = loops_count * resolution_max + current_raw
print(f"Raw position: {current_raw} | loops_count: {loops_count} | Effective position: {effective_position}")
# Check if effective position is within tolerance of the target.
if abs(effective_position - target_effective) <= tolerance:
# Command motor to stop by setting the current raw position as goal.
arm_bus.write("Goal_Position", current_raw)
print(f"Target reached (effective position: {effective_position}). Stopping motor at raw position {current_raw}.")
break
prev_raw = current_raw
time.sleep(0.01) # 10 ms delay
time.sleep(1)
arm_bus.disconnect()
if __name__ == "__main__":
main()

examples/hopejr/teleop.py (new file, 226 lines)

@@ -0,0 +1,226 @@
from follower import HopeJuniorRobot
from leader import (
HomonculusArm,
HomonculusGlove,
EncoderReader
)
from visualizer import value_to_color
import time
import numpy as np
import pickle
import pygame
import typer
def main(
calibrate_glove: bool = typer.Option(False, "--calibrate-glove", help="Calibrate the glove"),
calibrate_exoskeleton: bool = typer.Option(False, "--calibrate-exoskeleton", help="Calibrate the exoskeleton"),
freeze_fingers: bool = typer.Option(False, "--freeze-fingers", help="Freeze the fingers"),
freeze_arm: bool = typer.Option(False, "--freeze-arm", help="Freeze the arm"),
show_loads: bool = typer.Option(False, "--show-loads", help="Show the loads in a GUI")):
robot = HopeJuniorRobot()
robot.connect_hand()
robot.connect_arm()
#read pos
print(robot.hand_bus.read("Present_Position"))
print(robot.arm_bus.read("Present_Position", "shoulder_pitch"))
print(robot.arm_bus.read("Present_Position",["shoulder_yaw","shoulder_roll","elbow_flex","wrist_roll","wrist_yaw","wrist_pitch"]))
#robot.arm_bus.write("Goal_Position", robot.arm_calib_dict["start_pos"][0]*1 +robot.arm_calib_dict["end_pos"][0]*0, ["wrist_roll"])
for i in range(10):
time.sleep(0.1)
robot.apply_arm_config('examples/hopejr/settings/config.yaml')
# #calibrate arm
arm_calibration = robot.get_arm_calibration()
exoskeleton = HomonculusArm(serial_port="/dev/tty.usbmodem1201")
if calibrate_exoskeleton:
exoskeleton.run_calibration(robot)
file_path = "examples/hopejr/settings/arm_calib.pkl"
with open(file_path, "rb") as f:
calib_dict = pickle.load(f)
print("Loaded dictionary:", calib_dict)
exoskeleton.set_calibration(calib_dict)
#calibrate hand
hand_calibration = robot.get_hand_calibration()
glove = HomonculusGlove(serial_port = "/dev/tty.usbmodem1101")
if calibrate_glove:
glove.run_calibration()
file_path = "examples/hopejr/settings/hand_calib.pkl"
with open(file_path, "rb") as f:
calib_dict = pickle.load(f)
print("Loaded dictionary:", calib_dict)
glove.set_calibration(calib_dict)
robot.hand_bus.set_calibration(hand_calibration)
robot.arm_bus.set_calibration(arm_calibration)
# Initialize Pygame
# pygame.init()
# # Set up the display
# screen = pygame.display.set_mode((800, 600))
# pygame.display.set_caption("Robot Hand Visualization")
# # Create hand structure with 16 squares and initial values
# hand_components = []
# # Add thumb (4 squares in diamond shape)
# thumb_positions = [
# (150, 300), (125, 350),
# (175, 350), (150, 400)
# ]
# for pos in thumb_positions:
# hand_components.append({"pos": pos, "value": 0})
# # Add fingers (4 fingers with 3 squares each in vertical lines)
# finger_positions = [
# (200, 100), # Index
# (250, 100), # Middle
# (300, 100), # Ring
# (350, 100) # Pinky
# ]
# for x, y in finger_positions:
# for i in range(3):
# hand_components.append({"pos": (x, y + i * 50), "value": 0})
for i in range(1000000000000000):
robot.apply_arm_config('examples/hopejr/settings/config.yaml')
#robot.arm_bus.write("Acceleration", 50, "shoulder_yaw")
joint_names = ["shoulder_pitch", "shoulder_yaw", "shoulder_roll", "elbow_flex", "wrist_roll", "wrist_yaw", "wrist_pitch"]
#only wrist roll
#joint_names = ["shoulder_pitch"]
joint_values = exoskeleton.read(motor_names=joint_names)
#joint_values = joint_values.round().astype(int)
joint_dict = {k: v for k, v in zip(joint_names, joint_values, strict=False)}
motor_values = []
motor_names = []
motor_names += ["shoulder_pitch", "shoulder_yaw", "shoulder_roll", "elbow_flex", "wrist_roll", "wrist_yaw", "wrist_pitch"]
#motor_names += ["shoulder_pitch"]
motor_values += [joint_dict[name] for name in motor_names]
#remove 50 from shoulder_roll
#motor_values += [joint_dict[name] for name in motor_names]
motor_values = np.array(motor_values)
motor_values = np.clip(motor_values, 0, 100)
print(motor_names, motor_values)
if not freeze_arm:
robot.arm_bus.write("Goal_Position", motor_values, motor_names)
if not freeze_fingers:#include hand
hand_joint_names = []
hand_joint_names += ["thumb_3", "thumb_2", "thumb_1", "thumb_0"]#, "thumb_3"]
hand_joint_names += ["index_0", "index_1", "index_2"]
hand_joint_names += ["middle_0", "middle_1", "middle_2"]
hand_joint_names += ["ring_0", "ring_1", "ring_2"]
hand_joint_names += ["pinky_0", "pinky_1", "pinky_2"]
hand_joint_values = glove.read(hand_joint_names)
hand_joint_values = hand_joint_values.round().astype(int)
hand_joint_dict = {k: v for k, v in zip(hand_joint_names, hand_joint_values, strict=False)}
hand_motor_values = []
hand_motor_names = []
# Thumb
hand_motor_names += ["thumb_basel_rotation", "thumb_mcp", "thumb_pip", "thumb_dip"]#, "thumb_MCP"]
hand_motor_values += [
hand_joint_dict["thumb_3"],
hand_joint_dict["thumb_2"],
hand_joint_dict["thumb_1"],
hand_joint_dict["thumb_0"]
]
# # Index finger
index_splay = 0.1
hand_motor_names += ["index_flexor", "index_pinky_side", "index_thumb_side"]
hand_motor_values += [
hand_joint_dict["index_2"],
(100 - hand_joint_dict["index_0"]) * index_splay + hand_joint_dict["index_1"] * (1 - index_splay),
hand_joint_dict["index_0"] * index_splay + hand_joint_dict["index_1"] * (1 - index_splay),
]
# Middle finger
middle_splay = 0.1
hand_motor_names += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
hand_motor_values += [
hand_joint_dict["middle_2"],
hand_joint_dict["middle_0"] * middle_splay + hand_joint_dict["middle_1"] * (1 - middle_splay),
(100 - hand_joint_dict["middle_0"]) * middle_splay + hand_joint_dict["middle_1"] * (1 - middle_splay),
]
# # Ring finger
ring_splay = 0.1
hand_motor_names += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
hand_motor_values += [
hand_joint_dict["ring_2"],
(100 - hand_joint_dict["ring_0"]) * ring_splay + hand_joint_dict["ring_1"] * (1 - ring_splay),
hand_joint_dict["ring_0"] * ring_splay + hand_joint_dict["ring_1"] * (1 - ring_splay),
]
# # Pinky finger
pinky_splay = -.1
hand_motor_names += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
hand_motor_values += [
hand_joint_dict["pinky_2"],
hand_joint_dict["pinky_0"] * pinky_splay + hand_joint_dict["pinky_1"] * (1 - pinky_splay),
(100 - hand_joint_dict["pinky_0"]) * pinky_splay + hand_joint_dict["pinky_1"] * (1 - pinky_splay),
]
hand_motor_values = np.array(hand_motor_values)
hand_motor_values = np.clip(hand_motor_values, 0, 100)
robot.hand_bus.write("Acceleration", 255, hand_motor_names)
robot.hand_bus.write("Goal_Position", hand_motor_values, hand_motor_names)
# if i%20==0 and i > 100:
# try:
# loads = robot.hand_bus.read("Present_Load")
# for i, comp in enumerate(hand_components):
# # Wave oscillates between 0 and 2024:
# # Center (1012) +/- 1012 * sin(...)
# comp["value"] = loads[i]
# except:
# pass
time.sleep(0.01)
# for event in pygame.event.get():
# if event.type == pygame.QUIT:
# robot.hand_bus.disconnect()
# robot.arm_bus.disconnect()
# exit()
# # Check for user pressing 'q' to quit
# if event.type == pygame.KEYDOWN:
# if event.key == pygame.K_q:
# robot.hand_bus.disconnect()
# robot.arm_bus.disconnect()
# exit()
# # Draw background
# screen.fill((0, 0, 0)) # Black background
# # Draw hand components
# for comp in hand_components:
# x, y = comp["pos"]
# color = value_to_color(comp["value"])
# pygame.draw.rect(screen, color, (x, y, 30, 30))
# pygame.display.flip()
if __name__ == "__main__":
typer.run(main)

View File

@@ -0,0 +1,135 @@
import serial
import threading
import time
import numpy as np
import matplotlib.pyplot as plt
# Thread function to read from a serial port continuously until stop_event is set.
def read_serial(port, baudrate, stop_event, data_list):
try:
ser = serial.Serial(port, baudrate, timeout=1)
except Exception as e:
print(f"Error opening {port}: {e}")
return
while not stop_event.is_set():
try:
line = ser.readline().decode('utf-8').strip()
except Exception as e:
print(f"Decode error on {port}: {e}")
continue
if line:
try:
# Split the line into integer values.
values = [int(x) for x in line.split()]
# For ACM1, ignore the extra value if present.
if len(values) >= 16:
if len(values) > 16:
values = values[:16]
# Save the timestamp (relative to start) with the sensor readings.
timestamp = time.time()
data_list.append((timestamp, values))
except Exception as e:
print(f"Error parsing line from {port}: '{line}' -> {e}")
ser.close()
def main():
# --- Configuration ---
# Set your serial port names here (adjust for your system)
acm0_port = "/dev/ttyACM0" # Example for Linux (or "COM3" on Windows)
acm1_port = "/dev/ttyACM1" # Example for Linux (or "COM4" on Windows)
baudrate = 115200
# Data storage for each device:
data_acm0 = [] # Will hold tuples of (timestamp, [16 sensor values])
data_acm1 = []
# Event to signal threads to stop reading.
stop_event = threading.Event()
# Create and start reader threads.
thread_acm0 = threading.Thread(target=read_serial, args=(acm0_port, baudrate, stop_event, data_acm0))
thread_acm1 = threading.Thread(target=read_serial, args=(acm1_port, baudrate, stop_event, data_acm1))
thread_acm0.start()
thread_acm1.start()
# Record data for 10 seconds.
record_duration = 10 # seconds
start_time = time.time()
time.sleep(record_duration)
stop_event.set() # signal threads to stop
# Wait for both threads to finish.
thread_acm0.join()
thread_acm1.join()
print("Finished recording.")
# --- Process the Data ---
# Convert lists of (timestamp, values) to numpy arrays.
# Compute time relative to the start of the recording.
times_acm0 = np.array([t - start_time for t, _ in data_acm0])
sensor_acm0 = np.array([vals for _, vals in data_acm0]) # shape (N0, 16)
times_acm1 = np.array([t - start_time for t, _ in data_acm1])
sensor_acm1 = np.array([vals for _, vals in data_acm1]) # shape (N1, 16)
# --- Plot 1: Overlapping Time Series ---
plt.figure(figsize=(12, 8))
# Plot each sensor from ACM0 in red.
for i in range(16):
plt.plot(times_acm0, sensor_acm0[:, i], color='red', alpha=0.7,
label='ACM0 Sensor 1' if i == 0 else None)
# Plot each sensor from ACM1 in blue.
for i in range(16):
plt.plot(times_acm1, sensor_acm1[:, i], color='blue', alpha=0.7,
label='ACM1 Sensor 1' if i == 0 else None)
plt.xlabel("Time (s)")
plt.ylabel("Sensor Reading")
plt.title("Overlapping Sensor Readings (ACM0 in Red, ACM1 in Blue)")
plt.legend()
plt.tight_layout()
plt.savefig("overlapping_sensor_readings.png", dpi=300)
plt.close()
print("Saved overlapping_sensor_readings.png")
# --- Plot 2: Variance of Noise for Each Sensor ---
# Compute variance (over time) for each sensor channel.
variance_acm0 = np.var(sensor_acm0, axis=0)
variance_acm1 = np.var(sensor_acm1, axis=0)
sensor_numbers = np.arange(1, 17)
bar_width = 0.35
plt.figure(figsize=(12, 6))
plt.bar(sensor_numbers - bar_width/2, variance_acm0, bar_width, color='red', label='ACM0')
plt.bar(sensor_numbers + bar_width/2, variance_acm1, bar_width, color='blue', label='ACM1')
plt.xlabel("Sensor Number")
plt.ylabel("Variance")
plt.title("Noise Variance per Sensor")
plt.xticks(sensor_numbers)
plt.legend()
plt.tight_layout()
plt.savefig("sensor_variance.png", dpi=300)
plt.close()
print("Saved sensor_variance.png")
# --- Plot 3: Difference Between ACM0 and ACM1 Readings ---
# Since the two devices may not sample at exactly the same time,
# we interpolate ACM1's data onto ACM0's time base for each sensor.
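# Worked example of the resampling below (illustrative numbers): np.interp([0.0, 0.5, 1.0], [0.0, 1.0], [10, 20])
# returns [10.0, 15.0, 20.0]; each ACM1 channel is linearly interpolated onto ACM0's timestamps in the same way.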
plt.figure(figsize=(12, 8))
for i in range(16):
if len(times_acm1) > 1 and len(times_acm0) > 1:
interp_acm1 = np.interp(times_acm0, times_acm1, sensor_acm1[:, i])
diff = sensor_acm0[:, i] - interp_acm1
plt.plot(times_acm0, diff, label=f"Sensor {i+1}")
plt.xlabel("Time (s)")
plt.ylabel("Difference (ACM0 - ACM1)")
plt.title("Difference in Sensor Readings")
plt.legend(fontsize='small', ncol=2)
plt.tight_layout()
plt.savefig("sensor_differences.png", dpi=300)
plt.close()
print("Saved sensor_differences.png")
if __name__ == "__main__":
main()

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.5 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 87 KiB

View File

@@ -0,0 +1,84 @@
import time
import serial
import numpy as np
import matplotlib.pyplot as plt
# Import the motor bus (adjust the import path as needed)
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus
def main():
# -------------------------------
# Setup the motor bus (ACM0)
# -------------------------------
arm_bus = FeetechMotorsBus(
port="/dev/ttyACM0",
motors={
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
arm_bus.connect()
# -------------------------------
# Setup the serial connection for sensor (ACM1)
# -------------------------------
try:
ser = serial.Serial("/dev/ttyACM1", 115200, timeout=1)
except Exception as e:
print(f"Error opening serial port /dev/ttyACM1: {e}")
return
# Lists to store the motor positions and sensor values.
positions = []
sensor_values = []
# -------------------------------
# Loop: move motor and collect sensor data
# -------------------------------
# Sweep the motor from 2800 down toward 1500, decrementing by 10 each step.
for pos in range(2800, 1500, -10): # 2800 down to 1510 inclusive (the range stop value is exclusive)
# Command the motor to go to position 'pos'
arm_bus.write("Goal_Position", pos, ["wrist_pitch"])
# Wait a short period for the motor to move and the sensor to update.
time.sleep(0.01)
# Read one line from the sensor device.
sensor_val = np.nan # default if reading fails
try:
line = ser.readline().decode('utf-8').strip()
if line:
# Split the line into parts and convert each part to int.
parts = line.split()
# Ensure there are enough values (we expect at least 15 values)
if len(parts) >= 15:
values = [int(x) for x in parts]
# Use the 15th value (index 14)
sensor_val = values[14]
except Exception as e:
print(f"Error parsing sensor data: {e}")
positions.append(pos)
sensor_values.append(sensor_val)
print(f"Motor pos: {pos} | Sensor 15th value: {sensor_val}")
# Move the motor back to the starting position (2800).
arm_bus.write("Goal_Position", 2800, ["wrist_pitch"])
# -------------------------------
# Plot the data: Motor Angle vs. Sensor 15th Value
# -------------------------------
plt.figure(figsize=(10, 6))
plt.plot(positions, sensor_values, marker='o', linestyle='-')
plt.xlabel("Motor Angle")
plt.ylabel("Sensor 15th Value")
plt.title("Motor Angle vs Sensor 15th Value")
plt.grid(True)
plt.savefig("asd.png", dpi=300)
plt.close()
print("Plot saved as asd.png")
# Close the serial connection.
ser.close()
if __name__ == "__main__":
main()

682
examples/hopejr/test.py Normal file
View File

@@ -0,0 +1,682 @@
import threading
import time
from typing import Callable
import cv2
import numpy as np
# from qai_hub_models.models.mediapipe_hand.app import MediaPipeHandApp
# from qai_hub_models.models.mediapipe_hand.model import (
# MediaPipeHand,
# )
# from qai_hub_models.utils.image_processing import (
# app_to_net_image_inputs,
# )
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
LOWER_BOUND_LINEAR = -100
UPPER_BOUND_LINEAR = 200
import serial
class HomonculusGlove:
def __init__(self):
self.serial_port = "COM10"
self.baud_rate = 115200
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
self.thread = threading.Thread(target=self.async_read)
self.thread.start()
self.last_d = {
"thumb_0": 100,
"thumb_1": 100,
"thumb_2": 100,
"thumb_3": 100,
"index_0": 100,
"index_1": 100,
"index_2": 100,
"middle_0": 100,
"middle_1": 100,
"middle_2": 100,
"ring_0": 100,
"ring_1": 100,
"ring_2": 100,
"pinky_0": 100,
"pinky_1": 100,
"pinky_2": 100,
"battery_voltage": 100,
}
self.calibration = None
@property
def joint_names(self):
return list(self.last_d.keys())
def read(self, motor_names: list[str] | None = None):
if motor_names is None:
motor_names = self.joint_names
values = np.array([self.last_d[k] for k in motor_names])
print(motor_names)
print(values)
if self.calibration is not None:
values = self.apply_calibration(values, motor_names)
print(values)
return values
def async_read(self):
while True:
if self.serial.in_waiting > 0:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
if len(vals) != 17:
continue
vals = [int(val) for val in vals]
d = {
"thumb_0": vals[0],
"thumb_1": vals[1],
"thumb_2": vals[2],
"thumb_3": vals[3],
"index_0": vals[4],
"index_1": vals[5],
"index_2": vals[6],
"middle_0": vals[7],
"middle_1": vals[8],
"middle_2": vals[9],
"ring_0": vals[10],
"ring_1": vals[11],
"ring_2": vals[12],
"pinky_0": vals[13],
"pinky_1": vals[14],
"pinky_2": vals[15],
"battery_voltage": vals[16],
}
self.last_d = d
# print(d.values())
def run_calibration(self):
print("\nMove arm to open position")
input("Press Enter to continue...")
open_pos_list = []
for _ in range(300):
open_pos = self.read()
open_pos_list.append(open_pos)
time.sleep(0.01)
open_pos = np.array(open_pos_list)
max_open_pos = open_pos.max(axis=0)
min_open_pos = open_pos.min(axis=0)
print(f"{max_open_pos=}")
print(f"{min_open_pos=}")
print("\nMove arm to closed position")
input("Press Enter to continue...")
closed_pos_list = []
for _ in range(300):
closed_pos = self.read()
closed_pos_list.append(closed_pos)
time.sleep(0.01)
closed_pos = np.array(closed_pos_list)
max_closed_pos = closed_pos.max(axis=0)
closed_pos[closed_pos < 1000] = 60000
min_closed_pos = closed_pos.min(axis=0)
print(f"{max_closed_pos=}")
print(f"{min_closed_pos=}")
open_pos = np.array([max_open_pos, max_closed_pos]).max(axis=0)
closed_pos = np.array([min_open_pos, min_closed_pos]).min(axis=0)
# INVERSION: these joints report raw values that run in the opposite direction,
# so swap their open/closed positions to keep the calibration consistent.
for i, jname in enumerate(self.joint_names):
if jname in ["thumb_0", "thumb_3", "index_2", "middle_2", "ring_2", "pinky_0", "pinky_2"]:
tmp_pos = open_pos[i]
open_pos[i] = closed_pos[i]
closed_pos[i] = tmp_pos
print()
print(f"{open_pos=}")
print(f"{closed_pos=}")
homing_offset = [0] * len(self.joint_names)
drive_mode = [0] * len(self.joint_names)
calib_modes = [CalibrationMode.LINEAR.name] * len(self.joint_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": open_pos,
"end_pos": closed_pos,
"calib_mode": calib_modes,
"motor_names": self.joint_names,
}
# return calib_dict
self.set_calibration(calib_dict)
def set_calibration(self, calibration: dict[str, list]):
self.calibration = calibration
def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
"""Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
a "zero position" at 0 degree.
Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees, if the motor
rotate more than a half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
Joints values are original in [0, 2**32[ (unsigned int32). Each motor are expected to complete a full rotation
when given a goal position that is + or - their resolution. For instance, feetech xl330-m077 have a resolution of 4096, and
at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
in the centered nominal degree range ]-180, 180[.
"""
if motor_names is None:
motor_names = self.motor_names
# Convert from unsigned int32 original range [0, 2**32] to signed float32 range
values = values.astype(np.float32)
for i, name in enumerate(motor_names):
calib_idx = self.calibration["motor_names"].index(name)
calib_mode = self.calibration["calib_mode"][calib_idx]
if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
start_pos = self.calibration["start_pos"][calib_idx]
end_pos = self.calibration["end_pos"][calib_idx]
# Rescale the present position to a nominal range [0, 100] %,
# useful for joints with linear motions like Aloha gripper
values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
if name == "pinky_1" and (values[i] < LOWER_BOUND_LINEAR):
values[i] = end_pos
else:
msg = (
f"Wrong motor position range detected for {name}. "
f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
f"but present value is {values[i]} %. "
"This might be due to a cable connection issue creating an artificial jump in motor values. "
"You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
)
print(msg)
# raise JointOutOfRangeError(msg)
return values
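# Worked example of the LINEAR rescaling above (illustrative numbers, not real calibration data):
# with start_pos = 1500 and end_pos = 2500, a raw reading of 1750 maps to
# (1750 - 1500) / (2500 - 1500) * 100 = 25.0, i.e. the joint is 25% of the way from open to closed.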
# def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
# """Inverse of `apply_calibration`."""
# if motor_names is None:
# motor_names = self.motor_names
# for i, name in enumerate(motor_names):
# calib_idx = self.calibration["motor_names"].index(name)
# calib_mode = self.calibration["calib_mode"][calib_idx]
# if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
# start_pos = self.calibration["start_pos"][calib_idx]
# end_pos = self.calibration["end_pos"][calib_idx]
# # Convert from nominal lnear range of [0, 100] % to
# # actual motor range of values which can be arbitrary.
# values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
# values = np.round(values).astype(np.int32)
# return values
class HopeJuniorRobot:
def __init__(self):
self.arm_bus = FeetechMotorsBus(
port="COM14",
motors={
# "motor1": (2, "sts3250"),
# "motor2": (1, "scs0009"),
"shoulder_pitch": [1, "sts3250"],
"shoulder_yaw": [2, "sts3215"], # TODO: sts3250
"shoulder_roll": [3, "sts3215"], # TODO: sts3250
"elbow_flex": [4, "sts3250"],
"wrist_roll": [5, "sts3215"],
"wrist_yaw": [6, "sts3215"],
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
self.hand_bus = FeetechMotorsBus(
port="COM15",
motors={
"thumb_basel_rotation": [30, "scs0009"],
"thumb_flexor": [27, "scs0009"],
"thumb_pinky_side": [26, "scs0009"],
"thumb_thumb_side": [28, "scs0009"],
"index_flexor": [25, "scs0009"],
"index_pinky_side": [31, "scs0009"],
"index_thumb_side": [32, "scs0009"],
"middle_flexor": [24, "scs0009"],
"middle_pinky_side": [33, "scs0009"],
"middle_thumb_side": [34, "scs0009"],
"ring_flexor": [21, "scs0009"],
"ring_pinky_side": [36, "scs0009"],
"ring_thumb_side": [35, "scs0009"],
"pinky_flexor": [23, "scs0009"],
"pinky_pinky_side": [38, "scs0009"],
"pinky_thumb_side": [37, "scs0009"],
},
protocol_version=1,
group_sync_read=False,
)
def get_hand_calibration(self):
homing_offset = [0] * len(self.hand_bus.motor_names)
drive_mode = [0] * len(self.hand_bus.motor_names)
start_pos = [
500,
900,
1000,
0,
100,
250,
750,
100,
400,
150,
100,
120,
980,
100,
950,
750,
]
end_pos = [
500 - 250,
900 - 300,
1000 - 550,
0 + 550,
1000,
250 + 700,
750 - 700,
1000,
400 + 700,
150 + 700,
1000,
120 + 700,
980 - 700,
1000,
950 - 700,
750 - 700,
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.hand_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.hand_bus.motor_names,
}
return calib_dict
def connect(self):
self.arm_bus.connect()
self.hand_bus.connect()
ESCAPE_KEY_ID = 27
def capture_and_display_processed_frames(
frame_processor: Callable[[np.ndarray], np.ndarray],
window_display_name: str,
cap_device: int = 0,
) -> None:
"""
Capture frames from the given input camera device, run them through
the frame processor, and display the outputs in a window with the given name.
User should press Esc to exit.
Inputs:
frame_processor: Callable[[np.ndarray], np.ndarray]
Processes frames.
Input and output are numpy arrays of shape (H W C) with BGR channel layout and dtype uint8 / byte.
window_display_name: str
Name of the window used to display frames.
cap_device: int
Identifier for the camera to use to capture frames.
"""
cv2.namedWindow(window_display_name)
capture = cv2.VideoCapture(cap_device)
if not capture.isOpened():
raise ValueError("Unable to open video capture.")
frame_count = 0
has_frame, frame = capture.read()
while has_frame:
assert isinstance(frame, np.ndarray)
frame_count = frame_count + 1
# mirror frame
frame = np.ascontiguousarray(frame[:, ::-1, ::-1])
# process & show frame
processed_frame = frame_processor(frame)
cv2.imshow(window_display_name, processed_frame[:, :, ::-1])
has_frame, frame = capture.read()
key = cv2.waitKey(1)
if key == ESCAPE_KEY_ID:
break
capture.release()
def main():
robot = HopeJuniorRobot()
robot.connect()
# robot.hand_bus.calibration = None
# breakpoint()
# print(robot.arm_bus.read("Present_Position"))
robot.arm_bus.write("Torque_Enable", 1)
robot.arm_bus.write("Acceleration", 20)
robot.arm_bus.read("Acceleration")
robot.arm_bus.write("Goal_Position", calibration["start_pos"])
exit()
calibration = robot.get_hand_calibration()
robot.arm_bus.write("Goal_Position", calibration["start_pos"])
# robot.hand_bus.write("Goal_Position", calibration["end_pos"][:4], robot.hand_bus.motor_names[:4])
robot.hand_bus.set_calibration(calibration)
lol = 1
# # print(motors_bus.write("Goal_Position", 500))
# print(robot.hand_bus.read("Present_Position"))
# # pos = hand_bus.read("Present_Position")
# # hand_bus.write("Goal_Position", pos[0]+20, hand_bus.motor_names[0])
# # hand_bus.write("Goal_Position", pos[i]+delta, hand_bus.motor_names[i])
# robot.hand_bus.read("Acceleration")
# robot.hand_bus.write("Acceleration", 10)
# sleep = 1
# # robot.hand_bus.write(
# # "Goal_Position", [glove.last_d['index_2']-1500,300,300], ["index_pinky_side", "index_flexor", "index_thumb_side"]
# # )
# #time.sleep(sleep)
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 0, 0], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 100, 600], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
# )
# time.sleep(sleep)
# breakpoint()
glove = HomonculusGlove()
glove.run_calibration()
# while True:
# joint_names = ["index_1", "index_2"]
# joint_values = glove.read(joint_names)
# print(joint_values)
input()
while True:
joint_names = []
joint_names += ["thumb_0", "thumb_2", "thumb_3"]
joint_names += ["index_1", "index_2"]
joint_names += ["middle_1", "middle_2"]
joint_names += ["ring_1", "ring_2"]
joint_names += ["pinky_1", "pinky_2"]
joint_values = glove.read(joint_names)
joint_values = joint_values.round().astype(int)
joint_dict = {k: v for k, v in zip(joint_names, joint_values, strict=False)}
motor_values = []
motor_names = []
motor_names += ["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"]
motor_values += [
joint_dict["thumb_3"],
joint_dict["thumb_0"],
joint_dict["thumb_2"],
joint_dict["thumb_2"],
]
motor_names += ["index_flexor", "index_pinky_side", "index_thumb_side"]
motor_values += [joint_dict["index_2"], joint_dict["index_1"], joint_dict["index_1"]]
motor_names += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
motor_values += [joint_dict["middle_2"], joint_dict["middle_1"], joint_dict["middle_1"]]
motor_names += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
motor_values += [joint_dict["ring_2"], joint_dict["ring_1"], joint_dict["ring_1"]]
motor_names += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
motor_values += [joint_dict["pinky_2"], joint_dict["pinky_1"], joint_dict["pinky_1"]]
motor_values = np.array(motor_values)
motor_values = np.clip(motor_values, 0, 100)
robot.hand_bus.write("Goal_Position", motor_values, motor_names)
time.sleep(0.02)
while True:
# print(glove.read()['index_2']-1500)
glove_index_flexor = glove.read()["index_2"] - 1500
glove_index_subflexor = glove.read()["index_1"] - 1500
glove_index_side = glove.read()["index_0"] - 2100
vals = [glove_index_flexor, 1000 - (glove_index_subflexor), glove_index_subflexor]
keys = ["index_flexor", "index_pinky_side", "index_thumb_side"]
glove_middle_flexor = glove.read()["middle_2"] - 1500
glove_middle_subflexor = 1000 - (glove.read()["middle_1"] - 1700)
vals += [glove_middle_flexor, glove_middle_subflexor, glove_middle_subflexor - 200]
keys += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
glove_ring_flexor = glove.read()["ring_2"] - 1300
print(glove_ring_flexor)
glove_ring_subflexor = glove.read()["ring_1"] - 1100
vals += [glove_ring_flexor, 1000 - glove_ring_subflexor, glove_ring_subflexor]
keys += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
glove_pinky_flexor = glove.read()["pinky_2"] - 1500
glove_pinky_subflexor = glove.read()["pinky_1"] - 1300
vals += [300 + glove_pinky_flexor, max(1000 - glove_pinky_subflexor - 100, 0), glove_pinky_subflexor]
keys += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
robot.hand_bus.write("Goal_Position", vals, keys)
time.sleep(0.1)
time.sleep(3)
def move_arm(loop=10):
sleep = 1
for i in range(loop):
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 2195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1457, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 2357, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 974, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 2674, 1957, 1695])
time.sleep(sleep + 2)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 1369, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 1330, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [2381, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1681, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
def move_hand(loop=10):
sleep = 0.5
for i in range(loop):
robot.hand_bus.write(
"Goal_Position",
[500, 1000, 0, 1000],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 1000, 150], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 100, 700], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[500, 1000 - 250, 0 + 300, 1000 - 200],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 450, 100 + 400, 100 + 400],
["index_flexor", "index_pinky_side", "index_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 350, 1000 - 450, 150 + 450],
["middle_flexor", "middle_pinky_side", "middle_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 650, 200 + 350, 0 + 350],
["ring_flexor", "ring_pinky_side", "ring_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 450, 100 + 400, 700 - 400],
["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"],
)
time.sleep(sleep)
move_hand(3)
move_arm(1)
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor() as executor:
executor.submit(move_arm)
executor.submit(move_hand)
# initial position
for i in range(3):
robot.hand_bus.write(
"Goal_Position", [500, 1000, 0, 1000, 100, 950, 100, 100, 1000, 150, 200, 200, 0, 200, 100, 700]
)
time.sleep(1)
# for i in range(3):
# robot.hand_bus.write("Goal_Position", [500, 1000-150, 0+250, 1000-150,
# 100+300, 950-250, 100+250,
# 100+200, 1000-300, 150+300,
# 200+500, 200+200, 0+200,
# 200+300, 100+200, 700-200])
# time.sleep(1)
# camera = 0
# score_threshold = 0.95
# iou_threshold = 0.3
# app = MediaPipeHandApp(MediaPipeHand.from_pretrained(), score_threshold, iou_threshold)
# def frame_processor(frame: np.ndarray) -> np.ndarray:
# # Input Prep
# NHWC_int_numpy_frames, NCHW_fp32_torch_frames = app_to_net_image_inputs(frame)
# # Run Bounding Box & Keypoint Detector
# batched_selected_boxes, batched_selected_keypoints = app._run_box_detector(NCHW_fp32_torch_frames)
# # The region of interest ( bounding box of 4 (x, y) corners).
# # list[torch.Tensor(shape=[Num Boxes, 4, 2])],
# # where 2 == (x, y)
# #
# # A list element will be None if there is no selected ROI.
# batched_roi_4corners = app._compute_object_roi(batched_selected_boxes, batched_selected_keypoints)
# # selected landmarks for the ROI (if any)
# # list[torch.Tensor(shape=[Num Selected Landmarks, K, 3])],
# # where K == number of landmark keypoints, 3 == (x, y, confidence)
# #
# # A list element will be None if there is no ROI.
# landmarks_out = app._run_landmark_detector(NHWC_int_numpy_frames, batched_roi_4corners)
# app._draw_predictions(
# NHWC_int_numpy_frames,
# batched_selected_boxes,
# batched_selected_keypoints,
# batched_roi_4corners,
# *landmarks_out,
# )
# return NHWC_int_numpy_frames[0]
# capture_and_display_processed_frames(frame_processor, "QAIHM Mediapipe Hand Demo", camera)
if __name__ == "__main__":
main()

231
examples/hopejr/utils.py Normal file
View File

@@ -0,0 +1,231 @@
#robot.arm_bus.write("Acceleration", [20], ["shoulder_pitch"])
####DEBUGGER####################
# joint = input("Enter joint name: ")
# encoder = EncoderReader("/dev/ttyUSB1", 115200)
# start_angle = arm_calibration['start_pos'][arm_calibration['motor_names'].index(joint)]
# end_angle = arm_calibration['end_pos'][arm_calibration['motor_names'].index(joint)]
# # start_angle = shoulder_calibration['start_pos'][shoulder_calibration['motor_names'].index(joint)]
# # end_angle = shoulder_calibration['end_pos'][shoulder_calibration['motor_names'].index(joint)]
# while True:
# angle = int(start_angle+(end_angle-start_angle)*encoder.read()/1000)
# # robot.shoulder_bus.set_bus_baudrate(115200)
# # robot.shoulder_bus.write("Goal_Position",angle, [joint])
# robot.shoulder_bus.set_bus_baudrate(1000000)
# robot.arm_bus.write("Goal_Position",angle, [joint])
# print(angle)
# time.sleep(0.1)
#####SAFETY CHECKS EXPLAINED#####
#There are two built-in safety checks: one based on load and one based on current.
#Current: when Present_Current exceeds Protection_Current, the servo waits Over_Current_Protection_Time (expressed in ms) and then sets Torque_Enable to 0.
#Load: when Present_Load exceeds Max_Torque_Limit*Overload_Torque (expressed as a percentage), the servo waits Protection_Time (expressed in ms)
#and then reduces Max_Torque_Limit to Protective_Torque.
#Although Min/Max_Angle_Limit, Max_Temperature_Limit and Min/Max_Voltage_Limit can be set, no safety checks are implemented for these values.
#(A consolidated helper sketch follows the method 1 / method 2 snippets below.)
#robot.arm_bus.set_calibration(arm_calibration)
#method 1
# robot.arm_bus.write("Overload_Torque", 80)
# robot.arm_bus.write("Protection_Time", 10)
# robot.arm_bus.write("Protective_Torque", 1)
# robot.arm_bus.write("Protection_Current", 200,["shoulder_pitch"])
# robot.arm_bus.write("Over_Current_Protection_Time", 10)
#method 2
# robot.arm_bus.write("Protection_Current", 500,["shoulder_pitch"])
# robot.arm_bus.write("Over_Current_Protection_Time", 10)
# robot.arm_bus.write("Max_Torque_Limit", 1000)
# robot.arm_bus.write("Overload_Torque", 40)
# robot.arm_bus.write("Protection_Time", 10)
# robot.arm_bus.write("Protective_Torque", 1)
# robot.shoulder_bus.set_bus_baudrate(115200)
# robot.shoulder_bus.write("Goal_Position",2500)
# exit()
######LOGGER####################
# from test_torque.log_and_plot_feetech import log_and_plot_params
# params_to_log = [
# "Protection_Current",
# "Present_Current",
# "Max_Torque_Limit",
# "Protection_Time",
# "Overload_Torque",
# "Present_Load",
# "Present_Position",
# ]
# servo_names = ["shoulder_pitch"]
# servo_data, timestamps = log_and_plot_params(robot.shoulder_bus, params_to_log, servo_names, test_id="shoulder_pitch")
# exit()
#robot.arm_bus.write("Goal_Position",2300, ["shoulder_pitch"])
# dt = 2
# steps = 4
# max_pos = 1500
# min_pos = 2300
# increment = (max_pos - min_pos) / steps
# # Move from min_pos to max_pos in steps
# for i in range(steps + 1): # Include the last step
# current_pos = min_pos + int(i * increment)
# robot.arm_bus.write("Goal_Position", [current_pos], ["shoulder_pitch"])
# time.sleep(dt)
# # Move back from max_pos to min_pos in steps
# for i in range(steps + 1): # Include the last step
# current_pos = max_pos - int(i * increment)
# robot.arm_bus.write("Goal_Position", [current_pos], ["shoulder_pitch"])
# time.sleep(dt)
#demo to show how sending a lot of values makes the robot shake
# # Step increment
#
# # Move from min_pos to max_pos in steps
# for i in range(steps + 1): # Include the last step
# current_pos = min_pos + int(i * increment)
# robot.arm_bus.write("Goal_Position", [current_pos], ["elbow_flex"])
# time.sleep(dt)
# # Move back from max_pos to min_pos in steps
# for i in range(steps + 1): # Include the last step
# current_pos = max_pos - int(i * increment)
# robot.arm_bus.write("Goal_Position", [current_pos], ["elbow_flex"])
# time.sleep(dt)
# exit()
#robot.arm_bus.write("Goal_Position", a # shoulder_calibration = robot.get_shoulder_calibration()
# print(shoulder_calibration)m_calibration["start_pos"])
# robot.arm_bus.write("Over_Current_Protection_Time", 50)
# robot.arm_bus.write("Protection_Current", 310, ["shoulder_pitch"])
# robot.arm_bus.write("Overload_Torque", 80, ["shoulder_pitch"])
# robot.arm_bus.write("Protection_Time", 100, ["shoulder_pitch"])
# robot.arm_bus.write("Over_Current_Protection_Time", 50, ["shoulder_pitch"])
# robot.arm_bus.write("Protective_Torque", 20, ["shoulder_pitch"])
# robot.arm_bus.write("Goal_Position", [600],["shoulder_pitch"])
# from test_torque.log_and_plot_feetech import log_and_plot_params
# params_to_log = [
# "Present_Current",
# "Protection_Current",
# "Overload_Torque",
# "Protection_Time",
# "Protective_Torque",
# "Present_Load",
# "Present_Position",
# ]
# servo_names = ["shoulder_pitch"]
#
#robot.arm_bus.write("Goal_Position", arm_calibration["start_pos"])
#robot.hand_bus.set_calibration(hand_calibration)
#interp = 0.3
#robot.arm_bus.write("Goal_Position", [int((i*interp+j*(1-interp))) for i, j in zip(arm_calibration["start_pos"], arm_calibration["end_pos"])])
#exit()
# glove = HomonculusGlove()
# glove.run_calibration()
####GOOD FOR GRASPING
# start_pos = [
# 500,
# 900,
# 500,
# 1000,
# 100,
# 450,#250
# 950,#750
# 100,
# 300,#400
# 50,#150
# 100,
# 120,
# 980,
# 100,
# 950,
# 750,
# ]
# end_pos = [
# start_pos[0] - 400,
# start_pos[1] - 300,
# start_pos[2] + 500,
# start_pos[3] - 50,
# start_pos[4] + 900,
# start_pos[5] + 500,
# start_pos[6] - 500,
# start_pos[7] + 900,
# start_pos[8] + 700,
# start_pos[9] + 700,
# start_pos[10] + 900,
# start_pos[11] + 700,
# start_pos[12] - 700,
# start_pos[13] + 900,
# start_pos[14] - 700,
# start_pos[15] - 700,
# ]
SCS_SERIES_CONTROL_TABLE = {
# "Max_Torque_Limit": (16, 2),
# "Phase": (18, 1),
# "Unloading_Condition": (19, 1),
"Protective_Torque": (37, 1),
"Protection_Time": (38, 1),
#Baud_Rate": (48, 1),
}
def read_and_print_scs_values(robot):
for param_name in SCS_SERIES_CONTROL_TABLE:
value = robot.hand_bus.read(param_name)
print(f"{param_name}: {value}")
motor_1_values = {
"Lock" : 255,
#"Protection_Time": 20#so if you write to these they turn to 0 for some fucking reason. protection time was 100, procetive to
}
# motor_1_values = {
# "Lock": 1,
# "Protection_Time": 100,
# "Protective_Torque": 20,
# "Phase": 1,#thisu is bullshit
# "Unloading_Condition": 32,
# }
# Bug: writing to specific registers of the scs0009 is unreliable.
# Write values to motor 2; overload torque protection is active there.
# Writing does work, but the jittering is caused by the overload torque protection still being triggered.
# TODO: write a properly functioning version for the scs0009.

View File

@@ -0,0 +1,18 @@
# Color gradient function (0-2024 scaled to 0-10)
def value_to_color(value):
# Clamp the value between 0 and 2024
value = max(0, min(2024, value))
# Scale from [0..2024] to [0..10]
scaled_value = (value / 2024) * 10
# Green to Yellow (scaled_value 0..5), then Yellow to Red (scaled_value 5..10)
if scaled_value <= 5:
r = int(255 * (scaled_value / 5))
g = 255
else:
r = 255
g = int(255 * (1 - (scaled_value - 5) / 5))
b = 0
return (r, g, b)
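# Example usage (illustrative, not part of the original file): print a few sample mappings.
# Readings near 0 come out green, mid-range readings yellow, and readings near 2024 red.
if __name__ == "__main__":
    for load in (0, 506, 1012, 1518, 2024):
        print(load, value_to_color(load))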

681
examples/test.py Normal file
View File

@@ -0,0 +1,681 @@
import threading
import time
from typing import Callable
import cv2
import numpy as np
# from qai_hub_models.models.mediapipe_hand.app import MediaPipeHandApp
# from qai_hub_models.models.mediapipe_hand.model import (
# MediaPipeHand,
# )
# from qai_hub_models.utils.image_processing import (
# app_to_net_image_inputs,
# )
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
LOWER_BOUND_LINEAR = -100
UPPER_BOUND_LINEAR = 200
import serial
class HomonculusGlove:
def __init__(self):
self.serial_port = "/dev/tty.usbmodem21401"
self.baud_rate = 115200
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
self.thread = threading.Thread(target=self.async_read)
self.thread.start()
self.last_d = {
"thumb_0": 100,
"thumb_1": 100,
"thumb_2": 100,
"thumb_3": 100,
"index_0": 100,
"index_1": 100,
"index_2": 100,
"middle_0": 100,
"middle_1": 100,
"middle_2": 100,
"ring_0": 100,
"ring_1": 100,
"ring_2": 100,
"pinky_0": 100,
"pinky_1": 100,
"pinky_2": 100,
"battery_voltage": 100,
}
self.calibration = None
@property
def joint_names(self):
return list(self.last_d.keys())
def read(self, motor_names: list[str] | None = None):
if motor_names is None:
motor_names = self.joint_names
values = np.array([self.last_d[k] for k in motor_names])
print(motor_names)
print(values)
if self.calibration is not None:
values = self.apply_calibration(values, motor_names)
print(values)
return values
def async_read(self):
while True:
if self.serial.in_waiting > 0:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
if len(vals) != 17:
continue
vals = [int(val) for val in vals]
d = {
"thumb_0": vals[0],
"thumb_1": vals[1],
"thumb_2": vals[2],
"thumb_3": vals[3],
"index_0": vals[4],
"index_1": vals[5],
"index_2": vals[6],
"middle_0": vals[7],
"middle_1": vals[8],
"middle_2": vals[9],
"ring_0": vals[10],
"ring_1": vals[11],
"ring_2": vals[12],
"pinky_0": vals[13],
"pinky_1": vals[14],
"pinky_2": vals[15],
"battery_voltage": vals[16],
}
self.last_d = d
# print(d.values())
def run_calibration(self):
print("\nMove arm to open position")
input("Press Enter to continue...")
open_pos_list = []
for _ in range(300):
open_pos = self.read()
open_pos_list.append(open_pos)
time.sleep(0.01)
open_pos = np.array(open_pos_list)
max_open_pos = open_pos.max(axis=0)
min_open_pos = open_pos.min(axis=0)
print(f"{max_open_pos=}")
print(f"{min_open_pos=}")
print("\nMove arm to closed position")
input("Press Enter to continue...")
closed_pos_list = []
for _ in range(300):
closed_pos = self.read()
closed_pos_list.append(closed_pos)
time.sleep(0.01)
closed_pos = np.array(closed_pos_list)
max_closed_pos = closed_pos.max(axis=0)
closed_pos[closed_pos < 1000] = 60000
min_closed_pos = closed_pos.min(axis=0)
print(f"{max_closed_pos=}")
print(f"{min_closed_pos=}")
open_pos = np.array([max_open_pos, max_closed_pos]).max(axis=0)
closed_pos = np.array([min_open_pos, min_closed_pos]).min(axis=0)
# INVERSION: these joints report raw values that run in the opposite direction,
# so swap their open/closed positions to keep the calibration consistent.
for i, jname in enumerate(self.joint_names):
if jname in ["thumb_0", "thumb_3", "index_2", "middle_2", "ring_2", "pinky_0", "pinky_2"]:
tmp_pos = open_pos[i]
open_pos[i] = closed_pos[i]
closed_pos[i] = tmp_pos
print()
print(f"{open_pos=}")
print(f"{closed_pos=}")
homing_offset = [0] * len(self.joint_names)
drive_mode = [0] * len(self.joint_names)
calib_modes = [CalibrationMode.LINEAR.name] * len(self.joint_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": open_pos,
"end_pos": closed_pos,
"calib_mode": calib_modes,
"motor_names": self.joint_names,
}
# return calib_dict
self.set_calibration(calib_dict)
def set_calibration(self, calibration: dict[str, list]):
self.calibration = calibration
def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
"""Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
a "zero position" at 0 degree.
Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees, if the motor
rotate more than a half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
Joints values are original in [0, 2**32[ (unsigned int32). Each motor are expected to complete a full rotation
when given a goal position that is + or - their resolution. For instance, feetech xl330-m077 have a resolution of 4096, and
at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
in the centered nominal degree range ]-180, 180[.
"""
if motor_names is None:
motor_names = self.motor_names
# Convert from unsigned int32 original range [0, 2**32] to signed float32 range
values = values.astype(np.float32)
for i, name in enumerate(motor_names):
calib_idx = self.calibration["motor_names"].index(name)
calib_mode = self.calibration["calib_mode"][calib_idx]
if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
start_pos = self.calibration["start_pos"][calib_idx]
end_pos = self.calibration["end_pos"][calib_idx]
# Rescale the present position to a nominal range [0, 100] %,
# useful for joints with linear motions like Aloha gripper
values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
if name == "pinky_1" and (values[i] < LOWER_BOUND_LINEAR):
values[i] = end_pos
else:
msg = (
f"Wrong motor position range detected for {name}. "
f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
f"but present value is {values[i]} %. "
"This might be due to a cable connection issue creating an artificial jump in motor values. "
"You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
)
print(msg)
# raise JointOutOfRangeError(msg)
return values
# def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
# """Inverse of `apply_calibration`."""
# if motor_names is None:
# motor_names = self.motor_names
# for i, name in enumerate(motor_names):
# calib_idx = self.calibration["motor_names"].index(name)
# calib_mode = self.calibration["calib_mode"][calib_idx]
# if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
# start_pos = self.calibration["start_pos"][calib_idx]
# end_pos = self.calibration["end_pos"][calib_idx]
# # Convert from nominal lnear range of [0, 100] % to
# # actual motor range of values which can be arbitrary.
# values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
# values = np.round(values).astype(np.int32)
# return values
class HopeJuniorRobot:
def __init__(self):
self.arm_bus = FeetechMotorsBus(
port="/dev/tty.usbmodem58760429571",
motors={
# "motor1": (2, "sts3250"),
# "motor2": (1, "scs0009"),
"shoulder_pitch": [1, "sts3250"],
"shoulder_yaw": [2, "sts3215"], # TODO: sts3250
"shoulder_roll": [3, "sts3215"], # TODO: sts3250
"elbow_flex": [4, "sts3250"],
"wrist_roll": [5, "sts3215"],
"wrist_yaw": [6, "sts3215"],
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
self.hand_bus = FeetechMotorsBus(
port="/dev/tty.usbmodem585A0077581",
motors={
"thumb_basel_rotation": [30, "scs0009"],
"thumb_flexor": [27, "scs0009"],
"thumb_pinky_side": [26, "scs0009"],
"thumb_thumb_side": [28, "scs0009"],
"index_flexor": [25, "scs0009"],
"index_pinky_side": [31, "scs0009"],
"index_thumb_side": [32, "scs0009"],
"middle_flexor": [24, "scs0009"],
"middle_pinky_side": [33, "scs0009"],
"middle_thumb_side": [34, "scs0009"],
"ring_flexor": [21, "scs0009"],
"ring_pinky_side": [36, "scs0009"],
"ring_thumb_side": [35, "scs0009"],
"pinky_flexor": [23, "scs0009"],
"pinky_pinky_side": [38, "scs0009"],
"pinky_thumb_side": [37, "scs0009"],
},
protocol_version=1,
group_sync_read=False,
)
def get_hand_calibration(self):
homing_offset = [0] * len(self.hand_bus.motor_names)
drive_mode = [0] * len(self.hand_bus.motor_names)
start_pos = [
500,
900,
1000,
0,
100,
250,
750,
100,
400,
150,
100,
120,
980,
100,
950,
750,
]
end_pos = [
500 - 250,
900 - 300,
1000 - 550,
0 + 550,
1000,
250 + 700,
750 - 700,
1000,
400 + 700,
150 + 700,
1000,
120 + 700,
980 - 700,
1000,
950 - 700,
750 - 700,
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.hand_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.hand_bus.motor_names,
}
return calib_dict
def connect(self):
self.arm_bus.connect()
self.hand_bus.connect()
ESCAPE_KEY_ID = 27
def capture_and_display_processed_frames(
frame_processor: Callable[[np.ndarray], np.ndarray],
window_display_name: str,
cap_device: int = 0,
) -> None:
"""
Capture frames from the given input camera device, run them through
the frame processor, and display the outputs in a window with the given name.
User should press Esc to exit.
Inputs:
frame_processor: Callable[[np.ndarray], np.ndarray]
Processes frames.
Input and output are numpy arrays of shape (H W C) with BGR channel layout and dtype uint8 / byte.
window_display_name: str
Name of the window used to display frames.
cap_device: int
Identifier for the camera to use to capture frames.
"""
cv2.namedWindow(window_display_name)
capture = cv2.VideoCapture(cap_device)
if not capture.isOpened():
raise ValueError("Unable to open video capture.")
frame_count = 0
has_frame, frame = capture.read()
while has_frame:
assert isinstance(frame, np.ndarray)
frame_count = frame_count + 1
# mirror frame
frame = np.ascontiguousarray(frame[:, ::-1, ::-1])
# process & show frame
processed_frame = frame_processor(frame)
cv2.imshow(window_display_name, processed_frame[:, :, ::-1])
has_frame, frame = capture.read()
key = cv2.waitKey(1)
if key == ESCAPE_KEY_ID:
break
capture.release()
def main():
robot = HopeJuniorRobot()
robot.connect()
# robot.hand_bus.calibration = None
# breakpoint()
# print(robot.arm_bus.read("Present_Position"))
robot.arm_bus.write("Torque_Enable", 1)
robot.arm_bus.write("Acceleration", 20)
robot.arm_bus.read("Acceleration")
calibration = robot.get_hand_calibration()
robot.hand_bus.write("Goal_Position", calibration["start_pos"])
# robot.hand_bus.write("Goal_Position", calibration["end_pos"][:4], robot.hand_bus.motor_names[:4])
robot.hand_bus.set_calibration(calibration)
lol = 1
# # print(motors_bus.write("Goal_Position", 500))
# print(robot.hand_bus.read("Present_Position"))
# # pos = hand_bus.read("Present_Position")
# # hand_bus.write("Goal_Position", pos[0]+20, hand_bus.motor_names[0])
# # hand_bus.write("Goal_Position", pos[i]+delta, hand_bus.motor_names[i])
# robot.hand_bus.read("Acceleration")
# robot.hand_bus.write("Acceleration", 10)
# sleep = 1
# # robot.hand_bus.write(
# # "Goal_Position", [glove.last_d['index_2']-1500,300,300], ["index_pinky_side", "index_flexor", "index_thumb_side"]
# # )
# #time.sleep(sleep)
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 0, 0], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 100, 600], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
# )
# time.sleep(sleep)
# breakpoint()
glove = HomonculusGlove()
glove.run_calibration()
# while True:
# joint_names = ["index_1", "index_2"]
# joint_values = glove.read(joint_names)
# print(joint_values)
input()
while True:
joint_names = []
joint_names += ["thumb_0", "thumb_2", "thumb_3"]
joint_names += ["index_1", "index_2"]
joint_names += ["middle_1", "middle_2"]
joint_names += ["ring_1", "ring_2"]
joint_names += ["pinky_1", "pinky_2"]
joint_values = glove.read(joint_names)
joint_values = joint_values.round().astype(int)
joint_dict = {k: v for k, v in zip(joint_names, joint_values, strict=False)}
motor_values = []
motor_names = []
motor_names += ["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"]
motor_values += [
joint_dict["thumb_3"],
joint_dict["thumb_0"],
joint_dict["thumb_2"],
joint_dict["thumb_2"],
]
motor_names += ["index_flexor", "index_pinky_side", "index_thumb_side"]
motor_values += [joint_dict["index_2"], joint_dict["index_1"], joint_dict["index_1"]]
motor_names += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
motor_values += [joint_dict["middle_2"], joint_dict["middle_1"], joint_dict["middle_1"]]
motor_names += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
motor_values += [joint_dict["ring_2"], joint_dict["ring_1"], joint_dict["ring_1"]]
motor_names += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
motor_values += [joint_dict["pinky_2"], joint_dict["pinky_1"], joint_dict["pinky_1"]]
motor_values = np.array(motor_values)
motor_values = np.clip(motor_values, 0, 100)
robot.hand_bus.write("Goal_Position", motor_values, motor_names)
time.sleep(0.02)
while True:
# print(glove.read()['index_2']-1500)
glove_index_flexor = glove.read()["index_2"] - 1500
glove_index_subflexor = glove.read()["index_1"] - 1500
glove_index_side = glove.read()["index_0"] - 2100
vals = [glove_index_flexor, 1000 - (glove_index_subflexor), glove_index_subflexor]
keys = ["index_flexor", "index_pinky_side", "index_thumb_side"]
glove_middle_flexor = glove.read()["middle_2"] - 1500
glove_middle_subflexor = 1000 - (glove.read()["middle_1"] - 1700)
vals += [glove_middle_flexor, glove_middle_subflexor, glove_middle_subflexor - 200]
keys += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
glove_ring_flexor = glove.read()["ring_2"] - 1300
print(glove_ring_flexor)
glove_ring_subflexor = glove.read()["ring_1"] - 1100
vals += [glove_ring_flexor, 1000 - glove_ring_subflexor, glove_ring_subflexor]
keys += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
glove_pinky_flexor = glove.read()["pinky_2"] - 1500
glove_pinky_subflexor = glove.read()["pinky_1"] - 1300
vals += [300 + glove_pinky_flexor, max(1000 - glove_pinky_subflexor - 100, 0), glove_pinky_subflexor]
keys += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
robot.hand_bus.write("Goal_Position", vals, keys)
time.sleep(0.1)
time.sleep(3)
def move_arm(loop=10):
sleep = 1
for i in range(loop):
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 2195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1457, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 2357, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 974, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 2674, 1957, 1695])
time.sleep(sleep + 2)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 1369, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 1330, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [2381, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1681, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
def move_hand(loop=10):
sleep = 0.5
for i in range(loop):
robot.hand_bus.write(
"Goal_Position",
[500, 1000, 0, 1000],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 1000, 150], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 100, 700], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[500, 1000 - 250, 0 + 300, 1000 - 200],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 450, 100 + 400, 100 + 400],
["index_flexor", "index_pinky_side", "index_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 350, 1000 - 450, 150 + 450],
["middle_flexor", "middle_pinky_side", "middle_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 650, 200 + 350, 0 + 350],
["ring_flexor", "ring_pinky_side", "ring_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 450, 100 + 400, 700 - 400],
["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"],
)
time.sleep(sleep)
move_hand(3)
move_arm(1)
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor() as executor:
executor.submit(move_arm)
executor.submit(move_hand)
# initial position
for i in range(3):
robot.hand_bus.write(
"Goal_Position", [500, 1000, 0, 1000, 100, 950, 100, 100, 1000, 150, 200, 200, 0, 200, 100, 700]
)
time.sleep(1)
# for i in range(3):
# robot.hand_bus.write("Goal_Position", [500, 1000-150, 0+250, 1000-150,
# 100+300, 950-250, 100+250,
# 100+200, 1000-300, 150+300,
# 200+500, 200+200, 0+200,
# 200+300, 100+200, 700-200])
# time.sleep(1)
# camera = 0
# score_threshold = 0.95
# iou_threshold = 0.3
# app = MediaPipeHandApp(MediaPipeHand.from_pretrained(), score_threshold, iou_threshold)
# def frame_processor(frame: np.ndarray) -> np.ndarray:
# # Input Prep
# NHWC_int_numpy_frames, NCHW_fp32_torch_frames = app_to_net_image_inputs(frame)
# # Run Bounding Box & Keypoint Detector
# batched_selected_boxes, batched_selected_keypoints = app._run_box_detector(NCHW_fp32_torch_frames)
# # The region of interest ( bounding box of 4 (x, y) corners).
# # list[torch.Tensor(shape=[Num Boxes, 4, 2])],
# # where 2 == (x, y)
# #
# # A list element will be None if there is no selected ROI.
# batched_roi_4corners = app._compute_object_roi(batched_selected_boxes, batched_selected_keypoints)
# # selected landmarks for the ROI (if any)
# # list[torch.Tensor(shape=[Num Selected Landmarks, K, 3])],
# # where K == number of landmark keypoints, 3 == (x, y, confidence)
# #
# # A list element will be None if there is no ROI.
# landmarks_out = app._run_landmark_detector(NHWC_int_numpy_frames, batched_roi_4corners)
# app._draw_predictions(
# NHWC_int_numpy_frames,
# batched_selected_boxes,
# batched_selected_keypoints,
# batched_roi_4corners,
# *landmarks_out,
# )
# return NHWC_int_numpy_frames[0]
# capture_and_display_processed_frames(frame_processor, "QAIHM Mediapipe Hand Demo", camera)
if __name__ == "__main__":
main()

examples/test2.py Normal file

@@ -0,0 +1,133 @@
#!/usr/bin/env python
#
# ********* Ping Example *********
#
#
# Available SCServo model on this example : All models using Protocol SCS
# This example is tested with a SCServo(STS/SMS/SCS), and an URT
# Be sure that SCServo(STS/SMS/SCS) properties are already set as %% ID : 1 / Baudnum : 6 (Baudrate : 1000000)
#
import os
if os.name == "nt":
import msvcrt
def getch():
return msvcrt.getch().decode()
else:
import sys
import termios
import tty
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
def getch():
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
from scservo_sdk import * # Uses SCServo SDK library
# Default setting
SCS_ID = 1 # SCServo ID : 1
BAUDRATE = 1000000 # SCServo default baudrate : 1000000
DEVICENAME = "/dev/tty.usbserial-2130" # Check which port is being used on your controller
# ex) Windows: "COM1" Linux: "/dev/ttyUSB0" Mac: "/dev/tty.usbserial-*"
protocol_end = 1 # SCServo bit end(STS/SMS=0, SCS=1)
# Initialize PortHandler instance
# Set the port path
# Get methods and members of PortHandlerLinux or PortHandlerWindows
portHandler = PortHandler(DEVICENAME)
# Initialize PacketHandler instance
# Get methods and members of Protocol
packetHandler = PacketHandler(protocol_end)
# Open port
if portHandler.openPort():
print("Succeeded to open the port")
else:
print("Failed to open the port")
print("Press any key to terminate...")
getch()
quit()
# Set port baudrate
if portHandler.setBaudRate(BAUDRATE):
print("Succeeded to change the baudrate")
else:
print("Failed to change the baudrate")
print("Press any key to terminate...")
getch()
quit()
# Try to ping the SCServo
# Get SCServo model number
scs_model_number, scs_comm_result, scs_error = packetHandler.ping(portHandler, SCS_ID)
if scs_comm_result != COMM_SUCCESS:
print("%s" % packetHandler.getTxRxResult(scs_comm_result))
elif scs_error != 0:
print("%s" % packetHandler.getRxPacketError(scs_error))
else:
print("[ID:%03d] ping Succeeded. SCServo model number : %d" % (SCS_ID, scs_model_number))
ADDR_SCS_PRESENT_POSITION = 56
scs_present_position, scs_comm_result, scs_error = packetHandler.read2ByteTxRx(
portHandler, SCS_ID, ADDR_SCS_PRESENT_POSITION
)
if scs_comm_result != COMM_SUCCESS:
print(packetHandler.getTxRxResult(scs_comm_result))
elif scs_error != 0:
print(packetHandler.getRxPacketError(scs_error))
breakpoint()
scs_present_position = SCS_LOWORD(scs_present_position)
# scs_present_speed = SCS_HIWORD(scs_present_position_speed)
# print("[ID:%03d] PresPos:%03d PresSpd:%03d" % (SCS_ID, scs_present_position, SCS_TOHOST(scs_present_speed, 15)))
print("[ID:%03d] PresPos:%03d" % (SCS_ID, scs_present_position))
groupSyncRead = GroupSyncRead(portHandler, packetHandler, ADDR_SCS_PRESENT_POSITION, 2)
scs_addparam_result = groupSyncRead.addParam(SCS_ID)
if scs_addparam_result != True:
print("[ID:%03d] groupSyncRead addparam failed" % SCS_ID)
quit()
# Syncread present position
scs_comm_result = groupSyncRead.txRxPacket()
if scs_comm_result != COMM_SUCCESS:
print("%s" % packetHandler.getTxRxResult(scs_comm_result))
# Check if groupsyncread data of SCServo#1 is available
scs_getdata_result = groupSyncRead.isAvailable(SCS_ID, ADDR_SCS_PRESENT_POSITION, 2)
if scs_getdata_result == True:
# Get SCServo#1 present position value
scs_present_position = groupSyncRead.getData(SCS_ID, ADDR_SCS_PRESENT_POSITION, 2)
else:
scs_present_position = 0
print("[ID:%03d] groupSyncRead getdata failed" % SCS_ID)
# # Check if groupsyncread data of SCServo#2 is available
# scs_getdata_result = groupSyncRead.isAvailable(SCS2_ID, ADDR_SCS_PRESENT_POSITION, 2)
# if scs_getdata_result == True:
# # Get SCServo#2 present position value
# scs2_present_position_speed = groupSyncRead.getData(SCS2_ID, ADDR_SCS_PRESENT_POSITION, 2)
# else:
# print("[ID:%03d] groupSyncRead getdata failed" % SCS2_ID)
scs_present_position = SCS_LOWORD(scs_present_position)
print("[ID:%03d] PresPos:%03d" % (SCS_ID, scs_present_position))
# Close port
portHandler.closePort()

examples/test3.py Normal file

@@ -0,0 +1,45 @@
import serial
class HomonculusGlove:
def __init__(self):
self.serial_port = "/dev/tty.usbmodem1101"
self.baud_rate = 115200
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
def read(self):
while True:
if self.serial.in_waiting > 0:
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
vals = [int(val) for val in vals]
d = {
"thumb_0": vals[0],
"thumb_1": vals[1],
"thumb_2": vals[2],
"thumb_3": vals[3],
"index_0": vals[4],
"index_1": vals[5],
"index_2": vals[6],
"middle_0": vals[7],
"middle_1": vals[8],
"middle_2": vals[9],
"ring_0": vals[10],
"ring_1": vals[11],
"ring_2": vals[12],
"pinky_0": vals[13],
"pinky_1": vals[14],
"pinky_2": vals[15],
}
return d
# if ser.in_waiting > 0:
# line = ser.readline().decode('utf-8').strip()
# print(line)
if __name__ == "__main__":
glove = HomonculusGlove()
d = glove.read()
lol = 1

examples/test4.py Normal file

@@ -0,0 +1,693 @@
import threading
import time
from typing import Callable
import cv2
import numpy as np
# from qai_hub_models.models.mediapipe_hand.app import MediaPipeHandApp
# from qai_hub_models.models.mediapipe_hand.model import (
# MediaPipeHand,
# )
# from qai_hub_models.utils.image_processing import (
# app_to_net_image_inputs,
# )
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
LOWER_BOUND_LINEAR = -100
UPPER_BOUND_LINEAR = 200
import serial
class HomonculusGlove:
def __init__(self):
self.serial_port = "/dev/tty.usbmodem1401"
self.baud_rate = 115200
self.serial = serial.Serial(self.serial_port, self.baud_rate, timeout=1)
self.thread = threading.Thread(target=self.async_read)
self.thread.start()
self.last_d = {
"thumb_0": 100,
"thumb_1": 100,
"thumb_2": 100,
"thumb_3": 100,
"index_0": 100,
"index_1": 100,
"index_2": 100,
"middle_0": 100,
"middle_1": 100,
"middle_2": 100,
"ring_0": 100,
"ring_1": 100,
"ring_2": 100,
"pinky_0": 100,
"pinky_1": 100,
"pinky_2": 100,
"battery_voltage": 100,
}
self.calibration = None
@property
def joint_names(self):
return list(self.last_d.keys())
def read(self, motor_names: list[str] | None = None):
if motor_names is None:
motor_names = self.joint_names
values = np.array([self.last_d[k] for k in motor_names])
print(motor_names)
print(values)
if self.calibration is not None:
values = self.apply_calibration(values, motor_names)
print(values)
return values
def async_read(self):
while True:
if self.serial.in_waiting > 0:
self.serial.flush()
vals = self.serial.readline().decode("utf-8").strip()
vals = vals.split(" ")
if len(vals) != 17:
continue
vals = [int(val) for val in vals]
d = {
"thumb_0": vals[0],
"thumb_1": vals[1],
"thumb_2": vals[2],
"thumb_3": vals[3],
"index_0": vals[4],
"index_1": vals[5],
"index_2": vals[6],
"middle_0": vals[7],
"middle_1": vals[8],
"middle_2": vals[9],
"ring_0": vals[10],
"ring_1": vals[11],
"ring_2": vals[12],
"pinky_0": vals[13],
"pinky_1": vals[14],
"pinky_2": vals[15],
"battery_voltage": vals[16],
}
self.last_d = d
# print(d.values())
def run_calibration(self):
print("\nMove arm to open position")
input("Press Enter to continue...")
open_pos_list = []
for _ in range(300):
open_pos = self.read()
open_pos_list.append(open_pos)
time.sleep(0.01)
open_pos = np.array(open_pos_list)
max_open_pos = open_pos.max(axis=0)
min_open_pos = open_pos.min(axis=0)
print(f"{max_open_pos=}")
print(f"{min_open_pos=}")
print("\nMove arm to closed position")
input("Press Enter to continue...")
closed_pos_list = []
for _ in range(300):
closed_pos = self.read()
closed_pos_list.append(closed_pos)
time.sleep(0.01)
closed_pos = np.array(closed_pos_list)
max_closed_pos = closed_pos.max(axis=0)
closed_pos[closed_pos < 1000] = 60000
min_closed_pos = closed_pos.min(axis=0)
print(f"{max_closed_pos=}")
print(f"{min_closed_pos=}")
open_pos = np.array([max_open_pos, max_closed_pos]).max(axis=0)
closed_pos = np.array([min_open_pos, min_closed_pos]).min(axis=0)
# Inversion: these joints report values that run in the opposite direction,
# so swap their open/closed positions before building the calibration.
for i, jname in enumerate(self.joint_names):
if jname in [
"thumb_0",
"thumb_3",
"index_2",
"middle_2",
"ring_2",
"pinky_0",
"pinky_2",
"index_0",
]:
tmp_pos = open_pos[i]
open_pos[i] = closed_pos[i]
closed_pos[i] = tmp_pos
print()
print(f"{open_pos=}")
print(f"{closed_pos=}")
homing_offset = [0] * len(self.joint_names)
drive_mode = [0] * len(self.joint_names)
calib_modes = [CalibrationMode.LINEAR.name] * len(self.joint_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": open_pos,
"end_pos": closed_pos,
"calib_mode": calib_modes,
"motor_names": self.joint_names,
}
# return calib_dict
self.set_calibration(calib_dict)
def set_calibration(self, calibration: dict[str, list]):
self.calibration = calibration
def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
"""Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
a "zero position" at 0 degree.
Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees, if the motor
rotate more than a half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
Joints values are original in [0, 2**32[ (unsigned int32). Each motor are expected to complete a full rotation
when given a goal position that is + or - their resolution. For instance, feetech xl330-m077 have a resolution of 4096, and
at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
in the centered nominal degree range ]-180, 180[.
"""
if motor_names is None:
motor_names = self.joint_names  # this class exposes joint_names rather than motor_names
# Convert from unsigned int32 original range [0, 2**32] to signed float32 range
values = values.astype(np.float32)
for i, name in enumerate(motor_names):
calib_idx = self.calibration["motor_names"].index(name)
calib_mode = self.calibration["calib_mode"][calib_idx]
if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
start_pos = self.calibration["start_pos"][calib_idx]
end_pos = self.calibration["end_pos"][calib_idx]
# Rescale the present position to a nominal range [0, 100] %,
# useful for joints with linear motions like Aloha gripper
values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
if name == "pinky_1" and (values[i] < LOWER_BOUND_LINEAR):
values[i] = end_pos
else:
msg = (
f"Wrong motor position range detected for {name}. "
f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
f"but present value is {values[i]} %. "
"This might be due to a cable connection issue creating an artificial jump in motor values. "
"You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
)
print(msg)
# raise JointOutOfRangeError(msg)
return values
# def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
# """Inverse of `apply_calibration`."""
# if motor_names is None:
# motor_names = self.motor_names
# for i, name in enumerate(motor_names):
# calib_idx = self.calibration["motor_names"].index(name)
# calib_mode = self.calibration["calib_mode"][calib_idx]
# if CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
# start_pos = self.calibration["start_pos"][calib_idx]
# end_pos = self.calibration["end_pos"][calib_idx]
# # Convert from nominal lnear range of [0, 100] % to
# # actual motor range of values which can be arbitrary.
# values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
# values = np.round(values).astype(np.int32)
# return values
class HopeJuniorRobot:
def __init__(self):
self.arm_bus = FeetechMotorsBus(
port="/dev/tty.usbmodem58760429571",
motors={
# "motor1": (2, "sts3250"),
# "motor2": (1, "scs0009"),
"shoulder_pitch": [1, "sts3250"],
"shoulder_yaw": [2, "sts3215"], # TODO: sts3250
"shoulder_roll": [3, "sts3215"], # TODO: sts3250
"elbow_flex": [4, "sts3250"],
"wrist_roll": [5, "sts3215"],
"wrist_yaw": [6, "sts3215"],
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
self.hand_bus = FeetechMotorsBus(
port="/dev/tty.usbmodem585A0077581",
motors={
"thumb_basel_rotation": [30, "scs0009"],
"thumb_flexor": [27, "scs0009"],
"thumb_pinky_side": [26, "scs0009"],
"thumb_thumb_side": [28, "scs0009"],
"index_flexor": [25, "scs0009"],
"index_pinky_side": [31, "scs0009"],
"index_thumb_side": [32, "scs0009"],
"middle_flexor": [24, "scs0009"],
"middle_pinky_side": [33, "scs0009"],
"middle_thumb_side": [34, "scs0009"],
"ring_flexor": [21, "scs0009"],
"ring_pinky_side": [36, "scs0009"],
"ring_thumb_side": [35, "scs0009"],
"pinky_flexor": [23, "scs0009"],
"pinky_pinky_side": [38, "scs0009"],
"pinky_thumb_side": [37, "scs0009"],
},
protocol_version=1,
group_sync_read=False,
)
def get_hand_calibration(self):
homing_offset = [0] * len(self.hand_bus.motor_names)
drive_mode = [0] * len(self.hand_bus.motor_names)
start_pos = [
500,
900,
1000,
0,
100,
250,
750,
100,
400,
150,
100,
120,
980,
100,
950,
750,
]
end_pos = [
500 - 250,
900 - 300,
1000 - 550,
0 + 550,
1000,
start_pos[5] + 500,
start_pos[6] - 500,
1000,
400 + 700,
150 + 700,
1000,
120 + 700,
980 - 700,
1000,
950 - 700,
750 - 700,
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.hand_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.hand_bus.motor_names,
}
return calib_dict
def connect(self):
self.arm_bus.connect()
self.hand_bus.connect()
ESCAPE_KEY_ID = 27
def capture_and_display_processed_frames(
frame_processor: Callable[[np.ndarray], np.ndarray],
window_display_name: str,
cap_device: int = 0,
) -> None:
"""
Capture frames from the given input camera device, run them through
the frame processor, and display the outputs in a window with the given name.
User should press Esc to exit.
Inputs:
frame_processor: Callable[[np.ndarray], np.ndarray]
Processes frames.
Input and output are numpy arrays of shape (H W C) with BGR channel layout and dtype uint8 / byte.
window_display_name: str
Name of the window used to display frames.
cap_device: int
Identifier for the camera to use to capture frames.
"""
cv2.namedWindow(window_display_name)
capture = cv2.VideoCapture(cap_device)
if not capture.isOpened():
raise ValueError("Unable to open video capture.")
frame_count = 0
has_frame, frame = capture.read()
while has_frame:
assert isinstance(frame, np.ndarray)
frame_count = frame_count + 1
# mirror frame
frame = np.ascontiguousarray(frame[:, ::-1, ::-1])
# process & show frame
processed_frame = frame_processor(frame)
cv2.imshow(window_display_name, processed_frame[:, :, ::-1])
has_frame, frame = capture.read()
key = cv2.waitKey(1)
if key == ESCAPE_KEY_ID:
break
capture.release()
def main():
robot = HopeJuniorRobot()
robot.connect()
# robot.hand_bus.calibration = None
# breakpoint()
# print(robot.arm_bus.read("Present_Position"))
robot.arm_bus.write("Torque_Enable", 1)
robot.arm_bus.write("Acceleration", 20)
robot.arm_bus.read("Acceleration")
calibration = robot.get_hand_calibration()
robot.hand_bus.write("Goal_Position", calibration["start_pos"])
# robot.hand_bus.write("Goal_Position", calibration["end_pos"][:4], robot.hand_bus.motor_names[:4])
robot.hand_bus.set_calibration(calibration)
lol = 1
# # print(motors_bus.write("Goal_Position", 500))
# print(robot.hand_bus.read("Present_Position"))
# # pos = hand_bus.read("Present_Position")
# # hand_bus.write("Goal_Position", pos[0]+20, hand_bus.motor_names[0])
# # hand_bus.write("Goal_Position", pos[i]+delta, hand_bus.motor_names[i])
# robot.hand_bus.read("Acceleration")
# robot.hand_bus.write("Acceleration", 10)
# sleep = 1
# # robot.hand_bus.write(
# # "Goal_Position", [glove.last_d['index_2']-1500,300,300], ["index_pinky_side", "index_flexor", "index_thumb_side"]
# # )
# #time.sleep(sleep)
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [100, 0, 0], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
# )
# time.sleep(sleep)
# robot.hand_bus.write(
# "Goal_Position", [200, 100, 600], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
# )
# time.sleep(sleep)
# breakpoint()
glove = HomonculusGlove()
glove.run_calibration()
# while True:
# joint_names = ["index_1", "index_2"]
# joint_values = glove.read(joint_names)
# print(joint_values)
input()
while True:
joint_names = []
# joint_names += ["thumb_0", "thumb_2", "thumb_3"]
joint_names += ["index_0", "index_1"]
# joint_names += ["middle_1", "middle_2"]
# joint_names += ["ring_1", "ring_2"]
# joint_names += ["pinky_0", "pinky_2"]
joint_values = glove.read(joint_names)
joint_values = joint_values.round().astype(int)
joint_dict = {k: v for k, v in zip(joint_names, joint_values, strict=False)}
motor_values = []
motor_names = []
# motor_names += ["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"]
# motor_values += [joint_dict["thumb_3"], joint_dict["thumb_0"], joint_dict["thumb_2"], joint_dict["thumb_2"]]
motor_names += ["index_pinky_side", "index_thumb_side"]
# if joint_dict["index_0"] -2100 > 0:
splayamount = 0.5
motor_values += [
(100 - joint_dict["index_0"]) * splayamount + joint_dict["index_1"] * (1 - splayamount),
(joint_dict["index_0"]) * splayamount + joint_dict["index_1"] * (1 - splayamount),
]
# else:
# motor_values += [100-joint_dict["index_0"], joint_dict["index_0"]]
# motor_names += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
# motor_values += [joint_dict["middle_2"], joint_dict["middle_1"], joint_dict["middle_1"]]
# motor_names += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
# motor_values += [joint_dict["ring_2"], joint_dict["ring_1"], joint_dict["ring_1"]]
# motor_names += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
# motor_values += [joint_dict["pinky_2"], joint_dict["pinky_0"], joint_dict["pinky_0"]]
motor_values = np.array(motor_values)
motor_values = np.clip(motor_values, 0, 100)
robot.hand_bus.write("Goal_Position", motor_values, motor_names)
time.sleep(0.02)
while True:
# print(glove.read()['index_2']-1500)
glove_index_flexor = glove.read()["index_2"] - 1500
glove_index_subflexor = glove.read()["index_1"] - 1500
glove_index_side = glove.read()["index_0"] - 2100
vals = [glove_index_flexor, 1000 - (glove_index_subflexor), glove_index_subflexor]
keys = ["index_flexor", "index_pinky_side", "index_thumb_side"]
glove_middle_flexor = glove.read()["middle_2"] - 1500
glove_middle_subflexor = 1000 - (glove.read()["middle_1"] - 1700)
vals += [glove_middle_flexor, glove_middle_subflexor, glove_middle_subflexor - 200]
keys += ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
glove_ring_flexor = glove.read()["ring_2"] - 1300
print(glove_ring_flexor)
glove_ring_subflexor = glove.read()["ring_1"] - 1100
vals += [glove_ring_flexor, 1000 - glove_ring_subflexor, glove_ring_subflexor]
keys += ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
glove_pinky_flexor = glove.read()["pinky_2"] - 1500
glove_pinky_subflexor = glove.read()["pinky_1"] - 1300
vals += [300 + glove_pinky_flexor, max(1000 - glove_pinky_subflexor - 100, 0), glove_pinky_subflexor]
keys += ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
robot.hand_bus.write("Goal_Position", vals, keys)
time.sleep(0.1)
time.sleep(3)
def move_arm(loop=10):
sleep = 1
for i in range(loop):
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 2195])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1457, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 2357, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 974, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 2674, 1957, 1695])
time.sleep(sleep + 2)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 1369, 1632, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 1330, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [2381, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1681, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
robot.arm_bus.write("Goal_Position", [1981, 2030, 2069, 2032, 1874, 1957, 1695])
time.sleep(sleep)
def move_hand(loop=10):
sleep = 0.5
for i in range(loop):
robot.hand_bus.write(
"Goal_Position",
[500, 1000, 0, 1000],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 100, 100], ["index_flexor", "index_pinky_side", "index_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [100, 1000, 150], ["middle_flexor", "middle_pinky_side", "middle_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 200, 0], ["ring_flexor", "ring_pinky_side", "ring_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position", [200, 100, 700], ["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"]
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[500, 1000 - 250, 0 + 300, 1000 - 200],
["thumb_basel_rotation", "thumb_flexor", "thumb_pinky_side", "thumb_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 450, 100 + 400, 100 + 400],
["index_flexor", "index_pinky_side", "index_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[100 + 350, 1000 - 450, 150 + 450],
["middle_flexor", "middle_pinky_side", "middle_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 650, 200 + 350, 0 + 350],
["ring_flexor", "ring_pinky_side", "ring_thumb_side"],
)
time.sleep(sleep)
robot.hand_bus.write(
"Goal_Position",
[200 + 450, 100 + 400, 700 - 400],
["pinky_flexor", "pinky_pinky_side", "pinky_thumb_side"],
)
time.sleep(sleep)
move_hand(3)
move_arm(1)
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor() as executor:
executor.submit(move_arm)
executor.submit(move_hand)
# initial position
for i in range(3):
robot.hand_bus.write(
"Goal_Position", [500, 1000, 0, 1000, 100, 950, 100, 100, 1000, 150, 200, 200, 0, 200, 100, 700]
)
time.sleep(1)
# for i in range(3):
# robot.hand_bus.write("Goal_Position", [500, 1000-150, 0+250, 1000-150,
# 100+300, 950-250, 100+250,
# 100+200, 1000-300, 150+300,
# 200+500, 200+200, 0+200,
# 200+300, 100+200, 700-200])
# time.sleep(1)
# camera = 0
# score_threshold = 0.95
# iou_threshold = 0.3
# app = MediaPipeHandApp(MediaPipeHand.from_pretrained(), score_threshold, iou_threshold)
# def frame_processor(frame: np.ndarray) -> np.ndarray:
# # Input Prep
# NHWC_int_numpy_frames, NCHW_fp32_torch_frames = app_to_net_image_inputs(frame)
# # Run Bounding Box & Keypoint Detector
# batched_selected_boxes, batched_selected_keypoints = app._run_box_detector(NCHW_fp32_torch_frames)
# # The region of interest ( bounding box of 4 (x, y) corners).
# # list[torch.Tensor(shape=[Num Boxes, 4, 2])],
# # where 2 == (x, y)
# #
# # A list element will be None if there is no selected ROI.
# batched_roi_4corners = app._compute_object_roi(batched_selected_boxes, batched_selected_keypoints)
# # selected landmarks for the ROI (if any)
# # list[torch.Tensor(shape=[Num Selected Landmarks, K, 3])],
# # where K == number of landmark keypoints, 3 == (x, y, confidence)
# #
# # A list element will be None if there is no ROI.
# landmarks_out = app._run_landmark_detector(NHWC_int_numpy_frames, batched_roi_4corners)
# app._draw_predictions(
# NHWC_int_numpy_frames,
# batched_selected_boxes,
# batched_selected_keypoints,
# batched_roi_4corners,
# *landmarks_out,
# )
# return NHWC_int_numpy_frames[0]
# capture_and_display_processed_frames(frame_processor, "QAIHM Mediapipe Hand Demo", camera)
if __name__ == "__main__":
main()


@@ -0,0 +1,97 @@
# Faulty servo
Model = [777]
ID = [7]
Baud_Rate = [0]
Return_Delay = [0]
Response_Status_Level = [1]
Min_Angle_Limit = [1140]
Max_Angle_Limit = [2750]
Max_Temperature_Limit = [70]
Max_Voltage_Limit = [140]
Min_Voltage_Limit = [40]
Max_Torque_Limit = [1000]
Phase = [12]
Unloading_Condition = [44]
LED_Alarm_Condition = [47]
P_Coefficient = [32]
D_Coefficient = [32]
I_Coefficient = [0]
Minimum_Startup_Force = [16]
CW_Dead_Zone = [1]
CCW_Dead_Zone = [1]
Protection_Current = [310]
Angular_Resolution = [1]
Offset = [1047]
Mode = [0]
Protective_Torque = [20]
Protection_Time = [200]
Overload_Torque = [80]
Speed_closed_loop_P_proportional_coefficient = [10]
Over_Current_Protection_Time = [200]
Velocity_closed_loop_I_integral_coefficient = [200]
Torque_Enable = [1]
Acceleration = [20]
Goal_Position = [0]
Goal_Time = [0]
Goal_Speed = [0]
Torque_Limit = [1000]
Lock = [1]
Present_Position = [1494]
Present_Speed = [0]
Present_Load = [0]
Present_Voltage = [123]
Present_Temperature = [24]
Status = [0]
Moving = [0]
Present_Current = [0]
Maximum_Acceleration = [306]
# All servos of hopejr
Model = [2825 777 777 2825 777 777 777]
ID = [1 2 3 4 5 6 7]
Baud_Rate = [0 0 0 0 0 0 0]
Return_Delay = [0 0 0 0 0 0 0]
Response_Status_Level = [1 1 1 1 1 1 1]
Min_Angle_Limit = [ 650 1300 1300 1200 600 1725 0]
Max_Angle_Limit = [2600 2050 2800 2500 4096 2250 4095]
Max_Temperature_Limit = [80 70 70 80 70 70 70]
Max_Voltage_Limit = [160 140 140 160 140 140 80]
Min_Voltage_Limit = [60 40 40 60 40 40 40]
Max_Torque_Limit = [1000 1000 1000 1000 1000 1000 1000]
Phase = [12 12 12 12 12 12 12]
Unloading_Condition = [45 44 44 45 44 44 44]
LED_Alarm_Condition = [45 47 47 45 47 47 47]
P_Coefficient = [32 32 32 32 32 32 32]
D_Coefficient = [32 32 32 32 32 32 32]
I_Coefficient = [0 0 0 0 0 0 0]
Minimum_Startup_Force = [15 16 16 12 16 16 16]
CW_Dead_Zone = [0 1 1 0 1 1 1]
CCW_Dead_Zone = [0 1 1 0 1 1 1]
Protection_Current = [310 310 310 310 310 310 500]
Angular_Resolution = [1 1 1 1 1 1 1]
Offset = [ 0 1047 1024 1047 1024 1024 0]
Mode = [0 0 0 0 0 0 0]
Protective_Torque = [20 20 20 20 20 20 20]
Protection_Time = [200 200 200 200 200 200 200]
Overload_Torque = [80 80 80 80 80 80 80]
Speed_closed_loop_P_proportional_coefficient = [10 10 10 10 10 10 10]
Over_Current_Protection_Time = [250 200 200 250 200 200 200]
Velocity_closed_loop_I_integral_coefficient = [200 200 200 200 200 200 200]
Torque_Enable = [1 1 1 1 1 1 1]
Acceleration = [20 20 20 20 20 20 20]
Goal_Position = [1909 1977 1820 1000 707 1941 1036]
Goal_Time = [0 0 0 0 0 0 0]
Goal_Speed = [0 0 0 0 0 0 0]
Torque_Limit = [1000 1000 1000 200 1000 1000 1000]
Lock = [1 1 1 1 1 1 1]
Present_Position = [1909 1982 1821 1200 710 1941 1036]
Present_Speed = [0 0 0 0 0 0 0]
Present_Load = [ 0 48 0 0 32 0 0]
Present_Voltage = [122 123 122 123 122 122 122]
Present_Temperature = [23 28 28 26 29 28 28]
Status = [0 0 0 0 0 0 1]
Moving = [0 0 0 0 0 0 0]
Present_Current = [0 1 0 1 1 0 1]
Maximum_Acceleration = [1530 306 306 1530 306 306 254]


@@ -0,0 +1,192 @@
import threading
import time
from typing import Callable
import cv2
import numpy as np
import serial
from lerobot.common.robot_devices.motors.feetech import (
CalibrationMode,
FeetechMotorsBus,
)
LOWER_BOUND_LINEAR = -100
UPPER_BOUND_LINEAR = 200
ESCAPE_KEY_ID = 27
class HopeJuniorRobot:
def __init__(self):
self.arm_bus = FeetechMotorsBus(
port="/dev/ttyACM1",
motors={
# "motor1": (2, "sts3250"),
# "motor2": (1, "scs0009"),
#"shoulder_pitch": [1, "sts3250"],
#"shoulder_yaw": [2, "sts3215"], # TODO: sts3250
#"shoulder_roll": [3, "sts3215"], # TODO: sts3250
#"elbow_flex": [4, "sts3250"],
#"wrist_roll": [5, "sts3215"],
#"wrist_yaw": [6, "sts3215"],
"wrist_pitch": [7, "sts3215"],
},
protocol_version=0,
)
self.hand_bus = FeetechMotorsBus(
port="/dev/ttyACM1",
motors={
"thumb_basel_rotation": [30, "scs0009"],
"thumb_flexor": [27, "scs0009"],
"thumb_pinky_side": [26, "scs0009"],
"thumb_thumb_side": [28, "scs0009"],
"index_flexor": [25, "scs0009"],
"index_pinky_side": [31, "scs0009"],
"index_thumb_side": [32, "scs0009"],
"middle_flexor": [24, "scs0009"],
"middle_pinky_side": [33, "scs0009"],
"middle_thumb_side": [34, "scs0009"],
"ring_flexor": [21, "scs0009"],
"ring_pinky_side": [36, "scs0009"],
"ring_thumb_side": [35, "scs0009"],
"pinky_flexor": [23, "scs0009"],
"pinky_pinky_side": [38, "scs0009"],
"pinky_thumb_side": [37, "scs0009"],
},
protocol_version=1,
group_sync_read=False,
)
def get_hand_calibration(self):
"""
Returns a dictionary containing calibration settings for each motor
on the hand bus.
"""
homing_offset = [0] * len(self.hand_bus.motor_names)
drive_mode = [0] * len(self.hand_bus.motor_names)
start_pos = [
500, 900, 0, 1000, 100, 250, 750, 100, 400, 150, 100, 120, 980, 100, 950, 750,
]
end_pos = [
start_pos[0] - 400, # 500 - 400 = 100
start_pos[1] - 300, # 900 - 300 = 600
start_pos[2] + 550, # 0 + 550 = 550
start_pos[3] - 550, # 1000 - 550 = 450
start_pos[4] + 900, # 100 + 900 = 1000
start_pos[5] + 500, # 250 + 500 = 750
start_pos[6] - 500, # 750 - 500 = 250
start_pos[7] + 900, # 100 + 900 = 1000
start_pos[8] + 700, # 400 + 700 = 1100
start_pos[9] + 700, # 150 + 700 = 850
start_pos[10] + 900, # 100 + 900 = 1000
start_pos[11] + 700, # 120 + 700 = 820
start_pos[12] - 700, # 980 - 700 = 280
start_pos[13] + 900, # 100 + 900 = 1000
start_pos[14] - 700, # 950 - 700 = 250
start_pos[15] - 700, # 750 - 700 = 50
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.hand_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.hand_bus.motor_names,
}
return calib_dict
def get_arm_calibration(self):
"""
Returns a dictionary containing calibration settings for each motor
on the arm bus.
"""
homing_offset = [0] * len(self.arm_bus.motor_names)
drive_mode = [0] * len(self.arm_bus.motor_names)
# Example placeholders
start_pos = [
600, # shoulder_up
1500, # shoulder_forward
1300, # shoulder_yaw
1000, # bend_elbow
1600, # wrist_roll
1700, # wrist_yaw
600, # wrist_pitch
]
end_pos = [
2300, # shoulder_up
2300, # shoulder_forward
2800, # shoulder_yaw
2500, # bend_elbow
2800, # wrist_roll
2200, # wrist_yaw
1700, # wrist_pitch
]
calib_modes = [CalibrationMode.LINEAR.name] * len(self.arm_bus.motor_names)
calib_dict = {
"homing_offset": homing_offset,
"drive_mode": drive_mode,
"start_pos": start_pos,
"end_pos": end_pos,
"calib_mode": calib_modes,
"motor_names": self.arm_bus.motor_names,
}
return calib_dict
def connect(self):
"""Connect to the Feetech buses."""
self.arm_bus.connect()
# self.hand_bus.connect()
def capture_and_display_processed_frames(
frame_processor: Callable[[np.ndarray], np.ndarray],
window_display_name: str,
cap_device: int = 0,
) -> None:
"""
Capture frames from the given input camera device, run them through
the frame processor, and display the outputs in a window with the given name.
User should press Esc to exit.
Inputs:
frame_processor: Callable[[np.ndarray], np.ndarray]
Processes frames.
Input and output are numpy arrays of shape (H W C) with BGR channel layout and dtype uint8 / byte.
window_display_name: str
Name of the window used to display frames.
cap_device: int
Identifier for the camera to use to capture frames.
"""
cv2.namedWindow(window_display_name)
capture = cv2.VideoCapture(cap_device)
if not capture.isOpened():
raise ValueError("Unable to open video capture.")
frame_count = 0
has_frame, frame = capture.read()
while has_frame:
frame_count = frame_count + 1
# Mirror frame horizontally and flip color for demonstration
frame = np.ascontiguousarray(frame[:, ::-1, ::-1])
# process & show frame
processed_frame = frame_processor(frame)
cv2.imshow(window_display_name, processed_frame[:, :, ::-1])
has_frame, frame = capture.read()
key = cv2.waitKey(1)
if key == ESCAPE_KEY_ID:
break
capture.release()

Binary files not shown (3 images added: 56 KiB, 54 KiB, 56 KiB).


@@ -0,0 +1,44 @@
import matplotlib.pyplot as plt
import time
from typing import List, Tuple
def log_and_plot_params(bus, params_to_log: list, servo_names: list,
test_id="servo_log", interval=0.1, duration=5, save_plot=True) -> Tuple[dict, List[float]]:
"""
Logs specific servo parameters for a given duration and generates a plot.
"""
servo_data = {servo_name: {param: [] for param in params_to_log} for servo_name in servo_names}
timestamps = []
start_time = time.time()
while time.time() - start_time < duration:
timestamp = time.time() - start_time
timestamps.append(timestamp)
for param in params_to_log:
values = bus.read(param, servo_names)
for servo_name, value in zip(servo_names, values):
servo_data[servo_name][param].append(value)
time.sleep(interval)
if save_plot:
for servo_name, data in servo_data.items():
plt.figure(figsize=(10, 6))
for param in params_to_log:
if all(v is not None for v in data[param]):
plt.plot(timestamps, data[param], label=param)
plt.xlabel("Time (s)")
plt.ylabel("Parameter Values")
plt.title(f"Parameter Trends for Servo: {servo_name}")
plt.legend()
plt.grid(True)
plt.tight_layout()
plot_filename = f"{test_id}_{servo_name}.png"
plt.savefig(plot_filename)
print(f"Plot saved as {plot_filename}")
return servo_data, timestamps
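# Example usage (illustrative; assumes a connected HopeJuniorRobot as in the other examples):
# servo_data, timestamps = log_and_plot_params(
#     robot.arm_bus,
#     params_to_log=["Present_Position", "Present_Current", "Present_Load"],
#     servo_names=["elbow_flex", "wrist_yaw"],
#     test_id="elbow_wrist_log",
#     interval=0.1,
#     duration=5,
# )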

Binary files not shown (3 images added: 65 KiB, 60 KiB, 54 KiB).


@@ -0,0 +1,68 @@
STS_SERIES_CONTROL_TABLE = {
"Model": (3, 2),
"ID": (5, 1),
"Baud_Rate": (6, 1),
"Return_Delay": (7, 1),
"Response_Status_Level": (8, 1),
"Min_Angle_Limit": (9, 2),
"Max_Angle_Limit": (11, 2),
"Max_Temperature_Limit": (13, 1),
"Max_Voltage_Limit": (14, 1),
"Min_Voltage_Limit": (15, 1),
"Max_Torque_Limit": (16, 2),
"Phase": (18, 1),
"Unloading_Condition": (19, 1),
"LED_Alarm_Condition": (20, 1),
"P_Coefficient": (21, 1),
"D_Coefficient": (22, 1),
"I_Coefficient": (23, 1),
"Minimum_Startup_Force": (24, 2),
"CW_Dead_Zone": (26, 1),
"CCW_Dead_Zone": (27, 1),
"Protection_Current": (28, 2),
"Angular_Resolution": (30, 1),
"Offset": (31, 2),
"Mode": (33, 1),
"Protective_Torque": (34, 1),
"Protection_Time": (35, 1),
"Overload_Torque": (36, 1),
"Speed_closed_loop_P_proportional_coefficient": (37, 1),
"Over_Current_Protection_Time": (38, 1),
"Velocity_closed_loop_I_integral_coefficient": (39, 1),
"Torque_Enable": (40, 1),
"Acceleration": (41, 1),
"Goal_Position": (42, 2),
"Goal_Time": (44, 2),
"Goal_Speed": (46, 2),
"Torque_Limit": (48, 2),
"Lock": (55, 1),
"Present_Position": (56, 2),
"Present_Speed": (58, 2),
"Present_Load": (60, 2),
"Present_Voltage": (62, 1),
"Present_Temperature": (63, 1),
"Status": (65, 1),
"Moving": (66, 1),
"Present_Current": (69, 2),
# Not in the Memory Table
"Maximum_Acceleration": (85, 2),
}
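# Each entry maps a register name to its (address, size in bytes) in the servo memory map.
# For example, "Present_Position": (56, 2) is the register read in examples/test2.py via
# packetHandler.read2ByteTxRx(portHandler, SCS_ID, 56).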
import time
# Assuming STS_SERIES_CONTROL_TABLE is defined globally
def print_all_params(robot):
"""
Reads all parameters from the STS_SERIES_CONTROL_TABLE and prints their values.
"""
for param in STS_SERIES_CONTROL_TABLE.keys():
try:
val = robot.arm_bus.read(param)
print(f"{param} = {val}")
except Exception as e:
print(f"{param} read failed: {e}")
# Example usage:
print_all_params(robot)


@@ -0,0 +1,26 @@
#include <DFRobot_VisualRotaryEncoder.h>
DFRobot_VisualRotaryEncoder_I2C sensor(0x54, &Wire);
void setup()
{
Serial.begin(115200);
// Attempt to initialize the sensor
while (NO_ERR != sensor.begin()) {
// Failed? Just wait a bit and try again
delay(3000);
}
}
void loop()
{
// Read the encoder value
uint16_t encoderValue = sensor.getEncoderValue();
// Print it followed by a newline
Serial.println(encoderValue);
// Delay 10ms between readings
delay(10);
}


@@ -0,0 +1,544 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"what are the actual interest values on the hopejr? make like a map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"can change these dynamically so if the arm is moving down we can relax it instead of tensing it? so for example decreasing torque if the target position is lower than the actual position, for example. "
]
},
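{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough sketch of the idea above (values are illustrative): relax the elbow when the goal is\n",
"# below the present position by lowering Torque_Limit, otherwise restore full torque.\n",
"goal = 1000\n",
"present = robot.arm_bus.read(\"Present_Position\", [\"elbow_flex\"])[0]\n",
"if goal < present:\n",
"    robot.arm_bus.write(\"Torque_Limit\", 200, [\"elbow_flex\"])  # let gravity do most of the work\n",
"else:\n",
"    robot.arm_bus.write(\"Torque_Limit\", 1000, [\"elbow_flex\"])  # default strength\n",
"robot.arm_bus.write(\"Goal_Position\", [goal], [\"elbow_flex\"])"
]
},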
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"arm_calibration = robot.get_arm_calibration()\n",
"robot.arm_bus.write(\"Goal_Position\", arm_calibration[\"start_pos\"])"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Present Position: [1494]\n",
"Acceleration Read: [20]\n"
]
}
],
"source": [
"import time\n",
"from hopejr import HopeJuniorRobot\n",
"\n",
"\n",
"robot = HopeJuniorRobot()\n",
"robot.connect()\n",
"\n",
"# Example read of the current position\n",
"print(\"Present Position:\", robot.arm_bus.read(\"Present_Position\"))\n",
"\n",
"# Enable torque and set acceleration\n",
"robot.arm_bus.write(\"Torque_Enable\", 1)\n",
"robot.arm_bus.write(\"Acceleration\", 20)\n",
"print(\"Acceleration Read:\", robot.arm_bus.read(\"Acceleration\"))\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"Torque_Limit\", 100,[\"elbow_flex\"])\n",
"robot.arm_bus.write(\"Protective_Torque\", 0, [\"elbow_flex\"])\n",
"robot.arm_bus.write(\"Acceleration\", 20)\n",
"robot.arm_bus.write(\"Goal_Position\", [2000], [\"elbow_flex\"])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1000, 1000, 1000, 1000, 1000, 1000, 1000])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"robot.arm_bus.read(\"Max_Torque_Limit\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"Goal_Position\", [1909, 1977, 1820, 1000, 707, 1941, 1036]) #"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"Max_Voltage_Limit\", [160, 140, 140, 160, 140, 140, 140]) #so its not torque limit nor max torque limit, , no protective torque or overload torque\n",
"#it's 1) max voltage limit, min-max angle limits are arbitrairly set for all the motors, max temp is only set for the shoulder\n",
"#max acceleration is also set, we could lower that in the elbow to make it less responsive to commands basically\n",
"#so we limit speed and temperature, maybe we should limit torque thouhg, minimum startup force is also important. protection current as well\n",
"#changed that to 310.\n",
"#\"Max_Voltage_Limit\" also needs to be changed, different from torque_limit"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model = [777]\n",
"ID = [7]\n",
"Baud_Rate = [0]\n",
"Return_Delay = [0]\n",
"Response_Status_Level = [1]\n",
"Min_Angle_Limit = [1140]\n",
"Max_Angle_Limit = [2750]\n",
"Max_Temperature_Limit = [70]\n",
"Max_Voltage_Limit = [140]\n",
"Min_Voltage_Limit = [40]\n",
"Max_Torque_Limit = [1000]\n",
"Phase = [12]\n",
"Unloading_Condition = [44]\n",
"LED_Alarm_Condition = [47]\n",
"P_Coefficient = [32]\n",
"D_Coefficient = [32]\n",
"I_Coefficient = [0]\n",
"Minimum_Startup_Force = [16]\n",
"CW_Dead_Zone = [1]\n",
"CCW_Dead_Zone = [1]\n",
"Protection_Current = [310]\n",
"Angular_Resolution = [1]\n",
"Offset = [1047]\n",
"Mode = [0]\n",
"Protective_Torque = [20]\n",
"Protection_Time = [200]\n",
"Overload_Torque = [80]\n",
"Speed_closed_loop_P_proportional_coefficient = [10]\n",
"Over_Current_Protection_Time = [200]\n",
"Velocity_closed_loop_I_integral_coefficient = [200]\n",
"Torque_Enable = [1]\n",
"Acceleration = [20]\n",
"Goal_Position = [0]\n",
"Goal_Time = [0]\n",
"Goal_Speed = [0]\n",
"Torque_Limit = [1000]\n",
"Lock = [1]\n",
"Present_Position = [1494]\n",
"Present_Speed = [0]\n",
"Present_Load = [0]\n",
"Present_Voltage = [123]\n",
"Present_Temperature = [24]\n",
"Status = [0]\n",
"Moving = [0]\n",
"Present_Current = [0]\n",
"Maximum_Acceleration = [306]\n"
]
}
],
"source": [
"STS_SERIES_CONTROL_TABLE = {\n",
" \"Model\": (3, 2),\n",
" \"ID\": (5, 1),\n",
" \"Baud_Rate\": (6, 1),\n",
" \"Return_Delay\": (7, 1),\n",
" \"Response_Status_Level\": (8, 1),\n",
" \"Min_Angle_Limit\": (9, 2),\n",
" \"Max_Angle_Limit\": (11, 2),\n",
" \"Max_Temperature_Limit\": (13, 1),\n",
" \"Max_Voltage_Limit\": (14, 1),\n",
" \"Min_Voltage_Limit\": (15, 1),\n",
" \"Max_Torque_Limit\": (16, 2),\n",
" \"Phase\": (18, 1),\n",
" \"Unloading_Condition\": (19, 1),\n",
" \"LED_Alarm_Condition\": (20, 1),\n",
" \"P_Coefficient\": (21, 1),\n",
" \"D_Coefficient\": (22, 1),\n",
" \"I_Coefficient\": (23, 1),\n",
" \"Minimum_Startup_Force\": (24, 2),\n",
" \"CW_Dead_Zone\": (26, 1),\n",
" \"CCW_Dead_Zone\": (27, 1),\n",
" \"Protection_Current\": (28, 2),\n",
" \"Angular_Resolution\": (30, 1),\n",
" \"Offset\": (31, 2),\n",
" \"Mode\": (33, 1),\n",
" \"Protective_Torque\": (34, 1),\n",
" \"Protection_Time\": (35, 1),\n",
" \"Overload_Torque\": (36, 1),\n",
" \"Speed_closed_loop_P_proportional_coefficient\": (37, 1),\n",
" \"Over_Current_Protection_Time\": (38, 1),\n",
" \"Velocity_closed_loop_I_integral_coefficient\": (39, 1),\n",
" \"Torque_Enable\": (40, 1),\n",
" \"Acceleration\": (41, 1),\n",
" \"Goal_Position\": (42, 2),\n",
" \"Goal_Time\": (44, 2),\n",
" \"Goal_Speed\": (46, 2),\n",
" \"Torque_Limit\": (48, 2),\n",
" \"Lock\": (55, 1),\n",
" \"Present_Position\": (56, 2),\n",
" \"Present_Speed\": (58, 2),\n",
" \"Present_Load\": (60, 2),\n",
" \"Present_Voltage\": (62, 1),\n",
" \"Present_Temperature\": (63, 1),\n",
" \"Status\": (65, 1),\n",
" \"Moving\": (66, 1),\n",
" \"Present_Current\": (69, 2),\n",
" # Not in the Memory Table\n",
" \"Maximum_Acceleration\": (85, 2),\n",
"}\n",
"\n",
"import time\n",
"\n",
"# Assuming STS_SERIES_CONTROL_TABLE is defined globally\n",
"\n",
"def print_all_params(robot):\n",
" \"\"\"\n",
" Reads all parameters from the STS_SERIES_CONTROL_TABLE and prints their values.\n",
" \"\"\"\n",
" for param in STS_SERIES_CONTROL_TABLE.keys():\n",
" try:\n",
" val = robot.arm_bus.read(param)\n",
" print(f\"{param} = {val}\")\n",
" except Exception as e:\n",
" print(f\"{param} read failed: {e}\")\n",
"\n",
"\n",
"# Example usage:\n",
"print_all_params(robot)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"probably max input voltage, we can also look at temperature and "
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Acceleration Read: [20 20]\n"
]
}
],
"source": [
"\n",
"print(\"Acceleration Read:\", robot.arm_bus.read(\"Acceleration\"))"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"LED_Alarm_Condition\", 2, [\"elbow_flex\"])"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"Acceleration\", 20, [\"elbow_flex\"])\n",
"robot.arm_bus.write(\"Acceleration\", 100, [\"wrist_yaw\"])"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"Goal_Position\", [1000, 1000], [\"elbow_flex\", \"wrist_yaw\"])\n",
"time.sleep(2)\n",
"robot.arm_bus.write(\"Goal_Position\", [2000, 2000], [\"elbow_flex\", \"wrist_yaw\"])"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [0]\n",
"Elbow Flex Current: [3]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [2]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [2]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [1]\n",
"Elbow Flex Current: [0]\n"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[68], line 25\u001b[0m\n\u001b[1;32m 22\u001b[0m time\u001b[38;5;241m.\u001b[39msleep(\u001b[38;5;241m2\u001b[39m)\n\u001b[1;32m 23\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 24\u001b[0m \u001b[38;5;66;03m# If current is zero, hold at pos_a for a bit\u001b[39;00m\n\u001b[0;32m---> 25\u001b[0m \u001b[43mtime\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msleep\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"import time\n",
"\n",
"# Enable torque on elbow_flex\n",
"robot.arm_bus.write(\"Torque_Enable\", 1, [\"elbow_flex\"])\n",
"\n",
"pos_a = 2500\n",
"pos_b = 1000\n",
"\n",
"robot.arm_bus.write(\"Goal_Position\", pos_a, [\"elbow_flex\"])\n",
"time.sleep(2)\n",
"\n",
"while True:\n",
" current_val = robot.arm_bus.read(\"Present_Current\", \"elbow_flex\")\n",
" print(\"Elbow Flex Current:\", current_val)\n",
" \n",
" # If the servo is under non-zero load/current, switch position\n",
" if current_val > 1:\n",
" robot.arm_bus.write(\"Goal_Position\", pos_b, [\"elbow_flex\"])\n",
" time.sleep(2)\n",
" # Go back to pos_a again\n",
" robot.arm_bus.write(\"Goal_Position\", pos_a, [\"elbow_flex\"])\n",
" time.sleep(2)\n",
" else:\n",
" # If current is zero, hold at pos_a for a bit\n",
" time.sleep(1)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\"Acceleration\" = 0, infinitely fast\n",
"\"Acceleration\" = 20 slow\n",
"elbow_flex is the LED one (4)\n",
"\n",
"robot.arm_bus.write(\"LED_Alarm_Condition\", 2, [\"elbow_flex\"]) #on constantly\n",
"robot.arm_bus.write(\"LED_Alarm_Condition\", 1, [\"elbow_flex\"]) #beeping\n",
"robot.arm_bus.write(\"LED_Alarm_Condition\", 0, [\"elbow_flex\"]) #off\n",
"\n",
"\"Max_Torque_Limit\": (16, 2), is what i have o play around with or \"Protective_Torque\": (37, 1), maybe\n",
"\n",
"robot.arm_bus.write(\"Torque_Enable\", 1, [\"elbow_flex\"]) 1 can move 0 cant move\n",
"\n",
"robot.arm_bus.write(\"Torque_Limit\", 300, [\"elbow_flex\"]) #how strong/weak the servo is. 0 makes it so that it cannot move basically, but i'd like to have that value change honestly and for it to be waeaker\n",
"\n",
"robot.arm_bus.write(\"Torque_Limit\", 20,[\"elbow_flex\"]) 20 in ordre to get some motion\n",
"\n",
"default is 1000\n",
"\n",
"robot.arm_bus.write(\"Goal_Speed\", -s, [\"elbow_flex\"]) #changes how fast the servo moves when going to the target, does not make it move with a specific speed "
]
},
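{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick consolidation of the notes above (values are illustrative):\n",
"robot.arm_bus.write(\"Torque_Enable\", 1, [\"elbow_flex\"])\n",
"robot.arm_bus.write(\"Torque_Limit\", 20, [\"elbow_flex\"])  # weak: just enough to get some motion\n",
"robot.arm_bus.write(\"Goal_Position\", [2000], [\"elbow_flex\"])\n",
"time.sleep(2)\n",
"robot.arm_bus.write(\"Torque_Limit\", 1000, [\"elbow_flex\"])  # back to the default strength\n",
"robot.arm_bus.write(\"LED_Alarm_Condition\", 0, [\"elbow_flex\"])  # LED off"
]
},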
{
"cell_type": "markdown",
"metadata": {},
"source": [
"import time\n",
"\n",
"# Enable torque on elbow_flex\n",
"robot.arm_bus.write(\"Torque_Enable\", 1, [\"elbow_flex\"])\n",
"\n",
"pos_a = 1000\n",
"pos_b = 2500\n",
"\n",
"robot.arm_bus.write(\"Goal_Position\", pos_a, [\"elbow_flex\"])\n",
"time.sleep(2)\n",
"\n",
"while True:\n",
" current_val = robot.arm_bus.read(\"Present_Current\", \"elbow_flex\")\n",
" print(\"Elbow Flex Current:\", current_val)\n",
" \n",
" # If the servo is under non-zero load/current, switch position\n",
" if current_val > 1:\n",
" robot.arm_bus.write(\"Goal_Position\", pos_b, [\"elbow_flex\"])\n",
" time.sleep(2)\n",
" # Go back to pos_a again\n",
" robot.arm_bus.write(\"Goal_Position\", pos_a, [\"elbow_flex\"])\n",
" time.sleep(2)\n",
" else:\n",
" # If current is zero, hold at pos_a for a bit\n",
" time.sleep(1)\n",
"\n",
"\n",
"so if current is larger than x then you disable it \n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"robot.arm_bus.write(\"LED_Alarm_Condition\", 2, [\"elbow_flex\"])"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0]\n",
"[0]\n",
"[2]\n",
"[4]\n",
"[0]\n",
"[0]\n",
"[0]\n"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[43], line 3\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28mprint\u001b[39m(robot\u001b[38;5;241m.\u001b[39marm_bus\u001b[38;5;241m.\u001b[39mread(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mPresent_Current\u001b[39m\u001b[38;5;124m\"\u001b[39m, [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124melbow_flex\u001b[39m\u001b[38;5;124m\"\u001b[39m]))\n\u001b[0;32m----> 3\u001b[0m \u001b[43mtime\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msleep\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"while True:\n",
" print(robot.arm_bus.read(\"Present_Current\", [\"elbow_flex\"]))\n",
" time.sleep(1)"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Max_Voltage_Limit = [160 140 140 160 140 140 80]\n",
"Min_Angle_Limit = [ 650 1300 1300 1200 600 1725 0]\n",
"Max_Angle_Limit = [2600 2050 2800 2500 4096 2250 4095]\n",
"Max_Temperature_Limit = [80 70 70 80 70 70 70]\n",
"Acceleration = [20 20 20 20 20 20 20]\n",
"Torque_Limit = [1000 1000 1000 200 1000 1000 1000]\n",
"Minimum_Startup_Force = [15 16 16 12 16 16 16]\n",
"Protection_Current = [310 310 310 310 310 310 500]\n"
]
}
],
"source": [
"import time\n",
"\n",
"def print_important_params(robot):\n",
"\n",
" # Example parameters you mentioned; adjust as needed\n",
" param_list = [\n",
" \"Max_Voltage_Limit\",\n",
" \"Min_Angle_Limit\",\n",
" \"Max_Angle_Limit\",\n",
" \"Max_Temperature_Limit\",\n",
" \"Acceleration\", # or \"Maximum_Acceleration\" if you prefer that register\n",
" \"Torque_Limit\", # or \"Max_Torque_Limit\" if your table uses that\n",
" \"Minimum_Startup_Force\",\n",
" \"Protection_Current\",\n",
" ]\n",
" \n",
" for param in param_list:\n",
" try:\n",
" val = robot.arm_bus.read(param)\n",
" print(f\"{param} = {val}\")\n",
" except Exception as e:\n",
" print(f\"{param} read failed: {e}\")\n",
"\n",
"# -------------------------------\n",
"# Example usage\n",
"\n",
"print_important_params(robot)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "lerobot",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,27 @@
import time
from hopejr import HopeJuniorRobot
def main():
# Instantiate and connect to the robot
robot = HopeJuniorRobot()
robot.connect()
# Example read of the current position
print("Present Position:", robot.arm_bus.read("Present_Position"))
# Enable torque and set acceleration
robot.arm_bus.write("Torque_Enable", 1)
robot.arm_bus.write("Acceleration", 20)
print("Acceleration Read:", robot.arm_bus.read("Acceleration"))
# Move elbow_flex and wrist_yaw a few times
robot.arm_bus.write("Goal_Position", [1000, 1000], ["elbow_flex", "wrist_yaw"])
time.sleep(2)
robot.arm_bus.write("Goal_Position", [1500, 1500], ["elbow_flex", "wrist_yaw"])
time.sleep(2)
robot.arm_bus.write("Goal_Position", [1000, 1000], ["elbow_flex", "wrist_yaw"])
if __name__ == "__main__":
main()


@@ -0,0 +1,49 @@
STS_SERIES_CONTROL_TABLE = {
"Model": (3, 2),
"ID": (5, 1),
"Baud_Rate": (6, 1),
"Return_Delay": (7, 1),
"Response_Status_Level": (8, 1),
"Min_Angle_Limit": (9, 2),
"Max_Angle_Limit": (11, 2),
"Max_Temperature_Limit": (13, 1),
"Max_Voltage_Limit": (14, 1),
"Min_Voltage_Limit": (15, 1),
"Max_Torque_Limit": (16, 2),
"Phase": (18, 1),
"Unloading_Condition": (19, 1),
"LED_Alarm_Condition": (20, 1),
"P_Coefficient": (21, 1),
"D_Coefficient": (22, 1),
"I_Coefficient": (23, 1),
"Minimum_Startup_Force": (24, 2),
"CW_Dead_Zone": (26, 1),
"CCW_Dead_Zone": (27, 1),
"Protection_Current": (28, 2),
"Angular_Resolution": (30, 1),
"Offset": (31, 2),
"Mode": (33, 1),
"Protective_Torque": (34, 1),
"Protection_Time": (35, 1),
"Overload_Torque": (36, 1),
"Speed_closed_loop_P_proportional_coefficient": (37, 1),
"Over_Current_Protection_Time": (38, 1),
"Velocity_closed_loop_I_integral_coefficient": (39, 1),
"Torque_Enable": (40, 1),
"Acceleration": (41, 1),
"Goal_Position": (42, 2),
"Goal_Time": (44, 2),
"Goal_Speed": (46, 2),
"Torque_Limit": (48, 2),
"Lock": (55, 1),
"Present_Position": (56, 2),
"Present_Speed": (58, 2),
"Present_Load": (60, 2),
"Present_Voltage": (62, 1),
"Present_Temperature": (63, 1),
"Status": (65, 1),
"Moving": (66, 1),
"Present_Current": (69, 2),
# Not in the Memory Table
"Maximum_Acceleration": (85, 2),
}


@@ -0,0 +1,16 @@
The first check that kicks in is the current check:
Protection_Current (310), the over-current threshold (roughly in amperes, to be confirmed)
Present_Current, which is compared against the protection current
Over_Current_Protection_Time, how long the over-current condition may persist before shutdown
TODO: write a quick update about this
The variables of interest for the torque check are:
Max_Torque_Limit = 1000
Present_Load = 1000-something, which triggered the overload torque mechanism
Overload_Torque = 80, how much of the max torque limit do we allow (as a percentage)?
Protection_Time = 200, nominally after how long Torque_Enable is set back to 1 (note: this description does not seem to be accurate)
Protective_Torque = 20, how much torque the motor is allowed once the safety mechanism has triggered
There is actually no temperature or voltage check performed by the Feetech servos; the only two protections are current and torque, which work as described above.
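A minimal sketch of how these registers could be inspected from Python, reusing the robot.arm_bus.read calls seen elsewhere in this branch (illustrative only: the checks themselves run in the servo firmware, this just mirrors the logic described above):

from hopejr import HopeJuniorRobot

robot = HopeJuniorRobot()
robot.connect()

# Current check: Present_Current is compared against Protection_Current, and the servo
# only shuts down if the condition persists for Over_Current_Protection_Time.
protection_current = robot.arm_bus.read("Protection_Current")
present_current = robot.arm_bus.read("Present_Current")
print("over-current?", present_current > protection_current)

# Torque check: Present_Load is compared against Overload_Torque percent of Max_Torque_Limit;
# once triggered, the motor is limited to Protective_Torque until the condition clears.
max_torque = robot.arm_bus.read("Max_Torque_Limit")
overload_pct = robot.arm_bus.read("Overload_Torque")
present_load = robot.arm_bus.read("Present_Load")
print("overload?", present_load > max_torque * overload_pct / 100)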

View File

@@ -27,6 +27,9 @@ Example:
print(lerobot.available_real_world_datasets)
print(lerobot.available_policies)
print(lerobot.available_policies_per_env)
print(lerobot.available_robots)
print(lerobot.available_cameras)
print(lerobot.available_motors)
```
When implementing a new dataset loadable with LeRobotDataset follow these steps:
@@ -45,6 +48,9 @@ import itertools
from lerobot.__version__ import __version__ # noqa: F401
# TODO(rcadene): Improve policies and envs. As of now, an item in `available_policies`
# refers to a yaml file AND a modeling name. Same for `available_envs` which refers to
# a yaml file AND an environment name. The difference should be more obvious.
available_tasks_per_env = {
"aloha": [
"AlohaInsertion-v0",
@@ -52,6 +58,7 @@ available_tasks_per_env = {
],
"pusht": ["PushT-v0"],
"xarm": ["XarmLift-v0"],
"dora_aloha_real": ["DoraAloha-v0", "DoraKoch-v0", "DoraReachy2-v0"],
}
available_envs = list(available_tasks_per_env.keys())
@@ -66,6 +73,8 @@ available_datasets_per_env = {
"lerobot/aloha_sim_transfer_cube_human_image",
"lerobot/aloha_sim_transfer_cube_scripted_image",
],
# TODO(alexander-soare): Add "lerobot/pusht_keypoints". Right now we can't because this is too tightly
# coupled with tests.
"pusht": ["lerobot/pusht", "lerobot/pusht_image"],
"xarm": [
"lerobot/xarm_lift_medium",
@@ -77,6 +86,23 @@ available_datasets_per_env = {
"lerobot/xarm_push_medium_image",
"lerobot/xarm_push_medium_replay_image",
],
"dora_aloha_real": [
"lerobot/aloha_static_battery",
"lerobot/aloha_static_candy",
"lerobot/aloha_static_coffee",
"lerobot/aloha_static_coffee_new",
"lerobot/aloha_static_cups_open",
"lerobot/aloha_static_fork_pick_up",
"lerobot/aloha_static_pingpong_test",
"lerobot/aloha_static_pro_pencil",
"lerobot/aloha_static_screw_driver",
"lerobot/aloha_static_tape",
"lerobot/aloha_static_thread_velcro",
"lerobot/aloha_static_towel",
"lerobot/aloha_static_vinh_cup",
"lerobot/aloha_static_vinh_cup_left",
"lerobot/aloha_static_ziploc_slide",
],
}
available_real_world_datasets = [
@@ -102,22 +128,100 @@ available_real_world_datasets = [
"lerobot/aloha_static_vinh_cup_left",
"lerobot/aloha_static_ziploc_slide",
"lerobot/umi_cup_in_the_wild",
"lerobot/unitreeh1_fold_clothes",
"lerobot/unitreeh1_rearrange_objects",
"lerobot/unitreeh1_two_robot_greeting",
"lerobot/unitreeh1_warehouse",
"lerobot/nyu_rot_dataset",
"lerobot/utokyo_saytap",
"lerobot/imperialcollege_sawyer_wrist_cam",
"lerobot/utokyo_xarm_bimanual",
"lerobot/tokyo_u_lsmo",
"lerobot/utokyo_pr2_opening_fridge",
"lerobot/cmu_franka_exploration_dataset",
"lerobot/cmu_stretch",
"lerobot/asu_table_top",
"lerobot/utokyo_pr2_tabletop_manipulation",
"lerobot/utokyo_xarm_pick_and_place",
"lerobot/ucsd_kitchen_dataset",
"lerobot/austin_buds_dataset",
"lerobot/dlr_sara_grid_clamp",
"lerobot/conq_hose_manipulation",
"lerobot/columbia_cairlab_pusht_real",
"lerobot/dlr_sara_pour",
"lerobot/dlr_edan_shared_control",
"lerobot/ucsd_pick_and_place_dataset",
"lerobot/berkeley_cable_routing",
"lerobot/nyu_franka_play_dataset",
"lerobot/austin_sirius_dataset",
"lerobot/cmu_play_fusion",
"lerobot/berkeley_gnm_sac_son",
"lerobot/nyu_door_opening_surprising_effectiveness",
"lerobot/berkeley_fanuc_manipulation",
"lerobot/jaco_play",
"lerobot/viola",
"lerobot/kaist_nonprehensile",
"lerobot/berkeley_mvp",
"lerobot/uiuc_d3field",
"lerobot/berkeley_gnm_recon",
"lerobot/austin_sailor_dataset",
"lerobot/utaustin_mutex",
"lerobot/roboturk",
"lerobot/stanford_hydra_dataset",
"lerobot/berkeley_autolab_ur5",
"lerobot/stanford_robocook",
"lerobot/toto",
"lerobot/fmb",
"lerobot/droid_100",
"lerobot/berkeley_rpt",
"lerobot/stanford_kuka_multimodal_dataset",
"lerobot/iamlab_cmu_pickup_insert",
"lerobot/taco_play",
"lerobot/berkeley_gnm_cory_hall",
"lerobot/usc_cloth_sim",
]
available_datasets = list(
itertools.chain(*available_datasets_per_env.values(), available_real_world_datasets)
)
# lists all available policies from `lerobot/common/policies`
available_policies = [
"act",
"diffusion",
"tdmpc",
"vqbet",
]
# lists all available robots from `lerobot/common/robot_devices/robots`
available_robots = [
"koch",
"koch_bimanual",
"aloha",
"so100",
"moss",
]
# lists all available cameras from `lerobot/common/robot_devices/cameras`
available_cameras = [
"opencv",
"intelrealsense",
]
# lists all available motors from `lerobot/common/robot_devices/motors`
available_motors = [
"dynamixel",
"feetech",
]
# keys and values refer to yaml files
available_policies_per_env = {
"aloha": ["act"],
"pusht": ["diffusion"],
"pusht": ["diffusion", "vqbet"],
"xarm": ["tdmpc"],
"koch_real": ["act_koch_real"],
"aloha_real": ["act_aloha_real"],
"dora_aloha_real": ["act_aloha_real"],
}
env_task_pairs = [(env, task) for env, tasks in available_tasks_per_env.items() for task in tasks]

View File

@@ -1,334 +0,0 @@
# Video benchmark
## Questions
What is the optimal trade-off between:
- maximizing loading time with random access,
- minimizing memory space on disk,
- maximizing success rate of policies?
How to encode videos?
- How much compression (`-crf`)? Low compression with `0`, normal compression with `20`, or extreme compression with `56`?
- What pixel format to use (`-pix_fmt`)? `yuv444p` or `yuv420p`?
- How many key frames (`-g`)? A key frame every `10` frames? (See the example encoding command after this list.)
How to decode videos?
- Which `decoder`? `torchvision`, `torchaudio`, `ffmpegio`, `decord`, or `nvc`?
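As a concrete reference, the benchmark script kept later in this diff sweeps these options by assembling ffmpeg commands along the following lines, then decodes frames back at the requested timestamps with `decode_video_frames_torchvision`. This is a sketch: the values for `g`, `crf` and `pix_fmt` below are just one illustrative configuration.
```python
import subprocess

fps = 10
g, crf, pix_fmt = 2, None, "yuv444p"  # illustrative; the benchmark sweeps these

cmd = f"ffmpeg -r {fps} -f image2 -loglevel error -i frame_%06d.png -vcodec libx264 "
if g is not None:
    cmd += f"-g {g} "      # at most g frames between two key frames
if crf is not None:
    cmd += f"-crf {crf} "  # constant rate factor: higher means more compression
cmd += f"-pix_fmt {pix_fmt} episode_0.mp4"
subprocess.run(cmd.split(" "), check=True)
```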
## Metrics
**Percentage of data compression (higher is better)**
`compression_factor` is the ratio of the memory space on disk taken by the original images to encode, to the memory space taken by the encoded video. For instance, `compression_factor=4` means that the video takes 4 times less memory space on disk compared to the original images.
**Percentage of loading time (higher is better)**
`load_time_factor` is the ratio of the time it takes to load original images at given timestamps, to the time it takes to decode the exact same frames from the video. Higher is better. For instance, `load_time_factor=0.5` means that decoding from video is 2 times slower than loading the original images.
**Average L2 error per pixel (lower is better)**
`avg_per_pixel_l2_error` is the average L2 error between each decoded frame and its corresponding original image over all requested timestamps, and also divided by the number of pixels in the image to be comparable when switching to different image sizes.
**Loss of a pretrained policy (lower is better)** (not available)
`loss_pretrained` is the loss obtained by evaluating, with the selected encoding/decoding settings, a policy pretrained on the original images. It is easier to interpret than `avg_per_pixel_l2_error`.
**Success rate after retraining (higher is better)** (not available)
`success_rate` is the result of training and evaluating a policy with the selected encoding/decoding settings. It is the most difficult metric to get but also the very best.
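A minimal sketch of how the three available metrics are derived, mirroring the `info` dictionary assembled by the benchmark script kept later in this diff (argument names here are illustrative):
```python
def summarize_benchmark(original_images_size_bytes, video_size_bytes,
                        avg_load_time_from_images_s, avg_load_time_from_video_s,
                        per_pixel_l2_errors):
    return {
        # > 1 means the video takes less disk space than the original images
        "compression_factor": original_images_size_bytes / video_size_bytes,
        # < 1 means decoding frames from video is slower than loading the original images
        "load_time_factor": avg_load_time_from_images_s / avg_load_time_from_video_s,
        # L2 error between decoded and original frames, already normalized per pixel
        "avg_per_pixel_l2_error": sum(per_pixel_l2_errors) / len(per_pixel_l2_errors),
    }
```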
## Variables
**Image content**
We don't expect the same optimal settings for a dataset of images from a simulation, from the real world in an apartment, in a factory, outdoors, etc. Hence, we run this benchmark on two datasets: `pusht` (simulation) and `umi` (real-world outdoor).
**Requested timestamps**
In this benchmark, we focus on the loading time of random access, so we are not interested in sequentially loading all frames of a video as in a movie. However, the number of timestamps requested and their spacing can greatly affect the `load_time_factor`. In fact, decoding a large number of consecutive frames from a video is expected to be faster than loading the same data from individual images. To reflect our robotics use case, we consider a few settings:
- `1_frame`: 1 frame,
- `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
- `2_frames_4_space`: 2 frames separated by 4 frames (e.g. `[t, t + 4 / fps]`),
- `6_frames`: 6 consecutive frames (e.g. `[t, t + 1 / fps, ..., t + 5 / fps]`).
**Data augmentations**
We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).
## Results
**`decoder`**
| repo_id | decoder | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- |
| lerobot/pusht | <span style="color: #32CD32;">torchvision</span> | 0.166 | 0.0000119 |
| lerobot/pusht | ffmpegio | 0.009 | 0.0001182 |
| lerobot/pusht | torchaudio | 0.138 | 0.0000359 |
| lerobot/umi_cup_in_the_wild | <span style="color: #32CD32;">torchvision</span> | 0.174 | 0.0000174 |
| lerobot/umi_cup_in_the_wild | ffmpegio | 0.010 | 0.0000735 |
| lerobot/umi_cup_in_the_wild | torchaudio | 0.154 | 0.0000340 |
### `1_frame`
**`pix_fmt`**
| repo_id | pix_fmt | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | yuv420p | 3.788 | 0.224 | 0.0000760 |
| lerobot/pusht | yuv444p | 3.646 | 0.185 | 0.0000443 |
| lerobot/umi_cup_in_the_wild | yuv420p | 14.391 | 0.388 | 0.0000469 |
| lerobot/umi_cup_in_the_wild | yuv444p | 14.932 | 0.329 | 0.0000397 |
**`g`**
| repo_id | g | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 1 | 2.543 | 0.204 | 0.0000556 |
| lerobot/pusht | 2 | 3.646 | 0.182 | 0.0000443 |
| lerobot/pusht | 3 | 4.431 | 0.174 | 0.0000450 |
| lerobot/pusht | 4 | 5.103 | 0.163 | 0.0000448 |
| lerobot/pusht | 5 | 5.625 | 0.163 | 0.0000436 |
| lerobot/pusht | 6 | 5.974 | 0.155 | 0.0000427 |
| lerobot/pusht | 10 | 6.814 | 0.130 | 0.0000410 |
| lerobot/pusht | 15 | 7.431 | 0.105 | 0.0000406 |
| lerobot/pusht | 20 | 7.662 | 0.097 | 0.0000400 |
| lerobot/pusht | 40 | 8.163 | 0.061 | 0.0000405 |
| lerobot/pusht | 100 | 8.761 | 0.039 | 0.0000422 |
| lerobot/pusht | None | 8.909 | 0.024 | 0.0000431 |
| lerobot/umi_cup_in_the_wild | 1 | 14.411 | 0.444 | 0.0000601 |
| lerobot/umi_cup_in_the_wild | 2 | 14.932 | 0.345 | 0.0000397 |
| lerobot/umi_cup_in_the_wild | 3 | 20.174 | 0.282 | 0.0000416 |
| lerobot/umi_cup_in_the_wild | 4 | 24.889 | 0.271 | 0.0000415 |
| lerobot/umi_cup_in_the_wild | 5 | 28.825 | 0.260 | 0.0000415 |
| lerobot/umi_cup_in_the_wild | 6 | 31.635 | 0.249 | 0.0000415 |
| lerobot/umi_cup_in_the_wild | 10 | 39.418 | 0.195 | 0.0000399 |
| lerobot/umi_cup_in_the_wild | 15 | 44.577 | 0.169 | 0.0000394 |
| lerobot/umi_cup_in_the_wild | 20 | 47.907 | 0.140 | 0.0000390 |
| lerobot/umi_cup_in_the_wild | 40 | 52.554 | 0.096 | 0.0000384 |
| lerobot/umi_cup_in_the_wild | 100 | 58.241 | 0.046 | 0.0000390 |
| lerobot/umi_cup_in_the_wild | None | 60.530 | 0.022 | 0.0000400 |
**`crf`**
| repo_id | crf | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 0 | 1.699 | 0.175 | 0.0000035 |
| lerobot/pusht | 5 | 1.409 | 0.181 | 0.0000080 |
| lerobot/pusht | 10 | 1.842 | 0.172 | 0.0000123 |
| lerobot/pusht | 15 | 2.322 | 0.187 | 0.0000211 |
| lerobot/pusht | 20 | 3.050 | 0.181 | 0.0000346 |
| lerobot/pusht | None | 3.646 | 0.189 | 0.0000443 |
| lerobot/pusht | 25 | 3.969 | 0.186 | 0.0000521 |
| lerobot/pusht | 30 | 5.687 | 0.184 | 0.0000850 |
| lerobot/pusht | 40 | 10.818 | 0.193 | 0.0001726 |
| lerobot/pusht | 50 | 18.185 | 0.183 | 0.0002606 |
| lerobot/umi_cup_in_the_wild | 0 | 1.918 | 0.165 | 0.0000056 |
| lerobot/umi_cup_in_the_wild | 5 | 3.207 | 0.171 | 0.0000111 |
| lerobot/umi_cup_in_the_wild | 10 | 4.818 | 0.212 | 0.0000153 |
| lerobot/umi_cup_in_the_wild | 15 | 7.329 | 0.261 | 0.0000218 |
| lerobot/umi_cup_in_the_wild | 20 | 11.361 | 0.312 | 0.0000317 |
| lerobot/umi_cup_in_the_wild | None | 14.932 | 0.339 | 0.0000397 |
| lerobot/umi_cup_in_the_wild | 25 | 17.741 | 0.297 | 0.0000452 |
| lerobot/umi_cup_in_the_wild | 30 | 27.983 | 0.406 | 0.0000629 |
| lerobot/umi_cup_in_the_wild | 40 | 82.449 | 0.468 | 0.0001184 |
| lerobot/umi_cup_in_the_wild | 50 | 186.145 | 0.515 | 0.0001879 |
**best**
| repo_id | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- |
| lerobot/pusht | 3.646 | 0.188 | 0.0000443 |
| lerobot/umi_cup_in_the_wild | 14.932 | 0.339 | 0.0000397 |
### `2_frames`
**`pix_fmt`**
| repo_id | pix_fmt | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | yuv420p | 3.788 | 0.314 | 0.0000799 |
| lerobot/pusht | yuv444p | 3.646 | 0.303 | 0.0000496 |
| lerobot/umi_cup_in_the_wild | yuv420p | 14.391 | 0.642 | 0.0000503 |
| lerobot/umi_cup_in_the_wild | yuv444p | 14.932 | 0.529 | 0.0000436 |
**`g`**
| repo_id | g | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 1 | 2.543 | 0.308 | 0.0000599 |
| lerobot/pusht | 2 | 3.646 | 0.279 | 0.0000496 |
| lerobot/pusht | 3 | 4.431 | 0.259 | 0.0000498 |
| lerobot/pusht | 4 | 5.103 | 0.243 | 0.0000501 |
| lerobot/pusht | 5 | 5.625 | 0.235 | 0.0000492 |
| lerobot/pusht | 6 | 5.974 | 0.230 | 0.0000481 |
| lerobot/pusht | 10 | 6.814 | 0.194 | 0.0000468 |
| lerobot/pusht | 15 | 7.431 | 0.152 | 0.0000460 |
| lerobot/pusht | 20 | 7.662 | 0.151 | 0.0000455 |
| lerobot/pusht | 40 | 8.163 | 0.095 | 0.0000454 |
| lerobot/pusht | 100 | 8.761 | 0.062 | 0.0000472 |
| lerobot/pusht | None | 8.909 | 0.037 | 0.0000479 |
| lerobot/umi_cup_in_the_wild | 1 | 14.411 | 0.638 | 0.0000625 |
| lerobot/umi_cup_in_the_wild | 2 | 14.932 | 0.537 | 0.0000436 |
| lerobot/umi_cup_in_the_wild | 3 | 20.174 | 0.493 | 0.0000437 |
| lerobot/umi_cup_in_the_wild | 4 | 24.889 | 0.458 | 0.0000446 |
| lerobot/umi_cup_in_the_wild | 5 | 28.825 | 0.438 | 0.0000445 |
| lerobot/umi_cup_in_the_wild | 6 | 31.635 | 0.424 | 0.0000444 |
| lerobot/umi_cup_in_the_wild | 10 | 39.418 | 0.345 | 0.0000435 |
| lerobot/umi_cup_in_the_wild | 15 | 44.577 | 0.313 | 0.0000417 |
| lerobot/umi_cup_in_the_wild | 20 | 47.907 | 0.264 | 0.0000421 |
| lerobot/umi_cup_in_the_wild | 40 | 52.554 | 0.185 | 0.0000414 |
| lerobot/umi_cup_in_the_wild | 100 | 58.241 | 0.090 | 0.0000420 |
| lerobot/umi_cup_in_the_wild | None | 60.530 | 0.042 | 0.0000424 |
**`crf`**
| repo_id | crf | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 0 | 1.699 | 0.302 | 0.0000097 |
| lerobot/pusht | 5 | 1.409 | 0.287 | 0.0000142 |
| lerobot/pusht | 10 | 1.842 | 0.283 | 0.0000184 |
| lerobot/pusht | 15 | 2.322 | 0.305 | 0.0000268 |
| lerobot/pusht | 20 | 3.050 | 0.285 | 0.0000402 |
| lerobot/pusht | None | 3.646 | 0.285 | 0.0000496 |
| lerobot/pusht | 25 | 3.969 | 0.293 | 0.0000572 |
| lerobot/pusht | 30 | 5.687 | 0.293 | 0.0000893 |
| lerobot/pusht | 40 | 10.818 | 0.319 | 0.0001762 |
| lerobot/pusht | 50 | 18.185 | 0.304 | 0.0002626 |
| lerobot/umi_cup_in_the_wild | 0 | 1.918 | 0.235 | 0.0000112 |
| lerobot/umi_cup_in_the_wild | 5 | 3.207 | 0.261 | 0.0000166 |
| lerobot/umi_cup_in_the_wild | 10 | 4.818 | 0.333 | 0.0000207 |
| lerobot/umi_cup_in_the_wild | 15 | 7.329 | 0.406 | 0.0000267 |
| lerobot/umi_cup_in_the_wild | 20 | 11.361 | 0.489 | 0.0000361 |
| lerobot/umi_cup_in_the_wild | None | 14.932 | 0.537 | 0.0000436 |
| lerobot/umi_cup_in_the_wild | 25 | 17.741 | 0.578 | 0.0000487 |
| lerobot/umi_cup_in_the_wild | 30 | 27.983 | 0.453 | 0.0000655 |
| lerobot/umi_cup_in_the_wild | 40 | 82.449 | 0.767 | 0.0001192 |
| lerobot/umi_cup_in_the_wild | 50 | 186.145 | 0.816 | 0.0001881 |
**best**
| repo_id | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- |
| lerobot/pusht | 3.646 | 0.283 | 0.0000496 |
| lerobot/umi_cup_in_the_wild | 14.932 | 0.543 | 0.0000436 |
### `2_frames_4_space`
**`pix_fmt`**
| repo_id | pix_fmt | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | yuv420p | 3.788 | 0.257 | 0.0000855 |
| lerobot/pusht | yuv444p | 3.646 | 0.261 | 0.0000556 |
| lerobot/umi_cup_in_the_wild | yuv420p | 14.391 | 0.493 | 0.0000476 |
| lerobot/umi_cup_in_the_wild | yuv444p | 14.932 | 0.371 | 0.0000404 |
**`g`**
| repo_id | g | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 1 | 2.543 | 0.226 | 0.0000670 |
| lerobot/pusht | 2 | 3.646 | 0.222 | 0.0000556 |
| lerobot/pusht | 3 | 4.431 | 0.217 | 0.0000567 |
| lerobot/pusht | 4 | 5.103 | 0.204 | 0.0000555 |
| lerobot/pusht | 5 | 5.625 | 0.179 | 0.0000556 |
| lerobot/pusht | 6 | 5.974 | 0.188 | 0.0000544 |
| lerobot/pusht | 10 | 6.814 | 0.160 | 0.0000531 |
| lerobot/pusht | 15 | 7.431 | 0.150 | 0.0000521 |
| lerobot/pusht | 20 | 7.662 | 0.123 | 0.0000519 |
| lerobot/pusht | 40 | 8.163 | 0.092 | 0.0000519 |
| lerobot/pusht | 100 | 8.761 | 0.053 | 0.0000533 |
| lerobot/pusht | None | 8.909 | 0.034 | 0.0000541 |
| lerobot/umi_cup_in_the_wild | 1 | 14.411 | 0.409 | 0.0000607 |
| lerobot/umi_cup_in_the_wild | 2 | 14.932 | 0.381 | 0.0000404 |
| lerobot/umi_cup_in_the_wild | 3 | 20.174 | 0.355 | 0.0000418 |
| lerobot/umi_cup_in_the_wild | 4 | 24.889 | 0.346 | 0.0000425 |
| lerobot/umi_cup_in_the_wild | 5 | 28.825 | 0.354 | 0.0000419 |
| lerobot/umi_cup_in_the_wild | 6 | 31.635 | 0.336 | 0.0000419 |
| lerobot/umi_cup_in_the_wild | 10 | 39.418 | 0.314 | 0.0000402 |
| lerobot/umi_cup_in_the_wild | 15 | 44.577 | 0.269 | 0.0000397 |
| lerobot/umi_cup_in_the_wild | 20 | 47.907 | 0.246 | 0.0000395 |
| lerobot/umi_cup_in_the_wild | 40 | 52.554 | 0.171 | 0.0000390 |
| lerobot/umi_cup_in_the_wild | 100 | 58.241 | 0.091 | 0.0000399 |
| lerobot/umi_cup_in_the_wild | None | 60.530 | 0.043 | 0.0000409 |
**`crf`**
| repo_id | crf | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 0 | 1.699 | 0.212 | 0.0000193 |
| lerobot/pusht | 5 | 1.409 | 0.211 | 0.0000232 |
| lerobot/pusht | 10 | 1.842 | 0.199 | 0.0000270 |
| lerobot/pusht | 15 | 2.322 | 0.198 | 0.0000347 |
| lerobot/pusht | 20 | 3.050 | 0.211 | 0.0000469 |
| lerobot/pusht | None | 3.646 | 0.206 | 0.0000556 |
| lerobot/pusht | 25 | 3.969 | 0.210 | 0.0000626 |
| lerobot/pusht | 30 | 5.687 | 0.223 | 0.0000927 |
| lerobot/pusht | 40 | 10.818 | 0.227 | 0.0001763 |
| lerobot/pusht | 50 | 18.185 | 0.223 | 0.0002625 |
| lerobot/umi_cup_in_the_wild | 0 | 1.918 | 0.147 | 0.0000071 |
| lerobot/umi_cup_in_the_wild | 5 | 3.207 | 0.182 | 0.0000125 |
| lerobot/umi_cup_in_the_wild | 10 | 4.818 | 0.222 | 0.0000166 |
| lerobot/umi_cup_in_the_wild | 15 | 7.329 | 0.270 | 0.0000229 |
| lerobot/umi_cup_in_the_wild | 20 | 11.361 | 0.325 | 0.0000326 |
| lerobot/umi_cup_in_the_wild | None | 14.932 | 0.362 | 0.0000404 |
| lerobot/umi_cup_in_the_wild | 25 | 17.741 | 0.390 | 0.0000459 |
| lerobot/umi_cup_in_the_wild | 30 | 27.983 | 0.437 | 0.0000633 |
| lerobot/umi_cup_in_the_wild | 40 | 82.449 | 0.499 | 0.0001186 |
| lerobot/umi_cup_in_the_wild | 50 | 186.145 | 0.564 | 0.0001879 |
**best**
| repo_id | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- |
| lerobot/pusht | 3.646 | 0.224 | 0.0000556 |
| lerobot/umi_cup_in_the_wild | 14.932 | 0.368 | 0.0000404 |
### `6_frames`
**`pix_fmt`**
| repo_id | pix_fmt | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | yuv420p | 3.788 | 0.660 | 0.0000839 |
| lerobot/pusht | yuv444p | 3.646 | 0.546 | 0.0000542 |
| lerobot/umi_cup_in_the_wild | yuv420p | 14.391 | 1.225 | 0.0000497 |
| lerobot/umi_cup_in_the_wild | yuv444p | 14.932 | 0.908 | 0.0000428 |
**`g`**
| repo_id | g | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 1 | 2.543 | 0.552 | 0.0000646 |
| lerobot/pusht | 2 | 3.646 | 0.534 | 0.0000542 |
| lerobot/pusht | 3 | 4.431 | 0.563 | 0.0000546 |
| lerobot/pusht | 4 | 5.103 | 0.537 | 0.0000545 |
| lerobot/pusht | 5 | 5.625 | 0.477 | 0.0000532 |
| lerobot/pusht | 6 | 5.974 | 0.515 | 0.0000530 |
| lerobot/pusht | 10 | 6.814 | 0.410 | 0.0000512 |
| lerobot/pusht | 15 | 7.431 | 0.405 | 0.0000503 |
| lerobot/pusht | 20 | 7.662 | 0.345 | 0.0000500 |
| lerobot/pusht | 40 | 8.163 | 0.247 | 0.0000496 |
| lerobot/pusht | 100 | 8.761 | 0.147 | 0.0000510 |
| lerobot/pusht | None | 8.909 | 0.100 | 0.0000519 |
| lerobot/umi_cup_in_the_wild | 1 | 14.411 | 0.997 | 0.0000620 |
| lerobot/umi_cup_in_the_wild | 2 | 14.932 | 0.911 | 0.0000428 |
| lerobot/umi_cup_in_the_wild | 3 | 20.174 | 0.869 | 0.0000433 |
| lerobot/umi_cup_in_the_wild | 4 | 24.889 | 0.874 | 0.0000438 |
| lerobot/umi_cup_in_the_wild | 5 | 28.825 | 0.864 | 0.0000439 |
| lerobot/umi_cup_in_the_wild | 6 | 31.635 | 0.834 | 0.0000440 |
| lerobot/umi_cup_in_the_wild | 10 | 39.418 | 0.781 | 0.0000421 |
| lerobot/umi_cup_in_the_wild | 15 | 44.577 | 0.679 | 0.0000411 |
| lerobot/umi_cup_in_the_wild | 20 | 47.907 | 0.652 | 0.0000410 |
| lerobot/umi_cup_in_the_wild | 40 | 52.554 | 0.465 | 0.0000404 |
| lerobot/umi_cup_in_the_wild | 100 | 58.241 | 0.245 | 0.0000413 |
| lerobot/umi_cup_in_the_wild | None | 60.530 | 0.116 | 0.0000417 |
**`crf`**
| repo_id | crf | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- | --- |
| lerobot/pusht | 0 | 1.699 | 0.534 | 0.0000163 |
| lerobot/pusht | 5 | 1.409 | 0.524 | 0.0000205 |
| lerobot/pusht | 10 | 1.842 | 0.510 | 0.0000245 |
| lerobot/pusht | 15 | 2.322 | 0.512 | 0.0000324 |
| lerobot/pusht | 20 | 3.050 | 0.508 | 0.0000452 |
| lerobot/pusht | None | 3.646 | 0.518 | 0.0000542 |
| lerobot/pusht | 25 | 3.969 | 0.534 | 0.0000616 |
| lerobot/pusht | 30 | 5.687 | 0.530 | 0.0000927 |
| lerobot/pusht | 40 | 10.818 | 0.552 | 0.0001777 |
| lerobot/pusht | 50 | 18.185 | 0.564 | 0.0002644 |
| lerobot/umi_cup_in_the_wild | 0 | 1.918 | 0.401 | 0.0000101 |
| lerobot/umi_cup_in_the_wild | 5 | 3.207 | 0.499 | 0.0000156 |
| lerobot/umi_cup_in_the_wild | 10 | 4.818 | 0.599 | 0.0000197 |
| lerobot/umi_cup_in_the_wild | 15 | 7.329 | 0.704 | 0.0000258 |
| lerobot/umi_cup_in_the_wild | 20 | 11.361 | 0.834 | 0.0000352 |
| lerobot/umi_cup_in_the_wild | None | 14.932 | 0.925 | 0.0000428 |
| lerobot/umi_cup_in_the_wild | 25 | 17.741 | 0.978 | 0.0000480 |
| lerobot/umi_cup_in_the_wild | 30 | 27.983 | 1.088 | 0.0000648 |
| lerobot/umi_cup_in_the_wild | 40 | 82.449 | 1.324 | 0.0001190 |
| lerobot/umi_cup_in_the_wild | 50 | 186.145 | 1.436 | 0.0001880 |
**best**
| repo_id | compression_factor | load_time_factor | avg_per_pixel_l2_error |
| --- | --- | --- | --- |
| lerobot/pusht | 3.646 | 0.546 | 0.0000542 |
| lerobot/umi_cup_in_the_wild | 14.932 | 0.934 | 0.0000428 |

View File

@@ -1,372 +0,0 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import random
import shutil
import subprocess
import time
from pathlib import Path
import einops
import numpy
import PIL
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.datasets.video_utils import (
decode_video_frames_torchvision,
)
def get_directory_size(directory):
total_size = 0
# Iterate over all files and subdirectories recursively
for item in directory.rglob("*"):
if item.is_file():
# Add the file size to the total
total_size += item.stat().st_size
return total_size
def run_video_benchmark(
output_dir,
cfg,
timestamps_mode,
seed=1337,
):
output_dir = Path(output_dir)
if output_dir.exists():
shutil.rmtree(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
repo_id = cfg["repo_id"]
# TODO(rcadene): rewrite with hardcoding of original images and episodes
dataset = LeRobotDataset(repo_id)
# Get fps
fps = dataset.fps
# we only load the first episode
ep_num_images = dataset.episode_data_index["to"][0].item()
# Save/Load image directory for the first episode
imgs_dir = Path(f"tmp/data/images/{repo_id}/observation.image_episode_000000")
if not imgs_dir.exists():
imgs_dir.mkdir(parents=True, exist_ok=True)
hf_dataset = dataset.hf_dataset.with_format(None)
imgs_dataset = hf_dataset.select_columns("observation.image")
for i, item in enumerate(imgs_dataset):
img = item["observation.image"]
img.save(str(imgs_dir / f"frame_{i:06d}.png"), quality=100)
if i >= ep_num_images - 1:
break
sum_original_frames_size_bytes = get_directory_size(imgs_dir)
# Encode images into video
video_path = output_dir / "episode_0.mp4"
g = cfg.get("g")
crf = cfg.get("crf")
pix_fmt = cfg["pix_fmt"]
cmd = f"ffmpeg -r {fps} "
cmd += "-f image2 "
cmd += "-loglevel error "
cmd += f"-i {str(imgs_dir / 'frame_%06d.png')} "
cmd += "-vcodec libx264 "
if g is not None:
cmd += f"-g {g} " # ensures at least 1 keyframe every 10 frames
# cmd += "-keyint_min 10 " set a minimum of 10 frames between 2 key frames
# cmd += "-sc_threshold 0 " disable scene change detection to lower the number of key frames
if crf is not None:
cmd += f"-crf {crf} "
cmd += f"-pix_fmt {pix_fmt} "
cmd += f"{str(video_path)}"
subprocess.run(cmd.split(" "), check=True)
video_size_bytes = video_path.stat().st_size
# Set decoder
decoder = cfg["decoder"]
decoder_kwgs = cfg["decoder_kwgs"]
device = cfg["device"]
if decoder == "torchvision":
decode_frames_fn = decode_video_frames_torchvision
else:
raise ValueError(decoder)
# Estimate average loading time
def load_original_frames(imgs_dir, timestamps):
frames = []
for ts in timestamps:
idx = int(ts * fps)
frame = PIL.Image.open(imgs_dir / f"frame_{idx:06d}.png")
frame = torch.from_numpy(numpy.array(frame))
frame = frame.type(torch.float32) / 255
frame = einops.rearrange(frame, "h w c -> c h w")
frames.append(frame)
return frames
list_avg_load_time = []
list_avg_load_time_from_images = []
per_pixel_l2_errors = []
random.seed(seed)
for t in range(50):
# test loading 2 frames that are 4 frames apart, which might be a common setting
ts = random.randint(fps, ep_num_images - fps) / fps
if timestamps_mode == "1_frame":
timestamps = [ts]
elif timestamps_mode == "2_frames":
timestamps = [ts - 1 / fps, ts]
elif timestamps_mode == "2_frames_4_space":
timestamps = [ts - 4 / fps, ts]
elif timestamps_mode == "6_frames":
timestamps = [ts - i / fps for i in range(6)][::-1]
else:
raise ValueError(timestamps_mode)
num_frames = len(timestamps)
start_time_s = time.monotonic()
frames = decode_frames_fn(
video_path, timestamps=timestamps, tolerance_s=1e-4, device=device, **decoder_kwgs
)
avg_load_time = (time.monotonic() - start_time_s) / num_frames
list_avg_load_time.append(avg_load_time)
start_time_s = time.monotonic()
original_frames = load_original_frames(imgs_dir, timestamps)
avg_load_time_from_images = (time.monotonic() - start_time_s) / num_frames
list_avg_load_time_from_images.append(avg_load_time_from_images)
# Estimate average L2 error between original frames and decoded frames
for i, ts in enumerate(timestamps):
# are_close = torch.allclose(frames[i], original_frames[i], atol=0.02)
num_pixels = original_frames[i].numel()
per_pixel_l2_error = torch.norm(frames[i] - original_frames[i], p=2).item() / num_pixels
# save decoded frames
if t == 0:
frame_hwc = (frames[i].permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()
PIL.Image.fromarray(frame_hwc).save(output_dir / f"frame_{i:06d}.png")
# save original_frames
idx = int(ts * fps)
if t == 0:
original_frame = PIL.Image.open(imgs_dir / f"frame_{idx:06d}.png")
original_frame.save(output_dir / f"original_frame_{i:06d}.png")
per_pixel_l2_errors.append(per_pixel_l2_error)
avg_load_time = float(numpy.array(list_avg_load_time).mean())
avg_load_time_from_images = float(numpy.array(list_avg_load_time_from_images).mean())
avg_per_pixel_l2_error = float(numpy.array(per_pixel_l2_errors).mean())
# Save benchmark info
info = {
"sum_original_frames_size_bytes": sum_original_frames_size_bytes,
"video_size_bytes": video_size_bytes,
"avg_load_time_from_images": avg_load_time_from_images,
"avg_load_time": avg_load_time,
"compression_factor": sum_original_frames_size_bytes / video_size_bytes,
"load_time_factor": avg_load_time_from_images / avg_load_time,
"avg_per_pixel_l2_error": avg_per_pixel_l2_error,
}
with open(output_dir / "info.json", "w") as f:
json.dump(info, f)
return info
def display_markdown_table(headers, rows):
for i, row in enumerate(rows):
new_row = []
for col in row:
if col is None:
new_col = "None"
elif isinstance(col, float):
new_col = f"{col:.3f}"
if new_col == "0.000":
new_col = f"{col:.7f}"
elif isinstance(col, int):
new_col = f"{col}"
else:
new_col = col
new_row.append(new_col)
rows[i] = new_row
header_line = "| " + " | ".join(headers) + " |"
separator_line = "| " + " | ".join(["---" for _ in headers]) + " |"
body_lines = ["| " + " | ".join(row) + " |" for row in rows]
markdown_table = "\n".join([header_line, separator_line] + body_lines)
print(markdown_table)
print()
def load_info(out_dir):
with open(out_dir / "info.json") as f:
info = json.load(f)
return info
def main():
out_dir = Path("tmp/run_video_benchmark")
dry_run = False
repo_ids = ["lerobot/pusht", "lerobot/umi_cup_in_the_wild"]
timestamps_modes = [
"1_frame",
"2_frames",
"2_frames_4_space",
"6_frames",
]
for timestamps_mode in timestamps_modes:
bench_dir = out_dir / timestamps_mode
print(f"### `{timestamps_mode}`")
print()
print("**`pix_fmt`**")
headers = ["repo_id", "pix_fmt", "compression_factor", "load_time_factor", "avg_per_pixel_l2_error"]
rows = []
for repo_id in repo_ids:
for pix_fmt in ["yuv420p", "yuv444p"]:
cfg = {
"repo_id": repo_id,
# video encoding
"g": 2,
"crf": None,
"pix_fmt": pix_fmt,
# video decoding
"device": "cpu",
"decoder": "torchvision",
"decoder_kwgs": {},
}
if not dry_run:
run_video_benchmark(bench_dir / repo_id / f"torchvision_{pix_fmt}", cfg, timestamps_mode)
info = load_info(bench_dir / repo_id / f"torchvision_{pix_fmt}")
rows.append(
[
repo_id,
pix_fmt,
info["compression_factor"],
info["load_time_factor"],
info["avg_per_pixel_l2_error"],
]
)
display_markdown_table(headers, rows)
print("**`g`**")
headers = ["repo_id", "g", "compression_factor", "load_time_factor", "avg_per_pixel_l2_error"]
rows = []
for repo_id in repo_ids:
for g in [1, 2, 3, 4, 5, 6, 10, 15, 20, 40, 100, None]:
cfg = {
"repo_id": repo_id,
# video encoding
"g": g,
"pix_fmt": "yuv444p",
# video decoding
"device": "cpu",
"decoder": "torchvision",
"decoder_kwgs": {},
}
if not dry_run:
run_video_benchmark(bench_dir / repo_id / f"torchvision_g_{g}", cfg, timestamps_mode)
info = load_info(bench_dir / repo_id / f"torchvision_g_{g}")
rows.append(
[
repo_id,
g,
info["compression_factor"],
info["load_time_factor"],
info["avg_per_pixel_l2_error"],
]
)
display_markdown_table(headers, rows)
print("**`crf`**")
headers = ["repo_id", "crf", "compression_factor", "load_time_factor", "avg_per_pixel_l2_error"]
rows = []
for repo_id in repo_ids:
for crf in [0, 5, 10, 15, 20, None, 25, 30, 40, 50]:
cfg = {
"repo_id": repo_id,
# video encoding
"g": 2,
"crf": crf,
"pix_fmt": "yuv444p",
# video decoding
"device": "cpu",
"decoder": "torchvision",
"decoder_kwgs": {},
}
if not dry_run:
run_video_benchmark(bench_dir / repo_id / f"torchvision_crf_{crf}", cfg, timestamps_mode)
info = load_info(bench_dir / repo_id / f"torchvision_crf_{crf}")
rows.append(
[
repo_id,
crf,
info["compression_factor"],
info["load_time_factor"],
info["avg_per_pixel_l2_error"],
]
)
display_markdown_table(headers, rows)
print("**best**")
headers = ["repo_id", "compression_factor", "load_time_factor", "avg_per_pixel_l2_error"]
rows = []
for repo_id in repo_ids:
cfg = {
"repo_id": repo_id,
# video encoding
"g": 2,
"crf": None,
"pix_fmt": "yuv444p",
# video decoding
"device": "cpu",
"decoder": "torchvision",
"decoder_kwgs": {},
}
if not dry_run:
run_video_benchmark(bench_dir / repo_id / "torchvision_best", cfg, timestamps_mode)
info = load_info(bench_dir / repo_id / "torchvision_best")
rows.append(
[
repo_id,
info["compression_factor"],
info["load_time_factor"],
info["avg_per_pixel_l2_error"],
]
)
display_markdown_table(headers, rows)
if __name__ == "__main__":
main()

View File

@@ -16,17 +16,15 @@
from copy import deepcopy
from math import ceil
import datasets
import einops
import torch
import tqdm
from datasets import Image
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.datasets.video_utils import VideoFrame
def get_stats_einops_patterns(dataset: LeRobotDataset | datasets.Dataset, num_workers=0):
def get_stats_einops_patterns(dataset, num_workers=0):
"""These einops patterns will be used to aggregate batches and compute statistics.
Note: We assume the images are in channel first format
@@ -42,6 +40,10 @@ def get_stats_einops_patterns(dataset: LeRobotDataset | datasets.Dataset, num_wo
stats_patterns = {}
for key, feats_type in dataset.features.items():
# NOTE: skip language_instruction embedding in stats computation
if key == "language_instruction":
continue
# sanity check that tensors are not float64
assert batch[key].dtype != torch.float64
@@ -66,9 +68,8 @@ def get_stats_einops_patterns(dataset: LeRobotDataset | datasets.Dataset, num_wo
return stats_patterns
def compute_stats(
dataset: LeRobotDataset | datasets.Dataset, batch_size=32, num_workers=16, max_num_samples=None
):
def compute_stats(dataset, batch_size=8, num_workers=8, max_num_samples=None):
"""Compute mean/std and min/max statistics of all data keys in a LeRobotDataset."""
if max_num_samples is None:
max_num_samples = len(dataset)
@@ -159,3 +160,54 @@ def compute_stats(
"min": min[key],
}
return stats
def aggregate_stats(ls_datasets) -> dict[str, torch.Tensor]:
"""Aggregate stats of multiple LeRobot datasets into one set of stats without recomputing from scratch.
The final stats will have the union of all data keys from each of the datasets. For instance:
- new_max = max(max_dataset_0, max_dataset_1, ...)
- new_min = min(min_dataset_0, min_dataset_1, ...)
- new_mean = (mean of all data)
- new_std = (std of all data)
"""
data_keys = set()
for dataset in ls_datasets:
data_keys.update(dataset.stats.keys())
stats = {k: {} for k in data_keys}
for data_key in data_keys:
for stat_key in ["min", "max"]:
# compute `max(dataset_0["max"], dataset_1["max"], ...)`
stats[data_key][stat_key] = einops.reduce(
torch.stack([d.stats[data_key][stat_key] for d in ls_datasets if data_key in d.stats], dim=0),
"n ... -> ...",
stat_key,
)
total_samples = sum(d.num_samples for d in ls_datasets if data_key in d.stats)
# Compute the "sum" statistic by multiplying each mean by the number of samples in the respective
# dataset, then divide by total_samples to get the overall "mean".
# NOTE: the brackets around (d.num_samples / total_samples) are needed to minimize the risk of
# numerical overflow!
stats[data_key]["mean"] = sum(
d.stats[data_key]["mean"] * (d.num_samples / total_samples)
for d in ls_datasets
if data_key in d.stats
)
# The derivation for standard deviation is a little more involved but is much in the same spirit as
# the computation of the mean.
# Given two sets of data where the statistics are known:
# σ_combined = sqrt[ (n1 * (σ1^2 + d1^2) + n2 * (σ2^2 + d2^2)) / (n1 + n2) ]
# where d1 = μ1 - μ_combined, d2 = μ2 - μ_combined
# NOTE: the brackets around (d.num_samples / total_samples) are needed to minimize the risk of
# numerical overflow!
stats[data_key]["std"] = torch.sqrt(
sum(
(d.stats[data_key]["std"] ** 2 + (d.stats[data_key]["mean"] - stats[data_key]["mean"]) ** 2)
* (d.num_samples / total_samples)
for d in ls_datasets
if data_key in d.stats
)
)
return stats

View File

@@ -16,34 +16,96 @@
import logging
import torch
from omegaconf import OmegaConf
from omegaconf import ListConfig, OmegaConf
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset, MultiLeRobotDataset
from lerobot.common.datasets.transforms import get_image_transforms
def make_dataset(
cfg,
split="train",
):
if cfg.env.name not in cfg.dataset_repo_id:
logging.warning(
f"There might be a mismatch between your training dataset ({cfg.dataset_repo_id=}) and your "
f"environment ({cfg.env.name=})."
)
def resolve_delta_timestamps(cfg):
"""Resolves delta_timestamps config key (in-place) by using `eval`.
Doesn't do anything if delta_timestamps is not specified or has already been resolved (as evidenced by
the data type of its values).
"""
delta_timestamps = cfg.training.get("delta_timestamps")
if delta_timestamps is not None:
for key in delta_timestamps:
if isinstance(delta_timestamps[key], str):
delta_timestamps[key] = eval(delta_timestamps[key])
# TODO(rcadene, alexander-soare): remove `eval` to avoid exploit
cfg.training.delta_timestamps[key] = eval(delta_timestamps[key])
# TODO(rcadene): add data augmentations
dataset = LeRobotDataset(
cfg.dataset_repo_id,
split=split,
delta_timestamps=delta_timestamps,
)
def make_dataset(cfg, split: str = "train") -> LeRobotDataset | MultiLeRobotDataset:
"""
Args:
cfg: A Hydra config as per the LeRobot config scheme.
split: Select the data subset used to create an instance of LeRobotDataset.
All datasets hosted on [lerobot](https://huggingface.co/lerobot) contain only one subset: "train".
Thus, by default, `split="train"` selects all the available data. `split` aims to work like the
slicer in the hugging face datasets:
https://huggingface.co/docs/datasets/v2.19.0/loading#slice-splits
As of now, it only supports `split="train[:n]"` to load the first n frames of the dataset or
`split="train[n:]"` to skip the first n frames. For instance `split="train[:1000]"`.
Returns:
The LeRobotDataset.
"""
if not isinstance(cfg.dataset_repo_id, (str, ListConfig)):
raise ValueError(
"Expected cfg.dataset_repo_id to be either a single string to load one dataset or a list of "
"strings to load multiple datasets."
)
# A soft check to warn about a possible mismatch between the environment and the dataset. Skip the check when using a real-world env (dora).
if cfg.env.name != "dora":
if isinstance(cfg.dataset_repo_id, str):
dataset_repo_ids = [cfg.dataset_repo_id] # single dataset
else:
dataset_repo_ids = cfg.dataset_repo_id # multiple datasets
for dataset_repo_id in dataset_repo_ids:
if cfg.env.name not in dataset_repo_id:
logging.warning(
f"There might be a mismatch between your training dataset ({dataset_repo_id=}) and your "
f"environment ({cfg.env.name=})."
)
resolve_delta_timestamps(cfg)
image_transforms = None
if cfg.training.image_transforms.enable:
cfg_tf = cfg.training.image_transforms
image_transforms = get_image_transforms(
brightness_weight=cfg_tf.brightness.weight,
brightness_min_max=cfg_tf.brightness.min_max,
contrast_weight=cfg_tf.contrast.weight,
contrast_min_max=cfg_tf.contrast.min_max,
saturation_weight=cfg_tf.saturation.weight,
saturation_min_max=cfg_tf.saturation.min_max,
hue_weight=cfg_tf.hue.weight,
hue_min_max=cfg_tf.hue.min_max,
sharpness_weight=cfg_tf.sharpness.weight,
sharpness_min_max=cfg_tf.sharpness.min_max,
max_num_transforms=cfg_tf.max_num_transforms,
random_order=cfg_tf.random_order,
)
if isinstance(cfg.dataset_repo_id, str):
dataset = LeRobotDataset(
cfg.dataset_repo_id,
split=split,
delta_timestamps=cfg.training.get("delta_timestamps"),
image_transforms=image_transforms,
video_backend=cfg.video_backend,
)
else:
dataset = MultiLeRobotDataset(
cfg.dataset_repo_id,
split=split,
delta_timestamps=cfg.training.get("delta_timestamps"),
image_transforms=image_transforms,
video_backend=cfg.video_backend,
)
if cfg.get("override_dataset_stats"):
for key, stats_dict in cfg.override_dataset_stats.items():

View File

@@ -13,12 +13,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from pathlib import Path
from typing import Callable
import datasets
import torch
import torch.utils
from lerobot.common.datasets.compute_stats import aggregate_stats
from lerobot.common.datasets.utils import (
calculate_episode_data_index,
load_episode_data_index,
@@ -31,40 +35,41 @@ from lerobot.common.datasets.utils import (
)
from lerobot.common.datasets.video_utils import VideoFrame, load_from_videos
# For maintainers, see lerobot/common/datasets/push_dataset_to_hub/CODEBASE_VERSION.md
CODEBASE_VERSION = "v1.6"
DATA_DIR = Path(os.environ["DATA_DIR"]) if "DATA_DIR" in os.environ else None
CODEBASE_VERSION = "v1.4"
class LeRobotDataset(torch.utils.data.Dataset):
def __init__(
self,
repo_id: str,
version: str | None = CODEBASE_VERSION,
root: Path | None = DATA_DIR,
split: str = "train",
transform: callable = None,
image_transforms: Callable | None = None,
delta_timestamps: dict[str, list[float]] | None = None,
video_backend: str | None = None,
):
super().__init__()
self.repo_id = repo_id
self.version = version
self.root = root
self.split = split
self.transform = transform
self.image_transforms = image_transforms
self.delta_timestamps = delta_timestamps
# load data from hub or locally when root is provided
# TODO(rcadene, aliberts): implement faster transfer
# https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads
self.hf_dataset = load_hf_dataset(repo_id, version, root, split)
self.hf_dataset = load_hf_dataset(repo_id, CODEBASE_VERSION, root, split)
if split == "train":
self.episode_data_index = load_episode_data_index(repo_id, version, root)
self.episode_data_index = load_episode_data_index(repo_id, CODEBASE_VERSION, root)
else:
self.episode_data_index = calculate_episode_data_index(self.hf_dataset)
self.hf_dataset = reset_episode_index(self.hf_dataset)
self.stats = load_stats(repo_id, version, root)
self.info = load_info(repo_id, version, root)
self.stats = load_stats(repo_id, CODEBASE_VERSION, root)
self.info = load_info(repo_id, CODEBASE_VERSION, root)
if self.video:
self.videos_dir = load_videos(repo_id, version, root)
self.videos_dir = load_videos(repo_id, CODEBASE_VERSION, root)
self.video_backend = video_backend if video_backend is not None else "pyav"
@property
def fps(self) -> int:
@@ -145,10 +150,12 @@ class LeRobotDataset(torch.utils.data.Dataset):
self.video_frame_keys,
self.videos_dir,
self.tolerance_s,
self.video_backend,
)
if self.transform is not None:
item = self.transform(item)
if self.image_transforms is not None:
for cam in self.camera_keys:
item[cam] = self.image_transforms(item[cam])
return item
@@ -156,7 +163,6 @@ class LeRobotDataset(torch.utils.data.Dataset):
return (
f"{self.__class__.__name__}(\n"
f" Repository ID: '{self.repo_id}',\n"
f" Version: '{self.version}',\n"
f" Split: '{self.split}',\n"
f" Number of Samples: {self.num_samples},\n"
f" Number of Episodes: {self.num_episodes},\n"
@@ -164,15 +170,15 @@ class LeRobotDataset(torch.utils.data.Dataset):
f" Recorded Frames per Second: {self.fps},\n"
f" Camera Keys: {self.camera_keys},\n"
f" Video Frame Keys: {self.video_frame_keys if self.video else 'N/A'},\n"
f" Transformations: {self.transform},\n"
f" Transformations: {self.image_transforms},\n"
f" Codebase Version: {self.info.get('codebase_version', '< v1.6')},\n"
f")"
)
@classmethod
def from_preloaded(
cls,
repo_id: str,
version: str | None = CODEBASE_VERSION,
repo_id: str = "from_preloaded",
root: Path | None = None,
split: str = "train",
transform: callable = None,
@@ -183,18 +189,213 @@ class LeRobotDataset(torch.utils.data.Dataset):
stats=None,
info=None,
videos_dir=None,
):
video_backend=None,
) -> "LeRobotDataset":
"""Create a LeRobot Dataset from existing data and attributes instead of loading from the filesystem.
It is especially useful when converting raw data into LeRobotDataset before saving the dataset
on the filesystem or uploading to the hub.
Note: Meta-data attributes like `repo_id`, `version`, `root`, etc. are optional and potentially
meaningless depending on the downstream usage of the returned dataset.
"""
# create an empty object of type LeRobotDataset
obj = cls.__new__(cls)
obj.repo_id = repo_id
obj.version = version
obj.root = root
obj.split = split
obj.transform = transform
obj.image_transforms = transform
obj.delta_timestamps = delta_timestamps
obj.hf_dataset = hf_dataset
obj.episode_data_index = episode_data_index
obj.stats = stats
obj.info = info
obj.info = info if info is not None else {}
obj.videos_dir = videos_dir
obj.video_backend = video_backend if video_backend is not None else "pyav"
return obj
class MultiLeRobotDataset(torch.utils.data.Dataset):
"""A dataset consisting of multiple underlying `LeRobotDataset`s.
The underlying `LeRobotDataset`s are effectively concatenated, and this class adopts much of the API
structure of `LeRobotDataset`.
"""
def __init__(
self,
repo_ids: list[str],
root: Path | None = DATA_DIR,
split: str = "train",
image_transforms: Callable | None = None,
delta_timestamps: dict[str, list[float]] | None = None,
video_backend: str | None = None,
):
super().__init__()
self.repo_ids = repo_ids
# Construct the underlying datasets passing everything but `transform` and `delta_timestamps` which
# are handled by this class.
self._datasets = [
LeRobotDataset(
repo_id,
root=root,
split=split,
delta_timestamps=delta_timestamps,
image_transforms=image_transforms,
video_backend=video_backend,
)
for repo_id in repo_ids
]
# Check that some properties are consistent across datasets. Note: We may relax some of these
# consistency requirements in future iterations of this class.
for repo_id, dataset in zip(self.repo_ids, self._datasets, strict=True):
if dataset.info != self._datasets[0].info:
raise ValueError(
f"Detected a mismatch in dataset info between {self.repo_ids[0]} and {repo_id}. This is "
"not yet supported."
)
# Disable any data keys that are not common across all of the datasets. Note: we may relax this
# restriction in future iterations of this class. For now, this is necessary at least for being able
# to use PyTorch's default DataLoader collate function.
self.disabled_data_keys = set()
intersection_data_keys = set(self._datasets[0].hf_dataset.features)
for dataset in self._datasets:
intersection_data_keys.intersection_update(dataset.hf_dataset.features)
if len(intersection_data_keys) == 0:
raise RuntimeError(
"Multiple datasets were provided but they had no keys common to all of them. The "
"multi-dataset functionality currently only keeps common keys."
)
for repo_id, dataset in zip(self.repo_ids, self._datasets, strict=True):
extra_keys = set(dataset.hf_dataset.features).difference(intersection_data_keys)
logging.warning(
f"keys {extra_keys} of {repo_id} were disabled as they are not contained in all the "
"other datasets."
)
self.disabled_data_keys.update(extra_keys)
self.root = root
self.split = split
self.image_transforms = image_transforms
self.delta_timestamps = delta_timestamps
self.stats = aggregate_stats(self._datasets)
@property
def repo_id_to_index(self):
"""Return a mapping from dataset repo_id to a dataset index automatically created by this class.
This index is incorporated as a data key in the dictionary returned by `__getitem__`.
"""
return {repo_id: i for i, repo_id in enumerate(self.repo_ids)}
@property
def repo_index_to_id(self):
"""Return the inverse mapping if repo_id_to_index."""
return {v: k for k, v in self.repo_id_to_index}
@property
def fps(self) -> int:
"""Frames per second used during data collection.
NOTE: For now, this relies on a check in __init__ to make sure all sub-datasets have the same info.
"""
return self._datasets[0].info["fps"]
@property
def video(self) -> bool:
"""Returns True if this dataset loads video frames from mp4 files.
Returns False if it only loads images from png files.
NOTE: For now, this relies on a check in __init__ to make sure all sub-datasets have the same info.
"""
return self._datasets[0].info.get("video", False)
@property
def features(self) -> datasets.Features:
features = {}
for dataset in self._datasets:
features.update({k: v for k, v in dataset.features.items() if k not in self.disabled_data_keys})
return features
@property
def camera_keys(self) -> list[str]:
"""Keys to access image and video stream from cameras."""
keys = []
for key, feats in self.features.items():
if isinstance(feats, (datasets.Image, VideoFrame)):
keys.append(key)
return keys
@property
def video_frame_keys(self) -> list[str]:
"""Keys to access video frames that requires to be decoded into images.
Note: It is empty if the dataset contains images only,
or equal to `self.cameras` if the dataset contains videos only,
or can even be a subset of `self.cameras` in the case of a mixed image/video dataset.
"""
video_frame_keys = []
for key, feats in self.features.items():
if isinstance(feats, VideoFrame):
video_frame_keys.append(key)
return video_frame_keys
@property
def num_samples(self) -> int:
"""Number of samples/frames."""
return sum(d.num_samples for d in self._datasets)
@property
def num_episodes(self) -> int:
"""Number of episodes."""
return sum(d.num_episodes for d in self._datasets)
@property
def tolerance_s(self) -> float:
"""Tolerance in seconds used to discard loaded frames when their timestamps
are not close enough to the requested frames. It is only used when `delta_timestamps`
is provided or when loading video frames from mp4 files.
"""
# 1e-4 to account for possible numerical error
return 1 / self.fps - 1e-4
def __len__(self):
return self.num_samples
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
if idx >= len(self):
raise IndexError(f"Index {idx} out of bounds.")
# Determine which dataset to get an item from based on the index.
start_idx = 0
dataset_idx = 0
for dataset in self._datasets:
if idx >= start_idx + dataset.num_samples:
start_idx += dataset.num_samples
dataset_idx += 1
continue
break
else:
raise AssertionError("We expect the loop to break out as long as the index is within bounds.")
item = self._datasets[dataset_idx][idx - start_idx]
item["dataset_index"] = torch.tensor(dataset_idx)
for data_key in self.disabled_data_keys:
if data_key in item:
del item[data_key]
return item
def __repr__(self):
return (
f"{self.__class__.__name__}(\n"
f" Repository IDs: '{self.repo_ids}',\n"
f" Split: '{self.split}',\n"
f" Number of Samples: {self.num_samples},\n"
f" Number of Episodes: {self.num_episodes},\n"
f" Type: {'video (.mp4)' if self.video else 'image (.png)'},\n"
f" Recorded Frames per Second: {self.fps},\n"
f" Camera Keys: {self.camera_keys},\n"
f" Video Frame Keys: {self.video_frame_keys if self.video else 'N/A'},\n"
f" Transformations: {self.image_transforms},\n"
f")"
)

View File

@@ -0,0 +1,384 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An online buffer for the online training loop in train.py
Note to maintainers: This duplicates some logic from LeRobotDataset and EpisodeAwareSampler. We should
consider converging to one approach. Here we have opted to use numpy.memmap to back the data buffer. It's much
faster than using HuggingFace Datasets as there's no conversion to an intermediate non-python object. Also it
supports in-place slicing and mutation which is very handy for a dynamic buffer.
"""
import os
from pathlib import Path
from typing import Any
import numpy as np
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
def _make_memmap_safe(**kwargs) -> np.memmap:
"""Make a numpy memmap with checks on available disk space first.
Expected kwargs are: "filename", "dtype" (must by np.dtype), "mode" and "shape"
For information on dtypes:
https://numpy.org/doc/stable/reference/arrays.dtypes.html#arrays-dtypes-constructing
"""
if kwargs["mode"].startswith("w"):
required_space = kwargs["dtype"].itemsize * np.prod(kwargs["shape"]) # bytes
stats = os.statvfs(Path(kwargs["filename"]).parent)
available_space = stats.f_bavail * stats.f_frsize # bytes
if required_space >= available_space * 0.8:
raise RuntimeError(
f"You're about to take up {required_space} of {available_space} bytes available."
)
return np.memmap(**kwargs)
class OnlineBuffer(torch.utils.data.Dataset):
"""FIFO data buffer for the online training loop in train.py.
Follows the protocol of LeRobotDataset as much as is required to have it be used by the online training
loop in the same way that a LeRobotDataset would be used.
The underlying data structure will have data inserted in a circular fashion. Always insert after the
last index, and when you reach the end, wrap around to the start.
The data is stored in a numpy memmap.
"""
NEXT_INDEX_KEY = "_next_index"
OCCUPANCY_MASK_KEY = "_occupancy_mask"
INDEX_KEY = "index"
FRAME_INDEX_KEY = "frame_index"
EPISODE_INDEX_KEY = "episode_index"
TIMESTAMP_KEY = "timestamp"
IS_PAD_POSTFIX = "_is_pad"
def __init__(
self,
write_dir: str | Path,
data_spec: dict[str, Any] | None,
buffer_capacity: int | None,
fps: float | None = None,
delta_timestamps: dict[str, list[float]] | dict[str, np.ndarray] | None = None,
):
"""
The online buffer can be created from scratch, or an existing one can be loaded by passing
a `write_dir` associated with an existing buffer.
Args:
write_dir: Where to keep the numpy memmap files. One memmap file will be stored for each data key.
Note that if the files already exist, they are opened in read-write mode (used for training
resumption.)
data_spec: A mapping from data key to data specification, like {data_key: {"shape": tuple[int],
"dtype": np.dtype}}. This should include all the data that you wish to record into the buffer,
but note that "index", "frame_index" and "episode_index" are already accounted for by this
class, so you don't need to include them.
buffer_capacity: How many frames should be stored in the buffer as a maximum. Be aware of your
system's available disk space when choosing this.
fps: Same as the fps concept in LeRobot dataset. Here it needs to be provided for the
delta_timestamps logic. You can pass None if you are not using delta_timestamps.
delta_timestamps: Same as the delta_timestamps concept in LeRobotDataset. This is internally
converted to dict[str, np.ndarray] for optimization purposes.
"""
self.set_delta_timestamps(delta_timestamps)
self._fps = fps
# Tolerance in seconds used to discard loaded frames when their timestamps are not close enough to
# the requested frames. It is only used when `delta_timestamps` is provided.
# minus 1e-4 to account for possible numerical error
self.tolerance_s = 1 / self.fps - 1e-4 if fps is not None else None
self._buffer_capacity = buffer_capacity
data_spec = self._make_data_spec(data_spec, buffer_capacity)
Path(write_dir).mkdir(parents=True, exist_ok=True)
self._data = {}
for k, v in data_spec.items():
self._data[k] = _make_memmap_safe(
filename=Path(write_dir) / k,
dtype=v["dtype"] if v is not None else None,
mode="r+" if (Path(write_dir) / k).exists() else "w+",
shape=tuple(v["shape"]) if v is not None else None,
)
@property
def delta_timestamps(self) -> dict[str, np.ndarray] | None:
return self._delta_timestamps
def set_delta_timestamps(self, value: dict[str, list[float]] | None):
"""Set delta_timestamps converting the values to numpy arrays.
The conversion is an optimization for `__getitem__`: its loop is much slower if the values
still have to be converted into numpy arrays there.
"""
if value is not None:
self._delta_timestamps = {k: np.array(v) for k, v in value.items()}
else:
self._delta_timestamps = None
def _make_data_spec(self, data_spec: dict[str, Any], buffer_capacity: int) -> dict[str, dict[str, Any]]:
"""Makes the data spec for np.memmap."""
if any(k.startswith("_") for k in data_spec):
raise ValueError(
"data_spec keys should not start with '_'. This prefix is reserved for internal logic."
)
preset_keys = {
OnlineBuffer.INDEX_KEY,
OnlineBuffer.FRAME_INDEX_KEY,
OnlineBuffer.EPISODE_INDEX_KEY,
OnlineBuffer.TIMESTAMP_KEY,
}
if len(intersection := set(data_spec).intersection(preset_keys)) > 0:
raise ValueError(
f"data_spec should not contain any of {preset_keys} as these are handled internally. "
f"The provided data_spec has {intersection}."
)
complete_data_spec = {
# _next_index will be a pointer to the next index that we should start filling from when we add
# more data.
OnlineBuffer.NEXT_INDEX_KEY: {"dtype": np.dtype("int64"), "shape": ()},
# Since the memmap is initialized with all-zeros, this keeps track of which indices are occupied
# with real data rather than the dummy initialization.
OnlineBuffer.OCCUPANCY_MASK_KEY: {"dtype": np.dtype("?"), "shape": (buffer_capacity,)},
OnlineBuffer.INDEX_KEY: {"dtype": np.dtype("int64"), "shape": (buffer_capacity,)},
OnlineBuffer.FRAME_INDEX_KEY: {"dtype": np.dtype("int64"), "shape": (buffer_capacity,)},
OnlineBuffer.EPISODE_INDEX_KEY: {"dtype": np.dtype("int64"), "shape": (buffer_capacity,)},
OnlineBuffer.TIMESTAMP_KEY: {"dtype": np.dtype("float64"), "shape": (buffer_capacity,)},
}
for k, v in data_spec.items():
complete_data_spec[k] = {"dtype": v["dtype"], "shape": (buffer_capacity, *v["shape"])}
return complete_data_spec
def add_data(self, data: dict[str, np.ndarray]):
"""Add new data to the buffer, which could potentially mean shifting old data out.
The new data should contain all the frames (in order) of any number of episodes. The indices should
start from 0 (note to the developer: this can easily be generalized). See the `rollout` and
`eval_policy` functions in `eval.py` for more information on how the data is constructed.
The incoming `index` and `episode_index` are shifted so that they continue on from the last frame
already in the buffer. Note that this is done in place!
"""
if len(missing_keys := (set(self.data_keys).difference(set(data)))) > 0:
raise ValueError(f"Missing data keys: {missing_keys}")
new_data_length = len(data[self.data_keys[0]])
if not all(len(data[k]) == new_data_length for k in self.data_keys):
raise ValueError("All data items should have the same length")
next_index = self._data[OnlineBuffer.NEXT_INDEX_KEY]
# Sanity check to make sure that the new data indices start from 0.
assert data[OnlineBuffer.EPISODE_INDEX_KEY][0].item() == 0
assert data[OnlineBuffer.INDEX_KEY][0].item() == 0
# Shift the incoming indices if necessary.
if self.num_samples > 0:
last_episode_index = self._data[OnlineBuffer.EPISODE_INDEX_KEY][next_index - 1]
last_data_index = self._data[OnlineBuffer.INDEX_KEY][next_index - 1]
data[OnlineBuffer.EPISODE_INDEX_KEY] += last_episode_index + 1
data[OnlineBuffer.INDEX_KEY] += last_data_index + 1
# Insert the new data starting from next_index. It may be necessary to wrap around to the start.
n_surplus = max(0, new_data_length - (self._buffer_capacity - next_index))
for k in self.data_keys:
if n_surplus == 0:
slc = slice(next_index, next_index + new_data_length)
self._data[k][slc] = data[k]
self._data[OnlineBuffer.OCCUPANCY_MASK_KEY][slc] = True
else:
self._data[k][next_index:] = data[k][:-n_surplus]
self._data[OnlineBuffer.OCCUPANCY_MASK_KEY][next_index:] = True
self._data[k][:n_surplus] = data[k][-n_surplus:]
if n_surplus == 0:
self._data[OnlineBuffer.NEXT_INDEX_KEY] = next_index + new_data_length
else:
self._data[OnlineBuffer.NEXT_INDEX_KEY] = n_surplus
@property
def data_keys(self) -> list[str]:
keys = set(self._data)
keys.remove(OnlineBuffer.OCCUPANCY_MASK_KEY)
keys.remove(OnlineBuffer.NEXT_INDEX_KEY)
return sorted(keys)
@property
def fps(self) -> float | None:
return self._fps
@property
def num_episodes(self) -> int:
return len(
np.unique(self._data[OnlineBuffer.EPISODE_INDEX_KEY][self._data[OnlineBuffer.OCCUPANCY_MASK_KEY]])
)
@property
def num_samples(self) -> int:
return np.count_nonzero(self._data[OnlineBuffer.OCCUPANCY_MASK_KEY])
def __len__(self):
return self.num_samples
def _item_to_tensors(self, item: dict) -> dict:
item_ = {}
for k, v in item.items():
if isinstance(v, torch.Tensor):
item_[k] = v
elif isinstance(v, np.ndarray):
item_[k] = torch.from_numpy(v)
else:
item_[k] = torch.tensor(v)
return item_
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
if idx >= len(self) or idx < -len(self):
raise IndexError
item = {k: v[idx] for k, v in self._data.items() if not k.startswith("_")}
if self.delta_timestamps is None:
return self._item_to_tensors(item)
episode_index = item[OnlineBuffer.EPISODE_INDEX_KEY]
current_ts = item[OnlineBuffer.TIMESTAMP_KEY]
episode_data_indices = np.where(
np.bitwise_and(
self._data[OnlineBuffer.EPISODE_INDEX_KEY] == episode_index,
self._data[OnlineBuffer.OCCUPANCY_MASK_KEY],
)
)[0]
episode_timestamps = self._data[OnlineBuffer.TIMESTAMP_KEY][episode_data_indices]
for data_key in self.delta_timestamps:
# Note: The logic in this loop is copied from `load_previous_and_future_frames`.
# Get timestamps used as query to retrieve data of previous/future frames.
query_ts = current_ts + self.delta_timestamps[data_key]
# Compute distances between each query timestamp and all timestamps of all the frames belonging to
# the episode.
dist = np.abs(query_ts[:, None] - episode_timestamps[None, :])
argmin_ = np.argmin(dist, axis=1)
min_ = dist[np.arange(dist.shape[0]), argmin_]
is_pad = min_ > self.tolerance_s
# Check violated query timestamps are all outside the episode range.
assert (
(query_ts[is_pad] < episode_timestamps[0]) | (episode_timestamps[-1] < query_ts[is_pad])
).all(), (
f"One or several timestamps unexpectedly violate the tolerance ({min_} > {self.tolerance_s=}"
") inside the episode range."
)
# Load frames for this data key.
item[data_key] = self._data[data_key][episode_data_indices[argmin_]]
item[f"{data_key}{OnlineBuffer.IS_PAD_POSTFIX}"] = is_pad
return self._item_to_tensors(item)
def get_data_by_key(self, key: str) -> torch.Tensor:
"""Returns all data for a given data key as a Tensor."""
return torch.from_numpy(self._data[key][self._data[OnlineBuffer.OCCUPANCY_MASK_KEY]])
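# --- Hedged usage sketch (not part of the original file) ---
# Minimal example of creating and filling an OnlineBuffer; the write directory, data keys,
# shapes and fps below are illustrative only.
def _example_online_buffer_usage():
    buffer = OnlineBuffer(
        write_dir="outputs/example_online_buffer",
        data_spec={
            "observation.state": {"shape": (2,), "dtype": np.dtype("float32")},
            "action": {"shape": (2,), "dtype": np.dtype("float32")},
        },
        buffer_capacity=10_000,
        fps=10,
        delta_timestamps={"observation.state": [-0.1, 0.0]},
    )
    num_frames = 5
    buffer.add_data(
        {
            OnlineBuffer.INDEX_KEY: np.arange(num_frames),
            OnlineBuffer.FRAME_INDEX_KEY: np.arange(num_frames),
            OnlineBuffer.EPISODE_INDEX_KEY: np.zeros(num_frames, dtype=np.int64),
            OnlineBuffer.TIMESTAMP_KEY: np.arange(num_frames) / 10,
            "observation.state": np.zeros((num_frames, 2), dtype=np.float32),
            "action": np.zeros((num_frames, 2), dtype=np.float32),
        }
    )
    return buffer[0]  # dict of torch tensors (incl. padding masks since delta_timestamps is set)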
def compute_sampler_weights(
offline_dataset: LeRobotDataset,
offline_drop_n_last_frames: int = 0,
online_dataset: OnlineBuffer | None = None,
online_sampling_ratio: float | None = None,
online_drop_n_last_frames: int = 0,
) -> torch.Tensor:
"""Compute the sampling weights for the online training dataloader in train.py.
Args:
offline_dataset: The LeRobotDataset used for offline pre-training.
offline_drop_n_last_frames: Number of frames to drop from the end of each offline dataset episode.
online_dataset: The OnlineBuffer used in online training.
online_sampling_ratio: The proportion of data that should be sampled from the online dataset. If an
online dataset is provided, this value must also be provided.
online_drop_n_last_frames: See `offline_drop_n_last_frames`. This is the same, but for the online
dataset.
Returns:
Tensor of weights for [offline_dataset; online_dataset], normalized to 1.
Notes to maintainers:
- This duplicates some logic from EpisodeAwareSampler. We should consider converging to one approach.
- When used with `torch.utils.data.WeightedRandomSampler`, it could completely replace
`EpisodeAwareSampler` as the online dataset related arguments are optional. The only missing feature
is the ability to turn shuffling off.
- Options `drop_first_n_frames` and `episode_indices_to_use` can be added easily. They were not
included here to avoid adding complexity.
"""
if len(offline_dataset) == 0 and (online_dataset is None or len(online_dataset) == 0):
raise ValueError("At least one of `offline_dataset` or `online_dataset` should be contain data.")
if (online_dataset is None) ^ (online_sampling_ratio is None):
raise ValueError(
"`online_dataset` and `online_sampling_ratio` must be provided together or not at all."
)
offline_sampling_ratio = 0 if online_sampling_ratio is None else 1 - online_sampling_ratio
weights = []
if len(offline_dataset) > 0:
offline_data_mask_indices = []
for start_index, end_index in zip(
offline_dataset.episode_data_index["from"],
offline_dataset.episode_data_index["to"],
strict=True,
):
offline_data_mask_indices.extend(
range(start_index.item(), end_index.item() - offline_drop_n_last_frames)
)
offline_data_mask = torch.zeros(len(offline_dataset), dtype=torch.bool)
offline_data_mask[torch.tensor(offline_data_mask_indices)] = True
weights.append(
torch.full(
size=(len(offline_dataset),),
fill_value=offline_sampling_ratio / offline_data_mask.sum(),
)
* offline_data_mask
)
if online_dataset is not None and len(online_dataset) > 0:
online_data_mask_indices = []
episode_indices = online_dataset.get_data_by_key("episode_index")
for episode_idx in torch.unique(episode_indices):
where_episode = torch.where(episode_indices == episode_idx)
start_index = where_episode[0][0]
end_index = where_episode[0][-1] + 1
online_data_mask_indices.extend(
range(start_index.item(), end_index.item() - online_drop_n_last_frames)
)
online_data_mask = torch.zeros(len(online_dataset), dtype=torch.bool)
online_data_mask[torch.tensor(online_data_mask_indices)] = True
weights.append(
torch.full(
size=(len(online_dataset),),
fill_value=online_sampling_ratio / online_data_mask.sum(),
)
* online_data_mask
)
weights = torch.cat(weights)
if weights.sum() == 0:
weights += 1 / len(weights)
else:
weights /= weights.sum()
return weights
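# --- Hedged usage sketch (not part of the original file) ---
# The returned weights cover the concatenation [offline_dataset; online_dataset], so they can be
# plugged into a WeightedRandomSampler over a ConcatDataset, as the docstring suggests. The dataset
# variables, drop counts and batch size below are illustrative.
def _example_sampler_from_weights(offline_dataset, online_buffer):
    weights = compute_sampler_weights(
        offline_dataset,
        offline_drop_n_last_frames=7,
        online_dataset=online_buffer,
        online_sampling_ratio=0.5,
        online_drop_n_last_frames=7,
    )
    sampler = torch.utils.data.WeightedRandomSampler(
        weights, num_samples=len(weights), replacement=True
    )
    return torch.utils.data.DataLoader(
        torch.utils.data.ConcatDataset([offline_dataset, online_buffer]),
        batch_size=8,
        sampler=sampler,
    )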

View File

@@ -0,0 +1,468 @@
"""Functions to create an empty dataset, and populate it with frames."""
# TODO(rcadene, aliberts): to adapt as class methods of next version of LeRobotDataset
import concurrent
import json
import logging
import multiprocessing
import shutil
from pathlib import Path
import torch
import tqdm
from PIL import Image
from lerobot.common.datasets.compute_stats import compute_stats
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION, LeRobotDataset
from lerobot.common.datasets.push_dataset_to_hub.aloha_hdf5_format import to_hf_dataset
from lerobot.common.datasets.push_dataset_to_hub.utils import concatenate_episodes, get_default_encoding
from lerobot.common.datasets.utils import calculate_episode_data_index, create_branch
from lerobot.common.datasets.video_utils import encode_video_frames
from lerobot.common.utils.utils import log_say
from lerobot.scripts.push_dataset_to_hub import (
push_dataset_card_to_hub,
push_meta_data_to_hub,
push_videos_to_hub,
save_meta_data,
)
########################################################################################
# Asynchronous saving of images on disk
########################################################################################
def safe_stop_image_writer(func):
# TODO(aliberts): Allow to pass custom exceptions
# (e.g. ThreadServiceExit, KeyboardInterrupt, SystemExit, UnpluggedError, DynamixelCommError)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
image_writer = kwargs.get("dataset", {}).get("image_writer")
if image_writer is not None:
print("Waiting for image writer to terminate...")
stop_image_writer(image_writer, timeout=20)
raise e
return wrapper
def save_image(img_tensor, key, frame_index, episode_index, videos_dir: str):
img = Image.fromarray(img_tensor.numpy())
path = Path(videos_dir) / f"{key}_episode_{episode_index:06d}" / f"frame_{frame_index:06d}.png"
path.parent.mkdir(parents=True, exist_ok=True)
img.save(str(path), quality=100)
def loop_to_save_images_in_threads(image_queue, num_threads):
if num_threads < 1:
raise NotImplementedError(f"Only `num_threads>=1` is supported for now, but {num_threads=} given.")
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
futures = []
while True:
# Blocks until a frame is available
frame_data = image_queue.get()
# As usually done, exit loop when receiving None to stop the worker
if frame_data is None:
break
image, key, frame_index, episode_index, videos_dir = frame_data
futures.append(executor.submit(save_image, image, key, frame_index, episode_index, videos_dir))
# Before exiting function, wait for all threads to complete
with tqdm.tqdm(total=len(futures), desc="Writing images") as progress_bar:
concurrent.futures.wait(futures)
progress_bar.update(len(futures))
def start_image_writer_processes(image_queue, num_processes, num_threads_per_process):
if num_processes < 1:
raise ValueError(f"Only `num_processes>=1` is supported, but {num_processes=} given.")
if num_threads_per_process < 1:
raise NotImplementedError(
"Only `num_threads_per_process>=1` is supported for now, but {num_threads_per_process=} given."
)
processes = []
for _ in range(num_processes):
process = multiprocessing.Process(
target=loop_to_save_images_in_threads,
args=(image_queue, num_threads_per_process),
)
process.start()
processes.append(process)
return processes
def stop_processes(processes, queue, timeout):
# Send None to each process to signal them to stop
for _ in processes:
queue.put(None)
# Wait up to `timeout` seconds for each process to terminate
for process in processes:
process.join(timeout=timeout)
# If a process is still alive after the timeout, force termination
if process.is_alive():
process.terminate()
# Close the queue, no more items can be put in the queue
queue.close()
# Ensure all background queue threads have finished
queue.join_thread()
def start_image_writer(num_processes, num_threads):
"""This function abstract away the initialisation of processes or/and threads to
save images on disk asynchrounously, which is critical to control a robot and record data
at a high frame rate.
When `num_processes=0`, it returns a dictionary containing a threads pool of size `num_threads`.
When `num_processes>0`, it returns a dictionary containing a processes pool of size `num_processes`,
where each subprocess starts its own threads pool of size `num_threads`.
The optimal number of processes and threads depends on your computer's capabilities.
We advise using 4 threads per camera with 0 processes. If the fps is not stable, try increasing or
lowering the number of threads. If it is still not stable, try using 1 subprocess, or more.
"""
image_writer = {}
if num_processes == 0:
futures = []
threads_pool = concurrent.futures.ThreadPoolExecutor(max_workers=num_threads)
image_writer["threads_pool"], image_writer["futures"] = threads_pool, futures
else:
# TODO(rcadene): When using num_processes>1, `multiprocessing.Manager().Queue()`
# might be better than `multiprocessing.Queue()`. Source: https://www.geeksforgeeks.org/python-multiprocessing-queue-vs-multiprocessing-manager-queue
image_queue = multiprocessing.Queue()
processes_pool = start_image_writer_processes(
image_queue, num_processes=num_processes, num_threads_per_process=num_threads
)
image_writer["processes_pool"], image_writer["image_queue"] = processes_pool, image_queue
return image_writer
def async_save_image(image_writer, image, key, frame_index, episode_index, videos_dir):
"""This function abstract away the saving of an image on disk asynchrounously. It uses a dictionary
called image writer which contains either a pool of processes or a pool of threads.
"""
if "threads_pool" in image_writer:
threads_pool, futures = image_writer["threads_pool"], image_writer["futures"]
futures.append(threads_pool.submit(save_image, image, key, frame_index, episode_index, videos_dir))
else:
image_queue = image_writer["image_queue"]
image_queue.put((image, key, frame_index, episode_index, videos_dir))
def stop_image_writer(image_writer, timeout):
if "threads_pool" in image_writer:
futures = image_writer["futures"]
# Before exiting function, wait for all threads to complete
with tqdm.tqdm(total=len(futures), desc="Writing images") as progress_bar:
concurrent.futures.wait(futures, timeout=timeout)
progress_bar.update(len(futures))
else:
processes_pool, image_queue = image_writer["processes_pool"], image_writer["image_queue"]
stop_processes(processes_pool, image_queue, timeout=timeout)
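# --- Hedged usage sketch (not part of the original file) ---
# Typical lifecycle of the image writer helpers above, using a thread pool only (num_processes=0).
# The image tensor, camera key and output directory are illustrative.
def _example_image_writer_usage():
    image_writer = start_image_writer(num_processes=0, num_threads=4)
    async_save_image(
        image_writer,
        image=torch.zeros(480, 640, 3, dtype=torch.uint8),
        key="observation.images.cam_high",
        frame_index=0,
        episode_index=0,
        videos_dir="outputs/example_videos",
    )
    stop_image_writer(image_writer, timeout=20)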
########################################################################################
# Functions to initialize, resume and populate a dataset
########################################################################################
def init_dataset(
repo_id,
root,
force_override,
fps,
video,
write_images,
num_image_writer_processes,
num_image_writer_threads,
):
local_dir = Path(root) / repo_id
if local_dir.exists() and force_override:
shutil.rmtree(local_dir)
episodes_dir = local_dir / "episodes"
episodes_dir.mkdir(parents=True, exist_ok=True)
videos_dir = local_dir / "videos"
videos_dir.mkdir(parents=True, exist_ok=True)
# Logic to resume data recording
rec_info_path = episodes_dir / "data_recording_info.json"
if rec_info_path.exists():
with open(rec_info_path) as f:
rec_info = json.load(f)
num_episodes = rec_info["last_episode_index"] + 1
else:
num_episodes = 0
dataset = {
"repo_id": repo_id,
"local_dir": local_dir,
"videos_dir": videos_dir,
"episodes_dir": episodes_dir,
"fps": fps,
"video": video,
"rec_info_path": rec_info_path,
"num_episodes": num_episodes,
}
if write_images:
# Initialize processes and/or threads dedicated to saving images on disk asynchronously,
# which is critical to control a robot and record data at a high frame rate.
image_writer = start_image_writer(
num_processes=num_image_writer_processes,
num_threads=num_image_writer_threads,
)
dataset["image_writer"] = image_writer
return dataset
def add_frame(dataset, observation, action):
if "current_episode" not in dataset:
# initialize episode dictionary
ep_dict = {}
for key in observation:
if key not in ep_dict:
ep_dict[key] = []
for key in action:
if key not in ep_dict:
ep_dict[key] = []
ep_dict["episode_index"] = []
ep_dict["frame_index"] = []
ep_dict["timestamp"] = []
ep_dict["next.done"] = []
dataset["current_episode"] = ep_dict
dataset["current_frame_index"] = 0
ep_dict = dataset["current_episode"]
episode_index = dataset["num_episodes"]
frame_index = dataset["current_frame_index"]
videos_dir = dataset["videos_dir"]
video = dataset["video"]
fps = dataset["fps"]
ep_dict["episode_index"].append(episode_index)
ep_dict["frame_index"].append(frame_index)
ep_dict["timestamp"].append(frame_index / fps)
ep_dict["next.done"].append(False)
img_keys = [key for key in observation if "image" in key]
non_img_keys = [key for key in observation if "image" not in key]
# Save all observed modalities except images
for key in non_img_keys:
ep_dict[key].append(observation[key])
# Save actions
for key in action:
ep_dict[key].append(action[key])
if "image_writer" not in dataset:
dataset["current_frame_index"] += 1
return
# Save images
image_writer = dataset["image_writer"]
for key in img_keys:
imgs_dir = videos_dir / f"{key}_episode_{episode_index:06d}"
async_save_image(
image_writer,
image=observation[key],
key=key,
frame_index=frame_index,
episode_index=episode_index,
videos_dir=str(videos_dir),
)
if video:
fname = f"{key}_episode_{episode_index:06d}.mp4"
frame_info = {"path": f"videos/{fname}", "timestamp": frame_index / fps}
else:
frame_info = str(imgs_dir / f"frame_{frame_index:06d}.png")
ep_dict[key].append(frame_info)
dataset["current_frame_index"] += 1
def delete_current_episode(dataset):
del dataset["current_episode"]
del dataset["current_frame_index"]
# delete temporary images
episode_index = dataset["num_episodes"]
videos_dir = dataset["videos_dir"]
for tmp_imgs_dir in videos_dir.glob(f"*_episode_{episode_index:06d}"):
shutil.rmtree(tmp_imgs_dir)
def save_current_episode(dataset):
episode_index = dataset["num_episodes"]
ep_dict = dataset["current_episode"]
episodes_dir = dataset["episodes_dir"]
rec_info_path = dataset["rec_info_path"]
ep_dict["next.done"][-1] = True
for key in ep_dict:
if "observation" in key and "image" not in key:
ep_dict[key] = torch.stack(ep_dict[key])
ep_dict["action"] = torch.stack(ep_dict["action"])
ep_dict["episode_index"] = torch.tensor(ep_dict["episode_index"])
ep_dict["frame_index"] = torch.tensor(ep_dict["frame_index"])
ep_dict["timestamp"] = torch.tensor(ep_dict["timestamp"])
ep_dict["next.done"] = torch.tensor(ep_dict["next.done"])
ep_path = episodes_dir / f"episode_{episode_index}.pth"
torch.save(ep_dict, ep_path)
rec_info = {
"last_episode_index": episode_index,
}
with open(rec_info_path, "w") as f:
json.dump(rec_info, f)
# force re-initialization of the episode dictionary during add_frame
del dataset["current_episode"]
dataset["num_episodes"] += 1
def encode_videos(dataset, image_keys, play_sounds):
log_say("Encoding videos", play_sounds)
num_episodes = dataset["num_episodes"]
videos_dir = dataset["videos_dir"]
local_dir = dataset["local_dir"]
fps = dataset["fps"]
# Use ffmpeg to convert frames stored as png into mp4 videos
for episode_index in tqdm.tqdm(range(num_episodes)):
for key in image_keys:
# key = f"observation.images.{name}"
tmp_imgs_dir = videos_dir / f"{key}_episode_{episode_index:06d}"
fname = f"{key}_episode_{episode_index:06d}.mp4"
video_path = local_dir / "videos" / fname
if video_path.exists():
# Skip if video is already encoded. Could be the case when resuming data recording.
continue
# note: `encode_video_frames` is a blocking call. Making it asynchronous shouldn't speed up encoding,
# since video encoding with ffmpeg is already using multithreading.
encode_video_frames(tmp_imgs_dir, video_path, fps, overwrite=True)
shutil.rmtree(tmp_imgs_dir)
def from_dataset_to_lerobot_dataset(dataset, play_sounds):
log_say("Consolidate episodes", play_sounds)
num_episodes = dataset["num_episodes"]
episodes_dir = dataset["episodes_dir"]
videos_dir = dataset["videos_dir"]
video = dataset["video"]
fps = dataset["fps"]
repo_id = dataset["repo_id"]
ep_dicts = []
for episode_index in tqdm.tqdm(range(num_episodes)):
ep_path = episodes_dir / f"episode_{episode_index}.pth"
ep_dict = torch.load(ep_path)
ep_dicts.append(ep_dict)
data_dict = concatenate_episodes(ep_dicts)
if video:
image_keys = [key for key in data_dict if "image" in key]
encode_videos(dataset, image_keys, play_sounds)
hf_dataset = to_hf_dataset(data_dict, video)
episode_data_index = calculate_episode_data_index(hf_dataset)
info = {
"codebase_version": CODEBASE_VERSION,
"fps": fps,
"video": video,
}
if video:
info["encoding"] = get_default_encoding()
lerobot_dataset = LeRobotDataset.from_preloaded(
repo_id=repo_id,
hf_dataset=hf_dataset,
episode_data_index=episode_data_index,
info=info,
videos_dir=videos_dir,
)
return lerobot_dataset
def save_lerobot_dataset_on_disk(lerobot_dataset):
hf_dataset = lerobot_dataset.hf_dataset
info = lerobot_dataset.info
stats = lerobot_dataset.stats
episode_data_index = lerobot_dataset.episode_data_index
local_dir = lerobot_dataset.videos_dir.parent
meta_data_dir = local_dir / "meta_data"
hf_dataset = hf_dataset.with_format(None)  # to remove transforms that can't be saved
hf_dataset.save_to_disk(str(local_dir / "train"))
save_meta_data(info, stats, episode_data_index, meta_data_dir)
def push_lerobot_dataset_to_hub(lerobot_dataset, tags):
hf_dataset = lerobot_dataset.hf_dataset
local_dir = lerobot_dataset.videos_dir.parent
videos_dir = lerobot_dataset.videos_dir
repo_id = lerobot_dataset.repo_id
video = lerobot_dataset.video
meta_data_dir = local_dir / "meta_data"
if not (local_dir / "train").exists():
raise ValueError(
"You need to run `save_lerobot_dataset_on_disk(lerobot_dataset)` before pushing to the hub."
)
hf_dataset.push_to_hub(repo_id, revision="main")
push_meta_data_to_hub(repo_id, meta_data_dir, revision="main")
push_dataset_card_to_hub(repo_id, revision="main", tags=tags)
if video:
push_videos_to_hub(repo_id, videos_dir, revision="main")
create_branch(repo_id, repo_type="dataset", branch=CODEBASE_VERSION)
def create_lerobot_dataset(dataset, run_compute_stats, push_to_hub, tags, play_sounds):
if "image_writer" in dataset:
logging.info("Waiting for image writer to terminate...")
image_writer = dataset["image_writer"]
stop_image_writer(image_writer, timeout=20)
lerobot_dataset = from_dataset_to_lerobot_dataset(dataset, play_sounds)
if run_compute_stats:
log_say("Computing dataset statistics", play_sounds)
lerobot_dataset.stats = compute_stats(lerobot_dataset)
else:
logging.info("Skipping computation of the dataset statistics")
lerobot_dataset.stats = {}
save_lerobot_dataset_on_disk(lerobot_dataset)
if push_to_hub:
push_lerobot_dataset_to_hub(lerobot_dataset, tags)
return lerobot_dataset
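# --- Hedged recording-loop sketch (not part of the original file) ---
# Rough illustration of how init_dataset, add_frame, save_current_episode and
# create_lerobot_dataset fit together. `get_observation_and_action` stands in for whatever
# produces the observation/action dicts (e.g. a teleoperation step); it is hypothetical, as are
# the repo id, root directory and fps.
def _example_recording_loop(get_observation_and_action, num_frames_per_episode=100):
    dataset = init_dataset(
        repo_id="user/example_dataset",
        root="data",
        force_override=False,
        fps=30,
        video=True,
        write_images=True,
        num_image_writer_processes=0,
        num_image_writer_threads=4,
    )
    for _ in range(num_frames_per_episode):
        observation, action = get_observation_and_action()
        add_frame(dataset, observation, action)
    save_current_episode(dataset)
    return create_lerobot_dataset(
        dataset, run_compute_stats=True, push_to_hub=False, tags=None, play_sounds=False
    )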

View File

@@ -0,0 +1,56 @@
## Using / Updating `CODEBASE_VERSION` (for maintainers)
Since the datasets we push to the hub are decoupled from the evolution of this repo, we use a
`CODEBASE_VERSION` variable (defined in `lerobot/common/datasets/lerobot_dataset.py`) to ensure
compatibility between the datasets and our code.
For instance, [`lerobot/pusht`](https://huggingface.co/datasets/lerobot/pusht) has many versions to maintain backward compatibility between LeRobot codebase versions:
- [v1.0](https://huggingface.co/datasets/lerobot/pusht/tree/v1.0)
- [v1.1](https://huggingface.co/datasets/lerobot/pusht/tree/v1.1)
- [v1.2](https://huggingface.co/datasets/lerobot/pusht/tree/v1.2)
- [v1.3](https://huggingface.co/datasets/lerobot/pusht/tree/v1.3)
- [v1.4](https://huggingface.co/datasets/lerobot/pusht/tree/v1.4)
- [v1.5](https://huggingface.co/datasets/lerobot/pusht/tree/v1.5)
- [v1.6](https://huggingface.co/datasets/lerobot/pusht/tree/v1.6) <-- last version
- [main](https://huggingface.co/datasets/lerobot/pusht/tree/main) <-- points to the last version
Starting with v1.6, every dataset pushed to the hub or saved locally also has this version number in its
`info.json` metadata.
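For reference, here is a small sketch (not part of the maintainer workflow above) showing how you could check
which codebase version a hub dataset was written with, assuming the `meta_data/info.json` layout produced by
`save_meta_data`:
```python
import json

from huggingface_hub import hf_hub_download

# Fetch only the metadata file of a dataset repository (layout assumed: meta_data/info.json).
info_path = hf_hub_download(repo_id="lerobot/pusht", filename="meta_data/info.json", repo_type="dataset")
with open(info_path) as f:
    info = json.load(f)
print(info["codebase_version"])  # e.g. "v1.6"
```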
### Uploading a new dataset
If you are pushing a new dataset, you don't need to worry about any of the instructions below, nor about
compatibility with previous codebase versions. The `push_dataset_to_hub.py` script will automatically tag your
dataset with the current `CODEBASE_VERSION`.
### Updating an existing dataset
If you want to update an existing dataset, you need to change the `CODEBASE_VERSION` in `lerobot_dataset.py`
before running `push_dataset_to_hub.py`. This is especially useful if you introduce a breaking change,
intentionally or not (i.e. something not backward compatible, such as modifying the reward functions used,
deleting some frames at the end of an episode, etc.). That way, people running a previous version of the
codebase won't be affected by your change, and backward compatibility is maintained.
However, you will need to update the version of ALL the other datasets so that they have the new
`CODEBASE_VERSION` as a branch in their Hugging Face dataset repository. Don't worry, there is an easy way
that doesn't require running `push_dataset_to_hub.py`. You can just "branch out" from the `main` branch of the
HF dataset repo by running the script below, which corresponds to a `git checkout -b` (so no copy or upload is needed):
```python
from huggingface_hub import HfApi
from lerobot import available_datasets
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION
api = HfApi()
for repo_id in available_datasets:
dataset_info = api.list_repo_refs(repo_id, repo_type="dataset")
branches = [b.name for b in dataset_info.branches]
if CODEBASE_VERSION in branches:
print(f"{repo_id} already @{CODEBASE_VERSION}, skipping.")
continue
else:
# Now create a branch named after the new version by branching out from "main"
# which is expected to be the preceding version
api.create_branch(repo_id, repo_type="dataset", branch=CODEBASE_VERSION, revision="main")
print(f"{repo_id} successfully updated @{CODEBASE_VERSION}")
```

View File

@@ -14,156 +14,189 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This file contains all obsolete download scripts. They are centralized here to not have to load
useless dependencies when using datasets.
This file contains download scripts for raw datasets.
Example of usage:
```
python lerobot/common/datasets/push_dataset_to_hub/_download_raw.py \
--raw-dir data/lerobot-raw/pusht_raw \
--repo-id lerobot-raw/pusht_raw
```
"""
import io
import argparse
import logging
import shutil
import warnings
from pathlib import Path
import tqdm
from huggingface_hub import snapshot_download
from lerobot.common.datasets.push_dataset_to_hub.utils import check_repo_id
def download_raw(raw_dir, dataset_id):
if "aloha" in dataset_id or "image" in dataset_id:
download_hub(raw_dir, dataset_id)
elif "pusht" in dataset_id:
download_pusht(raw_dir)
elif "xarm" in dataset_id:
download_xarm(raw_dir)
elif "umi" in dataset_id:
download_umi(raw_dir)
else:
raise ValueError(dataset_id)
# {raw_repo_id: raw_format}
AVAILABLE_RAW_REPO_IDS = {
"lerobot-raw/aloha_mobile_cabinet_raw": "aloha_hdf5",
"lerobot-raw/aloha_mobile_chair_raw": "aloha_hdf5",
"lerobot-raw/aloha_mobile_elevator_raw": "aloha_hdf5",
"lerobot-raw/aloha_mobile_shrimp_raw": "aloha_hdf5",
"lerobot-raw/aloha_mobile_wash_pan_raw": "aloha_hdf5",
"lerobot-raw/aloha_mobile_wipe_wine_raw": "aloha_hdf5",
"lerobot-raw/aloha_sim_insertion_human_raw": "aloha_hdf5",
"lerobot-raw/aloha_sim_insertion_scripted_raw": "aloha_hdf5",
"lerobot-raw/aloha_sim_transfer_cube_human_raw": "aloha_hdf5",
"lerobot-raw/aloha_sim_transfer_cube_scripted_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_battery_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_candy_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_coffee_new_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_coffee_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_cups_open_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_fork_pick_up_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_pingpong_test_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_pro_pencil_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_screw_driver_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_tape_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_thread_velcro_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_towel_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_vinh_cup_left_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_vinh_cup_raw": "aloha_hdf5",
"lerobot-raw/aloha_static_ziploc_slide_raw": "aloha_hdf5",
"lerobot-raw/umi_cup_in_the_wild_raw": "umi_zarr",
"lerobot-raw/pusht_raw": "pusht_zarr",
"lerobot-raw/unitreeh1_fold_clothes_raw": "aloha_hdf5",
"lerobot-raw/unitreeh1_rearrange_objects_raw": "aloha_hdf5",
"lerobot-raw/unitreeh1_two_robot_greeting_raw": "aloha_hdf5",
"lerobot-raw/unitreeh1_warehouse_raw": "aloha_hdf5",
"lerobot-raw/xarm_lift_medium_raw": "xarm_pkl",
"lerobot-raw/xarm_lift_medium_replay_raw": "xarm_pkl",
"lerobot-raw/xarm_push_medium_raw": "xarm_pkl",
"lerobot-raw/xarm_push_medium_replay_raw": "xarm_pkl",
"lerobot-raw/fractal20220817_data_raw": "openx_rlds.fractal20220817_data",
"lerobot-raw/kuka_raw": "openx_rlds.kuka",
"lerobot-raw/bridge_openx_raw": "openx_rlds.bridge_openx",
"lerobot-raw/taco_play_raw": "openx_rlds.taco_play",
"lerobot-raw/jaco_play_raw": "openx_rlds.jaco_play",
"lerobot-raw/berkeley_cable_routing_raw": "openx_rlds.berkeley_cable_routing",
"lerobot-raw/roboturk_raw": "openx_rlds.roboturk",
"lerobot-raw/nyu_door_opening_surprising_effectiveness_raw": "openx_rlds.nyu_door_opening_surprising_effectiveness",
"lerobot-raw/viola_raw": "openx_rlds.viola",
"lerobot-raw/berkeley_autolab_ur5_raw": "openx_rlds.berkeley_autolab_ur5",
"lerobot-raw/toto_raw": "openx_rlds.toto",
"lerobot-raw/language_table_raw": "openx_rlds.language_table",
"lerobot-raw/columbia_cairlab_pusht_real_raw": "openx_rlds.columbia_cairlab_pusht_real",
"lerobot-raw/stanford_kuka_multimodal_dataset_raw": "openx_rlds.stanford_kuka_multimodal_dataset",
"lerobot-raw/nyu_rot_dataset_raw": "openx_rlds.nyu_rot_dataset",
"lerobot-raw/io_ai_tech_raw": "openx_rlds.io_ai_tech",
"lerobot-raw/stanford_hydra_dataset_raw": "openx_rlds.stanford_hydra_dataset",
"lerobot-raw/austin_buds_dataset_raw": "openx_rlds.austin_buds_dataset",
"lerobot-raw/nyu_franka_play_dataset_raw": "openx_rlds.nyu_franka_play_dataset",
"lerobot-raw/maniskill_dataset_raw": "openx_rlds.maniskill_dataset",
"lerobot-raw/furniture_bench_dataset_raw": "openx_rlds.furniture_bench_dataset",
"lerobot-raw/cmu_franka_exploration_dataset_raw": "openx_rlds.cmu_franka_exploration_dataset",
"lerobot-raw/ucsd_kitchen_dataset_raw": "openx_rlds.ucsd_kitchen_dataset",
"lerobot-raw/ucsd_pick_and_place_dataset_raw": "openx_rlds.ucsd_pick_and_place_dataset",
"lerobot-raw/spoc_raw": "openx_rlds.spoc",
"lerobot-raw/austin_sailor_dataset_raw": "openx_rlds.austin_sailor_dataset",
"lerobot-raw/austin_sirius_dataset_raw": "openx_rlds.austin_sirius_dataset",
"lerobot-raw/bc_z_raw": "openx_rlds.bc_z",
"lerobot-raw/utokyo_pr2_opening_fridge_raw": "openx_rlds.utokyo_pr2_opening_fridge",
"lerobot-raw/utokyo_pr2_tabletop_manipulation_raw": "openx_rlds.utokyo_pr2_tabletop_manipulation",
"lerobot-raw/utokyo_xarm_pick_and_place_raw": "openx_rlds.utokyo_xarm_pick_and_place",
"lerobot-raw/utokyo_xarm_bimanual_raw": "openx_rlds.utokyo_xarm_bimanual",
"lerobot-raw/utokyo_saytap_raw": "openx_rlds.utokyo_saytap",
"lerobot-raw/robo_net_raw": "openx_rlds.robo_net",
"lerobot-raw/robo_set_raw": "openx_rlds.robo_set",
"lerobot-raw/berkeley_mvp_raw": "openx_rlds.berkeley_mvp",
"lerobot-raw/berkeley_rpt_raw": "openx_rlds.berkeley_rpt",
"lerobot-raw/kaist_nonprehensile_raw": "openx_rlds.kaist_nonprehensile",
"lerobot-raw/stanford_mask_vit_raw": "openx_rlds.stanford_mask_vit",
"lerobot-raw/tokyo_u_lsmo_raw": "openx_rlds.tokyo_u_lsmo",
"lerobot-raw/dlr_sara_pour_raw": "openx_rlds.dlr_sara_pour",
"lerobot-raw/dlr_sara_grid_clamp_raw": "openx_rlds.dlr_sara_grid_clamp",
"lerobot-raw/dlr_edan_shared_control_raw": "openx_rlds.dlr_edan_shared_control",
"lerobot-raw/asu_table_top_raw": "openx_rlds.asu_table_top",
"lerobot-raw/stanford_robocook_raw": "openx_rlds.stanford_robocook",
"lerobot-raw/imperialcollege_sawyer_wrist_cam_raw": "openx_rlds.imperialcollege_sawyer_wrist_cam",
"lerobot-raw/iamlab_cmu_pickup_insert_raw": "openx_rlds.iamlab_cmu_pickup_insert",
"lerobot-raw/uiuc_d3field_raw": "openx_rlds.uiuc_d3field",
"lerobot-raw/utaustin_mutex_raw": "openx_rlds.utaustin_mutex",
"lerobot-raw/berkeley_fanuc_manipulation_raw": "openx_rlds.berkeley_fanuc_manipulation",
"lerobot-raw/cmu_playing_with_food_raw": "openx_rlds.cmu_playing_with_food",
"lerobot-raw/cmu_play_fusion_raw": "openx_rlds.cmu_play_fusion",
"lerobot-raw/cmu_stretch_raw": "openx_rlds.cmu_stretch",
"lerobot-raw/berkeley_gnm_recon_raw": "openx_rlds.berkeley_gnm_recon",
"lerobot-raw/berkeley_gnm_cory_hall_raw": "openx_rlds.berkeley_gnm_cory_hall",
"lerobot-raw/berkeley_gnm_sac_son_raw": "openx_rlds.berkeley_gnm_sac_son",
"lerobot-raw/droid_raw": "openx_rlds.droid",
"lerobot-raw/droid_100_raw": "openx_rlds.droid100",
"lerobot-raw/fmb_raw": "openx_rlds.fmb",
"lerobot-raw/dobbe_raw": "openx_rlds.dobbe",
"lerobot-raw/usc_cloth_sim_raw": "openx_rlds.usc_cloth_sim",
"lerobot-raw/plex_robosuite_raw": "openx_rlds.plex_robosuite",
"lerobot-raw/conq_hose_manipulation_raw": "openx_rlds.conq_hose_manipulation",
"lerobot-raw/vima_raw": "openx_rlds.vima",
"lerobot-raw/robot_vqa_raw": "openx_rlds.robot_vqa",
"lerobot-raw/mimic_play_raw": "openx_rlds.mimic_play",
"lerobot-raw/tidybot_raw": "openx_rlds.tidybot",
"lerobot-raw/eth_agent_affordances_raw": "openx_rlds.eth_agent_affordances",
}
def download_and_extract_zip(url: str, destination_folder: Path) -> bool:
import zipfile
def download_raw(raw_dir: Path, repo_id: str):
check_repo_id(repo_id)
user_id, dataset_id = repo_id.split("/")
import requests
if not dataset_id.endswith("_raw"):
warnings.warn(
f"""`dataset_id` ({dataset_id}) doesn't end with '_raw' (e.g. 'lerobot/pusht_raw'). Following this
naming convention by renaming your repository is advised, but not mandatory.""",
stacklevel=1,
)
print(f"downloading from {url}")
response = requests.get(url, stream=True)
if response.status_code == 200:
total_size = int(response.headers.get("content-length", 0))
progress_bar = tqdm.tqdm(total=total_size, unit="B", unit_scale=True)
zip_file = io.BytesIO()
for chunk in response.iter_content(chunk_size=1024):
if chunk:
zip_file.write(chunk)
progress_bar.update(len(chunk))
progress_bar.close()
zip_file.seek(0)
with zipfile.ZipFile(zip_file, "r") as zip_ref:
zip_ref.extractall(destination_folder)
def download_pusht(raw_dir: str):
pusht_url = "https://diffusion-policy.cs.columbia.edu/data/training/pusht.zip"
raw_dir = Path(raw_dir)
raw_dir.mkdir(parents=True, exist_ok=True)
download_and_extract_zip(pusht_url, raw_dir)
# file is created inside a useful "pusht" directory, so we move it out and delete the dir
zarr_path = raw_dir / "pusht_cchi_v7_replay.zarr"
shutil.move(raw_dir / "pusht" / "pusht_cchi_v7_replay.zarr", zarr_path)
shutil.rmtree(raw_dir / "pusht")
def download_xarm(raw_dir: Path):
"""Download all xarm datasets at once"""
import zipfile
import gdown
raw_dir = Path(raw_dir)
raw_dir.mkdir(parents=True, exist_ok=True)
# from https://github.com/fyhMer/fowm/blob/main/scripts/download_datasets.py
url = "https://drive.google.com/uc?id=1nhxpykGtPDhmQKm-_B8zBSywVRdgeVya"
zip_path = raw_dir / "data.zip"
gdown.download(url, str(zip_path), quiet=False)
print("Extracting...")
with zipfile.ZipFile(str(zip_path), "r") as zip_f:
for pkl_path in zip_f.namelist():
if pkl_path.startswith("data/xarm") and pkl_path.endswith(".pkl"):
zip_f.extract(member=pkl_path)
# move to corresponding raw directory
extract_dir = pkl_path.replace("/buffer.pkl", "")
raw_pkl_path = raw_dir / "buffer.pkl"
shutil.move(pkl_path, raw_pkl_path)
shutil.rmtree(extract_dir)
zip_path.unlink()
def download_hub(raw_dir: Path, dataset_id: str):
raw_dir = Path(raw_dir)
# Send a warning if raw_dir isn't well formatted
if raw_dir.parts[-2] != user_id or raw_dir.parts[-1] != dataset_id:
warnings.warn(
f"""`raw_dir` ({raw_dir}) doesn't contain a community or user id `/` the name of the dataset that
matches the `repo_id` (e.g. 'data/lerobot/pusht_raw'). Following this naming convention is advised,
but not mandatory.""",
stacklevel=1,
)
raw_dir.mkdir(parents=True, exist_ok=True)
logging.info(f"Start downloading from huggingface.co/cadene for {dataset_id}")
snapshot_download(f"cadene/{dataset_id}_raw", repo_type="dataset", local_dir=raw_dir)
logging.info(f"Finish downloading from huggingface.co/cadene for {dataset_id}")
logging.info(f"Start downloading from huggingface.co/{user_id} for {dataset_id}")
snapshot_download(repo_id, repo_type="dataset", local_dir=raw_dir)
logging.info(f"Finish downloading from huggingface.co/{user_id} for {dataset_id}")
def download_umi(raw_dir: Path):
url_cup_in_the_wild = "https://real.stanford.edu/umi/data/zarr_datasets/cup_in_the_wild.zarr.zip"
zarr_path = raw_dir / "cup_in_the_wild.zarr"
def download_all_raw_datasets(data_dir: Path | None = None):
if data_dir is None:
data_dir = Path("data")
for repo_id in AVAILABLE_RAW_REPO_IDS:
raw_dir = data_dir / repo_id
download_raw(raw_dir, repo_id)
raw_dir = Path(raw_dir)
raw_dir.mkdir(parents=True, exist_ok=True)
download_and_extract_zip(url_cup_in_the_wild, zarr_path)
def main():
parser = argparse.ArgumentParser(
description=f"""A script to download raw datasets from Hugging Face hub to a local directory. Here is a
non exhaustive list of available repositories to use in `--repo-id`: {list(AVAILABLE_RAW_REPO_IDS.keys())}""",
)
parser.add_argument(
"--raw-dir",
type=Path,
required=True,
help="Directory containing input raw datasets (e.g. `data/aloha_mobile_chair_raw` or `data/pusht_raw).",
)
parser.add_argument(
"--repo-id",
type=str,
required=True,
help="""Repositery identifier on Hugging Face: a community or a user name `/` the name of
the dataset (e.g. `lerobot/pusht_raw`, `cadene/aloha_sim_insertion_human_raw`).""",
)
args = parser.parse_args()
download_raw(**vars(args))
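# --- Hedged usage sketch (not part of the original file) ---
# Programmatic equivalent of the CLI example in the module docstring; the local directory is
# illustrative.
def _example_download_pusht_raw():
    download_raw(raw_dir=Path("data/lerobot-raw/pusht_raw"), repo_id="lerobot-raw/pusht_raw")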
if __name__ == "__main__":
data_dir = Path("data")
dataset_ids = [
"pusht_image",
"xarm_lift_medium_image",
"xarm_lift_medium_replay_image",
"xarm_push_medium_image",
"xarm_push_medium_replay_image",
"aloha_sim_insertion_human_image",
"aloha_sim_insertion_scripted_image",
"aloha_sim_transfer_cube_human_image",
"aloha_sim_transfer_cube_scripted_image",
"pusht",
"xarm_lift_medium",
"xarm_lift_medium_replay",
"xarm_push_medium",
"xarm_push_medium_replay",
"aloha_sim_insertion_human",
"aloha_sim_insertion_scripted",
"aloha_sim_transfer_cube_human",
"aloha_sim_transfer_cube_scripted",
"aloha_mobile_cabinet",
"aloha_mobile_chair",
"aloha_mobile_elevator",
"aloha_mobile_shrimp",
"aloha_mobile_wash_pan",
"aloha_mobile_wipe_wine",
"aloha_static_battery",
"aloha_static_candy",
"aloha_static_coffee",
"aloha_static_coffee_new",
"aloha_static_cups_open",
"aloha_static_fork_pick_up",
"aloha_static_pingpong_test",
"aloha_static_pro_pencil",
"aloha_static_screw_driver",
"aloha_static_tape",
"aloha_static_thread_velcro",
"aloha_static_towel",
"aloha_static_vinh_cup",
"aloha_static_vinh_cup_left",
"aloha_static_ziploc_slide",
"umi_cup_in_the_wild",
]
for dataset_id in dataset_ids:
raw_dir = data_dir / f"{dataset_id}_raw"
download_raw(raw_dir, dataset_id)
main()

View File

@@ -0,0 +1,184 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Use this script to batch encode lerobot datasets from their raw format to LeRobotDataset and push their updated
versions to the hub. Under the hood, this script reuses 'push_dataset_to_hub.py'. It assumes that you have already
downloaded the raw datasets, which you can do with the related '_download_raw.py' script.
For instance, for codebase_version = 'v1.6', the following command was run, assuming raw datasets from
lerobot-raw were downloaded in 'raw/datasets/directory':
```bash
python lerobot/common/datasets/push_dataset_to_hub/_encode_datasets.py \
--raw-dir raw/datasets/directory \
--raw-repo-ids lerobot-raw \
--local-dir push/datasets/directory \
--tests-data-dir tests/data \
--push-repo lerobot \
--vcodec libsvtav1 \
--pix-fmt yuv420p \
--g 2 \
--crf 30
```
"""
import argparse
from pathlib import Path
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION
from lerobot.common.datasets.push_dataset_to_hub._download_raw import AVAILABLE_RAW_REPO_IDS
from lerobot.common.datasets.push_dataset_to_hub.utils import check_repo_id
from lerobot.scripts.push_dataset_to_hub import push_dataset_to_hub
def get_push_repo_id_from_raw(raw_repo_id: str, push_repo: str) -> str:
dataset_id_raw = raw_repo_id.split("/")[1]
dataset_id = dataset_id_raw.removesuffix("_raw")
return f"{push_repo}/{dataset_id}"
def encode_datasets(
raw_dir: Path,
raw_repo_ids: list[str],
push_repo: str,
vcodec: str,
pix_fmt: str,
g: int,
crf: int,
local_dir: Path | None = None,
tests_data_dir: Path | None = None,
raw_format: str | None = None,
dry_run: bool = False,
) -> None:
if len(raw_repo_ids) == 1 and raw_repo_ids[0].lower() == "lerobot-raw":
raw_repo_ids_format = AVAILABLE_RAW_REPO_IDS
else:
if raw_format is None:
raise ValueError("`raw_format` must be specified when `--raw-repo-ids` is not 'lerobot-raw'.")
raw_repo_ids_format = {id_: raw_format for id_ in raw_repo_ids}
for raw_repo_id, repo_raw_format in raw_repo_ids_format.items():
check_repo_id(raw_repo_id)
dataset_repo_id_push = get_push_repo_id_from_raw(raw_repo_id, push_repo)
dataset_raw_dir = raw_dir / raw_repo_id
dataset_dir = local_dir / dataset_repo_id_push if local_dir is not None else None
encoding = {
"vcodec": vcodec,
"pix_fmt": pix_fmt,
"g": g,
"crf": crf,
}
if not (dataset_raw_dir).is_dir():
raise NotADirectoryError(dataset_raw_dir)
if not dry_run:
push_dataset_to_hub(
dataset_raw_dir,
raw_format=repo_raw_format,
repo_id=dataset_repo_id_push,
local_dir=dataset_dir,
resume=True,
encoding=encoding,
tests_data_dir=tests_data_dir,
)
else:
print(
f"DRY RUN: {dataset_raw_dir} --> {dataset_dir} --> {dataset_repo_id_push}@{CODEBASE_VERSION}"
)
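# --- Hedged usage sketch (not part of the original file) ---
# A dry run mirroring the docstring example: it lists the conversions that would happen without
# downloading or uploading anything. It still expects the raw datasets to be present under
# `raw_dir / raw_repo_id`; the directory below is illustrative.
def _example_dry_run():
    encode_datasets(
        raw_dir=Path("raw/datasets/directory"),
        raw_repo_ids=["lerobot-raw"],
        push_repo="lerobot",
        vcodec="libsvtav1",
        pix_fmt="yuv420p",
        g=2,
        crf=30,
        dry_run=True,
    )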
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--raw-dir",
type=Path,
default=Path("data"),
help="Directory where raw datasets are located.",
)
parser.add_argument(
"--raw-repo-ids",
type=str,
nargs="*",
default=["lerobot-raw"],
help="""Raw dataset repo ids. if 'lerobot-raw', the keys from `AVAILABLE_RAW_REPO_IDS` will be
used and raw datasets will be fetched from the 'lerobot-raw/' repo and pushed with their
associated format. It is assumed that each dataset is located at `raw_dir / raw_repo_id` """,
)
parser.add_argument(
"--raw-format",
type=str,
default=None,
help="""Raw format to use for the raw repo-ids. Must be specified if --raw-repo-ids is not
'lerobot-raw'""",
)
parser.add_argument(
"--local-dir",
type=Path,
default=None,
help="""When provided, writes the dataset converted to LeRobotDataset format in this directory
(e.g. `data/lerobot/aloha_mobile_chair`).""",
)
parser.add_argument(
"--push-repo",
type=str,
default="lerobot",
help="Repo to upload datasets to",
)
parser.add_argument(
"--vcodec",
type=str,
default="libsvtav1",
help="Codec to use for encoding videos",
)
parser.add_argument(
"--pix-fmt",
type=str,
default="yuv420p",
help="Pixel formats (chroma subsampling) to be used for encoding",
)
parser.add_argument(
"--g",
type=int,
default=2,
help="Group of pictures sizes to be used for encoding.",
)
parser.add_argument(
"--crf",
type=int,
default=30,
help="Constant rate factors to be used for encoding.",
)
parser.add_argument(
"--tests-data-dir",
type=Path,
default=None,
help=(
"When provided, save tests artifacts into the given directory "
"(e.g. `--tests-data-dir tests/data` will save to tests/data/{--repo-id})."
),
)
parser.add_argument(
"--dry-run",
type=int,
default=0,
help="If not set to 0, this script won't download or upload anything.",
)
args = parser.parse_args()
encode_datasets(**vars(args))
if __name__ == "__main__":
main()

View File

@@ -28,8 +28,14 @@ import tqdm
from datasets import Dataset, Features, Image, Sequence, Value
from PIL import Image as PILImage
from lerobot.common.datasets.push_dataset_to_hub.utils import concatenate_episodes, save_images_concurrently
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION
from lerobot.common.datasets.push_dataset_to_hub.utils import (
concatenate_episodes,
get_default_encoding,
save_images_concurrently,
)
from lerobot.common.datasets.utils import (
calculate_episode_data_index,
hf_transform_to_torch,
)
from lerobot.common.datasets.video_utils import VideoFrame, encode_video_frames
@@ -70,16 +76,24 @@ def check_format(raw_dir) -> bool:
assert c < h and c < w, f"Expect (h,w,c) image format but ({h=},{w=},{c=}) provided."
def load_from_raw(raw_dir, out_dir, fps, video, debug):
def load_from_raw(
raw_dir: Path,
videos_dir: Path,
fps: int,
video: bool,
episodes: list[int] | None = None,
encoding: dict | None = None,
):
# only frames from simulation are uncompressed
compressed_images = "sim" not in raw_dir.name
hdf5_files = list(raw_dir.glob("*.hdf5"))
ep_dicts = []
episode_data_index = {"from": [], "to": []}
hdf5_files = sorted(raw_dir.glob("episode_*.hdf5"))
num_episodes = len(hdf5_files)
id_from = 0
for ep_idx, ep_path in tqdm.tqdm(enumerate(hdf5_files), total=len(hdf5_files)):
ep_dicts = []
ep_ids = episodes if episodes else range(num_episodes)
for ep_idx in tqdm.tqdm(ep_ids):
ep_path = hdf5_files[ep_idx]
with h5py.File(ep_path, "r") as ep:
num_frames = ep["/action"].shape[0]
@@ -114,13 +128,13 @@ def load_from_raw(raw_dir, out_dir, fps, video, debug):
if video:
# save png images in temporary directory
tmp_imgs_dir = out_dir / "tmp_images"
tmp_imgs_dir = videos_dir / "tmp_images"
save_images_concurrently(imgs_array, tmp_imgs_dir)
# encode images to a mp4 video
fname = f"{img_key}_episode_{ep_idx:06d}.mp4"
video_path = out_dir / "videos" / fname
encode_video_frames(tmp_imgs_dir, video_path, fps)
video_path = videos_dir / fname
encode_video_frames(tmp_imgs_dir, video_path, fps, **(encoding or {}))
# clean temporary images directory
shutil.rmtree(tmp_imgs_dir)
@@ -147,19 +161,13 @@ def load_from_raw(raw_dir, out_dir, fps, video, debug):
assert isinstance(ep_idx, int)
ep_dicts.append(ep_dict)
episode_data_index["from"].append(id_from)
episode_data_index["to"].append(id_from + num_frames)
id_from += num_frames
gc.collect()
# process first episode only
if debug:
break
data_dict = concatenate_episodes(ep_dicts)
return data_dict, episode_data_index
total_frames = data_dict["frame_index"].shape[0]
data_dict["index"] = torch.arange(0, total_frames, 1)
return data_dict
def to_hf_dataset(data_dict, video) -> Dataset:
@@ -197,18 +205,29 @@ def to_hf_dataset(data_dict, video) -> Dataset:
return hf_dataset
def from_raw_to_lerobot_format(raw_dir: Path, out_dir: Path, fps=None, video=True, debug=False):
def from_raw_to_lerobot_format(
raw_dir: Path,
videos_dir: Path,
fps: int | None = None,
video: bool = True,
episodes: list[int] | None = None,
encoding: dict | None = None,
):
# sanity check
check_format(raw_dir)
if fps is None:
fps = 50
data_dir, episode_data_index = load_from_raw(raw_dir, out_dir, fps, video, debug)
hf_dataset = to_hf_dataset(data_dir, video)
data_dict = load_from_raw(raw_dir, videos_dir, fps, video, episodes, encoding)
hf_dataset = to_hf_dataset(data_dict, video)
episode_data_index = calculate_episode_data_index(hf_dataset)
info = {
"codebase_version": CODEBASE_VERSION,
"fps": fps,
"video": video,
}
if video:
info["encoding"] = get_default_encoding()
return hf_dataset, episode_data_index, info

View File

@@ -0,0 +1,104 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Contains utilities to process the raw data format of png image files recorded with capture_camera_feed.py
"""
from pathlib import Path
import torch
from datasets import Dataset, Features, Image, Value
from PIL import Image as PILImage
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION
from lerobot.common.datasets.push_dataset_to_hub.utils import concatenate_episodes
from lerobot.common.datasets.utils import calculate_episode_data_index, hf_transform_to_torch
from lerobot.common.datasets.video_utils import VideoFrame
def check_format(raw_dir: Path) -> bool:
image_paths = list(raw_dir.glob("frame_*.png"))
if len(image_paths) == 0:
raise ValueError(f"Missing png images in '{raw_dir}'")
return True
def load_from_raw(raw_dir: Path, fps: int, episodes: list[int] | None = None):
if episodes is not None:
# TODO(aliberts): add support for multi-episodes.
raise NotImplementedError()
ep_dict = {}
ep_idx = 0
image_paths = sorted(raw_dir.glob("frame_*.png"))
num_frames = len(image_paths)
ep_dict["observation.image"] = [PILImage.open(x) for x in image_paths]
ep_dict["episode_index"] = torch.tensor([ep_idx] * num_frames)
ep_dict["frame_index"] = torch.arange(0, num_frames, 1)
ep_dict["timestamp"] = torch.arange(0, num_frames, 1) / fps
ep_dicts = [ep_dict]
data_dict = concatenate_episodes(ep_dicts)
total_frames = data_dict["frame_index"].shape[0]
data_dict["index"] = torch.arange(0, total_frames, 1)
return data_dict
def to_hf_dataset(data_dict, video) -> Dataset:
features = {}
if video:
features["observation.image"] = VideoFrame()
else:
features["observation.image"] = Image()
features["episode_index"] = Value(dtype="int64", id=None)
features["frame_index"] = Value(dtype="int64", id=None)
features["timestamp"] = Value(dtype="float32", id=None)
features["index"] = Value(dtype="int64", id=None)
hf_dataset = Dataset.from_dict(data_dict, features=Features(features))
hf_dataset.set_transform(hf_transform_to_torch)
return hf_dataset
def from_raw_to_lerobot_format(
raw_dir: Path,
videos_dir: Path,
fps: int | None = None,
video: bool = True,
episodes: list[int] | None = None,
encoding: dict | None = None,
):
if video or episodes or encoding is not None:
# TODO(aliberts): support this
raise NotImplementedError
# sanity check
check_format(raw_dir)
if fps is None:
fps = 30
data_dict = load_from_raw(raw_dir, fps, episodes)
hf_dataset = to_hf_dataset(data_dict, video)
episode_data_index = calculate_episode_data_index(hf_dataset)
info = {
"codebase_version": CODEBASE_VERSION,
"fps": fps,
"video": video,
}
return hf_dataset, episode_data_index, info

View File

@@ -0,0 +1,233 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Contains utilities to process the raw data format from dora-record
"""
import re
import warnings
from pathlib import Path
import pandas as pd
import torch
from datasets import Dataset, Features, Image, Sequence, Value
from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION
from lerobot.common.datasets.utils import (
calculate_episode_data_index,
hf_transform_to_torch,
)
from lerobot.common.datasets.video_utils import VideoFrame
def check_format(raw_dir) -> bool:
assert raw_dir.exists()
leader_file = list(raw_dir.glob("*.parquet"))
if len(leader_file) == 0:
raise ValueError(f"Missing parquet files in '{raw_dir}'")
return True
def load_from_raw(raw_dir: Path, videos_dir: Path, fps: int, video: bool, episodes: list[int] | None = None):
# Load data stream that will be used as reference for the timestamps synchronization
reference_files = list(raw_dir.glob("observation.images.cam_*.parquet"))
if len(reference_files) == 0:
raise ValueError(f"Missing reference files for camera, starting with in '{raw_dir}'")
# select first camera in alphanumeric order
reference_key = sorted(reference_files)[0].stem
reference_df = pd.read_parquet(raw_dir / f"{reference_key}.parquet")
reference_df = reference_df[["timestamp_utc", reference_key]]
# Merge all data stream using nearest backward strategy
df = reference_df
for path in raw_dir.glob("*.parquet"):
key = path.stem # action or observation.state or ...
if key == reference_key:
continue
if "failed_episode_index" in key:
# TODO(rcadene): add support for removing episodes that are tagged as "failed"
continue
modality_df = pd.read_parquet(path)
modality_df = modality_df[["timestamp_utc", key]]
df = pd.merge_asof(
df,
modality_df,
on="timestamp_utc",
# "nearest" is the best option over "backward", since the latter can desynchronizes camera timestamps by
# matching timestamps that are too far appart, in order to fit the backward constraints. It's not the case for "nearest".
# However, note that "nearest" might synchronize the reference camera with other cameras on slightly future timestamps.
# are too far appart.
direction="nearest",
tolerance=pd.Timedelta(f"{1/fps} seconds"),
)
# Remove rows with episode_index -1, which indicate data recorded in between episodes
df = df[df["episode_index"] != -1]
image_keys = [key for key in df if "observation.images." in key]
def get_episode_index(row):
episode_index_per_cam = {}
for key in image_keys:
path = row[key][0]["path"]
match = re.search(r"_(\d{6}).mp4", path)
if not match:
raise ValueError(path)
episode_index = int(match.group(1))
episode_index_per_cam[key] = episode_index
if len(set(episode_index_per_cam.values())) != 1:
raise ValueError(
f"All cameras are expected to belong to the same episode, but getting {episode_index_per_cam}"
)
return episode_index
df["episode_index"] = df.apply(get_episode_index, axis=1)
# dora only uses arrays, so single values are encapsulated into a list
df["frame_index"] = df.groupby("episode_index").cumcount()
df = df.reset_index()
df["index"] = df.index
# set 'next.done' to True for the last frame of each episode
df["next.done"] = False
df.loc[df.groupby("episode_index").tail(1).index, "next.done"] = True
df["timestamp"] = df["timestamp_utc"].map(lambda x: x.timestamp())
# each episode starts with timestamp 0 to match the ones from the video
df["timestamp"] = df.groupby("episode_index")["timestamp"].transform(lambda x: x - x.iloc[0])
del df["timestamp_utc"]
# sanity check
has_nan = df.isna().any().any()
if has_nan:
raise ValueError("Dataset contains Nan values.")
# sanity check episode indices go from 0 to n-1
ep_ids = [ep_idx for ep_idx, _ in df.groupby("episode_index")]
expected_ep_ids = list(range(df["episode_index"].max() + 1))
if ep_ids != expected_ep_ids:
raise ValueError(f"Episodes indices go from {ep_ids} instead of {expected_ep_ids}")
# Create a symlink to the raw videos directory (the target needs to be absolute, not relative)
videos_dir.parent.mkdir(parents=True, exist_ok=True)
videos_dir.symlink_to((raw_dir / "videos").absolute())
# sanity check that the video paths are well formatted
for key in df:
if "observation.images." not in key:
continue
for ep_idx in ep_ids:
video_path = videos_dir / f"{key}_episode_{ep_idx:06d}.mp4"
if not video_path.exists():
raise ValueError(f"Video file not found in {video_path}")
data_dict = {}
for key in df:
# is video frame
if "observation.images." in key:
# we need `[0]` because dora only uses arrays, so single values are encapsulated into a list;
# this is the case for the video_frame dictionary = [{"path": ..., "timestamp": ...}]
data_dict[key] = [video_frame[0] for video_frame in df[key].values]
# sanity check that the video path is well formatted
video_path = videos_dir.parent / data_dict[key][0]["path"]
if not video_path.exists():
raise ValueError(f"Video file not found in {video_path}")
# is number
elif df[key].iloc[0].ndim == 0 or df[key].iloc[0].shape[0] == 1:
data_dict[key] = torch.from_numpy(df[key].values)
# is vector
elif df[key].iloc[0].shape[0] > 1:
data_dict[key] = torch.stack([torch.from_numpy(x.copy()) for x in df[key].values])
else:
raise ValueError(key)
return data_dict
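# Illustrative aside (not part of the converter): the merge loop above hinges on
# pd.merge_asof with direction="nearest". A minimal sketch with hypothetical column
# names and fps=10 shows how the tolerance drops matches more than one frame apart.
def _merge_asof_example():
    fps = 10
    reference = pd.DataFrame(
        {
            "timestamp_utc": pd.to_datetime(["2024-01-01 00:00:00.00", "2024-01-01 00:00:00.10"]),
            "observation.images.cam_high": ["frame_0", "frame_1"],
        }
    )
    state = pd.DataFrame(
        {
            "timestamp_utc": pd.to_datetime(["2024-01-01 00:00:00.02", "2024-01-01 00:00:00.09"]),
            "observation.state": [[0.0], [1.0]],
        }
    )
    # "nearest" pairs each reference frame with the closest state sample in either direction;
    # matches further apart than one frame period (1/fps seconds) are left as NaN.
    return pd.merge_asof(
        reference,
        state,
        on="timestamp_utc",
        direction="nearest",
        tolerance=pd.Timedelta(f"{1/fps} seconds"),
    )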
def to_hf_dataset(data_dict, video) -> Dataset:
features = {}
keys = [key for key in data_dict if "observation.images." in key]
for key in keys:
if video:
features[key] = VideoFrame()
else:
features[key] = Image()
features["observation.state"] = Sequence(
length=data_dict["observation.state"].shape[1], feature=Value(dtype="float32", id=None)
)
if "observation.velocity" in data_dict:
features["observation.velocity"] = Sequence(
length=data_dict["observation.velocity"].shape[1], feature=Value(dtype="float32", id=None)
)
if "observation.effort" in data_dict:
features["observation.effort"] = Sequence(
length=data_dict["observation.effort"].shape[1], feature=Value(dtype="float32", id=None)
)
features["action"] = Sequence(
length=data_dict["action"].shape[1], feature=Value(dtype="float32", id=None)
)
features["episode_index"] = Value(dtype="int64", id=None)
features["frame_index"] = Value(dtype="int64", id=None)
features["timestamp"] = Value(dtype="float32", id=None)
features["next.done"] = Value(dtype="bool", id=None)
features["index"] = Value(dtype="int64", id=None)
hf_dataset = Dataset.from_dict(data_dict, features=Features(features))
hf_dataset.set_transform(hf_transform_to_torch)
return hf_dataset
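# Illustrative aside (not part of the converter): the schema built above maps each vector
# modality to a fixed-length Sequence of float32 and scalar metadata to plain Values.
# A tiny, hypothetical two-frame example of the same pattern:
def _features_example():
    features = Features(
        {
            "action": Sequence(length=2, feature=Value(dtype="float32", id=None)),
            "episode_index": Value(dtype="int64", id=None),
            "next.done": Value(dtype="bool", id=None),
        }
    )
    data = {
        "action": [[0.1, 0.2], [0.3, 0.4]],
        "episode_index": [0, 0],
        "next.done": [False, True],
    }
    return Dataset.from_dict(data, features=features)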
def from_raw_to_lerobot_format(
raw_dir: Path,
videos_dir: Path,
fps: int | None = None,
video: bool = True,
episodes: list[int] | None = None,
encoding: dict | None = None,
):
# sanity check
check_format(raw_dir)
if fps is None:
fps = 30
else:
raise NotImplementedError()
if not video:
raise NotImplementedError()
if encoding is not None:
warnings.warn(
"Video encoding is currently done outside of LeRobot for the dora_parquet format.",
stacklevel=1,
)
data_df = load_from_raw(raw_dir, videos_dir, fps, episodes)
hf_dataset = to_hf_dataset(data_df, video)
episode_data_index = calculate_episode_data_index(hf_dataset)
info = {
"codebase_version": CODEBASE_VERSION,
"fps": fps,
"video": video,
}
if video:
info["encoding"] = "unknown"
return hf_dataset, episode_data_index, info
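# Illustrative usage sketch (hypothetical paths; not part of this module). load_from_raw
# expects raw_dir to contain the per-modality *.parquet streams plus a videos/ folder, and
# videos_dir is where the symlink to those videos is created:
def _usage_example():
    raw_dir = Path("data/raw/my_dora_recording")
    videos_dir = Path("data/lerobot/my_dataset/videos")
    hf_dataset, episode_data_index, info = from_raw_to_lerobot_format(
        raw_dir=raw_dir,
        videos_dir=videos_dir,
        fps=None,    # defaults to 30; any other value currently raises NotImplementedError
        video=True,  # only the video path is supported for the dora_parquet format
    )
    return hf_dataset, episode_data_index, info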


@@ -0,0 +1,639 @@
OPENX_DATASET_CONFIGS:
fractal20220817_data:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- base_pose_tool_reached
- gripper_closed
fps: 3
kuka:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- clip_function_input/base_pose_tool_reached
- gripper_closed
fps: 10
bridge_openx:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- EEF_state
- gripper_state
fps: 5
taco_play:
image_obs_keys:
- rgb_static
- rgb_gripper
depth_obs_keys:
- depth_static
- depth_gripper
state_obs_keys:
- state_eef
- state_gripper
fps: 15
jaco_play:
image_obs_keys:
- image
- image_wrist
depth_obs_keys:
- null
state_obs_keys:
- state_eef
- state_gripper
fps: 10
berkeley_cable_routing:
image_obs_keys:
- image
- top_image
- wrist45_image
- wrist225_image
depth_obs_keys:
- null
state_obs_keys:
- robot_state
fps: 10
roboturk:
image_obs_keys:
- front_rgb
depth_obs_keys:
- null
state_obs_keys:
- null
fps: 10
nyu_door_opening_surprising_effectiveness:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- null
fps: 3
viola:
image_obs_keys:
- agentview_rgb
- eye_in_hand_rgb
depth_obs_keys:
- null
state_obs_keys:
- joint_states
- gripper_states
fps: 20
berkeley_autolab_ur5:
image_obs_keys:
- image
- hand_image
depth_obs_keys:
- image_with_depth
state_obs_keys:
- state
fps: 5
toto:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 30
language_table:
image_obs_keys:
- rgb
depth_obs_keys:
- null
state_obs_keys:
- effector_translation
fps: 10
columbia_cairlab_pusht_real:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- robot_state
fps: 10
stanford_kuka_multimodal_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- depth_image
state_obs_keys:
- ee_position
- ee_orientation
fps: 20
nyu_rot_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 3
io_ai_tech:
image_obs_keys:
- image
- image_fisheye
- image_left_side
- image_right_side
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 3
stanford_hydra_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 10
austin_buds_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 20
nyu_franka_play_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- image_additional_view
depth_obs_keys:
- depth
- depth_additional_view
state_obs_keys:
- eef_state
fps: 3
maniskill_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- depth
- wrist_depth
state_obs_keys:
- tcp_pose
- gripper_state
fps: 20
furniture_bench_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
cmu_franka_exploration_dataset_converted_externally_to_rlds:
image_obs_keys:
- highres_image
depth_obs_keys:
- null
state_obs_keys:
- null
fps: 10
ucsd_kitchen_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- joint_state
fps: 2
ucsd_pick_and_place_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 3
spoc:
image_obs_keys:
- image
- image_manipulation
depth_obs_keys:
- null
state_obs_keys:
- null
fps: 3
austin_sailor_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 20
austin_sirius_dataset_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 20
bc_z:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- present/xyz
- present/axis_angle
- present/sensed_close
fps: 10
utokyo_pr2_opening_fridge_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 10
utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 10
utokyo_xarm_pick_and_place_converted_externally_to_rlds:
image_obs_keys:
- image
- image2
- hand_image
depth_obs_keys:
- null
state_obs_keys:
- end_effector_pose
fps: 10
utokyo_xarm_bimanual_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- pose_r
fps: 10
robo_net:
image_obs_keys:
- image
- image1
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 1
robo_set:
image_obs_keys:
- image_left
- image_right
- image_wrist
depth_obs_keys:
- null
state_obs_keys:
- state
- state_velocity
fps: 5
berkeley_mvp_converted_externally_to_rlds:
image_obs_keys:
- hand_image
depth_obs_keys:
- null
state_obs_keys:
- gripper
- pose
- joint_pos
fps: 5
berkeley_rpt_converted_externally_to_rlds:
image_obs_keys:
- hand_image
depth_obs_keys:
- null
state_obs_keys:
- joint_pos
- gripper
fps: 30
kaist_nonprehensile_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
stanford_mask_vit_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
tokyo_u_lsmo_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 10
dlr_sara_pour_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
dlr_sara_grid_clamp_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
dlr_edan_shared_control_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 5
asu_table_top_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 12.5
stanford_robocook_converted_externally_to_rlds:
image_obs_keys:
- image_1
- image_2
depth_obs_keys:
- depth_1
- depth_2
state_obs_keys:
- eef_state
- gripper_state
fps: 5
imperialcollege_sawyer_wrist_cam:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
iamlab_cmu_pickup_insert_converted_externally_to_rlds:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- joint_state
- gripper_state
fps: 20
uiuc_d3field:
image_obs_keys:
- image_1
- image_2
depth_obs_keys:
- depth_1
- depth_2
state_obs_keys:
- null
fps: 1
utaustin_mutex:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 20
berkeley_fanuc_manipulation:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- joint_state
- gripper_state
fps: 10
cmu_playing_with_food:
image_obs_keys:
- image
- finger_vision_1
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 10
cmu_play_fusion:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 5
cmu_stretch:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- eef_state
- gripper_state
fps: 10
berkeley_gnm_recon:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
- position
- yaw
fps: 3
berkeley_gnm_cory_hall:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
- position
- yaw
fps: 5
berkeley_gnm_sac_son:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- state
- position
- yaw
fps: 10
droid:
image_obs_keys:
- exterior_image_1_left
- exterior_image_2_left
- wrist_image_left
depth_obs_keys:
- null
state_obs_keys:
- proprio
fps: 15
droid_100:
image_obs_keys:
- exterior_image_1_left
- exterior_image_2_left
- wrist_image_left
depth_obs_keys:
- null
state_obs_keys:
- proprio
fps: 15
fmb:
image_obs_keys:
- image_side_1
- image_side_2
- image_wrist_1
- image_wrist_2
depth_obs_keys:
- image_side_1_depth
- image_side_2_depth
- image_wrist_1_depth
- image_wrist_2_depth
state_obs_keys:
- proprio
fps: 10
dobbe:
image_obs_keys:
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- proprio
fps: 3.75
usc_cloth_sim_converted_externally_to_rlds:
image_obs_keys:
- image
depth_obs_keys:
- null
state_obs_keys:
- null
fps: 10
plex_robosuite:
image_obs_keys:
- image
- wrist_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 20
conq_hose_manipulation:
image_obs_keys:
- frontleft_fisheye_image
- frontright_fisheye_image
- hand_color_image
depth_obs_keys:
- null
state_obs_keys:
- state
fps: 30
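
If this config is saved as a standalone YAML file (filename assumed here to be openx_datasets.yaml), it can be consumed with a few lines of PyYAML; the keys follow the entries above:

import yaml

with open("openx_datasets.yaml") as f:
    configs = yaml.safe_load(f)["OPENX_DATASET_CONFIGS"]

droid = configs["droid"]
print(droid["fps"])             # 15
print(droid["image_obs_keys"])  # ['exterior_image_1_left', 'exterior_image_2_left', 'wrist_image_left']
print(droid["depth_obs_keys"])  # [None]; "- null" entries parse to a list containing None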

Some files were not shown because too many files have changed in this diff.