Commit Graph

  • 53ecec5fb2 WIP v21 to v30 user/rcadene/2025_02_19_port_openx Remi Cadene 2025-03-31 07:38:01 +00:00
  • e096754d14 Rename test Simon Alibert 2025-03-31 00:41:25 +02:00
  • 02803f545d Add test_encoding_utils Simon Alibert 2025-03-31 00:37:28 +02:00
  • 8503e8e166 Move encoding functions to encoding_utils Simon Alibert 2025-03-31 00:35:31 +02:00
  • d6007c6e7d Add calibration utilities Simon Alibert 2025-03-30 15:41:39 +02:00
  • 50963fcf13 Add scan_port utility Simon Alibert 2025-03-30 15:32:25 +02:00
  • c05e4835d0 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-28 17:20:38 +00:00
  • 808cf63221 Added support for controlling the gripper with the pygame gamepad interface. Minor modifications in gym_manipulator to quantize the gripper actions. Clamped the observations after F.resize in the ConvertToLeRobotObservation wrapper: due to a bug in F.resize, images were returned exceeding the maximum value of 1.0. Michel Aractingi 2025-03-28 18:12:03 +01:00
  • 0150139668 Refactor SACPolicy for improved type annotations and readability AdilZouitine 2025-03-28 16:46:21 +00:00
  • b3ad63cf6e Refactor SACPolicy and learner_server for improved clarity and functionality AdilZouitine 2025-03-28 16:40:45 +00:00
  • 8b02e81bb5 Refactor actor_server.py for improved structure and logging AdilZouitine 2025-03-28 14:17:16 +00:00
  • dcce446a66 Refactor learner_server.py for improved structure and clarity AdilZouitine 2025-03-28 13:28:47 +00:00
  • 82a6b69e0e Refactor imports in modeling_sac.py for improved organization AdilZouitine 2025-03-28 13:26:47 +00:00
  • 6f7024242a Refactor SACConfig properties for improved readability AdilZouitine 2025-03-28 13:26:31 +00:00
  • 3c56ad33c3 fix AdilZouitine 2025-03-28 10:51:32 +00:00
  • 49baa1ff49 Enhance logging for actor and learner servers AdilZouitine 2025-03-28 10:43:03 +00:00
  • 02b9ea9446 Added gripper control mechanism to gym_manipulator. Moved HilSerl env config to configs/env/configs.py. Fixes in actor_server, modeling_sac, and configuration_sac. Added the possibility of ignoring missing keys in env_cfg in the get_features_from_env_config function. Michel Aractingi 2025-03-28 08:21:36 +01:00
  • 79e0f6e06c Add WrapperConfig for environment wrappers and update SACConfig properties AdilZouitine 2025-03-27 17:07:06 +00:00
  • d0b7690bc0 Change HILSerlRobotEnvConfig to inherit from EnvConfig. Added support for the hil_serl classifier to be trained with train.py; run classifier training with python lerobot/scripts/train.py --policy.type=hilserl_classifier. Fixes in find_joint_limits, control_robot, end_effector_control_utils. Michel Aractingi 2025-03-27 10:23:14 +01:00
  • 052a4acfc2 [WIP] Update SAC configuration and environment settings AdilZouitine 2025-03-27 08:13:20 +00:00
  • 626e5dd35c Add wandb run id in config AdilZouitine 2025-03-27 08:11:56 +00:00
  • dd37bd412e [WIP] Non functional yet Add ManiSkill environment configuration and wrappers AdilZouitine 2025-03-26 08:15:05 +00:00
  • b7b6d8102f Change config logic in: - gym_manipulator - find_joint_limits - end_effector_utils Michel Aractingi 2025-03-25 14:24:46 +01:00
  • ee25fd8afe Add .devcontainer to .gitignore for improved development environment management AdilZouitine 2025-03-25 08:21:07 +00:00
  • 5fbbc65869 Add task field to frame_dict in ReplayBuffer and simplify save_episode calls AdilZouitine 2025-03-24 20:28:14 +00:00
  • f483931fc0 Handle new config with sac AdilZouitine 2025-03-24 20:19:28 +00:00
  • b2025b852c Handle multi optimizers AdilZouitine 2025-03-24 15:34:30 +00:00
  • 7c05755823 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-24 13:41:27 +00:00
  • 2945bbb221 Removed deprecated files and scripts Michel Aractingi 2025-03-24 14:41:01 +01:00
  • 8e6d5f504c [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-24 13:16:38 +00:00
  • 761a2dbcb3 Update tensor device assignment in ReplayBuffer class AdilZouitine 2025-03-21 14:21:31 +00:00
  • 81952b2092 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-20 12:58:43 +00:00
  • 0eef49a0f6 Initialize log_alpha with the logarithm of temperature_init in SACPolicy AdilZouitine 2025-03-20 12:55:22 +00:00
  • 2d5effeeba [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-19 18:53:26 +00:00
  • c5c921cd7c Remove unused functions and imports from modeling_sac.py AdilZouitine 2025-03-19 18:53:01 +00:00
  • 80e766c05c Add intervention rate tracking in act_with_policy function AdilZouitine 2025-03-19 18:37:50 +00:00
  • eb6787e159 - Updated the logging condition to use log_freq directly instead of accessing it through cfg.training.log_freq for improved readability and speed. AdilZouitine 2025-03-19 13:40:23 +00:00
  • 659adfc743 [PORT HIL-SERL] Optimize training loop, extract config usage (#855) Eugene Mironov 2025-03-19 20:27:32 +07:00
  • 07cc0662da Enhance training information logging in learner server AdilZouitine 2025-03-19 13:16:31 +00:00
  • a02195249f Update configuration files for improved performance and flexibility AdilZouitine 2025-03-19 09:54:46 +00:00
  • cb272294f5 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-18 14:57:57 +00:00
  • 4bb2077afa Refactor SACPolicy and learner server for improved replay buffer management AdilZouitine 2025-03-18 14:57:15 +00:00
  • b82faf7d8c Add end effector action space to hil-serl (#861) Michel Aractingi 2025-03-17 14:22:33 +01:00
  • 7960f2c3c1 Enhance SAC configuration and policy with gradient clipping and temperature management AdilZouitine 2025-03-17 10:50:28 +00:00
  • dee154a1a5 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-12 10:16:54 +00:00
  • a3ef7dc6c3 Add custom save and load methods for SAC policy AdilZouitine 2025-03-12 10:15:37 +00:00
  • 7e3e1ce173 Remove torch.no_grad decorator and optimize next action prediction in SAC policy AdilZouitine 2025-03-10 10:31:38 +00:00
  • 83b2dc1219 [Port HIL-SERL] Balanced sampler function speed up and refactor to align with train.py (#715) s1lent4gnt 2025-03-12 10:35:30 +01:00
  • db78fee9de [HIL-SERL] Migrate threading to multiprocessing (#759) Eugene Mironov 2025-03-05 17:19:31 +07:00
  • 38f5fa4523 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-04 13:38:47 +00:00
  • 76df8a31b3 Add storage device configuration for SAC policy and replay buffer AdilZouitine 2025-03-04 13:22:35 +00:00
  • 24f93c755a Add memory optimization option to ReplayBuffer AdilZouitine 2025-02-25 19:04:58 +00:00
  • 20fee3d043 Add storage device parameter to replay buffer initialization AdilZouitine 2025-02-25 15:30:39 +00:00
  • 7c366e3223 Refactor ReplayBuffer with tensor-based storage and improved sampling efficiency AdilZouitine 2025-02-25 14:26:44 +00:00
  • 2c799508d7 Update ManiSkill configuration and replay buffer to support truncation and dataset handling AdilZouitine 2025-02-24 16:53:37 +00:00
  • ff223c106d Added caching function in the learner_server and modeling_sac in order to limit the number of forward passes through the pretrained encoder when it's frozen. Added tensordict dependencies. Updated the version of torch and torchvision. Michel Aractingi 2025-02-21 10:13:43 +00:00
  • d48161da1b [Port HIL-SERL] Adjust Actor-Learner architecture & clean up dependency management for HIL-SERL (#722) Eugene Mironov 2025-02-21 16:29:00 +07:00
  • 150def839c Refactor SAC policy with performance optimizations and multi-camera support AdilZouitine 2025-02-20 17:14:27 +00:00
  • 795063aa1b - Fixed big issue in the loading of the policy parameters sent by the learner to the actor: pass only the actor to update_policy_parameters and remove strict=False. - Fixed big issue in the normalization of the actions in the forward function of the critic: remove the torch.no_grad decorator from the normalization function in normalize.py. - Fixed a performance issue to boost the optimization frequency by setting the storage device to the same device used for learning. Michel Aractingi 2025-02-19 16:22:51 +00:00
  • d9cd85d976 Re-enable parameter push thread in learner server AdilZouitine 2025-02-17 10:26:33 +00:00
  • 279e03b6c8 Improve wandb logging and custom step tracking in logger AdilZouitine 2025-02-17 10:08:49 +00:00
  • b7a0ffc3b8 Add maniskill support. Co-authored-by: Michel Aractingi <michel.aractingi@gmail.com> AdilZouitine 2025-02-14 19:53:29 +00:00
  • 291358d6a2 Fixed bug in the action scale of the intervention actions and offline dataset actions. (scale by inverse delta) Michel Aractingi 2025-02-14 15:17:16 +01:00
  • 2aca830a09 Modified crop_dataset_roi interface to automatically write the cropped parameters to a json file in the meta of the dataset Michel Aractingi 2025-02-14 12:32:45 +01:00
  • 2f34d84298 Optimized the replay buffer from the memory side to store data on cpu instead of a gpu device and send the batches to the gpu. Michel Aractingi 2025-02-13 18:03:57 +01:00
  • 61b0e9539f nit Michel Aractingi 2025-02-13 17:12:57 +01:00
  • 23c6b891a3 removed uncomment in actor server Michel Aractingi 2025-02-13 16:53:33 +01:00
  • 0847b2119b Changed the init_final value to center the starting mean and std of the policy Michel Aractingi 2025-02-13 16:42:43 +01:00
  • 24fb8a7f47 Changed bounds for a new so100 robot Michel Aractingi 2025-02-13 15:43:30 +01:00
  • eb7e28d9d9 Hardcoded some normalization parameters (TODO: refactor). Added masking of actions at the level of the intervention actions and the offline dataset. Michel Aractingi 2025-02-13 14:27:14 +01:00
  • a0e0a9a9b1 Fix log_alpha in modeling_sac: change to nn.Parameter. Added pretrained vision model in policy. Michel Aractingi 2025-02-13 11:26:24 +01:00
  • 57e09828ce Added logging for interventions to monitor the rate of interventions through time. Added an 's' keyboard command to force success in case the reward classifier fails. Michel Aractingi 2025-02-13 11:04:49 +01:00
  • 9c14830cd9 Added possibility to record and replay delta actions during teleoperation rather than absolute actions Michel Aractingi 2025-02-12 19:25:41 +01:00
  • 4057904238 [PORT-Hilserl] classifier fixes (#695) Yoel 2025-02-11 11:39:17 +01:00
  • 3c58867738 [Port HIL-SERL] Add resnet-10 as default encoder for HIL-SERL (#696) Eugene Mironov 2025-02-11 17:37:00 +07:00
  • c623824139 - Added JointMaskingActionSpace wrapper in gym_manipulator in order to select which joints will be controlled; for example, we can disable the gripper actions for some tasks. - Added NaN detection mechanisms in the actor, learner, and gym_manipulator for the case where we encounter NaNs in the loop. - Changed the non_blocking in the .to(device) calls to apply only for cuda, because they were causing NaNs when running the policy on mps. - Added some joint clipping and limits in the env, robot, and policy configs. TODO: clean this part and keep the limits in one config file only. Michel Aractingi 2025-02-11 11:34:46 +01:00
  • 3cb43f801c Added sac_real config file in the policy configs dir. Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com> Michel Aractingi 2025-02-10 16:08:13 +01:00
  • f4f5b26a21 Several fixes to move the actor_server and learner_server code from the maniskill environment to the real robot environment. Michel Aractingi 2025-02-10 16:03:39 +01:00
  • 434d1e0614 [HIL-SERL port] Add Reward classifier benchmark tracking to chose best visual encoder (#688) Eugene Mironov 2025-02-07 00:39:51 +07:00
  • 729b4ed697 - Added lerobot/scripts/server/gym_manipulator.py that contains all the necessary wrappers to run a gym-style env around the real robot. - Added lerobot/scripts/server/find_joint_limits.py to test the min and max angles of the motion you wish the robot to explore during RL training. - Added logic in manipulator.py to limit the maximum possible joint angles to allow motion within a predefined joint position range. The limits are specified in the yaml config for each robot; check out so100.yaml. Michel Aractingi 2025-02-06 16:29:37 +01:00
  • 163bcbcad4 Fixed bug in crop_dataset_roi.py; added missing buffer.pt in server dir Michel Aractingi 2025-02-05 18:22:50 +00:00
  • 875662f16b Added additional wrappers for the environment: action repeat, keyboard interface, reset wrapper. Tested the reset mechanism, keyboard interface, and the convert wrapper on the robots. Michel Aractingi 2025-02-04 17:41:14 +00:00
  • 87c7eca582 Added crop_dataset_roi.py that allows you to load a lerobotdataset -> crop its images -> create a new lerobot dataset with the cropped and resized images. Michel Aractingi 2025-02-03 17:48:35 +00:00
  • 179ee3b1f6 - Added base gym env class for the real robot environment. - Added several wrappers around the base gym env robot class, including: time limit, reward classifier, crop images, preprocess observations. - Added an interactive script crop_roi.py where the user can interactively select the ROI in the observation images and return the correct crop values that will improve the policy and reward classifier performance. Michel Aractingi 2025-02-03 14:52:45 +00:00
  • b29401e4e2 - Refactored observation encoder in modeling_sac.py. - Added torch.compile to the actor and learner servers. - Organized imports in train_sac.py. - Optimized the parameter push by not sending the frozen pre-trained encoder. Michel Aractingi 2025-01-31 16:45:52 +00:00
  • faab32fe14 [Port HIL-SERL] Add HF vision encoder option in SAC (#651) Yoel 2025-01-31 09:42:13 +01:00
  • c620b0878f Cleaned learner_server.py. Added several block functions to improve readability. Michel Aractingi 2025-01-31 08:33:33 +00:00
  • 2023289ce8 Added support for checkpointing the policy. We can save and load the policy state dict, optimizer states, optimization step, and interaction step. Added functions for converting the replay buffer to and from LeRobotDataset: when we want to save the replay buffer, we first convert it to LeRobotDataset format and save it locally, and vice versa. Michel Aractingi 2025-01-30 17:39:41 +00:00
  • 9afd093030 Removed unnecessary time.sleep in the streaming server on the learner side Michel Aractingi 2025-01-29 16:31:38 +00:00
  • f3c4d6e1ec Added missing config files env/maniskill_example.yaml and policy/sac_maniskill.yaml that are necessary to run the lerobot implementation of sac with the maniskill baselines. Michel Aractingi 2025-01-29 16:07:32 +00:00
  • 18207d995e - Added additional logging information in wandb around the timings of the policy loop and optimization loop. - Optimized critic design that improves the performance of the learner loop by a factor of 2 - Cleaned the code and fixed style issues Michel Aractingi 2025-01-29 15:50:46 +00:00
  • a0a81c0c12 FREEDOM, added back the optimization loop code in learner_server.py. Ran experiment with the pushcube env from maniskill; the learning seems to work. Michel Aractingi 2025-01-28 17:25:49 +00:00
  • ef64ba91d9 Added server directory in lerobot/scripts that contains scripts and the protobuf message types to split training into two processes, acting and learning. The actor rolls out the policy and collects interaction data while the learner receives the data, trains the policy, and sends the updated parameters to the actor. The two scripts are run simultaneously. Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com> Michel Aractingi 2025-01-28 15:52:03 +00:00
  • 83dc00683c Stable version of rlpd + drq AdilZouitine 2025-01-22 09:00:16 +00:00
  • 5b92465e38 Add type annotations and restructure SACConfig class fields AdilZouitine 2025-01-21 09:51:12 +00:00
  • 4b78ab2789 Change SAC policy implementation with configuration and modeling classes Adil Zouitine 2025-01-17 09:39:04 +01:00
  • bd8c768f62 SAC works Adil Zouitine 2025-01-14 11:34:52 +01:00
  • 1e9bafc852 [WIP] correct sac implementation Adil Zouitine 2025-01-13 17:54:11 +01:00
  • 921ed960fb Add rlpd tricks Adil Zouitine 2025-01-15 15:49:24 +01:00
  • 67b64e445b SAC works Adil Zouitine 2025-01-14 11:34:52 +01:00
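
Several of the commits above (e.g. 0eef49a0f6 "Initialize log_alpha with the logarithm of temperature_init" and a0e0a9a9b1 "change to nn.Parameter") revolve around one recurring SAC detail: the entropy temperature alpha is learned in log space. A minimal sketch of that pattern, using plain Python with a hypothetical `TemperatureParameter` class standing in for the actual SACPolicy field:

```python
import math


class TemperatureParameter:
    """Learnable SAC entropy temperature stored in log space.

    Sketch of the pattern touched by commits 0eef49a0f6 / a0e0a9a9b1:
    the trainable value is log_alpha (so gradient steps can never push
    alpha below zero), and it is initialized from log(temperature_init)
    so that alpha starts exactly at the configured temperature.
    """

    def __init__(self, temperature_init: float = 0.1):
        # Store log(alpha), not alpha itself; in the real policy this
        # would be an nn.Parameter so the optimizer can update it.
        self.log_alpha = math.log(temperature_init)

    @property
    def alpha(self) -> float:
        # Recover the positive temperature used in the actor/critic losses.
        return math.exp(self.log_alpha)
```

The key point is the initialization: storing `temperature_init` directly in `log_alpha` (instead of its logarithm) would silently start training at `exp(temperature_init)`, a different temperature than configured.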