Commit Graph

  • eb44a06a9b [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-28 17:20:38 +00:00
  • 8eb3c1510c Added support for controlling the gripper with the pygame gamepad interface. Minor modifications in gym_manipulator to quantize the gripper actions. Clamped the observations after F.resize in the ConvertToLeRobotObservation wrapper: due to a bug in F.resize, images were returned exceeding the maximum value of 1.0. Michel Aractingi 2025-03-28 18:12:03 +01:00
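The clamping fix described in the commit above can be sketched as follows. This is an illustrative guess, not the repository's actual ConvertToLeRobotObservation code: the shapes and sizes are made up, and torch's antialiased `interpolate` stands in for torchvision's `F.resize`, which shows the same overshoot behavior — antialiased interpolation can produce float pixel values slightly outside [0, 1]:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the wrapper's resize step (the commit uses
# torchvision's F.resize; torch's antialiased interpolate behaves similarly).
img = torch.rand(1, 3, 240, 320)  # float image batch in [0.0, 1.0]
resized = F.interpolate(img, size=(128, 128), mode="bilinear", antialias=True)
resized = resized.clamp(0.0, 1.0)  # guard against interpolation overshoot
```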
  • 4d5ecb082e Refactor SACPolicy for improved type annotations and readability AdilZouitine 2025-03-28 16:46:21 +00:00
  • 6e687e2910 Refactor SACPolicy and learner_server for improved clarity and functionality AdilZouitine 2025-03-28 16:40:45 +00:00
  • eb710647bf Refactor actor_server.py for improved structure and logging AdilZouitine 2025-03-28 14:17:16 +00:00
  • 176557d770 Refactor learner_server.py for improved structure and clarity AdilZouitine 2025-03-28 13:28:47 +00:00
  • 3beab33fac Refactor imports in modeling_sac.py for improved organization AdilZouitine 2025-03-28 13:26:47 +00:00
  • c0ba4b4954 Refactor SACConfig properties for improved readability AdilZouitine 2025-03-28 13:26:31 +00:00
  • 8fb373aeb2 fix AdilZouitine 2025-03-28 10:51:32 +00:00
  • 5a0ee06651 Enhance logging for actor and learner servers AdilZouitine 2025-03-28 10:43:03 +00:00
  • 05a237ce10 Added gripper control mechanism to gym_manipulator. Moved HilSerl env config to configs/env/configs.py. Fixes in actor_server, modeling_sac, and configuration_sac. Added the possibility of ignoring missing keys in env_cfg in the get_features_from_env_config function. Michel Aractingi 2025-03-28 08:21:36 +01:00
  • 88cc2b8fc8 Add WrapperConfig for environment wrappers and update SACConfig properties AdilZouitine 2025-03-27 17:07:06 +00:00
  • b69132c79d Change HILSerlRobotEnvConfig to inherit from EnvConfig. Added support for the hil_serl classifier to be trained with train.py; run classifier training with python lerobot/scripts/train.py --policy.type=hilserl_classifier. Fixes in find_joint_limits, control_robot, and end_effector_control_utils. Michel Aractingi 2025-03-27 10:23:14 +01:00
  • db897a1619 [WIP] Update SAC configuration and environment settings AdilZouitine 2025-03-27 08:13:20 +00:00
  • 0b5b62c8fb Add wandb run id in config AdilZouitine 2025-03-27 08:11:56 +00:00
  • 056f79d358 [WIP] Non functional yet Add ManiSkill environment configuration and wrappers AdilZouitine 2025-03-26 08:15:05 +00:00
  • 114ec644d0 Change config logic in: - gym_manipulator - find_joint_limits - end_effector_utils Michel Aractingi 2025-03-25 14:24:46 +01:00
  • 26ee8b6ae5 Add .devcontainer to .gitignore for improved development environment management AdilZouitine 2025-03-25 08:21:07 +00:00
  • 38e8864284 Add task field to frame_dict in ReplayBuffer and simplify save_episode calls AdilZouitine 2025-03-24 20:28:14 +00:00
  • 80d566eb56 Handle new config with sac AdilZouitine 2025-03-24 20:19:28 +00:00
  • bb5a95889f Handle multi optimizers AdilZouitine 2025-03-24 15:34:30 +00:00
  • 0ea27704f6 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-24 13:41:27 +00:00
  • 2abbd60a0d Removed deprecated files and scripts Michel Aractingi 2025-03-24 14:41:01 +01:00
  • 1c8daf11fd [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-24 13:16:38 +00:00
  • cdcf346061 Update tensor device assignment in ReplayBuffer class AdilZouitine 2025-03-21 14:21:31 +00:00
  • 42f95e827d [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-20 12:58:43 +00:00
  • 618ed00d45 Initialize log_alpha with the logarithm of temperature_init in SACPolicy AdilZouitine 2025-03-20 12:55:22 +00:00
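The initialization change in the commit above matters because the temperature is parameterized in log space: seeding log_alpha with the logarithm of temperature_init makes exp(log_alpha) start exactly at the intended temperature. A minimal sketch, with temperature_init = 0.1 as an assumed example value (not necessarily the repo's default):

```python
import math
import torch
import torch.nn as nn

temperature_init = 0.1  # assumed example value, not the repo default
# Parameterize the entropy temperature in log space so alpha stays positive.
log_alpha = nn.Parameter(torch.tensor(math.log(temperature_init)))
alpha = log_alpha.exp()  # starts exactly at temperature_init
```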
  • 50d8db481e [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-19 18:53:26 +00:00
  • e4a5971ffd Remove unused functions and imports from modeling_sac.py AdilZouitine 2025-03-19 18:53:01 +00:00
  • 36f9ccd851 Add intervention rate tracking in act_with_policy function AdilZouitine 2025-03-19 18:37:50 +00:00
  • 787aee0e60 - Updated the logging condition to use log_freq directly instead of accessing it through cfg.training.log_freq for improved readability and speed. AdilZouitine 2025-03-19 13:40:23 +00:00
  • 0341a38fdd [PORT HIL-SERL] Optimize training loop, extract config usage (#855) Eugene Mironov 2025-03-19 20:27:32 +07:00
  • ffbed4a141 Enhance training information logging in learner server AdilZouitine 2025-03-19 13:16:31 +00:00
  • 03fe0f054b Update configuration files for improved performance and flexibility AdilZouitine 2025-03-19 09:54:46 +00:00
  • fd74c194b6 [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-18 14:57:57 +00:00
  • 0959694bab Refactor SACPolicy and learner server for improved replay buffer management AdilZouitine 2025-03-18 14:57:15 +00:00
  • 7b01e16439 Add end effector action space to hil-serl (#861) Michel Aractingi 2025-03-17 14:22:33 +01:00
  • 66816fd871 Enhance SAC configuration and policy with gradient clipping and temperature management AdilZouitine 2025-03-17 10:50:28 +00:00
  • 599326508f [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-12 10:16:54 +00:00
  • 2f04d0d2b9 Add custom save and load methods for SAC policy AdilZouitine 2025-03-12 10:15:37 +00:00
  • e002c5ec56 Remove torch.no_grad decorator and optimize next action prediction in SAC policy AdilZouitine 2025-03-10 10:31:38 +00:00
  • 3dfb37e976 [Port HIL-SERL] Balanced sampler function speed up and refactor to align with train.py (#715) s1lent4gnt 2025-03-12 10:35:30 +01:00
  • b6a2200983 [HIL-SERL] Migrate threading to multiprocessing (#759) Eugene Mironov 2025-03-05 17:19:31 +07:00
  • 85fe8a3f4e [pre-commit.ci] auto fixes from pre-commit.com hooks pre-commit-ci[bot] 2025-03-04 13:38:47 +00:00
  • bb69cb3c8c Add storage device configuration for SAC policy and replay buffer AdilZouitine 2025-03-04 13:22:35 +00:00
  • ae51c19b3c Add memory optimization option to ReplayBuffer AdilZouitine 2025-02-25 19:04:58 +00:00
  • 9ea79f8a76 Add storage device parameter to replay buffer initialization AdilZouitine 2025-02-25 15:30:39 +00:00
  • 1d4ec50a58 Refactor ReplayBuffer with tensor-based storage and improved sampling efficiency AdilZouitine 2025-02-25 14:26:44 +00:00
  • 4c73891575 Update ManiSkill configuration and replay buffer to support truncation and dataset handling AdilZouitine 2025-02-24 16:53:37 +00:00
  • d3b84ecd6f Added a caching function in the learner_server and modeling_sac in order to limit the number of forward passes through the pretrained encoder when it is frozen. Added tensordict dependencies. Updated the versions of torch and torchvision. Michel Aractingi 2025-02-21 10:13:43 +00:00
  • e1d55c7a44 [Port HIL-SERL] Adjust Actor-Learner architecture & clean up dependency management for HIL-SERL (#722) Eugene Mironov 2025-02-21 16:29:00 +07:00
  • 85242cac67 Refactor SAC policy with performance optimizations and multi-camera support AdilZouitine 2025-02-20 17:14:27 +00:00
  • 0d88a5ee09 - Fixed a major issue in the loading of the policy parameters sent by the learner to the actor -- pass only the actor to update_policy_parameters and remove strict=False - Fixed a major issue in the normalization of the actions in the forward function of the critic -- removed the torch.no_grad decorator from the normalization function in normalize.py - Fixed a performance issue to boost the optimization frequency by setting the storage device to match the learning device. Michel Aractingi 2025-02-19 16:22:51 +00:00
  • 62e237bdee Re-enable parameter push thread in learner server AdilZouitine 2025-02-17 10:26:33 +00:00
  • c85f88fb62 Improve wandb logging and custom step tracking in logger AdilZouitine 2025-02-17 10:08:49 +00:00
  • a90f4872f2 Add maniskill support. Co-authored-by: Michel Aractingi <michel.aractingi@gmail.com> AdilZouitine 2025-02-14 19:53:29 +00:00
  • a16ea283f5 Fixed bug in the action scale of the intervention actions and offline dataset actions. (scale by inverse delta) Michel Aractingi 2025-02-14 15:17:16 +01:00
  • 8209a6dfb7 Modified crop_dataset_roi interface to automatically write the cropped parameters to a json file in the meta of the dataset Michel Aractingi 2025-02-14 12:32:45 +01:00
  • b5fbeb7401 Optimized the replay buffer from the memory side to store data on cpu instead of a gpu device and send the batches to the gpu. Michel Aractingi 2025-02-13 18:03:57 +01:00
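The memory optimization in the commit above can be sketched with a toy buffer (the class, names, and shapes here are hypothetical, not LeRobot's ReplayBuffer API): storage lives on the CPU, and only sampled batches are moved to the training device, trading a per-batch transfer for GPU memory.

```python
import torch

class CpuStorageBuffer:
    """Illustrative sketch only: keep transition storage on a CPU device
    and move only sampled batches to the training device."""

    def __init__(self, capacity: int, obs_dim: int, storage_device: str = "cpu"):
        self.obs = torch.empty(capacity, obs_dim, device=storage_device)
        self.capacity = capacity
        self.size = 0
        self.ptr = 0

    def add(self, obs: torch.Tensor) -> None:
        self.obs[self.ptr] = obs.to(self.obs.device)  # stored on CPU
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size: int, device: str) -> torch.Tensor:
        idx = torch.randint(0, self.size, (batch_size,))
        return self.obs[idx].to(device)  # only the batch lands on the GPU

buf = CpuStorageBuffer(capacity=8, obs_dim=4)
for _ in range(5):
    buf.add(torch.rand(4))
batch = buf.sample(batch_size=2, device="cpu")  # would be "cuda" in training
```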
  • 2ac25b02e2 nit Michel Aractingi 2025-02-13 17:12:57 +01:00
  • 39fe4b1301 removed uncomment in actor server Michel Aractingi 2025-02-13 16:53:33 +01:00
  • 140e30e386 Changed the init_final value to center the starting mean and std of the policy Michel Aractingi 2025-02-13 16:42:43 +01:00
  • ddcc0415e4 Changed bounds for a new so100 robot Michel Aractingi 2025-02-13 15:43:30 +01:00
  • 5195f40fd3 Hardcoded some normalization parameters (TODO: refactor). Added masking of actions on the level of the intervention actions and the offline dataset. Michel Aractingi 2025-02-13 14:27:14 +01:00
  • 98c6557869 Fix log_alpha in modeling_sac: changed it to an nn.Parameter. Added a pretrained vision model in the policy. Michel Aractingi 2025-02-13 11:26:24 +01:00
  • ee820859d3 Added logging for interventions to monitor the rate of interventions over time. Added an 's' keyboard command to force success in case the reward classifier fails. Michel Aractingi 2025-02-13 11:04:49 +01:00
  • 5d6879d93a Added the possibility to record and replay delta actions during teleoperation rather than absolute actions. Michel Aractingi 2025-02-12 19:25:41 +01:00
  • fae47d58d3 [PORT-Hilserl] classifier fixes (#695) Yoel 2025-02-11 11:39:17 +01:00
  • 3a07301365 [Port HIL-SERL] Add resnet-10 as default encoder for HIL-SERL (#696) Eugene Mironov 2025-02-11 17:37:00 +07:00
  • f1af97dc9c - Added a JointMaskingActionSpace wrapper in gym_manipulator to select which joints will be controlled; for example, we can disable the gripper actions for some tasks. - Added NaN detection mechanisms in the actor, learner, and gym_manipulator for the case where we encounter NaNs in the loop. - Changed non_blocking in the .to(device) calls to apply only on CUDA, because it was causing NaNs when running the policy on MPS. - Added some joint clipping and limits in the env, robot, and policy configs; TODO: clean this part and keep the limits in one config file only. Michel Aractingi 2025-02-11 11:34:46 +01:00
  • f2266101df Added sac_real config file in the policy configs dir. Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com> Michel Aractingi 2025-02-10 16:08:13 +01:00
  • 9784d8a47f Several fixes to move the actor_server and learner_server code from the maniskill environment to the real robot environment. Michel Aractingi 2025-02-10 16:03:39 +01:00
  • af769abd8d [HIL-SERL port] Add reward classifier benchmark tracking to choose the best visual encoder (#688) Eugene Mironov 2025-02-07 00:39:51 +07:00
  • 12c13e320e - Added lerobot/scripts/server/gym_manipulator.py, which contains all the necessary wrappers to run a gym-style env around the real robot. - Added lerobot/scripts/server/find_joint_limits.py to test the min and max angles of the motion you wish the robot to explore during RL training. - Added logic in manipulator.py to limit the maximum possible joint angles to allow motion within a predefined joint position range; the limits are specified in the yaml config for each robot (see so100.yaml). Michel Aractingi 2025-02-06 16:29:37 +01:00
  • 273fa2e6e1 Fixed a bug in crop_dataset_roi.py. Added the missing buffer.pt in the server dir. Michel Aractingi 2025-02-05 18:22:50 +00:00
  • d143043037 Added additional wrappers for the environment: action repeat, keyboard interface, reset wrapper. Tested the reset mechanism, the keyboard interface, and the convert wrapper on the robots. Michel Aractingi 2025-02-04 17:41:14 +00:00
  • ca45c34ad5 Added crop_dataset_roi.py that allows you to load a lerobotdataset -> crop its images -> create a new lerobot dataset with the cropped and resized images. Michel Aractingi 2025-02-03 17:48:35 +00:00
  • b1679050de - Added a base gym env class for the real robot environment. - Added several wrappers around the base gym env robot class, including: time limit, reward classifier, crop images, preprocess observations. - Added an interactive script, crop_roi.py, where the user can interactively select the ROI in the observation images and return the correct crop values, which improves the policy and reward classifier performance. Michel Aractingi 2025-02-03 14:52:45 +00:00
  • d2c41b35db - Refactor observation encoder in modeling_sac.py - added torch.compile to the actor and learner servers. - organized imports in train_sac.py - optimized the parameters push by not sending the frozen pre-trained encoder. Michel Aractingi 2025-01-31 16:45:52 +00:00
  • bc7b6d3daf [Port HIL-SERL] Add HF vision encoder option in SAC (#651) Yoel 2025-01-31 09:42:13 +01:00
  • 2516101cba Cleaned learner_server.py. Added several block functions to improve readability. Michel Aractingi 2025-01-31 08:33:33 +00:00
  • aebea08a99 Added support for checkpointing the policy: we can save and load the policy state dict, optimizer states, optimization step, and interaction step. Added functions for converting the replay buffer to and from LeRobotDataset: when we want to save the replay buffer, we first convert it to the LeRobotDataset format and save it locally, and vice versa. Michel Aractingi 2025-01-30 17:39:41 +00:00
  • 03616db82c Removed unnecessary time.sleep in the streaming server on the learner side Michel Aractingi 2025-01-29 16:31:38 +00:00
  • 93c4fc198f Added missing config files env/maniskill_example.yaml and policy/sac_maniskill.yaml that are necessary to run the lerobot implementation of sac with the maniskill baselines. Michel Aractingi 2025-01-29 16:07:32 +00:00
  • 8cd44ae163 - Added additional logging information in wandb around the timings of the policy loop and optimization loop. - Optimized critic design that improves the performance of the learner loop by a factor of 2 - Cleaned the code and fixed style issues Michel Aractingi 2025-01-29 15:50:46 +00:00
  • 2ae657f568 FREEDOM, added back the optimization loop code in learner_server.py. Ran an experiment with the pushcube env from maniskill; the learning seems to work. Michel Aractingi 2025-01-28 17:25:49 +00:00
  • 508f5d1407 Added a server directory in lerobot/scripts that contains the scripts and the protobuf message types to split training into two processes, acting and learning. The actor rolls out the policy and collects interaction data, while the learner receives the data, trains the policy, and sends the updated parameters to the actor. The two scripts are run simultaneously. Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com> Michel Aractingi 2025-01-28 15:52:03 +00:00
  • c8b1132846 Stable version of rlpd + drq AdilZouitine 2025-01-22 09:00:16 +00:00
  • ef777993cd Add type annotations and restructure SACConfig class fields AdilZouitine 2025-01-21 09:51:12 +00:00
  • 760d60ad4b Change SAC policy implementation with configuration and modeling classes Adil Zouitine 2025-01-17 09:39:04 +01:00
  • 875c0271b7 SAC works Adil Zouitine 2025-01-14 11:34:52 +01:00
  • 57344bfde5 [WIP] correct sac implementation Adil Zouitine 2025-01-13 17:54:11 +01:00
  • 46827fb002 Add rlpd tricks Adil Zouitine 2025-01-15 15:49:24 +01:00
  • 2fd78879f6 SAC works Adil Zouitine 2025-01-14 11:34:52 +01:00
  • e8449e9630 remove breakpoint Adil Zouitine 2025-01-13 17:58:00 +01:00
  • a0e2be8b92 [WIP] correct sac implementation Adil Zouitine 2025-01-13 17:54:11 +01:00
  • 181727c0fe Extend reward classifier for multiple camera views (#626) Michel Aractingi 2025-01-13 13:57:49 +01:00
  • d1d6ffd23c [Port HIL_SERL] Final fixes for the Reward Classifier (#598) Eugene Mironov 2025-01-06 17:34:00 +07:00
  • e5801f467f added temporary fix for missing task_index key in online environment Michel Aractingi 2024-12-30 13:47:28 +00:00
  • c6ca9523de split encoder for critic and actor Michel Aractingi 2024-12-29 23:59:39 +00:00