Add Aloha env and ACT policy

WIP Aloha env tests pass

Rendering works (fps looks fast though? TODO: action bounds are too wide at [-1, 1])
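
A hedged sketch of the bounding fix the TODO points at: rescale a policy action from the normalized [-1, 1] range onto per-joint limits. The limit values and the unnormalize_action name below are illustrative placeholders, not the real Aloha bounds.

    import numpy as np

    # Placeholder joint limits (radians); the actual Aloha limits differ.
    JOINT_LOW = np.array([-3.14, -1.85, -1.76, -1.57, -3.14, -2.0])
    JOINT_HIGH = np.array([3.14, 1.26, 1.60, 1.57, 3.14, 2.0])

    def unnormalize_action(action: np.ndarray) -> np.ndarray:
        """Map an action in [-1, 1] to the joint-limit range, elementwise."""
        return JOINT_LOW + (action + 1.0) / 2.0 * (JOINT_HIGH - JOINT_LOW)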

Update README

Copy-paste from the ACT repo

Remove download.py, add a WIP for Simxarm

Add act yaml (TODO: try train.py)

Training can run (TODO: eval)

Add tasks without end_effector that are compatible with the dataset; eval can run (TODO: training and pretrained model)

Add AbstractEnv, refactor AlohaEnv, add rendering_hook in env, minor modifications (TODO: refactor Pusht and Simxarm)
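
The method names register_rendering_hook, call_rendering_hooks, and reset_rendering_hooks all appear elsewhere in this commit; here is a minimal sketch of how AbstractEnv could wire them together, assuming hooks are plain callables stored in a list (the body is an illustration, not the repo's exact code):

    class AbstractEnv:
        def __init__(self):
            self._rendering_hooks = []

        def register_rendering_hook(self, hook):
            # hook is a callable taking the env, e.g. to grab a rendered frame
            self._rendering_hooks.append(hook)

        def call_rendering_hooks(self):
            # concrete envs call this once per simulation step
            for hook in self._rendering_hooks:
                hook(self)

        def reset_rendering_hooks(self):
            self._rendering_hooks = []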

poetry lock

fix bug in compute_stats for action normalization

fix more bugs in normalization
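
For these normalization fixes, a sketch of the kind of per-dimension statistics compute_stats would produce over the dataset's actions; the function name compute_action_stats and the dict layout are assumptions, not the repo's exact API:

    import torch

    def compute_action_stats(actions: torch.Tensor) -> dict:
        """actions: (num_frames, action_dim) tensor of dataset actions."""
        return {
            "mean": actions.mean(dim=0),
            "std": actions.std(dim=0),
            "min": actions.min(dim=0).values,
            "max": actions.max(dim=0).values,
        }

A policy can then normalize inputs with (action - mean) / std and un-normalize its outputs with the inverse.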

fix training

fix import

PushtEnv inherits from AbstractEnv, improve factory normalization

Add _make_env to AbstractEnv

Add call_rendering_hooks to pusht env

SimxarmEnv inherits from AbstractEnv (NOT TESTED)

Add aloha tests artifacts + update pusht stats

fix image normalization: before, env was in [0, 1] but dataset in [0, 255]; now both are in [0, 255]
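
With env and dataset both emitting uint8 images in [0, 255], the scaling can live in one place. An illustrative sketch, assuming the normalization happens in the policy's preprocessing (the function name is hypothetical):

    import torch

    def preprocess_image(image: torch.Tensor) -> torch.Tensor:
        # uint8 in [0, 255] -> float32 in [0, 1], done once, on the policy side
        return image.to(torch.float32) / 255.0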

Small fix on simxarm

Add next to obs

Add top camera to Aloha env (TODO: make it compatible with set of cameras)

Author: Remi Cadene
Date: 2024-03-08 09:47:39 +00:00
Committed by: Cadene
Parent: 060bac7672
Commit: 9d002032d1
116 changed files with 3658 additions and 301 deletions

@@ -38,27 +38,18 @@ def eval_policy(
     successes = []
     threads = []
     for i in tqdm.tqdm(range(num_episodes)):
         tensordict = env.reset()
         ep_frames = []
         if save_video or (return_first_video and i == 0):
-            def rendering_callback(env, td=None):
+            def render_frame(env):
                 ep_frames.append(env.render())  # noqa: B023
-            # render first frame before rollout
-            rendering_callback(env)
-        else:
-            rendering_callback = None
+            env.register_rendering_hook(render_frame)
         with torch.inference_mode():
             rollout = env.rollout(
                 max_steps=max_steps,
                 policy=policy,
-                callback=rendering_callback,
                 auto_reset=False,
                 tensordict=tensordict,
                 auto_cast_to_device=True,
             )
-        # print(", ".join([f"{x:.3f}" for x in rollout["next", "reward"][:,0].tolist()]))
@@ -85,6 +76,8 @@ def eval_policy(
         if return_first_video and i == 0:
             first_video = stacked_frames.transpose(0, 3, 1, 2)
+    env.reset_rendering_hooks()
     for thread in threads:
         thread.join()
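
As the hunks above show, the registered hook replaces the per-step callback argument of env.rollout: render_frame is attached to the env before the rollout and cleared afterward with reset_rendering_hooks. Presumably this lets the env decide when to render (e.g. via call_rendering_hooks on every simulation step) instead of capturing exactly one frame per policy step.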