2026-02-19 19:57:52.488339: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
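The oneDNN notice above names its own off switch, `TF_ENABLE_ONEDNN_OPTS`. To get bit-identical numerics across runs, the variable must be exported before TensorFlow is imported; the script name below is a placeholder, not the actual entry point of this run.

```shell
# Disable oneDNN custom ops (the run above had them enabled),
# avoiding the floating-point round-off variation the log warns about.
export TF_ENABLE_ONEDNN_OPTS=0
# python your_inference_script.py   # placeholder: relaunch the run afterwards
```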
2026-02-19 19:57:52.536176: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 19:57:52.536222: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 19:57:52.537285: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 19:57:52.544051: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 19:57:53.469912: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
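"Global seed set to 123" is the wording PyTorch Lightning prints when every random source is seeded at once. A dependency-free sketch of what global seeding buys you (only the stdlib RNG is touched here; the real run also seeds NumPy and torch):

```python
import random

def seed_everything(seed: int) -> None:
    # Sketch of global seeding: fix the RNG state so repeated
    # runs draw identical random numbers.
    random.seed(seed)

seed_everything(123)
a = [random.random() for _ in range(3)]
seed_everything(123)
b = [random.random() for _ in range(3)]
assert a == b  # identical draws after re-seeding
```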
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
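The three FP16 messages mean the listed submodules were cast from 32-bit to 16-bit floats, halving weight memory at some cost in precision. A dependency-free way to see the rounding involved, using the `struct` module's IEEE 754 half-precision format code `'e'`:

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through half precision ('e' = 2 bytes).
    return struct.unpack('e', struct.pack('e', x))[0]

# 1.0 is exactly representable; 0.1 is not, and picks up an error
# on the order of 1e-5 (half precision keeps ~3 decimal digits).
print(to_fp16(1.0), to_fp16(0.1))
```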
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
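Each dataset above goes through the same "data stats loaded" then "normalizer initiated" pair, i.e. per-dataset statistics feed a normalizer. A minimal sketch of that pattern (the class name and the mean/std choice are assumptions for illustration, not this project's actual code):

```python
class Normalizer:
    """Standardize values using dataset statistics computed once up front."""

    def __init__(self, samples):
        n = len(samples)
        self.mean = sum(samples) / n
        var = sum((x - self.mean) ** 2 for x in samples) / n
        self.std = var ** 0.5 or 1.0  # guard against zero spread

    def normalize(self, x):
        return (x - self.mean) / self.std

    def unnormalize(self, z):
        return z * self.std + self.mean

norm = Normalizer([1.0, 2.0, 3.0])
assert abs(norm.unnormalize(norm.normalize(2.5)) - 2.5) < 1e-9
```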
✓ KV fused: 66 attention layers
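"KV fused: 66 attention layers" most plausibly means the key and value projections of each attention layer were merged into a single matrix multiply; that reading is an assumption, the log does not spell it out. A toy, dependency-free illustration of why such a fusion is legal (plain lists stand in for the real FP16 weight tensors):

```python
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W_k = [[1.0, 2.0], [3.0, 4.0]]   # toy key projection
W_v = [[5.0, 6.0], [7.0, 8.0]]   # toy value projection
x = [0.5, -1.0]

# Separate projections: two matrix-vector products.
k, v = matvec(W_k, x), matvec(W_v, x)

# Fused: stack the weights, run one product, split the output.
W_kv = W_k + W_v
kv = matvec(W_kv, x)
assert kv[:2] == k and kv[2:] == v  # same results, one kernel launch
```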
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
0%| | 0/12 [00:00<?, ?it/s]
  8%|▊         | 1/12 [00:24<04:24, 24.06s/it]
 17%|█▋        | 2/12 [00:47<03:55, 23.56s/it]
 25%|██▌       | 3/12 [01:10<03:31, 23.46s/it]
 33%|███▎      | 4/12 [01:33<03:07, 23.43s/it]
 42%|████▏     | 5/12 [01:57<02:43, 23.41s/it]
 50%|█████     | 6/12 [02:20<02:20, 23.42s/it]
 58%|█████▊    | 7/12 [02:44<01:56, 23.39s/it]
 67%|██████▋   | 8/12 [03:07<01:33, 23.36s/it]
 75%|███████▌  | 9/12 [03:30<01:09, 23.33s/it]
 83%|████████▎ | 10/12 [03:53<00:46, 23.31s/it]
 92%|█████████▏| 11/12 [04:17<00:23, 23.30s/it]
100%|██████████| 12/12 [04:40<00:00, 23.29s/it]
100%|██████████| 12/12 [04:40<00:00, 23.37s/it]
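The final tqdm line reports 12 iterations averaging 23.37 s/it, which should account for most of the ~4 m 59 s wall time reported by `time` at the end of the log. A quick sanity check of that arithmetic:

```python
per_it = 23.37          # average seconds/iteration from the final tqdm line
loop = 12 * per_it      # ~280 s, i.e. about 4 min 40 s in the progress bar
real = 4 * 60 + 58.541  # `real` wall time from the time(1) summary below
overhead = real - loop  # model loading, data prep, per-step bookkeeping
assert 275 < loop < 285
assert 0 < overhead < 30
```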
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 9: generating actions ...
>>> Step 9: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 10: generating actions ...
>>> Step 10: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 11: generating actions ...
>>> Step 11: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
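Steps 0 through 11 alternate "generating actions" with "interacting with world model": a policy/world-model rollout loop. A schematic of that loop (the function names and signatures are placeholders, not this project's API):

```python
def rollout(policy, world_model, obs, num_steps=12):
    """Alternate action generation and world-model prediction, as in the log."""
    trajectory = []
    for step in range(num_steps):
        print(f">>> Step {step}: generating actions ...")
        action = policy(obs)              # placeholder policy call
        print(f">>> Step {step}: interacting with world model ...")
        obs = world_model(obs, action)    # predicted next observation
        trajectory.append((action, obs))
    return trajectory

# Toy stand-ins so the sketch runs end to end.
traj = rollout(lambda o: o + 1, lambda o, a: o + a, obs=0)
assert len(traj) == 12
```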

real 4m58.541s
user 4m56.152s
sys 0m33.392s