75 lines
4.7 KiB
Plaintext
2026-02-19 19:48:42.460586: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-19 19:48:42.508096: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 19:48:42.508140: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 19:48:42.509152: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 19:48:42.515865: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 19:48:43.425699: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
  0%|          | 0/11 [00:00<?, ?it/s]
  9%|▉         | 1/11 [00:24<04:00, 24.07s/it]
 18%|█▊        | 2/11 [00:47<03:32, 23.62s/it]
 27%|██▋       | 3/11 [01:10<03:08, 23.51s/it]
 36%|███▋      | 4/11 [01:34<02:44, 23.46s/it]
 45%|████▌     | 5/11 [01:57<02:20, 23.42s/it]
 55%|█████▍    | 6/11 [02:20<01:56, 23.39s/it]
 64%|██████▎   | 7/11 [02:44<01:33, 23.37s/it]
 73%|███████▎  | 8/11 [03:07<01:10, 23.36s/it]
 82%|████████▏ | 9/11 [03:30<00:46, 23.36s/it]
 91%|█████████ | 10/11 [03:54<00:23, 23.36s/it]
100%|██████████| 11/11 [04:17<00:00, 23.36s/it]
100%|██████████| 11/11 [04:17<00:00, 23.42s/it]
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 9: generating actions ...
>>> Step 9: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 10: generating actions ...
>>> Step 10: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
real	4m35.120s
user	4m35.176s
sys	0m29.141s