2026-02-19 19:20:40.444703: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-19 19:20:40.492237: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 19:20:40.492278: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 19:20:40.493360: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 19:20:40.500130: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 19:20:41.414718: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngPlugin is unavailable
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
  0%|          | 0/8 [00:00<?, ?it/s]
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>

real	3m24.747s
user	3m21.618s
sys	0m28.508s
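The "✓ KV fused: 66 attention layers" line in the log reports that the key and value projections of each attention layer were merged into a single matrix multiply. Below is a dependency-free sketch of that idea with toy matrices; a real implementation would concatenate the weights of the two `nn.Linear` layers instead (the names `Wk`, `Wv`, and `matvec` here are purely illustrative, not the repository's API):

```python
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Separate K and V projection matrices (toy 2x3 examples).
Wk = [[1, 0, 0], [0, 1, 0]]
Wv = [[0, 0, 1], [1, 1, 1]]

# Fused projection: stack the rows into one matrix, launch a single
# matvec instead of two, then split the output back into K and V.
Wkv = Wk + Wv
x = [2, 3, 5]
fused = matvec(Wkv, x)
k, v = fused[:len(Wk)], fused[len(Wk):]

# The fused result matches the two separate projections exactly.
assert k == matvec(Wk, x)
assert v == matvec(Wv, x)
```

The payoff is fewer, larger kernel launches: one GEMM per attention layer instead of two, which is why the fusion is applied once at load time across all 66 layers.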
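The eight "generating actions / interacting with world model" pairs trace a closed-loop rollout: the policy proposes actions from the current observation, then the world model generates the next observation (16 frames per generation, per the log). A minimal sketch of that loop, where `generate_actions` and `world_model_step` are hypothetical stand-ins for the real policy head and video world model:

```python
def run_rollout(num_steps, generate_actions, world_model_step, obs):
    """Closed-loop rollout: at each step the policy proposes actions
    from the latest observation, then the world model predicts the
    next observation conditioned on those actions."""
    trajectory = []
    for step in range(num_steps):
        print(f">>> Step {step}: generating actions ...")
        actions = generate_actions(obs)
        print(f">>> Step {step}: interacting with world model ...")
        obs = world_model_step(obs, actions)
        trajectory.append((actions, obs))
    return trajectory

# Toy stand-ins: the "observation" is just an integer and the world
# model adds the action to it.
traj = run_rollout(
    num_steps=8,
    generate_actions=lambda obs: [obs + 1],
    world_model_step=lambda obs, acts: obs + acts[0],
    obs=0,
)
```

Because each step's observation comes from the world model rather than a real robot, the loop evaluates the policy entirely in imagination, which is consistent with the run completing in about 3.5 minutes of wall-clock time.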