37 lines
3.0 KiB
Plaintext
2026-02-11 17:34:29.188470: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
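The oneDNN notice above can be silenced as the log itself suggests. A minimal sketch, assuming the environment variable is set before TensorFlow is imported (it is read at import time):

```python
import os

# Disable oneDNN custom ops for bit-exact, order-independent numerics,
# per the TF log hint. Must run BEFORE `import tensorflow`.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"
```

Equivalently, `export TF_ENABLE_ONEDNN_OPTS=0` in the shell before launching the script.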
2026-02-11 17:34:29.238296: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-11 17:34:29.238342: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-11 17:34:29.239649: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-11 17:34:29.247152: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-11 17:34:30.172640: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
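"Global seed set to 123" indicates a global seeding helper ran before inference. A minimal sketch of what such a helper typically does (the function name and exact scope here are assumptions; the real pipeline would additionally seed numpy and torch):

```python
import os
import random

# Hypothetical seed_everything-style helper: seeds the stdlib RNG and
# pins the hash seed so runs are reproducible.
def seed_everything(seed: int) -> None:
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # A real pipeline would also do:
    #   np.random.seed(seed); torch.manual_seed(seed)

seed_everything(123)
```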
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
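The three conversion lines above report selected submodules being cast to FP16 rather than the whole checkpoint. A hedged sketch of that pattern, assuming standard PyTorch modules (the attribute names are taken from the log; the helper itself is illustrative, not the pipeline's actual code):

```python
import torch

# Cast only the named submodules of a loaded model to float16,
# leaving everything else (e.g. normalization-sensitive parts) in FP32.
def convert_submodules_to_fp16(model: torch.nn.Module, names: list[str]) -> None:
    for name in names:
        sub = getattr(model, name, None)
        if sub is not None:
            sub.half()  # casts the submodule's parameters and buffers to torch.float16
```

Selective casting like this is a common compromise: it halves memory for the heavy backbone and projectors while keeping fragile components in full precision.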
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
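Each dataset block above follows the same three steps: samples loaded, stats loaded, normalizer initiated. A minimal sketch of the per-dataset normalizer those steps imply, assuming standard z-score normalization from stored mean/std stats (class and method names are hypothetical):

```python
# Z-score normalizer built from per-dimension dataset statistics.
class Normalizer:
    def __init__(self, mean, std, eps=1e-8):
        self.mean, self.std, self.eps = mean, std, eps

    def normalize(self, x):
        # (x - mean) / std, with eps guarding against zero variance
        return [(v - m) / (s + self.eps) for v, m, s in zip(x, self.mean, self.std)]

    def unnormalize(self, x):
        # inverse transform, used when decoding predicted actions
        return [v * (s + self.eps) + m for v, m, s in zip(x, self.mean, self.std)]
```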
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
0%| | 0/11 [00:00<?, ?it/s]
9%|▉ | 1/11 [00:23<03:52, 23.28s/it]
18%|█▊ | 2/11 [00:45<03:26, 22.89s/it]
27%|██▋ | 3/11 [01:08<03:02, 22.85s/it]