Compare commits
2 Commits: qhy4 ... trt-replac

| Author | SHA1 | Date |
|---|---|---|
| | 68d695d81d | |
| | 65788be1b3 | |
@@ -9,13 +9,7 @@
     "Bash(nvidia-smi:*)",
     "Bash(conda activate unifolm-wma)",
     "Bash(conda info:*)",
-    "Bash(direnv allow:*)",
-    "Bash(ls:*)",
-    "Bash(for scenario in unitree_g1_pack_camera unitree_z1_dual_arm_cleanup_pencils unitree_z1_dual_arm_stackbox unitree_z1_dual_arm_stackbox_v2 unitree_z1_stackbox)",
-    "Bash(do for case in case1 case2 case3 case4)",
-    "Bash(done)",
-    "Bash(chmod:*)",
-    "Bash(ln:*)"
+    "Bash(direnv allow:*)"
     ]
   }
 }
.gitignore (vendored), +2
@@ -132,3 +132,5 @@ Experiment/log
 
 *.0
 ckpts/unifolm_wma_dual.ckpt.prepared.pt
+trt_engines/video_backbone.engine
+trt_engines/video_backbone.onnx
@@ -1,122 +0,0 @@
== Task Comprehension: Diffusion Model and UnifoLM-WMA

This section provides a comprehensive overview of the UnifoLM-WMA-0 deep learning architecture, serving as a practical foundation for the optimization strategies discussed in subsequent sections.

=== Overall Inference Pipeline

UnifoLM-WMA-0 is Unitree Robotics' open-source World-Model-Action framework. Its core task is to predict future video frame sequences along with the corresponding robot action and state trajectories, given a current observation image and a text instruction. The model operates in an interactive simulation mode: each iteration consumes the previous prediction as input and generates the next segment of video and actions, thereby forming a closed-loop rollout. A single iteration of this pipeline can be decomposed into four sequential stages (condition encoding, VAE encoding, DDIM diffusion sampling, and VAE decoding), each of which is described below.

==== Condition Encoding

The condition encoding stage transforms raw multi-modal inputs into a unified context vector that guides the diffusion denoising process, via three parallel encoding paths. On the image side, the input observation image (320#sym.times 512) is processed by a frozen OpenCLIP ViT-H-14 vision encoder, then compressed through a Resampler, a Perceiver-based cross-attention module (4 layers, 12 heads, dim\_head=64, embed\_dim 1280 #sym.arrow 1024), into 16 image condition tokens per frame, yielding $16 times T = 256$ image tokens for T=16 frames.

On the text side, the instruction is encoded by a frozen OpenCLIP text encoder (`FrozenOpenCLIPEmbedder`, penultimate layer output) into 77 tokens of dimension 1024, computed once and reused across all DDIM steps. On the state side, the robot proprioceptive state (dim 16) is mapped through a SATokenProjector (Perceiver Attention, 1 layer, 16 heads, dim\_head=64, 16 learnable queries) into 16 tokens of dimension 1024.

These three token sets are concatenated to form the unified context vector: `[agent_state(2) | agent_action(16) | text(77) | image(256)]`, totaling 351 tokens per cross-attention operation.

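The 351-token budget above can be sanity-checked with a little arithmetic; a minimal sketch (the dict and variable names are illustrative, taken only from the context ordering described in the text):

```python
# Token budget of the unified cross-attention context, per the layout above.
# Names here are illustrative, not identifiers from the repository.
T = 16                      # temporal length (frames)
context_layout = {
    "agent_state": 2,       # state tokens entering the context
    "agent_action": 16,     # action tokens
    "text": 77,             # frozen OpenCLIP text tokens
    "image": 16 * T,        # 16 Resampler tokens per frame -> 256
}

total_tokens = sum(context_layout.values())
print(total_tokens)  # 351
```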
==== VAE Encoding

The observation images are encoded into a compact latent space through an AutoencoderKL (`autoencoder.py`), a variational autoencoder regularized by KL divergence. The encoder follows a convolutional architecture with 4-level channel multipliers [1, 2, 4, 4] (base channels ch=128, yielding channel widths [128, 256, 512, 512]), 2 residual blocks per level, and a latent channel count of z\_channels=4. The input RGB frames at resolution 320#sym.times 512 are encoded into latent representations at 1/8 spatial resolution, producing tensors of shape `(B, 4, T, 40, 64)`.

A critical configuration parameter is `perframe_ae=True`, which means the VAE processes each of the T=16 frames independently rather than as a 3D volume. While this per-frame strategy avoids the memory overhead of volumetric convolutions, it introduces a sequential loop of T forward passes through the encoder, a point worth noting for latency optimization. The latent representations are scaled by a fixed factor of `scale_factor=0.18215` before being fed into the diffusion process.

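A shape-level sketch of the sequential per-frame loop that `perframe_ae=True` implies, with the encoder stubbed out to pure shape bookkeeping (function names are illustrative, not from the repository):

```python
# Shape-level sketch of perframe_ae=True: T independent encoder passes
# instead of one volumetric pass. The VAE encoder itself is stubbed.
SCALE_FACTOR = 0.18215

def encode_frame(frame_shape):
    """Stub VAE encoder: (3, H, W) RGB -> (4, H/8, W/8) latent shape."""
    c, h, w = frame_shape
    assert c == 3 and h % 8 == 0 and w % 8 == 0
    return (4, h // 8, w // 8)

def encode_video(T=16, frame_shape=(3, 320, 512)):
    latents = []
    for _ in range(T):                # sequential loop: T encoder calls
        z = encode_frame(frame_shape)
        latents.append(z)             # each latent is scaled by SCALE_FACTOR
    return latents

lats = encode_video()
print(len(lats), lats[0])  # 16 (4, 40, 64)
```

Stacking the 16 per-frame latents along the time axis yields the `(B, 4, T, 40, 64)` tensor described above.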
==== DDIM Diffusion Sampling

This is the core time-consuming part of inference. A DDIM (Denoising Diffusion Implicit Models) sampler (`ddim.py`) is employed with a default of 50 denoising steps. The diffusion process is parameterized with v-prediction (`parameterization="v"`), 1000 training timesteps, and a linear beta schedule from `linear_start=0.00085` to `linear_end=0.012`, with zero-SNR terminal rescaling enabled (`rescale_betas_zero_snr=True`) and dynamic rescaling applied at `base_scale=0.7` to stabilize generation quality.

Unlike standard video diffusion models that only predict denoised video latents, UnifoLM-WMA simultaneously produces three outputs per step: a video latent prediction `y` of shape `(B, 4, T, 40, 64)`, an action trajectory prediction `a_y` of shape `(B, T, 16)`, and a state trajectory prediction `s_y` of shape `(B, T, 16)`. The three predictions share the same diffusion timestep but employ heterogeneous noise schedules: the video stream uses the DDPM schedule with v-prediction, while the action and state streams use a `DDIMScheduler` from the `diffusers` library with epsilon-prediction and a `squaredcos_cap_v2` beta schedule. This design allows each modality to adopt its optimal denoising strategy.

The sampler also supports classifier-free guidance with `unconditional_guidance_scale` and guidance rescaling, applied only to the video stream to balance generation quality and diversity.

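The two schedule ingredients above can be sketched directly; this assumes DDIM's uniform timestep discretization and the latent-diffusion convention that the "linear" schedule interpolates between the square roots of the endpoints before squaring (a detail of `make_beta_schedule` in that code family):

```python
import math

# Sketch of the 1000-step beta schedule and the 50-step DDIM subsampling.
def make_betas(n_timesteps=1000, linear_start=0.00085, linear_end=0.012):
    # latent-diffusion "linear": linspace in sqrt space, then square
    s, e = math.sqrt(linear_start), math.sqrt(linear_end)
    return [(s + (e - s) * t / (n_timesteps - 1)) ** 2
            for t in range(n_timesteps)]

def ddim_timesteps(n_train=1000, n_steps=50):
    # uniform-stride subsampling of training timesteps, as in DDIM
    stride = n_train // n_steps
    return list(range(0, n_train, stride))

betas = make_betas()
steps = ddim_timesteps()
print(len(steps), steps[0], steps[-1])  # 50 0 980
```

Zero-SNR rescaling (`rescale_betas_zero_snr=True`) would further adjust the cumulative alphas so that the terminal timestep carries no signal; that transformation is omitted here.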
==== VAE Decoding

After the DDIM sampling loop completes, the denoised video latent tensor $x_0$ of shape `(B, 4, T, 40, 64)` is decoded back to RGB pixel space through the AutoencoderKL decoder. Due to the `perframe_ae=True` configuration, decoding is likewise performed frame-by-frame: each of the T=16 latent frames is individually inverse-scaled by $1 slash "scale_factor"$, passed through the decoder's convolutional transpose layers, and reconstructed to a 320#sym.times 512 RGB frame.

In the interactive simulation mode, the decoded video serves a dual purpose: providing the observation image for the next iteration's condition encoding (only the first `exe_steps` frames are needed) and producing the final output video for visualization and evaluation. The action and state trajectories predicted by the DDIM loop are directly used for robot control without further decoding.

=== WMAModel Backbone: Dual-UNet Collaborative Architecture

The WMAModel (`wma_model.py:326`) is the core neural network invoked at every DDIM step, employing a unique dual-UNet collaborative architecture that jointly predicts video, actions, and states within a single forward pass. This tightly-coupled design enables the action and state predictions to directly leverage the rich spatiotemporal features extracted by the video generation backbone, rather than treating them as independent prediction heads.

==== Video UNet

The primary backbone is a 2D convolution-based UNet with temporal extensions. Its key configuration is summarized in the following table:

#figure(
  table(
    columns: (3fr, 5fr),
    [*Parameter*], [*Value*],
    [Input / Output channels], [8 (4 latent + 4 conditioning) / 4],
    [Base model channels], [320],
    [Channel multipliers], [\[1, 2, 4, 4\] #sym.arrow widths \[320, 640, 1280, 1280\]],
    [Residual blocks per level], [2],
    [Attention resolutions], [\[4, 2, 1\] (3 of 4 resolution levels)],
    [Attention head channels], [64],
    [Transformer depth], [1 per attention resolution],
    [Context dimension], [1024],
    [Temporal length], [16 frames],
  ),
  caption: [Video UNet configuration parameters.],
)

The UNet follows the classic encoder-middle-decoder structure with skip connections. At each attention-enabled resolution level, every ResBlock is followed by two transformer modules: a SpatialTransformer that performs spatial self-attention among all $H times W$ tokens within each frame followed by cross-attention with the 351-token context vector, and a TemporalTransformer that performs self-attention among T=16 time-step tokens at each spatial position (configured with `temporal_selfatt_only=True`, i.e., no cross-attention).

During the forward pass, intermediate feature maps are collected after each Downsample layer and the middle block, reshaped from $(B times T, C, H, W)$ to $(B, T, C, H, W)$, accumulating 10 multi-scale feature maps in `hs_a`, the bridge to the Action/State UNets.

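The reshapes described in this subsection reduce to pure shape bookkeeping; a minimal sketch (function names are illustrative):

```python
# Shape bookkeeping for the layouts described above (tuples, no tensors).
def spatial_to_batched(b, t, c, h, w):
    """Video UNet internal layout: frames folded into the batch axis."""
    return (b * t, c, h, w)

def hs_a_layout(bt_c_h_w, t):
    """Collected hs_a feature map: (B*T, C, H, W) -> (B, T, C, H, W)."""
    bt, c, h, w = bt_c_h_w
    return (bt // t, t, c, h, w)

def temporal_attention_layout(b, c, t, h, w):
    """TemporalTransformer input: each spatial position becomes a batch
    element and the T time steps form the token sequence."""
    return (b * h * w, c, t)

print(hs_a_layout(spatial_to_batched(1, 16, 320, 40, 64), 16))
print(temporal_attention_layout(1, 320, 16, 40, 64))
```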
==== Action UNet and State UNet

The Action UNet (`conditional_unet1d.py`) is a 1D convolutional UNet specifically designed for predicting robot action trajectories. Its configuration is as follows:

#figure(
  table(
    columns: (3fr, 5fr),
    [*Parameter*], [*Value*],
    [Input dimension], [16 (agent\_action\_dim)],
    [Down channel widths], [\[256, 512, 1024, 2048\]],
    [Kernel size], [5],
    [GroupNorm groups], [8],
    [Diffusion step embedding dim], [128],
    [Horizon], [16],
    [Action projection dim], [32],
  ),
  caption: [Action UNet (ConditionalUnet1D) configuration parameters.],
)

The Action UNet receives the 10 `hs_a` feature maps from the Video UNet as visual conditioning. The conditioning pipeline involves three stages: (1) SpatialSoftmax compresses each 2D feature map into keypoint coordinates $(B times T, C, 2)$; (2) the compressed features are concatenated with the diffusion timestep embedding and observation encoding (ResNet-18 `MultiImageObsEncoder`), then injected via FiLM modulation to produce per-channel scale/bias for the 1D convolution blocks; (3) `ActionLatentImageCrossAttention` enables action tokens to cross-attend to the Video UNet's spatiotemporal features, allowing visually-grounded action planning.

The input action tensor $(B, T, 16)$ is projected to act\_proj\_dim=32, processed through the 1D UNet, then projected back to $(B, T, 16)$.

The State UNet is an identical `ConditionalUnet1D` instance with the same hyperparameters, operating on the state tensor `x_state` $(B, T, 16)$ instead of the action tensor.

A critical optimization observation: the Action and State UNets are computationally independent, sharing read-only inputs with no data dependencies. The original code executes them sequentially, leaving significant room for CUDA stream parallelization.

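The fork/join structure this suggests can be sketched with stand-in functions; here a thread pool stands in for two CUDA streams (on GPU one would wrap each head's launch in `with torch.cuda.stream(...)` with the same structure, synchronizing both streams before the combine step):

```python
from concurrent.futures import ThreadPoolExecutor

# The Action and State UNets share read-only inputs and have no data
# dependencies, so they can run concurrently. The two functions below
# are trivial stand-ins for the real 1D UNets; only the fork/join
# pattern is the point.
def action_unet(x_action, cond):
    return [v + cond for v in x_action]

def state_unet(x_state, cond):
    return [v * cond for v in x_state]

def run_heads(x_action, x_state, cond):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa = pool.submit(action_unet, x_action, cond)   # fork
        fs = pool.submit(state_unet, x_state, cond)
        return fa.result(), fs.result()                 # join

a, s = run_heads([1, 2, 3], [1, 2, 3], 2)
print(a, s)  # [3, 4, 5] [2, 4, 6]
```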
=== Multi-Level Design of Attention Mechanisms

The attention mechanisms in UnifoLM-WMA constitute the core computational bottleneck of inference. Their design encompasses four distinct levels, each serving a different purpose in the model's spatiotemporal reasoning, and understanding their structure is essential for identifying optimization opportunities.

The first level is *spatial self-attention* within the SpatialTransformer. For a latent frame at resolution $H times W$, the token count is $H times W$ (e.g., $40 times 64 = 2560$ at the highest resolution). It is implemented via xformers `memory_efficient_attention`, reducing peak memory from $O(N^2)$ to $O(N)$. Q/K/V use bias-free linear layers, with head count = channel\_dim / num\_head\_channels (e.g., 1280/64 = 20 heads).

The second level is *multi-source cross-attention*, the most distinctive design in UnifoLM-WMA. The unified context vector is split into four semantic sources, each with dedicated K/V projection layers:

#figure(
  table(
    columns: (2fr, 1fr, 3fr, 2fr),
    [*Source*], [*Tokens*], [*K/V Projections*], [*Scale*],
    [Text], [77], [`to_k` / `to_v` (shared base)], [1.0],
    [Image], [16#sym.times T], [`to_k_ip` / `to_v_ip`], [`image_cross_attention_scale`],
    [Agent state], [2], [`to_k_as` / `to_v_as`], [`agent_state_cross_attention_scale`],
    [Agent action], [16], [`to_k_aa` / `to_v_aa`], [`agent_action_cross_attention_scale`],
  ),
  caption: [Multi-source cross-attention configuration.],
)

The Query vector Q is always derived from the video latent features via `to_q`. For each of the four sources, an independent attention output is computed as $"softmax"(Q dot K_i^T \/ sqrt(d)) dot V_i$, producing four separate attention outputs. These outputs are then combined via weighted summation:

$ "out" = "out"_"text" + alpha_"img" dot "out"_"ip" + alpha_"state" dot "out"_"as" + alpha_"action" dot "out"_"aa" $

In the current configuration, `cross_attention_scale_learnable=False` (fixed scales). This decoupled design adds 8 extra linear layers versus standard single-source cross-attention, creating opportunities for KV fusion optimization.

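A scalar toy version of the combine rule above, assuming unit scales and omitting the per-source Q/K/V projections; only the per-source attention plus weighted sum is real:

```python
import math

# Minimal scalar sketch of multi-source cross-attention: one softmax
# attention per source, then a weighted sum of the four outputs.
def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def attend(q, keys, values, d=4):
    w = softmax([q * k / math.sqrt(d) for k in keys])
    return sum(wi * v for wi, v in zip(w, values))

def multi_source_attention(q, sources, scales):
    # out = out_text + a_img*out_ip + a_state*out_as + a_action*out_aa
    return sum(scales[name] * attend(q, k, v)
               for name, (k, v) in sources.items())

sources = {
    "text":   ([0.1, 0.2], [1.0, 2.0]),
    "image":  ([0.3], [3.0]),
    "state":  ([0.4], [4.0]),
    "action": ([0.5], [5.0]),
}
scales = {"text": 1.0, "image": 1.0, "state": 1.0, "action": 1.0}
out = multi_source_attention(0.7, sources, scales)
print(out)
```

Because each source gets its own softmax, the four attention maps never compete for probability mass; the scales alone control their relative influence, which is exactly what makes the per-source K/V layers amenable to KV fusion when the scales are fixed.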
The third level is *temporal self-attention* within the TemporalTransformer. The input $(B, C, T, H, W)$ is reshaped to $(B times H times W, C, T)$, so each spatial position becomes an independent batch element and the T=16 time steps form the token sequence. It supports relative position encoding via a `RelativePosition` module and optional causal masks; the current configuration uses bidirectional temporal attention.

The fourth level is *action-latent-image cross-attention* in the `ActionLatentImageCrossAttention` module. Action tokens $(B, "action_dim", "act_proj_dim")$ as Query cross-attend to Video UNet features reshaped to $(B, T times H times W, C)$ as Key/Value. A `BasicTransformerBlock` (depth=1) performs action self-attention and then cross-attention to the video features, with a zero-initialized `proj_out` and a residual connection. This mechanism is the key bridge enabling the action head to access the visual world model's internal representations.

record.md (new file, 208 lines)
@@ -0,0 +1,208 @@
OK, here is the complete walkthrough of the decode optimizations.

---

Background: problems with the original decode flow

In the original code, each iteration's world-model interaction calls image_guided_synthesis_sim_mode, which runs a full VAE decode on the latents produced by DDIM sampling:

# ddpms.py decode_core, perframe_ae=True
for index in range(z.shape[0]):  # per-frame loop: 16 frames means 16 calls
    frame_z = 1. / self.scale_factor * z[index:index+1]
    frame_result = self.first_stage_model.decode(frame_z)
    results.append(frame_result)

With 8 iterations of 16 frames each, the WM path decodes 128 frames and the policy path another 128, for 256 VAE decodes in total, each a synchronous, blocking call.

The decoded video is then written out every iteration as mp4 + tensorboard, producing heavy disk I/O. Finally, all iterations' decoded videos are concatenated in memory with torch.cat and the full video is written once more.

---

Optimization 1: a decode_video switch to skip decoding on demand

File: world_model_interaction.py, function image_guided_synthesis_sim_mode

Change: add a decode_video parameter (default False) and additionally return the raw samples:

def image_guided_synthesis_sim_mode(...,
        decode_video: bool = False,  # new
        ...) -> tuple[Tensor | None, Tensor, Tensor, Tensor | None]:

    samples = None
    batch_variants = None
    if ddim_sampler is not None:
        samples, actions, states, intermedia = ddim_sampler.sample(...)
    if decode_video:  # conditional decode
        batch_images = model.decode_first_stage(samples)
        batch_variants = batch_images

    return batch_variants, actions, states, samples  # also return samples

Call sites:
- Policy path: controlled by the CLI flag --fast_policy_no_decode; decoding can be skipped when only actions are needed
- WM interaction path: pass decode_video=False and take only the raw latents

Effect: the WM path saves a full 16-frame decode per iteration.

---

Optimization 2: decode only the frames the observation needs

Problem: the WM path now skips the full decode, but the next iteration's CLIP embedding needs pixel-space images as the observation.

Change: decode only the exe_steps frames (usually 1) instead of all 16:

# WM call, no full decode
pred_videos_1, _, pred_states, wm_samples = image_guided_synthesis_sim_mode(
    ..., decode_video=False)

# decode only exe_steps frames for the observation
obs_pixels = model.decode_first_stage(
    wm_samples[:, :, :args.exe_steps, :, :])

for idx in range(args.exe_steps):
    observation = {
        'observation.images.top': obs_pixels[0, :, idx:idx + 1].permute(1, 0, 2, 3),
        ...
    }
    cond_obs_queues = populate_queues(cond_obs_queues, observation)

Key detail: the observation queue must be filled frame by frame (idx:idx+1), not with the last frame repeated; otherwise the CLIP embedding's input changes and accuracy suffers.

Effect: each iteration drops from decoding 16 frames to decoding exe_steps frames (saving 15 frames per iteration).

---

Optimization 3: a decode stream, overlapping decode and the UNet on the GPU

Problem: writing the final video still needs the full segment's pixels, so that part of the decode must still happen.

Idea: run the segment decode on a separate CUDA stream so it overlaps with the next iteration's UNet inference on the GPU.

Changes:

Initialization:
decode_stream = torch.cuda.Stream(device=device)
pending_decode = None

End of the loop body:
# collect the previous iteration's decode result
if pending_decode is not None:
    decode_stream.synchronize()
    write_q.put(pending_decode.cpu())
    pending_decode = None

# launch this iteration's segment decode on the decode stream (non-blocking)
latent_slice = wm_samples[:, :, :args.exe_steps]
decode_stream.wait_stream(torch.cuda.current_stream())  # ensure the latent is ready
with torch.cuda.stream(decode_stream):
    pending_decode = model.decode_first_stage(latent_slice)
# the main thread immediately proceeds to the next iteration's UNet

After the loop, collect the final iteration:
if pending_decode is not None:
    decode_stream.synchronize()
    write_q.put(pending_decode.cpu())

How it works: decode_stream.wait_stream() establishes an inter-stream dependency, so decoding starts only after the latent has been produced. Kernels from the two streams can then be interleaved by the GPU scheduler.

Effect: the segment decode time is hidden behind the next iteration's UNet inference.

---

Optimization 4: a writer process, moving CPU work across processes

Problem: decoded tensors must be converted to numpy and encoded to disk with cv2, which is CPU-bound work that the Python GIL prevents from parallelizing across threads.

Changes:

Helper function (callable from both the main process and the child):
def _video_tensor_to_frames(video: Tensor) -> np.ndarray:
    video = torch.clamp(video.float(), -1., 1.)
    n = video.shape[0]
    video = video.permute(2, 0, 1, 3, 4)
    frame_grids = [
        torchvision.utils.make_grid(f, nrow=int(n), padding=0) for f in video
    ]
    grid = torch.stack(frame_grids, dim=0)
    grid = ((grid + 1.0) / 2.0 * 255).to(torch.uint8).permute(0, 2, 3, 1)
    return grid.numpy()[:, :, :, ::-1]  # RGB → BGR

Writer process:
def _video_writer_process(q: mp.Queue, filename: str, fps: int):
    vwriter = None
    while True:
        item = q.get()
        if item is None:  # sentinel: exit
            break
        frames = _video_tensor_to_frames(item)
        if vwriter is None:
            h, w = frames.shape[1], frames.shape[2]
            fourcc = cv2.VideoWriter_fourcc(*'mp4v')
            vwriter = cv2.VideoWriter(filename, fourcc, fps, (w, h))
        for f in frames:
            vwriter.write(f)
    if vwriter is not None:
        vwriter.release()

The main process starts the writer:
write_q = mp.Queue()
writer_proc = mp.Process(target=_video_writer_process,
                         args=(write_q, sample_full_video_file, args.save_fps))
writer_proc.start()

The main process sends data via write_q.put(tensor.cpu()); when the loop ends it sends a None sentinel and join()s the writer.

Effects:
- tensor→numpy conversion and cv2 encoding no longer consume main-process CPU time
- not limited by the GIL
- cv2.VideoWriter appends incrementally: no accumulation in memory, no final torch.cat
- a bounded queue (maxsize set) gives backpressure: if the writer falls behind, put blocks rather than memory growing until OOM

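One caveat on the backpressure point: a default `mp.Queue()` is unbounded, so `put` never blocks; backpressure requires passing `maxsize`. A minimal sketch of the bounded behavior, using the thread-safe `queue.Queue`, which has the same blocking semantics as `mp.Queue(maxsize=...)`:

```python
import queue

# A bounded queue makes the producer block (or fail fast) once the
# consumer falls behind; an unbounded queue would just grow in memory.
write_q = queue.Queue(maxsize=2)   # mp.Queue(maxsize=2) behaves the same
write_q.put("segment-0")
write_q.put("segment-1")

try:
    write_q.put_nowait("segment-2")  # queue full: raises instead of growing
    overflowed = False
except queue.Full:
    overflowed = True

print(overflowed, write_q.qsize())  # True 2
```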
---

Redundant I/O removed at the same time:

- the per-iteration WM intermediate tensorboard log (log_to_tensorboard_async)
- the per-iteration WM intermediate mp4 (save_results_async)
- the per-iteration policy intermediate mp4
- the final torch.cat + one-shot save_results_async write of the full video

---

Summary: before vs. after

| | Original | Optimized |
|---|---|---|
| WM decode frames / iteration | 16 | exe_steps (usually 1) |
| Segment decode blocking | synchronous, blocks main thread | decode stream, overlapped on GPU |
| tensor→numpy + disk write | main thread / thread pool (GIL-bound) | separate process, no GIL |
| Video writing | accumulate → cat → single write | incremental cv2 appends |
| Intermediate I/O | 2×mp4 + 2×tensorboard per iteration | none |
| Memory footprint | grows linearly with iterations | constant (no decoded-video accumulation) |

✻ Cooked for 1m 31s

❯ Which of these optimizations are enabled by default?

● Let me check the switch state of each optimization.

● Searched for 2 patterns

● Current default status of each optimization:

| Optimization | On by default? | Controlled by |
|---|---|---|
| WM path skips full decode | ✅ yes | hard-coded decode_video=False (line 789) |
| Decode only exe_steps frames for observation | ✅ yes | no switch, always active (line 792) |
| Decode-stream GPU overlap | ✅ yes | no switch, always active (lines 824-827) |
| Writer process, incremental writes | ✅ yes | no switch, always active (lines 672-675) |
| Removal of intermediate WM mp4/tensorboard saves | ✅ yes | code deleted |
| Policy path skips decode | ❌ no | requires --fast_policy_no_decode (default False, i.e. decode by default) |
File diff suppressed because it is too large · Load Diff
@@ -1,37 +0,0 @@
2026-02-11 17:34:29.188470: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-11 17:34:29.238296: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-11 17:34:29.238342: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-11 17:34:29.239649: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-11 17:34:29.247152: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-11 17:34:30.172640: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
File diff suppressed because it is too large · Load Diff
run_all_cases_20260218_190150.log (new file, 504 lines)
@@ -0,0 +1,504 @@
2026-02-18 19:01:56.891895: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-18 19:01:56.940243: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-18 19:01:56.940285: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-18 19:01:56.941395: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-18 19:01:56.948327: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-18 19:01:57.870809: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
TRT output 'y': [1, 4, 16, 40, 64] torch.float32
TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

0%| | 0/11 [00:00<?, ?it/s][02/18/2026-19:02:10] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.

9%|▉ | 1/11 [00:17<02:51, 17.15s/it]>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
|
||||||
|
DEBUG:PIL.Image:Importing JpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing McIdasImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing JpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing McIdasImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MicImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MicImagePlugin
|
||||||
|
DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
|
||||||
|
DEBUG:PIL.Image:Importing MpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MpoImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MpoImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MspImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MspImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PalmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PalmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcxImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PdfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcxImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PdfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PixarImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PixarImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PngImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PsdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PngImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PsdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing QoiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing QoiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SgiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SgiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SpiderImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SpiderImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SunImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SunImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TgaImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TgaImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TiffImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WebPImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TiffImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WebPImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WmfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WmfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XbmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XbmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XVThumbImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XVThumbImagePlugin
|
||||||
|
|
||||||
|
18%|█▊ | 2/11 [00:33<02:31, 16.87s/it]
|
||||||
|
27%|██▋ | 3/11 [00:50<02:14, 16.76s/it]
|
||||||
|
36%|███▋ | 4/11 [01:07<01:57, 16.81s/it]
|
||||||
|
45%|████▌ | 5/11 [01:24<01:41, 16.85s/it]
|
||||||
|
55%|█████▍ | 6/11 [01:41<01:24, 16.82s/it]
|
||||||
|
64%|██████▎ | 7/11 [01:57<01:07, 16.82s/it]
|
||||||
|
73%|███████▎ | 8/11 [02:14<00:50, 16.83s/it]
|
||||||
|
82%|████████▏ | 9/11 [02:31<00:33, 16.80s/it]
|
||||||
|
91%|█████████ | 10/11 [02:48<00:16, 16.81s/it]
|
||||||
|
100%|██████████| 11/11 [03:05<00:00, 16.81s/it]
|
||||||
|
100%|██████████| 11/11 [03:05<00:00, 16.83s/it]
|
||||||
|
>>> Step 1: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 2: generating actions ...
|
||||||
|
>>> Step 2: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 3: generating actions ...
|
||||||
|
>>> Step 3: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 4: generating actions ...
|
||||||
|
>>> Step 4: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 5: generating actions ...
|
||||||
|
>>> Step 5: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 6: generating actions ...
|
||||||
|
>>> Step 6: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 7: generating actions ...
|
||||||
|
>>> Step 7: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 8: generating actions ...
|
||||||
|
>>> Step 8: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 9: generating actions ...
|
||||||
|
>>> Step 9: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
>>> Step 10: generating actions ...
|
||||||
|
>>> Step 10: interacting with world model ...
|
||||||
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
|
||||||
|
real 3m49.072s
|
||||||
|
user 4m16.055s
|
||||||
|
sys 0m44.636s
|
||||||
|
2026-02-18 19:05:45.956647: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-18 19:05:46.004149: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-18 19:05:46.004193: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-18 19:05:46.005265: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-18 19:05:46.012074: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-18 19:05:46.932966: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
TRT output 'y': [1, 4, 16, 40, 64] torch.float32
TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

  0%|          | 0/11 [00:00<?, ?it/s][02/18/2026-19:05:59] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
  9%|▉         | 1/11 [00:16<02:47, 16.71s/it]>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
DEBUG:PIL.Image:Importing BlpImagePlugin
DEBUG:PIL.Image:Importing BmpImagePlugin
DEBUG:PIL.Image:Importing BufrStubImagePlugin
DEBUG:PIL.Image:Importing CurImagePlugin
DEBUG:PIL.Image:Importing DcxImagePlugin
DEBUG:PIL.Image:Importing DdsImagePlugin
DEBUG:PIL.Image:Importing EpsImagePlugin
DEBUG:PIL.Image:Importing FitsImagePlugin
DEBUG:PIL.Image:Importing FitsStubImagePlugin
DEBUG:PIL.Image:Importing FliImagePlugin
DEBUG:PIL.Image:Importing FpxImagePlugin
DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
DEBUG:PIL.Image:Importing FtexImagePlugin
DEBUG:PIL.Image:Importing GbrImagePlugin
DEBUG:PIL.Image:Importing GifImagePlugin
DEBUG:PIL.Image:Importing GribStubImagePlugin
DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
DEBUG:PIL.Image:Importing IcnsImagePlugin
DEBUG:PIL.Image:Importing IcoImagePlugin
DEBUG:PIL.Image:Importing ImImagePlugin
DEBUG:PIL.Image:Importing ImtImagePlugin
DEBUG:PIL.Image:Importing IptcImagePlugin
DEBUG:PIL.Image:Importing JpegImagePlugin
DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
DEBUG:PIL.Image:Importing McIdasImagePlugin
DEBUG:PIL.Image:Importing MicImagePlugin
DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
DEBUG:PIL.Image:Importing MpegImagePlugin
DEBUG:PIL.Image:Importing MpoImagePlugin
DEBUG:PIL.Image:Importing MspImagePlugin
DEBUG:PIL.Image:Importing PalmImagePlugin
DEBUG:PIL.Image:Importing PcdImagePlugin
DEBUG:PIL.Image:Importing PcxImagePlugin
DEBUG:PIL.Image:Importing PdfImagePlugin
DEBUG:PIL.Image:Importing PixarImagePlugin
DEBUG:PIL.Image:Importing PngImagePlugin
DEBUG:PIL.Image:Importing PpmImagePlugin
DEBUG:PIL.Image:Importing PsdImagePlugin
DEBUG:PIL.Image:Importing QoiImagePlugin
DEBUG:PIL.Image:Importing SgiImagePlugin
DEBUG:PIL.Image:Importing SpiderImagePlugin
DEBUG:PIL.Image:Importing SunImagePlugin
DEBUG:PIL.Image:Importing TgaImagePlugin
DEBUG:PIL.Image:Importing TiffImagePlugin
DEBUG:PIL.Image:Importing WebPImagePlugin
DEBUG:PIL.Image:Importing WmfImagePlugin
DEBUG:PIL.Image:Importing XbmImagePlugin
DEBUG:PIL.Image:Importing XpmImagePlugin
DEBUG:PIL.Image:Importing XVThumbImagePlugin

 18%|█▊        | 2/11 [00:33<02:30, 16.75s/it]
 27%|██▋       | 3/11 [00:50<02:15, 16.91s/it]
 36%|███▋      | 4/11 [01:07<01:59, 17.02s/it]
 45%|████▌     | 5/11 [01:24<01:41, 16.98s/it]
 55%|█████▍    | 6/11 [01:41<01:24, 16.94s/it]
 64%|██████▎   | 7/11 [01:58<01:07, 16.90s/it]
 73%|███████▎  | 8/11 [02:15<00:50, 16.83s/it]
 82%|████████▏ | 9/11 [02:31<00:33, 16.80s/it]
 91%|█████████ | 10/11 [02:49<00:16, 16.94s/it]
100%|██████████| 11/11 [03:06<00:00, 16.97s/it]
100%|██████████| 11/11 [03:06<00:00, 16.91s/it]
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 9: generating actions ...
>>> Step 9: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 10: generating actions ...
>>> Step 10: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>

real	3m49.162s
user	4m12.814s
sys	0m45.565s
2026-02-18 19:09:35.113634: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-18 19:09:35.161428: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-18 19:09:35.161474: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-18 19:09:35.162551: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-18 19:09:35.169325: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-18 19:09:36.089250: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
TRT output 'y': [1, 4, 16, 40, 64] torch.float32
TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

  0%|          | 0/11 [00:00<?, ?it/s][02/18/2026-19:09:49] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
  9%|▉         | 1/11 [00:16<02:45, 16.53s/it]>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
DEBUG:PIL.Image:Importing BlpImagePlugin
DEBUG:PIL.Image:Importing BmpImagePlugin
DEBUG:PIL.Image:Importing BufrStubImagePlugin
DEBUG:PIL.Image:Importing CurImagePlugin
DEBUG:PIL.Image:Importing DcxImagePlugin
DEBUG:PIL.Image:Importing DdsImagePlugin
DEBUG:PIL.Image:Importing EpsImagePlugin
DEBUG:PIL.Image:Importing FitsImagePlugin
DEBUG:PIL.Image:Importing FitsStubImagePlugin
DEBUG:PIL.Image:Importing FliImagePlugin
DEBUG:PIL.Image:Importing FpxImagePlugin
DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
DEBUG:PIL.Image:Importing FtexImagePlugin
DEBUG:PIL.Image:Importing GbrImagePlugin
DEBUG:PIL.Image:Importing GifImagePlugin
DEBUG:PIL.Image:Importing GribStubImagePlugin
DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
DEBUG:PIL.Image:Importing IcnsImagePlugin
DEBUG:PIL.Image:Importing IcoImagePlugin
DEBUG:PIL.Image:Importing ImImagePlugin
DEBUG:PIL.Image:Importing ImtImagePlugin
DEBUG:PIL.Image:Importing IptcImagePlugin
DEBUG:PIL.Image:Importing JpegImagePlugin
DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
DEBUG:PIL.Image:Importing McIdasImagePlugin
DEBUG:PIL.Image:Importing MicImagePlugin
DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
DEBUG:PIL.Image:Importing MpegImagePlugin
DEBUG:PIL.Image:Importing MpoImagePlugin
DEBUG:PIL.Image:Importing MspImagePlugin
DEBUG:PIL.Image:Importing PalmImagePlugin
DEBUG:PIL.Image:Importing PcdImagePlugin
DEBUG:PIL.Image:Importing PcxImagePlugin
DEBUG:PIL.Image:Importing PdfImagePlugin
DEBUG:PIL.Image:Importing PixarImagePlugin
DEBUG:PIL.Image:Importing PngImagePlugin
File diff suppressed because it is too large
run_all_psnr.sh
Executable file → Normal file
@@ -10,7 +10,6 @@ import einops
 import warnings
 import imageio
 import atexit
-import multiprocessing as mp
 from concurrent.futures import ThreadPoolExecutor
 
 from pytorch_lightning import seed_everything
@@ -232,32 +231,6 @@ def log_to_tensorboard_async(writer, data: Tensor, tag: str, fps: int = 10) -> N
     _io_futures.append(fut)
 
 
-def _video_tensor_to_frames(video: Tensor) -> np.ndarray:
-    video = torch.clamp(video.float(), -1., 1.)
-    n = video.shape[0]
-    video = video.permute(2, 0, 1, 3, 4)
-    frame_grids = [
-        torchvision.utils.make_grid(f, nrow=int(n), padding=0) for f in video
-    ]
-    grid = torch.stack(frame_grids, dim=0)
-    grid = ((grid + 1.0) / 2.0 * 255).to(torch.uint8).permute(0, 2, 3, 1)
-    return grid.numpy()[:, :, :, ::-1]
-
-
-def _video_writer_process(q: mp.Queue, filename: str, fps: int):
-    frames = []
-    while True:
-        item = q.get()
-        if item is None:
-            break
-        frames.append(_video_tensor_to_frames(item))
-    if frames:
-        grid = np.concatenate(frames, axis=0)
-        grid = torch.from_numpy(grid[:, :, :, ::-1].copy())  # BGR → RGB
-        torchvision.io.write_video(filename, grid, fps=fps,
-                                   video_codec='h264', options={'crf': '10'})
-
-
 def get_init_frame_path(data_dir: str, sample: dict) -> str:
     """Construct the init_frame path from directory and sample metadata.
 
@@ -477,9 +450,8 @@ def image_guided_synthesis_sim_mode(
 
     img = observation['observation.images.top'].permute(0, 2, 1, 3, 4)
     cond_img = rearrange(img, 'b o c h w -> (b o) c h w')[-1:]
-    with torch.cuda.amp.autocast(dtype=torch.float16):
-        cond_img_emb = model.embedder(cond_img)
-        cond_img_emb = model.image_proj_model(cond_img_emb)
+    cond_img_emb = model.embedder(cond_img)
+    cond_img_emb = model.image_proj_model(cond_img_emb)
 
     if model.model.conditioning_key == 'hybrid':
         z = get_latent_z(model, img.permute(0, 2, 1, 3, 4))
@@ -493,12 +465,11 @@ def image_guided_synthesis_sim_mode(
     prompts = [""] * batch_size
     cond_ins_emb = model.get_learned_conditioning(prompts)
 
-    with torch.cuda.amp.autocast(dtype=torch.float16):
-        cond_state_emb = model.state_projector(observation['observation.state'])
-        cond_state_emb = cond_state_emb + model.agent_state_pos_emb
+    cond_state_emb = model.state_projector(observation['observation.state'])
+    cond_state_emb = cond_state_emb + model.agent_state_pos_emb
 
     cond_action_emb = model.action_projector(observation['action'])
     cond_action_emb = cond_action_emb + model.agent_action_pos_emb
 
     if not sim_mode:
         cond_action_emb = torch.zeros_like(cond_action_emb)
@@ -521,7 +492,6 @@ def image_guided_synthesis_sim_mode(
     cond_mask = None
     cond_z0 = None
     batch_variants = None
-    samples = None
     if ddim_sampler is not None:
         samples, actions, states, intermedia = ddim_sampler.sample(
             S=ddim_steps,
@@ -545,7 +515,7 @@ def image_guided_synthesis_sim_mode(
     batch_images = model.decode_first_stage(samples)
     batch_variants = batch_images
 
-    return batch_variants, actions, states, samples
+    return batch_variants, actions, states
 
 
 def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
@@ -601,22 +571,6 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
         torch.save(model, prepared_path)
         print(f">>> Prepared model saved ({os.path.getsize(prepared_path) / 1024**3:.1f} GB).")

-    # ---- FP16: convert diffusion backbone + conditioning modules ----
-    model.model.to(torch.float16)
-    model.model.diffusion_model.dtype = torch.float16
-    print(">>> Diffusion backbone (model.model) converted to FP16.")
-
-    # Projectors / MLP → FP16
-    model.image_proj_model.half()
-    model.state_projector.half()
-    model.action_projector.half()
-    print(">>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.")
-
-    # Text/image encoders → FP16
-    model.cond_stage_model.half()
-    model.embedder.half()
-    print(">>> Encoders (cond_stage_model, embedder) converted to FP16.")
-
     # Build normalizer (always needed, independent of model loading path)
     logging.info("***** Configing Data *****")
     data = instantiate_from_config(config.data)
@@ -631,6 +585,11 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
                    if isinstance(m, CrossAttention) and m.fuse_kv())
     print(f"  ✓ KV fused: {kv_count} attention layers")

+    # Load TRT backbone if engine exists
+    trt_engine_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'trt_engines', 'video_backbone.engine')
+    if os.path.exists(trt_engine_path):
+        model.model.diffusion_model.load_trt_backbone(trt_engine_path)
+
     # Run over data
     assert (args.height % 16 == 0) and (
         args.width % 16
@@ -675,13 +634,8 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
             # For saving environmental changes in world-model
             sample_save_dir = f'{video_save_dir}/wm/{fs}'
             os.makedirs(sample_save_dir, exist_ok=True)
-            # Writer process for incremental video saving
-            sample_full_video_file = f"{video_save_dir}/../{sample['videoid']}_full_fs{fs}.mp4"
-            write_q = mp.Queue()
-            writer_proc = mp.Process(
-                target=_video_writer_process,
-                args=(write_q, sample_full_video_file, args.save_fps))
-            writer_proc.start()
+            # For collecting interaction videos
+            wm_video = []
             # Initialize observation queues
             cond_obs_queues = {
                 "observation.images.top":
@@ -737,7 +691,7 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:

                 # Use world-model in policy to generate action
                 print(f'>>> Step {itr}: generating actions ...')
-                pred_videos_0, pred_actions, _, _ = image_guided_synthesis_sim_mode(
+                pred_videos_0, pred_actions, _ = image_guided_synthesis_sim_mode(
                     model,
                     sample['instruction'],
                     observation,
@@ -779,7 +733,7 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:

                 # Interaction with the world-model
                 print(f'>>> Step {itr}: interacting with world model ...')
-                pred_videos_1, _, pred_states, wm_samples = image_guided_synthesis_sim_mode(
+                pred_videos_1, _, pred_states = image_guided_synthesis_sim_mode(
                     model,
                     "",
                     observation,
@@ -792,16 +746,12 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
                     fs=model_input_fs,
                     text_input=False,
                     timestep_spacing=args.timestep_spacing,
-                    guidance_rescale=args.guidance_rescale,
-                    decode_video=False)
+                    guidance_rescale=args.guidance_rescale)

-                # Decode only the last frame for CLIP embedding in next iteration
-                last_frame_pixel = model.decode_first_stage(wm_samples[:, :, -1:, :, :])
-
                 for idx in range(args.exe_steps):
                     observation = {
                         'observation.images.top':
-                        last_frame_pixel[0, :, 0:1].permute(1, 0, 2, 3),
+                        pred_videos_1[0][:, idx:idx + 1].permute(1, 0, 2, 3),
                         'observation.state':
                         torch.zeros_like(pred_states[0][idx:idx + 1]) if
                         args.zero_pred_state else pred_states[0][idx:idx + 1],
@@ -819,16 +769,37 @@ def run_inference(args: argparse.Namespace, gpu_num: int, gpu_no: int) -> None:
                                             pred_videos_0,
                                             sample_tag,
                                             fps=args.save_fps)
+                    # Save videos environment changes via world-model interaction
+                    sample_tag = f"{args.dataset}-vid{sample['videoid']}-wd-fs-{fs}/itr-{itr}"
+                    log_to_tensorboard_async(writer,
+                                             pred_videos_1,
+                                             sample_tag,
+                                             fps=args.save_fps)
+
+                    # Save the imagen videos for decision-making
+                    if pred_videos_0 is not None:
+                        sample_video_file = f'{video_save_dir}/dm/{fs}/itr-{itr}.mp4'
+                        save_results_async(pred_videos_0,
+                                           sample_video_file,
+                                           fps=args.save_fps)
+                    # Save videos environment changes via world-model interaction
+                    sample_video_file = f'{video_save_dir}/wm/{fs}/itr-{itr}.mp4'
+                    save_results_async(pred_videos_1,
+                                       sample_video_file,
+                                       fps=args.save_fps)

                 print('>' * 24)
-                # Decode segment and send to writer process
-                seg_video = model.decode_first_stage(
-                    wm_samples[:, :, :args.exe_steps]).detach().cpu()
-                write_q.put(seg_video)
+                # Collect the result of world-model interactions
+                wm_video.append(pred_videos_1[:, :, :args.exe_steps].cpu())

-            # Stop writer process
-            write_q.put(None)
-            writer_proc.join()
+            full_video = torch.cat(wm_video, dim=2)
+            sample_tag = f"{args.dataset}-vid{sample['videoid']}-wd-fs-{fs}/full"
+            log_to_tensorboard_async(writer,
+                                     full_video,
+                                     sample_tag,
+                                     fps=args.save_fps)
+            sample_full_video_file = f"{video_save_dir}/../{sample['videoid']}_full_fs{fs}.mp4"
+            save_results_async(full_video, sample_full_video_file, fps=args.save_fps)

             # Wait for all async I/O to complete
             _flush_io()
@@ -949,7 +920,7 @@ def get_parser():
     parser.add_argument(
         "--fast_policy_no_decode",
         action='store_true',
-        default=True,
+        default=False,
         help="Speed mode: policy pass only predicts actions, skip policy video decode/log/save.")
     parser.add_argument("--save_fps",
                         type=int,
scripts/export_trt.py (new file, 87 lines)
@@ -0,0 +1,87 @@
+"""Export video UNet backbone to ONNX, then convert to TensorRT engine.
+
+Usage:
+    python scripts/export_trt.py \
+        --ckpt ckpts/unifolm_wma_dual.ckpt.prepared.pt \
+        --config configs/inference/world_model_interaction.yaml \
+        --out_dir trt_engines
+"""
+
+import os
+import sys
+import argparse
+
+import torch
+import tensorrt as trt
+from omegaconf import OmegaConf
+
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))
+from unifolm_wma.utils.utils import instantiate_from_config
+from unifolm_wma.trt_utils import export_backbone_onnx
+
+
+def load_model(config_path, ckpt_path):
+    if ckpt_path.endswith('.prepared.pt'):
+        model = torch.load(ckpt_path, map_location='cpu')
+    else:
+        config = OmegaConf.load(config_path)
+        model = instantiate_from_config(config.model)
+        state_dict = torch.load(ckpt_path, map_location='cpu')
+        if 'state_dict' in state_dict:
+            state_dict = state_dict['state_dict']
+        model.load_state_dict(state_dict, strict=False)
+    model.eval().cuda()
+    return model
+
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--ckpt', required=True)
+    parser.add_argument('--config', default='configs/inference/world_model_interaction.yaml')
+    parser.add_argument('--out_dir', default='trt_engines')
+    parser.add_argument('--context_len', type=int, default=95)
+    parser.add_argument('--fp16', action='store_true', default=True)
+    args = parser.parse_args()
+
+    os.makedirs(args.out_dir, exist_ok=True)
+    onnx_path = os.path.join(args.out_dir, 'video_backbone.onnx')
+    engine_path = os.path.join(args.out_dir, 'video_backbone.engine')
+
+    if os.path.exists(onnx_path):
+        print(f">>> ONNX already exists at {onnx_path}, skipping export.")
+        n_outputs = 10
+    else:
+        print(">>> Loading model ...")
+        model = load_model(args.config, args.ckpt)
+        print(">>> Exporting ONNX ...")
+        with torch.no_grad():
+            n_outputs = export_backbone_onnx(model, onnx_path, context_len=args.context_len)
+        del model
+        torch.cuda.empty_cache()
+
+    print(">>> Converting ONNX -> TensorRT engine ...")
+    logger = trt.Logger(trt.Logger.WARNING)
+    builder = trt.Builder(logger)
+    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
+    parser = trt.OnnxParser(network, logger)
+
+    if not parser.parse_from_file(os.path.abspath(onnx_path)):
+        for i in range(parser.num_errors):
+            print(f"  ONNX parse error: {parser.get_error(i)}")
+        raise RuntimeError("ONNX parsing failed")
+
+    config = builder.create_builder_config()
+    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 16 << 30)
+    if args.fp16:
+        config.set_flag(trt.BuilderFlag.FP16)
+
+    engine_bytes = builder.build_serialized_network(network, config)
+    with open(engine_path, 'wb') as f:
+        f.write(engine_bytes)
+
+    print(f"\n>>> Done! Engine saved to {engine_path}")
+    print(f"    Outputs: 1 y + {n_outputs - 1} hs_a tensors")
+
+
+if __name__ == '__main__':
+    main()
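The export script reuses an existing `video_backbone.onnx` rather than re-tracing the model on every run. The same skip-if-exists idiom, reduced to a dependency-free sketch (the helper name and payload are illustrative, not part of the repo):

```python
import os
import tempfile

def build_artifact(path, produce):
    """Produce an expensive artifact only when it is missing on disk.

    `produce` is any zero-argument callable returning the artifact bytes;
    the return value reports whether work was actually performed.
    """
    if os.path.exists(path):
        return False  # cache hit: reuse the file, as export_trt.py does for ONNX
    with open(path, 'wb') as f:
        f.write(produce())
    return True

# The first call pays the cost; the second is a cheap cache hit.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, 'video_backbone.onnx')
    first = build_artifact(target, lambda: b'onnx-bytes')
    second = build_artifact(target, lambda: b'onnx-bytes')
    print(first, second)  # → True False
```

One caveat of this pattern, visible in the script as well: when the cached file exists, metadata that would have come from the export step (here `n_outputs`) must be hard-coded or stored alongside the artifact.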
@@ -988,7 +988,7 @@ class LatentDiffusion(DDPM):

     def instantiate_cond_stage(self, config: OmegaConf) -> None:
         """
-        Build the conditioning stage model. Frozen models are converted to FP16.
+        Build the conditioning stage model.

         Args:
             config: OmegaConf config describing the conditioning model to instantiate.
@@ -1000,7 +1000,6 @@ class LatentDiffusion(DDPM):
             self.cond_stage_model.train = disabled_train
             for param in self.cond_stage_model.parameters():
                 param.requires_grad = False
-            self.cond_stage_model.half()
         else:
             model = instantiate_from_config(config)
             self.cond_stage_model = model
@@ -1015,18 +1014,17 @@ class LatentDiffusion(DDPM):
         Returns:
             Conditioning embedding as a tensor (shape depends on cond model).
         """
-        with torch.cuda.amp.autocast(dtype=torch.float16):
-            if self.cond_stage_forward is None:
-                if hasattr(self.cond_stage_model, 'encode') and callable(
-                        self.cond_stage_model.encode):
-                    c = self.cond_stage_model.encode(c)
-                    if isinstance(c, DiagonalGaussianDistribution):
-                        c = c.mode()
-                else:
-                    c = self.cond_stage_model(c)
-            else:
-                assert hasattr(self.cond_stage_model, self.cond_stage_forward)
-                c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
+        if self.cond_stage_forward is None:
+            if hasattr(self.cond_stage_model, 'encode') and callable(
+                    self.cond_stage_model.encode):
+                c = self.cond_stage_model.encode(c)
+                if isinstance(c, DiagonalGaussianDistribution):
+                    c = c.mode()
+            else:
+                c = self.cond_stage_model(c)
+        else:
+            assert hasattr(self.cond_stage_model, self.cond_stage_forward)
+            c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
         return c

     def get_first_stage_encoding(
@@ -1959,7 +1957,6 @@ class LatentVisualDiffusion(LatentDiffusion):
             self.image_proj_model.train = disabled_train
             for param in self.image_proj_model.parameters():
                 param.requires_grad = False
-            self.image_proj_model.half()

     def _init_embedder(self, config: OmegaConf, freeze: bool = True) -> None:
         """
@@ -1975,7 +1972,6 @@ class LatentVisualDiffusion(LatentDiffusion):
             self.embedder.train = disabled_train
             for param in self.embedder.parameters():
                 param.requires_grad = False
-            self.embedder.half()

     def init_normalizers(self, normalize_config: OmegaConf,
                          dataset_stats: Mapping[str, Any]) -> None:
@@ -2179,9 +2175,8 @@ class LatentVisualDiffusion(LatentDiffusion):
                 (random_num < 3 * self.uncond_prob).float(), "n -> n 1 1 1")

             cond_img = input_mask * img
-            with torch.cuda.amp.autocast(dtype=torch.float16):
-                cond_img_emb = self.embedder(cond_img)
-                cond_img_emb = self.image_proj_model(cond_img_emb)
+            cond_img_emb = self.embedder(cond_img)
+            cond_img_emb = self.image_proj_model(cond_img_emb)

             if self.model.conditioning_key == 'hybrid':
                 if self.interp_mode:
@@ -2196,12 +2191,11 @@ class LatentVisualDiffusion(LatentDiffusion):
                         repeat=z.shape[2])
                 cond["c_concat"] = [img_cat_cond]

-            with torch.cuda.amp.autocast(dtype=torch.float16):
-                cond_action = self.action_projector(action)
-                cond_action_emb = self.agent_action_pos_emb + cond_action
-                # Get conditioning states
-                cond_state = self.state_projector(obs_state)
-                cond_state_emb = self.agent_state_pos_emb + cond_state
+            cond_action = self.action_projector(action)
+            cond_action_emb = self.agent_action_pos_emb + cond_action
+            # Get conditioning states
+            cond_state = self.state_projector(obs_state)
+            cond_state_emb = self.agent_state_pos_emb + cond_state

             if self.decision_making_only:
                 is_sim_mode = False
@@ -2463,17 +2457,7 @@ class DiffusionWrapper(pl.LightningModule):
         Returns:
             Output from the inner diffusion model (tensor or tuple, depending on the model).
         """
-        with torch.cuda.amp.autocast(dtype=torch.float16):
-            return self._forward_impl(x, x_action, x_state, t,
-                                      c_concat, c_crossattn, c_crossattn_action,
-                                      c_adm, s, mask, **kwargs)

-    def _forward_impl(
-        self,
-        x, x_action, x_state, t,
-        c_concat=None, c_crossattn=None, c_crossattn_action=None,
-        c_adm=None, s=None, mask=None, **kwargs,
-    ):
         if self.conditioning_key is None:
             out = self.diffusion_model(x, t)
         elif self.conditioning_key == 'concat':
@@ -688,6 +688,7 @@ class WMAModel(nn.Module):
         # Context precomputation cache
         self._ctx_cache_enabled = False
         self._ctx_cache = {}
+        self._trt_backbone = None  # TRT engine for video UNet backbone
         # Reusable CUDA stream for parallel state_unet / action_unet
         self._state_stream = torch.cuda.Stream()

@@ -700,6 +701,12 @@ class WMAModel(nn.Module):
         self.__dict__.update(state)
         self._state_stream = torch.cuda.Stream()

+    def load_trt_backbone(self, engine_path, n_hs_a=9):
+        """Load a TensorRT engine for the video UNet backbone."""
+        from unifolm_wma.trt_utils import TRTBackbone
+        self._trt_backbone = TRTBackbone(engine_path, n_hs_a=n_hs_a)
+        print(f">>> TRT backbone loaded from {engine_path}")
+
     def forward(self,
                 x: Tensor,
                 x_action: Tensor,
@@ -812,44 +819,50 @@ class WMAModel(nn.Module):
             fs_embed = fs_embed.repeat_interleave(repeats=t, dim=0)
             emb = emb + fs_embed

-        h = x.type(self.dtype)
-        adapter_idx = 0
-        hs = []
-        hs_a = []
-        for id, module in enumerate(self.input_blocks):
-            h = module(h, emb, context=context, batch_size=b)
-            if id == 0 and self.addition_attention:
-                h = self.init_attn(h, emb, context=context, batch_size=b)
-            # plug-in adapter features
-            if ((id + 1) % 3 == 0) and features_adapter is not None:
-                h = h + features_adapter[adapter_idx]
-                adapter_idx += 1
-            if id != 0:
-                if isinstance(module[0], Downsample):
-                    hs_a.append(
-                        rearrange(hs[-1], '(b t) c h w -> b t c h w', t=t))
-            hs.append(h)
-        hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
-
-        if features_adapter is not None:
-            assert len(
-                features_adapter) == adapter_idx, 'Wrong features_adapter'
-        h = self.middle_block(h, emb, context=context, batch_size=b)
-        hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
-
-        hs_out = []
-        for module in self.output_blocks:
-            h = torch.cat([h, hs.pop()], dim=1)
-            h = module(h, emb, context=context, batch_size=b)
-            if isinstance(module[-1], Upsample):
-                hs_a.append(
-                    rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
-            hs_out.append(h)
-        h = h.type(x.dtype)
-        hs_a.append(rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
-
-        y = self.out(h)
-        y = rearrange(y, '(b t) c h w -> b c t h w', b=b)
+        if self._trt_backbone is not None:
+            # TRT path: run backbone via TensorRT engine
+            h_in = x.type(self.dtype).contiguous()
+            y, hs_a = self._trt_backbone(h_in, emb.contiguous(), context.contiguous())
+        else:
+            # PyTorch path: original backbone
+            h = x.type(self.dtype)
+            adapter_idx = 0
+            hs = []
+            hs_a = []
+            for id, module in enumerate(self.input_blocks):
+                h = module(h, emb, context=context, batch_size=b)
+                if id == 0 and self.addition_attention:
+                    h = self.init_attn(h, emb, context=context, batch_size=b)
+                # plug-in adapter features
+                if ((id + 1) % 3 == 0) and features_adapter is not None:
+                    h = h + features_adapter[adapter_idx]
+                    adapter_idx += 1
+                if id != 0:
+                    if isinstance(module[0], Downsample):
+                        hs_a.append(
+                            rearrange(hs[-1], '(b t) c h w -> b t c h w', t=t))
+                hs.append(h)
+            hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
+
+            if features_adapter is not None:
+                assert len(
+                    features_adapter) == adapter_idx, 'Wrong features_adapter'
+            h = self.middle_block(h, emb, context=context, batch_size=b)
+            hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
+
+            hs_out = []
+            for module in self.output_blocks:
+                h = torch.cat([h, hs.pop()], dim=1)
+                h = module(h, emb, context=context, batch_size=b)
+                if isinstance(module[-1], Upsample):
+                    hs_a.append(
+                        rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
+                hs_out.append(h)
+            h = h.type(x.dtype)
+            hs_a.append(rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
+
+            y = self.out(h)
+            y = rearrange(y, '(b t) c h w -> b c t h w', b=b)

         if not self.base_model_gen_only:
             ba, _, _ = x_action.shape
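The forward change above keeps both backends behind one entry point: when a TRT engine has been attached via `load_trt_backbone`, inference routes through it; otherwise the original PyTorch blocks run unchanged, so the engine stays strictly opt-in. A minimal dependency-free sketch of that dispatch pattern (class and method names here are illustrative only, not the project's API):

```python
class Backbone:
    """Opt-in accelerated backend with the original path as fallback."""

    def __init__(self):
        self._engine = None  # mirrors self._trt_backbone = None

    def load_engine(self, engine):
        # Attached after construction, like load_trt_backbone()
        self._engine = engine

    def forward(self, x):
        if self._engine is not None:
            return self._engine(x)        # accelerated path
        return [v * 2 for v in x]         # reference path

net = Backbone()
baseline = net.forward([1, 2, 3])
# A stand-in "engine" that must agree numerically with the reference path
net.load_engine(lambda x: [v * 2 for v in x])
accelerated = net.forward([1, 2, 3])
print(baseline == accelerated)  # → True
```

The key property the diff relies on is that both branches produce the same output contract (`y` plus the `hs_a` skip features), so downstream code never needs to know which backend ran.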
src/unifolm_wma/trt_utils.py (new file, 151 lines)
@@ -0,0 +1,151 @@
+"""TensorRT acceleration utilities for the video UNet backbone."""
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+from unifolm_wma.modules.networks.wma_model import Downsample, Upsample
+
+
+class VideoBackboneForExport(nn.Module):
+    """Wrapper that isolates the video UNet backbone for ONNX export.
+
+    Takes already-preprocessed inputs (after context/time embedding prep)
+    and returns y + hs_a as a flat tuple.
+    """
+
+    def __init__(self, wma_model):
+        super().__init__()
+        self.input_blocks = wma_model.input_blocks
+        self.middle_block = wma_model.middle_block
+        self.output_blocks = wma_model.output_blocks
+        self.out = wma_model.out
+        self.addition_attention = wma_model.addition_attention
+        if self.addition_attention:
+            self.init_attn = wma_model.init_attn
+        self.dtype = wma_model.dtype
+
+    def forward(self, h, emb, context):
+        t = 16
+        b = 1
+
+        hs = []
+        hs_a = []
+        h = h.type(self.dtype)
+        for id, module in enumerate(self.input_blocks):
+            h = module(h, emb, context=context, batch_size=b)
+            if id == 0 and self.addition_attention:
+                h = self.init_attn(h, emb, context=context, batch_size=b)
+            if id != 0:
+                if isinstance(module[0], Downsample):
+                    hs_a.append(rearrange(hs[-1], '(b t) c h w -> b t c h w', t=t))
+            hs.append(h)
+        hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
+
+        h = self.middle_block(h, emb, context=context, batch_size=b)
+        hs_a.append(rearrange(h, '(b t) c h w -> b t c h w', t=t))
+
+        hs_out = []
+        for module in self.output_blocks:
+            h = torch.cat([h, hs.pop()], dim=1)
+            h = module(h, emb, context=context, batch_size=b)
+            if isinstance(module[-1], Upsample):
+                hs_a.append(rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
+            hs_out.append(h)
+        hs_a.append(rearrange(hs_out[-1], '(b t) c h w -> b t c h w', t=t))
+
+        y = self.out(h.type(h.dtype))
+        y = rearrange(y, '(b t) c h w -> b c t h w', b=b)
+        return (y, *hs_a)
+
+
+def export_backbone_onnx(model, save_path, context_len=95):
+    wma = model.model.diffusion_model
+    wrapper = VideoBackboneForExport(wma)
+    wrapper.eval().cuda()
+
+    for m in wrapper.modules():
+        if hasattr(m, 'checkpoint'):
+            m.checkpoint = False
+        if hasattr(m, 'use_checkpoint'):
+            m.use_checkpoint = False
+
+    import xformers.ops
+    _orig_mea = xformers.ops.memory_efficient_attention
+    def _sdpa_replacement(q, k, v, attn_bias=None, op=None, **kw):
+        return torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_bias)
+    xformers.ops.memory_efficient_attention = _sdpa_replacement
+
+    BT = 16
+    emb_dim = wma.model_channels * 4
+    ctx_dim = 1024
+    in_ch = wma.in_channels
+
+    dummy_h = torch.randn(BT, in_ch, 40, 64, device='cuda', dtype=torch.float32)
+    dummy_emb = torch.randn(BT, emb_dim, device='cuda', dtype=torch.float32)
+    dummy_ctx = torch.randn(BT, context_len, ctx_dim, device='cuda', dtype=torch.float32)
+
+    with torch.no_grad():
+        outputs = wrapper(dummy_h, dummy_emb, dummy_ctx)
+    n_outputs = len(outputs)
+    print(f">>> Backbone has {n_outputs} outputs (1 y + {n_outputs-1} hs_a)")
+    for i, o in enumerate(outputs):
+        print(f"    output[{i}]: {o.shape} {o.dtype}")
+
+    output_names = ['y'] + [f'hs_a_{i}' for i in range(n_outputs - 1)]
+
+    torch.onnx.export(
+        wrapper,
+        (dummy_h, dummy_emb, dummy_ctx),
+        save_path,
+        input_names=['h', 'emb', 'context'],
+        output_names=output_names,
+        opset_version=17,
+        do_constant_folding=True,
+    )
+    print(f">>> ONNX exported to {save_path}")
+    xformers.ops.memory_efficient_attention = _orig_mea
+    return n_outputs
+
+
+class TRTBackbone:
+    """TensorRT runtime wrapper for the video UNet backbone."""
+
+    def __init__(self, engine_path, n_hs_a=9):
+        import tensorrt as trt
+
+        self.logger = trt.Logger(trt.Logger.WARNING)
+        with open(engine_path, 'rb') as f:
+            runtime = trt.Runtime(self.logger)
+            self.engine = runtime.deserialize_cuda_engine(f.read())
+        self.context = self.engine.create_execution_context()
+        self.n_hs_a = n_hs_a
+
+        import numpy as np
+        self.output_buffers = {}
+        for i in range(self.engine.num_io_tensors):
+            name = self.engine.get_tensor_name(i)
+            if self.engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
+                shape = self.engine.get_tensor_shape(name)
+                np_dtype = trt.nptype(self.engine.get_tensor_dtype(name))
+                buf = torch.empty(list(shape), dtype=torch.from_numpy(np.empty(0, dtype=np_dtype)).dtype, device='cuda')
+                self.output_buffers[name] = buf
+                print(f"    TRT output '{name}': {list(shape)} {buf.dtype}")
+
+    def __call__(self, h, emb, context):
+        import tensorrt as trt
+        for name, tensor in [('h', h), ('emb', emb), ('context', context)]:
+            expected_dtype = trt.nptype(self.engine.get_tensor_dtype(name))
+            torch_expected = torch.from_numpy(__import__('numpy').empty(0, dtype=expected_dtype)).dtype
+            if tensor.dtype != torch_expected:
+                tensor = tensor.to(torch_expected)
+            self.context.set_tensor_address(name, tensor.contiguous().data_ptr())
+
+        for name, buf in self.output_buffers.items():
+            self.context.set_tensor_address(name, buf.data_ptr())
+
+        self.context.execute_async_v3(torch.cuda.current_stream().cuda_stream)
+        torch.cuda.synchronize()
+
+        y = self.output_buffers['y']
+        hs_a = [self.output_buffers[f'hs_a_{i}'] for i in range(self.n_hs_a)]
+        return y, hs_a
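`export_backbone_onnx` swaps `xformers.ops.memory_efficient_attention` for PyTorch SDPA while tracing, then restores the original afterwards, so the exported graph contains only ONNX-friendly ops. That save/replace/restore idiom can be packaged as a context manager; the following is a generic dependency-free sketch (the `Ops` class and attribute names are illustrative, not code from the repo):

```python
import contextlib

@contextlib.contextmanager
def patched(obj, name, replacement):
    """Temporarily replace an attribute, restoring it even on error —
    the same dance export_backbone_onnx performs around torch.onnx.export."""
    original = getattr(obj, name)
    setattr(obj, name, replacement)
    try:
        yield
    finally:
        setattr(obj, name, original)

class Ops:
    # Stand-in for a library's fused attention kernel
    attention = staticmethod(lambda q: f"fused({q})")

with patched(Ops, 'attention', lambda q: f"plain({q})"):
    inside = Ops.attention('q')   # exporter sees the export-friendly op
outside = Ops.attention('q')      # original fused op is restored
print(inside, outside)  # → plain(q) fused(q)
```

Using `try/finally` (rather than the bare assign/restore in the diff) guarantees the patch is undone even if the export raises, which avoids leaking the slower replacement into subsequent inference.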
@@ -1,16 +1,13 @@
|
|||||||
2026-02-19 18:55:32.160020: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
|
+2026-02-18 19:01:56.891895: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 18:55:32.207538: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-18 19:01:56.940243: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 18:55:32.207581: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-18 19:01:56.940285: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 18:55:32.208613: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-18 19:01:56.941395: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 18:55:32.215249: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-18 19:01:56.948327: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 18:55:33.121466: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-18 19:01:57.870809: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
 >>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
 >>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -29,16 +26,124 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
+TRT output 'y': [1, 4, 16, 40, 64] torch.float32
+TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
+TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
+>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-0%|          | 0/11 [00:00<?, ?it/s]
+0%|          | 0/11 [00:00<?, ?it/s][02/18/2026-19:02:10] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
 9%|▉         | 1/11 [00:17<02:51, 17.15s/it]>>> Step 0: generating actions ...
 >>> Step 0: interacting with world model ...
+>>>>>>>>>>>>>>>>>>>>>>>>
+>>> Step 1: generating actions ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
+DEBUG:PIL.Image:Importing PngImagePlugin
+DEBUG:PIL.Image:Importing PpmImagePlugin
+DEBUG:PIL.Image:Importing PsdImagePlugin
+DEBUG:PIL.Image:Importing QoiImagePlugin
+DEBUG:PIL.Image:Importing SgiImagePlugin
+DEBUG:PIL.Image:Importing SpiderImagePlugin
+DEBUG:PIL.Image:Importing SunImagePlugin
+DEBUG:PIL.Image:Importing TgaImagePlugin
+DEBUG:PIL.Image:Importing TiffImagePlugin
+DEBUG:PIL.Image:Importing WebPImagePlugin
+DEBUG:PIL.Image:Importing WmfImagePlugin
+DEBUG:PIL.Image:Importing XbmImagePlugin
+DEBUG:PIL.Image:Importing XpmImagePlugin
+DEBUG:PIL.Image:Importing XVThumbImagePlugin
 18%|█▊        | 2/11 [00:33<02:31, 16.87s/it]
@@ -69,6 +174,6 @@ DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 >>> Step 6: generating actions ...
 >>> Step 6: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
 >>> Step 7: generating actions ...
 >>> Step 7: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_g1_pack_camera/case1/unitree_g1_pack_camera_case1.mp4",
 "pred_video": "unitree_g1_pack_camera/case1/output/inference/0_full_fs6.mp4",
-"psnr": 32.34126103448495
+"psnr": 35.615362167470806
 }
@@ -20,6 +20,5 @@ dataset="unitree_g1_pack_camera"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,16 +1,13 @@
-2026-02-19 19:00:05.944067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+2026-02-18 19:05:45.956647: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:00:05.991354: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-18 19:05:46.004149: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:00:05.991392: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-18 19:05:46.004193: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:00:05.992414: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-18 19:05:46.005265: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:00:05.999050: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-18 19:05:46.012074: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:00:06.916175: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-18 19:05:46.932966: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
 >>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
 >>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -29,16 +26,124 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
+TRT output 'y': [1, 4, 16, 40, 64] torch.float32
+TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
+TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
+>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-0%|          | 0/11 [00:00<?, ?it/s]
+0%|          | 0/11 [00:00<?, ?it/s][02/18/2026-19:05:59] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
 9%|▉         | 1/11 [00:16<02:47, 16.71s/it]>>> Step 0: generating actions ...
 >>> Step 0: interacting with world model ...
+>>>>>>>>>>>>>>>>>>>>>>>>
+>>> Step 1: generating actions ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
+DEBUG:PIL.Image:Importing PngImagePlugin
+DEBUG:PIL.Image:Importing PpmImagePlugin
+DEBUG:PIL.Image:Importing PsdImagePlugin
+DEBUG:PIL.Image:Importing QoiImagePlugin
+DEBUG:PIL.Image:Importing SgiImagePlugin
+DEBUG:PIL.Image:Importing SpiderImagePlugin
+DEBUG:PIL.Image:Importing SunImagePlugin
+DEBUG:PIL.Image:Importing TgaImagePlugin
+DEBUG:PIL.Image:Importing TiffImagePlugin
+DEBUG:PIL.Image:Importing WebPImagePlugin
+DEBUG:PIL.Image:Importing WmfImagePlugin
+DEBUG:PIL.Image:Importing XbmImagePlugin
+DEBUG:PIL.Image:Importing XpmImagePlugin
+DEBUG:PIL.Image:Importing XVThumbImagePlugin
 18%|█▊        | 2/11 [00:33<02:30, 16.75s/it]
@@ -69,6 +174,6 @@ DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 >>> Step 6: generating actions ...
 >>> Step 6: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
 >>> Step 7: generating actions ...
 >>> Step 7: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_g1_pack_camera/case2/unitree_g1_pack_camera_case2.mp4",
 "pred_video": "unitree_g1_pack_camera/case2/output/inference/50_full_fs6.mp4",
-"psnr": 37.49178506869336
+"psnr": 34.61979248212279
 }
@@ -20,6 +20,5 @@ dataset="unitree_g1_pack_camera"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,16 +1,13 @@
-2026-02-19 19:04:41.036634: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+2026-02-18 19:09:35.113634: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:04:41.084414: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-18 19:09:35.161428: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:04:41.084452: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-18 19:09:35.161474: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:04:41.085481: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-18 19:09:35.162551: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:04:41.092287: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-18 19:09:35.169325: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:04:42.000614: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-18 19:09:36.089250: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
 >>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
 >>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -29,46 +26,121 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
+TRT output 'y': [1, 4, 16, 40, 64] torch.float32
+TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
+TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
+>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-0%|          | 0/11 [00:00<?, ?it/s]
+0%|          | 0/11 [00:00<?, ?it/s][02/18/2026-19:09:49] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
 9%|▉         | 1/11 [00:16<02:45, 16.53s/it]>>> Step 0: generating actions ...
 >>> Step 0: interacting with world model ...
-36%|███▋      | 4/11 [01:34<02:44, 23.47s/it]
-45%|████▌     | 5/11 [01:57<02:20, 23.45s/it]
-55%|█████▍    | 6/11 [02:21<01:57, 23.44s/it]
-64%|██████▎   | 7/11 [02:44<01:33, 23.41s/it]
-73%|███████▎  | 8/11 [03:07<01:10, 23.38s/it]
-82%|████████▏ | 9/11 [03:31<00:46, 23.36s/it]
-91%|█████████ | 10/11 [03:54<00:23, 23.35s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.33s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.42s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 6: generating actions ...
->>> Step 6: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 7: generating actions ...
->>> Step 7: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
+>>>>>>>>>>>>>>>>>>>>>>>>
+>>> Step 1: generating actions ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
||||||
|
DEBUG:PIL.Image:Importing PngImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PsdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PngImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PsdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing QoiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing QoiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SgiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SgiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SpiderImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SpiderImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SunImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SunImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TgaImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TgaImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TiffImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WebPImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TiffImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WebPImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WmfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WmfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XbmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XbmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XVThumbImagePlugin
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_g1_pack_camera/case3/unitree_g1_pack_camera_case3.mp4",
 "pred_video": "unitree_g1_pack_camera/case3/output/inference/100_full_fs6.mp4",
-"psnr": 29.88155122131729
+"psnr": 37.034952654534486
 }
@@ -20,6 +20,5 @@ dataset="unitree_g1_pack_camera"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,74 +0,0 @@
-2026-02-19 19:09:16.122268: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:09:16.170290: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:09:16.170331: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:09:16.171349: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:09:16.177993: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:09:17.087425: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%|          | 0/11 [00:00<?, ?it/s]
-9%|▉         | 1/11 [00:24<04:01, 24.17s/it]
-18%|█▊        | 2/11 [00:47<03:32, 23.62s/it]
-27%|██▋       | 3/11 [01:10<03:07, 23.49s/it]
-36%|███▋      | 4/11 [01:34<02:44, 23.46s/it]
-45%|████▌     | 5/11 [01:57<02:20, 23.42s/it]
-55%|█████▍    | 6/11 [02:20<01:56, 23.40s/it]
-64%|██████▎   | 7/11 [02:44<01:33, 23.38s/it]
-73%|███████▎  | 8/11 [03:07<01:10, 23.36s/it]
-82%|████████▏ | 9/11 [03:30<00:46, 23.35s/it]
-91%|█████████ | 10/11 [03:54<00:23, 23.34s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.33s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.40s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 6: generating actions ...
->>> Step 6: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 7: generating actions ...
->>> Step 7: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_g1_pack_camera/case4/unitree_g1_pack_camera_case4.mp4",
 "pred_video": "unitree_g1_pack_camera/case4/output/inference/200_full_fs6.mp4",
-"psnr": 35.62512454155058
+"psnr": 31.43390896360405
 }
@@ -20,6 +20,5 @@ dataset="unitree_g1_pack_camera"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,16 +1,24 @@
-2026-02-19 19:13:51.554194: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:13:51.601580: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:13:51.601622: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:13:51.602646: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:13:51.609297: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-10 15:38:28.973314: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+2026-02-10 15:38:29.023024: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-10 15:38:29.023070: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-10 15:38:29.024393: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-10 15:38:29.031901: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:13:52.517676: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-10 15:38:29.955454: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
+INFO:mainlogger:LatentVisualDiffusion: Running in v-prediction mode
+INFO:unifolm_wma.models.diffusion_head.conditional_unet1d:number of parameters: 5.010531e+08
+INFO:unifolm_wma.models.diffusion_head.conditional_unet1d:number of parameters: 5.010531e+08
+AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
+INFO:root:Loaded ViT-H-14 model config.
+DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): hf-mirror.com:443
+DEBUG:urllib3.connectionpool:https://hf-mirror.com:443 "HEAD /laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin HTTP/1.1" 302 0
+INFO:root:Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
+INFO:root:Loaded ViT-H-14 model config.
+DEBUG:urllib3.connectionpool:https://hf-mirror.com:443 "HEAD /laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin HTTP/1.1" 302 0
+INFO:root:Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
+>>> model checkpoint loaded.
+>>> Load pre-trained model ...
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -28,15 +36,63 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: data stats loaded.
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 
-0%|          | 0/8 [00:00<?, ?it/s]
 0%|          | 0/8 [00:00<?, ?it/s]>>> Step 0: generating actions ...
+>>> Step 0: interacting with world model ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
+DEBUG:PIL.Image:Importing PngImagePlugin
+DEBUG:PIL.Image:Importing PpmImagePlugin
+DEBUG:PIL.Image:Importing PsdImagePlugin
+DEBUG:PIL.Image:Importing QoiImagePlugin
+DEBUG:PIL.Image:Importing SgiImagePlugin
+DEBUG:PIL.Image:Importing SpiderImagePlugin
+DEBUG:PIL.Image:Importing SunImagePlugin
+DEBUG:PIL.Image:Importing TgaImagePlugin
+DEBUG:PIL.Image:Importing TiffImagePlugin
+DEBUG:PIL.Image:Importing WebPImagePlugin
+DEBUG:PIL.Image:Importing WmfImagePlugin
+DEBUG:PIL.Image:Importing XbmImagePlugin
+DEBUG:PIL.Image:Importing XpmImagePlugin
+DEBUG:PIL.Image:Importing XVThumbImagePlugin
 
 12%|█▎        | 1/8 [01:14<08:41, 74.51s/it]
 25%|██▌       | 2/8 [02:29<07:28, 74.79s/it]
@@ -60,6 +116,6 @@ DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 >>> Step 4: generating actions ...
 >>> Step 4: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
 >>> Step 5: generating actions ...
 >>> Step 5: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_cleanup_pencils/case1/unitree_z1_dual_arm_cleanup_pencils_case1.mp4",
 "pred_video": "unitree_z1_dual_arm_cleanup_pencils/case1/output/inference/0_full_fs4.mp4",
-"psnr": 38.269577028444445
+"psnr": 47.911564449209735
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_cleanup_pencils"
 --n_iter 8 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,65 +0,0 @@
-2026-02-19 19:17:16.282875: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:17:16.330519: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:17:16.330561: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:17:16.331631: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:17:16.338413: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:17:17.250653: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%|          | 0/8 [00:00<?, ?it/s]
-12%|█▎        | 1/8 [00:24<02:48, 24.06s/it]
-25%|██▌       | 2/8 [00:47<02:21, 23.61s/it]
-38%|███▊      | 3/8 [01:10<01:57, 23.47s/it]
-50%|█████     | 4/8 [01:34<01:33, 23.44s/it]
-62%|██████▎   | 5/8 [01:57<01:10, 23.41s/it]
-75%|███████▌  | 6/8 [02:20<00:46, 23.37s/it]
-88%|████████▊ | 7/8 [02:44<00:23, 23.35s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.34s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.42s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_cleanup_pencils/case2/unitree_z1_dual_arm_cleanup_pencils_case2.mp4",
 "pred_video": "unitree_z1_dual_arm_cleanup_pencils/case2/output/inference/50_full_fs4.mp4",
-"psnr": 44.50028075962896
+"psnr": 48.344571927558974
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_cleanup_pencils"
 --n_iter 8 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,65 +0,0 @@
-2026-02-19 19:20:40.444703: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:20:40.492237: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:20:40.492278: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:20:40.493360: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:20:40.500130: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:20:41.414718: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%|          | 0/8 [00:00<?, ?it/s]
-12%|█▎        | 1/8 [00:24<02:48, 24.06s/it]
-25%|██▌       | 2/8 [00:47<02:21, 23.58s/it]
-38%|███▊      | 3/8 [01:10<01:57, 23.45s/it]
-50%|█████     | 4/8 [01:33<01:33, 23.41s/it]
-62%|██████▎   | 5/8 [01:57<01:10, 23.38s/it]
-75%|███████▌  | 6/8 [02:20<00:46, 23.37s/it]
-88%|████████▊ | 7/8 [02:43<00:23, 23.36s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.34s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.41s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_cleanup_pencils/case3/unitree_z1_dual_arm_cleanup_pencils_case3.mp4",
 "pred_video": "unitree_z1_dual_arm_cleanup_pencils/case3/output/inference/100_full_fs4.mp4",
-"psnr": 32.29959078097713
+"psnr": 41.152374490134825
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_cleanup_pencils"
 --n_iter 8 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,65 +0,0 @@
-2026-02-19 19:24:05.230366: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:24:05.278058: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:24:05.278100: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:24:05.279133: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:24:05.285789: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:24:06.199101: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%| | 0/8 [00:00<?, ?it/s]
-12%|█▎ | 1/8 [00:24<02:48, 24.06s/it]
-25%|██▌ | 2/8 [00:47<02:21, 23.56s/it]
-38%|███▊ | 3/8 [01:10<01:57, 23.45s/it]
-50%|█████ | 4/8 [01:33<01:33, 23.43s/it]
-62%|██████▎ | 5/8 [01:57<01:10, 23.38s/it]
-75%|███████▌ | 6/8 [02:20<00:46, 23.35s/it]
-88%|████████▊ | 7/8 [02:43<00:23, 23.33s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.33s/it]
-100%|██████████| 8/8 [03:07<00:00, 23.40s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_cleanup_pencils/case4/unitree_z1_dual_arm_cleanup_pencils_case4.mp4",
 "pred_video": "unitree_z1_dual_arm_cleanup_pencils/case4/output/inference/200_full_fs4.mp4",
-"psnr": 45.051241961122535
+"psnr": 46.025723557253855
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_cleanup_pencils"
 --n_iter 8 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,62 +0,0 @@
-2026-02-19 19:27:29.317502: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:27:29.365030: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:27:29.365079: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:27:29.366111: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:27:29.372733: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:27:30.291220: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%| | 0/7 [00:00<?, ?it/s]
-14%|█▍ | 1/7 [00:24<02:24, 24.09s/it]
-29%|██▊ | 2/7 [00:47<01:57, 23.59s/it]
-43%|████▎ | 3/7 [01:10<01:33, 23.46s/it]
-57%|█████▋ | 4/7 [01:33<01:10, 23.42s/it]
-71%|███████▏ | 5/7 [01:57<00:46, 23.39s/it]
-86%|████████▌ | 6/7 [02:20<00:23, 23.37s/it]
-100%|██████████| 7/7 [02:43<00:00, 23.35s/it]
-100%|██████████| 7/7 [02:43<00:00, 23.42s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox/case1/unitree_z1_dual_arm_stackbox_case1.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox/case1/output/inference/5_full_fs4.mp4",
-"psnr": 42.717688631296596
+"psnr": 44.3480149502738
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox"
 --n_iter 7 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,62 +0,0 @@
-2026-02-19 19:30:30.058862: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:30:30.106200: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:30:30.106243: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:30:30.107276: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:30:30.113917: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:30:31.026240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%| | 0/7 [00:00<?, ?it/s]
-14%|█▍ | 1/7 [00:24<02:24, 24.09s/it]
-29%|██▊ | 2/7 [00:47<01:58, 23.60s/it]
-43%|████▎ | 3/7 [01:10<01:33, 23.48s/it]
-57%|█████▋ | 4/7 [01:34<01:10, 23.43s/it]
-71%|███████▏ | 5/7 [01:57<00:46, 23.43s/it]
-86%|████████▌ | 6/7 [02:20<00:23, 23.42s/it]
-100%|██████████| 7/7 [02:44<00:00, 23.40s/it]
-100%|██████████| 7/7 [02:44<00:00, 23.46s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox/case2/unitree_z1_dual_arm_stackbox_case2.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox/case2/output/inference/15_full_fs4.mp4",
-"psnr": 44.90750363879194
+"psnr": 39.867728254007716
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox"
 --n_iter 7 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,62 +0,0 @@
-2026-02-19 19:33:31.235859: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:33:31.283866: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:33:31.283908: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:33:31.284941: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:33:31.291610: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:33:32.199716: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%| | 0/7 [00:00<?, ?it/s]
-14%|█▍ | 1/7 [00:24<02:24, 24.10s/it]
-29%|██▊ | 2/7 [00:47<01:58, 23.62s/it]
-43%|████▎ | 3/7 [01:10<01:34, 23.51s/it]
-57%|█████▋ | 4/7 [01:34<01:10, 23.46s/it]
-71%|███████▏ | 5/7 [01:57<00:46, 23.44s/it]
-86%|████████▌ | 6/7 [02:20<00:23, 23.43s/it]
-100%|██████████| 7/7 [02:44<00:00, 23.40s/it]
-100%|██████████| 7/7 [02:44<00:00, 23.47s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox/case3/unitree_z1_dual_arm_stackbox_case3.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox/case3/output/inference/25_full_fs4.mp4",
-"psnr": 39.63695040491171
+"psnr": 39.19101039445159
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox"
 --n_iter 7 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,62 +0,0 @@
-2026-02-19 19:36:32.251051: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:36:32.298464: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:36:32.298506: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:36:32.299538: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:36:32.306168: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:36:33.213503: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-0%| | 0/7 [00:00<?, ?it/s]
-14%|█▍ | 1/7 [00:24<02:24, 24.05s/it]
-29%|██▊ | 2/7 [00:47<01:57, 23.58s/it]
-43%|████▎ | 3/7 [01:10<01:33, 23.45s/it]
-57%|█████▋ | 4/7 [01:33<01:10, 23.43s/it]
-71%|███████▏ | 5/7 [01:57<00:46, 23.41s/it]
-86%|████████▌ | 6/7 [02:20<00:23, 23.38s/it]
-100%|██████████| 7/7 [02:43<00:00, 23.35s/it]
-100%|██████████| 7/7 [02:43<00:00, 23.42s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox/case4/unitree_z1_dual_arm_stackbox_case4.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox/case4/output/inference/35_full_fs4.mp4",
-"psnr": 42.34177660061245
+"psnr": 40.29563315341769
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox"
 --n_iter 7 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,16 +1,13 @@
-2026-02-19 19:39:32.908698: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:39:32.956378: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:39:32.956417: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:39:32.957459: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:39:32.964104: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-18 18:49:49.117856: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+2026-02-18 18:49:49.165270: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-18 18:49:49.165322: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-18 18:49:49.166382: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-18 18:49:49.173299: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:39:33.875854: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-18 18:49:50.090214: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
 >>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
 >>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -29,16 +26,77 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
+TRT output 'y': [1, 4, 16, 40, 64] torch.float32
+TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
+TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
+>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 
-0%| | 0/11 [00:00<?, ?it/s]
+0%| | 0/11 [00:00<?, ?it/s][02/18/2026-18:50:03] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.
 
 9%|▉ | 1/11 [00:15<02:38, 15.88s/it]>>> Step 0: generating actions ...
 >>> Step 0: interacting with world model ...
+>>>>>>>>>>>>>>>>>>>>>>>>
+>>> Step 1: generating actions ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing IcoImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing ImImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing ImtImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing IptcImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing JpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing McIdasImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MicImagePlugin
|
||||||
|
DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
|
||||||
|
DEBUG:PIL.Image:Importing MpegImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MpoImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing MspImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PalmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PcxImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PdfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PixarImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PngImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PpmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing PsdImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing QoiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SgiImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SpiderImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing SunImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TgaImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing TiffImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WebPImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing WmfImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XbmImagePlugin
|
||||||
|
DEBUG:PIL.Image:Importing XpmImagePlugin
|
||||||
DEBUG:PIL.Image:Importing XVThumbImagePlugin
|
DEBUG:PIL.Image:Importing XVThumbImagePlugin
|
||||||
|
|
||||||
18%|█▊ | 2/11 [00:31<02:21, 15.71s/it]
|
18%|█▊ | 2/11 [00:31<02:21, 15.71s/it]
|
||||||
@@ -69,6 +127,6 @@ DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
|
|||||||
>>> Step 6: generating actions ...
|
>>> Step 6: generating actions ...
|
||||||
>>> Step 6: interacting with world model ...
|
>>> Step 6: interacting with world model ...
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
>>> Step 7: generating actions ...
|
>>> Step 7: generating actions ...
|
||||||
>>> Step 7: interacting with world model ...
|
>>> Step 7: interacting with world model ...
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
>>>>>>>>>>>>>>>>>>>>>>>>
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox_v2/case1/unitree_z1_dual_arm_stackbox_v2_case1.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox_v2/case1/output/inference/5_full_fs4.mp4",
-"psnr": 26.68301835085306
+"psnr": 27.62636266067224
 }
@@ -1,74 +0,0 @@
-2026-02-19 19:44:07.724109: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:44:07.771461: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:44:07.771505: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:44:07.772537: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:44:07.779172: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:44:08.688975: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-  0%| | 0/11 [00:00<?, ?it/s]
-  9%|▉ | 1/11 [00:24<04:00, 24.03s/it]
- 18%|█▊ | 2/11 [00:47<03:31, 23.54s/it]
- 27%|██▋ | 3/11 [01:10<03:07, 23.42s/it]
- 36%|███▋ | 4/11 [01:33<02:43, 23.40s/it]
- 45%|████▌ | 5/11 [01:57<02:20, 23.39s/it]
- 55%|█████▍ | 6/11 [02:20<01:56, 23.37s/it]
- 64%|██████▎ | 7/11 [02:43<01:33, 23.34s/it]
- 73%|███████▎ | 8/11 [03:07<01:09, 23.32s/it]
- 82%|████████▏ | 9/11 [03:30<00:46, 23.31s/it]
- 91%|█████████ | 10/11 [03:53<00:23, 23.29s/it]
-100%|██████████| 11/11 [04:16<00:00, 23.28s/it]
-100%|██████████| 11/11 [04:16<00:00, 23.36s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 6: generating actions ...
->>> Step 6: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 7: generating actions ...
->>> Step 7: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox_v2/case2/unitree_z1_dual_arm_stackbox_v2_case2.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox_v2/case2/output/inference/15_full_fs4.mp4",
-"psnr": 27.46347145461597
+"psnr": 33.90444714332389
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox_v2"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,74 +0,0 @@
-2026-02-19 19:48:42.460586: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:48:42.508096: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:48:42.508140: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:48:42.509152: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:48:42.515865: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:48:43.425699: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-  0%| | 0/11 [00:00<?, ?it/s]
-  9%|▉ | 1/11 [00:24<04:00, 24.07s/it]
- 18%|█▊ | 2/11 [00:47<03:32, 23.62s/it]
- 27%|██▋ | 3/11 [01:10<03:08, 23.51s/it]
- 36%|███▋ | 4/11 [01:34<02:44, 23.46s/it]
- 45%|████▌ | 5/11 [01:57<02:20, 23.42s/it]
- 55%|█████▍ | 6/11 [02:20<01:56, 23.39s/it]
- 64%|██████▎ | 7/11 [02:44<01:33, 23.37s/it]
- 73%|███████▎ | 8/11 [03:07<01:10, 23.36s/it]
- 82%|████████▏ | 9/11 [03:30<00:46, 23.36s/it]
- 91%|█████████ | 10/11 [03:54<00:23, 23.36s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.36s/it]
-100%|██████████| 11/11 [04:17<00:00, 23.42s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 1: generating actions ...
->>> Step 1: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 2: generating actions ...
->>> Step 2: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 3: generating actions ...
->>> Step 3: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 4: generating actions ...
->>> Step 4: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 5: generating actions ...
->>> Step 5: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 6: generating actions ...
->>> Step 6: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
->>> Step 7: generating actions ...
->>> Step 7: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox_v2/case3/unitree_z1_dual_arm_stackbox_v2_case3.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox_v2/case3/output/inference/25_full_fs4.mp4",
-"psnr": 28.604047286947512
+"psnr": 34.50192428908007
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox_v2"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,16 +1,13 @@
-2026-02-19 19:53:17.574354: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:53:17.621335: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:53:17.621388: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:53:17.622415: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:53:17.629050: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+2026-02-18 18:54:56.403136: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+2026-02-18 18:54:56.451144: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
+2026-02-18 18:54:56.451189: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
+2026-02-18 18:54:56.452312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
+2026-02-18 18:54:56.459281: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
 To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:53:18.537233: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2026-02-18 18:54:57.381032: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
 Global seed set to 123
 >>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
 >>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
 INFO:root:***** Configing Data *****
 >>> unitree_z1_stackbox: 1 data samples loaded.
 >>> unitree_z1_stackbox: data stats loaded.
@@ -29,16 +26,124 @@ INFO:root:***** Configing Data *****
 >>> unitree_g1_pack_camera: normalizer initiated.
 >>> Dataset is successfully loaded ...
 ✓ KV fused: 66 attention layers
+TRT output 'y': [1, 4, 16, 40, 64] torch.float32
+TRT output 'hs_a_0': [1, 16, 320, 40, 64] torch.float32
+TRT output 'hs_a_1': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_2': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_3': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_4': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_5': [1, 16, 1280, 5, 8] torch.float32
+TRT output 'hs_a_6': [1, 16, 1280, 10, 16] torch.float32
+TRT output 'hs_a_7': [1, 16, 640, 20, 32] torch.float32
+TRT output 'hs_a_8': [1, 16, 320, 40, 64] torch.float32
+>>> TRT backbone loaded from /home/qhy/unifolm-world-model-action/scripts/evaluation/../../trt_engines/video_backbone.engine
 >>> Generate 16 frames under each generation ...
 DEBUG:h5py._conv:Creating converter from 3 to 5
 DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
 DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
 DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

-  0%| | 0/11 [00:00<?, ?it/s]
+  0%| | 0/11 [00:00<?, ?it/s][02/18/2026-18:55:10] [TRT] [W] Using default stream in enqueueV3() may lead to performance issues due to additional calls to cudaStreamSynchronize() by TensorRT to ensure correct synchronization. Please use non-default stream instead.

   9%|▉ | 1/11 [00:16<02:45, 16.53s/it]>>> Step 0: generating actions ...
 >>> Step 0: interacting with world model ...
+>>>>>>>>>>>>>>>>>>>>>>>>
+>>> Step 1: generating actions ...
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BlpImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing BmpImagePlugin
+DEBUG:PIL.Image:Importing BufrStubImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing CurImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DcxImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing DdsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing EpsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FitsStubImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FliImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Importing FpxImagePlugin
+DEBUG:PIL.Image:Image: failed to import FpxImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing FtexImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GbrImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing GifImagePlugin
+DEBUG:PIL.Image:Importing GribStubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing Hdf5StubImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcnsImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing IcoImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing ImtImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing IptcImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing JpegImagePlugin
+DEBUG:PIL.Image:Importing Jpeg2KImagePlugin
+DEBUG:PIL.Image:Importing McIdasImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Importing MicImagePlugin
+DEBUG:PIL.Image:Image: failed to import MicImagePlugin: No module named 'olefile'
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpegImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MpoImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing MspImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PalmImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcdImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PcxImagePlugin
+DEBUG:PIL.Image:Importing PdfImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
+DEBUG:PIL.Image:Importing PixarImagePlugin
+DEBUG:PIL.Image:Importing PngImagePlugin
+DEBUG:PIL.Image:Importing PpmImagePlugin
+DEBUG:PIL.Image:Importing PsdImagePlugin
+DEBUG:PIL.Image:Importing PngImagePlugin
+DEBUG:PIL.Image:Importing PpmImagePlugin
+DEBUG:PIL.Image:Importing PsdImagePlugin
+DEBUG:PIL.Image:Importing QoiImagePlugin
+DEBUG:PIL.Image:Importing QoiImagePlugin
+DEBUG:PIL.Image:Importing SgiImagePlugin
+DEBUG:PIL.Image:Importing SgiImagePlugin
+DEBUG:PIL.Image:Importing SpiderImagePlugin
+DEBUG:PIL.Image:Importing SpiderImagePlugin
+DEBUG:PIL.Image:Importing SunImagePlugin
+DEBUG:PIL.Image:Importing SunImagePlugin
+DEBUG:PIL.Image:Importing TgaImagePlugin
+DEBUG:PIL.Image:Importing TgaImagePlugin
+DEBUG:PIL.Image:Importing TiffImagePlugin
+DEBUG:PIL.Image:Importing WebPImagePlugin
+DEBUG:PIL.Image:Importing TiffImagePlugin
+DEBUG:PIL.Image:Importing WebPImagePlugin
+DEBUG:PIL.Image:Importing WmfImagePlugin
+DEBUG:PIL.Image:Importing WmfImagePlugin
+DEBUG:PIL.Image:Importing XbmImagePlugin
+DEBUG:PIL.Image:Importing XbmImagePlugin
+DEBUG:PIL.Image:Importing XpmImagePlugin
+DEBUG:PIL.Image:Importing XpmImagePlugin
+DEBUG:PIL.Image:Importing XVThumbImagePlugin
 DEBUG:PIL.Image:Importing XVThumbImagePlugin

 18%|█▊ | 2/11 [00:33<02:28, 16.52s/it]
@@ -69,6 +174,6 @@ DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
 >>> Step 6: generating actions ...
 >>> Step 6: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
 >>> Step 7: generating actions ...
 >>> Step 7: interacting with world model ...
 >>>>>>>>>>>>>>>>>>>>>>>>
@@ -1,5 +1,5 @@
 {
 "gt_video": "unitree_z1_dual_arm_stackbox_v2/case4/unitree_z1_dual_arm_stackbox_v2_case4.mp4",
 "pred_video": "unitree_z1_dual_arm_stackbox_v2/case4/output/inference/35_full_fs4.mp4",
-"psnr": 25.578757174083307
+"psnr": 25.49270910031428
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_dual_arm_stackbox_v2"
 --n_iter 11 \
 --timestep_spacing 'uniform_trailing' \
 --guidance_rescale 0.7 \
---perframe_ae \
---fast_policy_no_decode
+--perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,77 +0,0 @@
-2026-02-19 19:57:52.488339: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-2026-02-19 19:57:52.536176: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
-2026-02-19 19:57:52.536222: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
-2026-02-19 19:57:52.537285: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
-2026-02-19 19:57:52.544051: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2026-02-19 19:57:53.469912: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-Global seed set to 123
->>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
->>> Prepared model loaded.
->>> Diffusion backbone (model.model) converted to FP16.
->>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
->>> Encoders (cond_stage_model, embedder) converted to FP16.
-INFO:root:***** Configing Data *****
->>> unitree_z1_stackbox: 1 data samples loaded.
->>> unitree_z1_stackbox: data stats loaded.
->>> unitree_z1_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox: data stats loaded.
->>> unitree_z1_dual_arm_stackbox: normalizer initiated.
->>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
->>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
->>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
->>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
->>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
->>> unitree_g1_pack_camera: 1 data samples loaded.
->>> unitree_g1_pack_camera: data stats loaded.
->>> unitree_g1_pack_camera: normalizer initiated.
->>> Dataset is successfully loaded ...
-✓ KV fused: 66 attention layers
->>> Generate 16 frames under each generation ...
-DEBUG:h5py._conv:Creating converter from 3 to 5
-DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
-DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
-DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096
-
-  0%| | 0/12 [00:00<?, ?it/s]
-  8%|▊ | 1/12 [00:24<04:24, 24.06s/it]
- 17%|█▋ | 2/12 [00:47<03:55, 23.56s/it]
- 25%|██▌ | 3/12 [01:10<03:31, 23.46s/it]
- 33%|███▎ | 4/12 [01:33<03:07, 23.43s/it]
- 42%|████▏ | 5/12 [01:57<02:43, 23.41s/it]
- 50%|█████ | 6/12 [02:20<02:20, 23.42s/it]
- 58%|█████▊ | 7/12 [02:44<01:56, 23.39s/it]
- 67%|██████▋ | 8/12 [03:07<01:33, 23.36s/it]
- 75%|███████▌ | 9/12 [03:30<01:09, 23.33s/it]
- 83%|████████▎ | 10/12 [03:53<00:46, 23.31s/it]
- 92%|█████████▏| 11/12 [04:17<00:23, 23.30s/it]
-100%|██████████| 12/12 [04:40<00:00, 23.29s/it]
-100%|██████████| 12/12 [04:40<00:00, 23.37s/it]
->>> Step 0: generating actions ...
->>> Step 0: interacting with world model ...
->>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 1: generating actions ...
|
|
||||||
>>> Step 1: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 2: generating actions ...
|
|
||||||
>>> Step 2: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 3: generating actions ...
|
|
||||||
>>> Step 3: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 4: generating actions ...
|
|
||||||
>>> Step 4: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 5: generating actions ...
|
|
||||||
>>> Step 5: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 6: generating actions ...
|
|
||||||
>>> Step 6: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 7: generating actions ...
|
|
||||||
>>> Step 7: interacting with world model ...
|
|
||||||
>>>>>>>>>>>>>>>>>>>>>>>>
|
|
||||||
>>> Step 8: generating actions ...
|
|
||||||
>>> Step 8: interacting with world model ...
|
|
||||||
@@ -1,5 +1,5 @@
 {
     "gt_video": "unitree_z1_stackbox/case1/unitree_z1_stackbox_case1.mp4",
     "pred_video": "unitree_z1_stackbox/case1/output/inference/5_full_fs4.mp4",
-    "psnr": 46.05271283048069
+    "psnr": 42.83913947323794
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_stackbox"
     --n_iter 12 \
     --timestep_spacing 'uniform_trailing' \
     --guidance_rescale 0.7 \
-    --perframe_ae \
-    --fast_policy_no_decode
+    --perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,77 +0,0 @@
2026-02-19 20:02:50.975402: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-19 20:02:51.023211: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 20:02:51.023253: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 20:02:51.024328: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 20:02:51.031176: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 20:02:51.947400: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

  0%|          | 0/12 [00:00<?, ?it/s]
  8%|▊         | 1/12 [00:24<04:24, 24.08s/it]
 17%|█▋        | 2/12 [00:47<03:56, 23.62s/it]
 25%|██▌       | 3/12 [01:10<03:31, 23.51s/it]
 33%|███▎      | 4/12 [01:34<03:07, 23.48s/it]
 42%|████▏     | 5/12 [01:57<02:44, 23.46s/it]
 50%|█████     | 6/12 [02:20<02:20, 23.43s/it]
 58%|█████▊    | 7/12 [02:44<01:57, 23.40s/it]
 67%|██████▋   | 8/12 [03:07<01:33, 23.39s/it]
 75%|███████▌  | 9/12 [03:31<01:10, 23.39s/it]
 83%|████████▎ | 10/12 [03:54<00:46, 23.38s/it]
 92%|█████████▏| 11/12 [04:17<00:23, 23.37s/it]
100%|██████████| 12/12 [04:41<00:00, 23.35s/it]
100%|██████████| 12/12 [04:41<00:00, 23.42s/it]
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
@@ -1,5 +1,5 @@
 {
     "gt_video": "unitree_z1_stackbox/case2/unitree_z1_stackbox_case2.mp4",
     "pred_video": "unitree_z1_stackbox/case2/output/inference/15_full_fs4.mp4",
-    "psnr": 43.005233352958804
+    "psnr": 48.64571989587276
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_stackbox"
     --n_iter 12 \
     --timestep_spacing 'uniform_trailing' \
     --guidance_rescale 0.7 \
-    --perframe_ae \
-    --fast_policy_no_decode
+    --perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,77 +0,0 @@
2026-02-19 20:07:49.410622: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-19 20:07:49.457896: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 20:07:49.457948: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 20:07:49.458967: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 20:07:49.465632: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 20:07:50.373326: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

  0%|          | 0/12 [00:00<?, ?it/s]
  8%|▊         | 1/12 [00:24<04:25, 24.17s/it]
 17%|█▋        | 2/12 [00:47<03:56, 23.64s/it]
 25%|██▌       | 3/12 [01:10<03:31, 23.53s/it]
 33%|███▎      | 4/12 [01:34<03:07, 23.49s/it]
 42%|████▏     | 5/12 [01:57<02:44, 23.45s/it]
 50%|█████     | 6/12 [02:21<02:20, 23.43s/it]
 58%|█████▊    | 7/12 [02:44<01:57, 23.41s/it]
 67%|██████▋   | 8/12 [03:07<01:33, 23.39s/it]
 75%|███████▌  | 9/12 [03:31<01:10, 23.37s/it]
 83%|████████▎ | 10/12 [03:54<00:46, 23.37s/it]
 92%|█████████▏| 11/12 [04:17<00:23, 23.36s/it]
100%|██████████| 12/12 [04:41<00:00, 23.34s/it]
100%|██████████| 12/12 [04:41<00:00, 23.43s/it]
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
@@ -1,5 +1,5 @@
 {
     "gt_video": "unitree_z1_stackbox/case3/unitree_z1_stackbox_case3.mp4",
     "pred_video": "unitree_z1_stackbox/case3/output/inference/25_full_fs4.mp4",
-    "psnr": 49.489774674892764
+    "psnr": 45.127553229898034
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_stackbox"
     --n_iter 12 \
     --timestep_spacing 'uniform_trailing' \
     --guidance_rescale 0.7 \
-    --perframe_ae \
-    --fast_policy_no_decode
+    --perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"
@@ -1,77 +0,0 @@
2026-02-19 20:12:48.029611: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-02-19 20:12:48.076914: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2026-02-19 20:12:48.076957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2026-02-19 20:12:48.077981: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2026-02-19 20:12:48.084620: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2026-02-19 20:12:49.004753: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Global seed set to 123
>>> Loading prepared model from ckpts/unifolm_wma_dual.ckpt.prepared.pt ...
>>> Prepared model loaded.
>>> Diffusion backbone (model.model) converted to FP16.
>>> Projectors (image_proj_model, state_projector, action_projector) converted to FP16.
>>> Encoders (cond_stage_model, embedder) converted to FP16.
INFO:root:***** Configing Data *****
>>> unitree_z1_stackbox: 1 data samples loaded.
>>> unitree_z1_stackbox: data stats loaded.
>>> unitree_z1_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox: data stats loaded.
>>> unitree_z1_dual_arm_stackbox: normalizer initiated.
>>> unitree_z1_dual_arm_stackbox_v2: 1 data samples loaded.
>>> unitree_z1_dual_arm_stackbox_v2: data stats loaded.
>>> unitree_z1_dual_arm_stackbox_v2: normalizer initiated.
>>> unitree_z1_dual_arm_cleanup_pencils: 1 data samples loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: data stats loaded.
>>> unitree_z1_dual_arm_cleanup_pencils: normalizer initiated.
>>> unitree_g1_pack_camera: 1 data samples loaded.
>>> unitree_g1_pack_camera: data stats loaded.
>>> unitree_g1_pack_camera: normalizer initiated.
>>> Dataset is successfully loaded ...
✓ KV fused: 66 attention layers
>>> Generate 16 frames under each generation ...
DEBUG:h5py._conv:Creating converter from 3 to 5
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'pHYs' 41 9
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 62 4096

  0%|          | 0/12 [00:00<?, ?it/s]
  8%|▊         | 1/12 [00:24<04:24, 24.06s/it]
 17%|█▋        | 2/12 [00:47<03:55, 23.59s/it]
 25%|██▌       | 3/12 [01:10<03:31, 23.49s/it]
 33%|███▎      | 4/12 [01:34<03:07, 23.44s/it]
 42%|████▏     | 5/12 [01:57<02:43, 23.41s/it]
 50%|█████     | 6/12 [02:20<02:20, 23.40s/it]
 58%|█████▊    | 7/12 [02:44<01:56, 23.38s/it]
 67%|██████▋   | 8/12 [03:07<01:33, 23.37s/it]
 75%|███████▌  | 9/12 [03:30<01:10, 23.36s/it]
 83%|████████▎ | 10/12 [03:54<00:46, 23.35s/it]
 92%|█████████▏| 11/12 [04:17<00:23, 23.33s/it]
100%|██████████| 12/12 [04:40<00:00, 23.32s/it]
100%|██████████| 12/12 [04:40<00:00, 23.39s/it]
>>> Step 0: generating actions ...
>>> Step 0: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 1: generating actions ...
>>> Step 1: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 2: generating actions ...
>>> Step 2: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 3: generating actions ...
>>> Step 3: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 4: generating actions ...
>>> Step 4: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 5: generating actions ...
>>> Step 5: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 6: generating actions ...
>>> Step 6: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 7: generating actions ...
>>> Step 7: interacting with world model ...
>>>>>>>>>>>>>>>>>>>>>>>>
>>> Step 8: generating actions ...
>>> Step 8: interacting with world model ...
@@ -1,5 +1,5 @@
 {
     "gt_video": "unitree_z1_stackbox/case4/unitree_z1_stackbox_case4.mp4",
     "pred_video": "unitree_z1_stackbox/case4/output/inference/35_full_fs4.mp4",
-    "psnr": 47.18724378194084
+    "psnr": 50.642542240144444
 }
@@ -20,6 +20,5 @@ dataset="unitree_z1_stackbox"
     --n_iter 12 \
     --timestep_spacing 'uniform_trailing' \
     --guidance_rescale 0.7 \
-    --perframe_ae \
-    --fast_policy_no_decode
+    --perframe_ae
 } 2>&1 | tee "${res_dir}/output.log"