Flow Equivariant World Modeling for
Partially Observed Dynamic Environments
Anonymous Submission
Abstract
Embodied systems experience the world as 'a symphony of flows': a combination of many continuous streams of sensory input coupled to self-motion, interwoven with the motion of external objects. These streams obey smooth, time-parameterized symmetries, which combine through a precisely structured algebra; yet most neural network world models ignore this structure and instead repeatedly re-learn the same transformations from data. In this work, we introduce 'Flow Equivariant World Models', a framework in which both self-motion and external object motion are unified as one-parameter Lie group 'flows'. We leverage this unification to implement group equivariance with respect to these transformations, thereby sharing model weights over locations and motions, eliminating redundant re-learning, and providing a stable latent world representation over hundreds of timesteps. On both 2D and 3D partially observed world modeling benchmarks, we demonstrate that Flow Equivariant World Models significantly outperform comparable state-of-the-art diffusion-based and memory-augmented world-modeling architectures, training faster and reaching lower error -- particularly when there are predictable world dynamics outside the agent's current field of view. We show that flow equivariance is particularly beneficial for long rollouts, generalizing far beyond the training horizon. By structuring world model representations with respect to internal and external motion, flow equivariance charts a scalable route to data-efficient, symmetry-guided, embodied intelligence.

Model Framework (2D and 3D)
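The framework figure itself is not reproduced here, but the property it encodes can be sketched in a few lines. The sketch below is a minimal illustration, assuming discrete 2D translation flows on a periodic grid: observations are lifted onto a small set of velocity channels, each channel is advected by its own velocity at every timestep, and a translation-equivariant layer (here a circularly padded convolution) composed with this advection then commutes with any flow. All names (`VELOCITIES`, `shift`, `lift`, `flow_step`, `layer`) are illustrative, not the released API.

```python
import torch
import torch.nn as nn

# Illustrative sketch: flow equivariance for discrete 2D translation flows.
# A flow g_t translates the scene by t * v pixels; a flow-equivariant map F
# satisfies F(g_t x) = g_t F(x). Velocity channels realize this: the input
# is lifted over a small velocity set, and each channel is advected by its
# own velocity at every timestep. Names and shapes here are assumptions.

VELOCITIES = [(-1, 0), (0, 0), (1, 0)]  # candidate pixel velocities (dy, dx)

def shift(x, v, t=1):
    """Apply the flow g_t: translate a (..., H, W) tensor by t * v pixels."""
    return torch.roll(x, shifts=(t * v[0], t * v[1]), dims=(-2, -1))

def lift(x):
    """Lift a (B, C, H, W) observation to velocity channels (B, |V|, C, H, W)."""
    return torch.stack([shift(x, v) for v in VELOCITIES], dim=1)

def flow_step(z):
    """Advance the latent one timestep by advecting every velocity channel
    along its own velocity, so content keeps moving while out of view."""
    return torch.stack([shift(z[:, i], v) for i, v in enumerate(VELOCITIES)], dim=1)

# One conv, shared across all velocity channels: the weights are tied over
# locations AND motions, instead of re-learning each motion from data.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, padding_mode="circular")

def layer(z):
    b, nv, c, h, w = z.shape
    out = conv(z.reshape(b * nv, c, h, w)).reshape(b, nv, c, h, w)
    return flow_step(out)

# Equivariance check: translating the input and then applying the layer
# matches applying the layer and then translating the output.
z = lift(torch.randn(1, 1, 16, 16))
v0 = (2, 3)
assert torch.allclose(layer(shift(z, v0)), shift(layer(z), v0))
```

Because the convolution weights are shared across all velocity channels, each transformation is learned once and reused everywhere: this is the weight sharing over 'locations and motions' described in the abstract.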


Rollout Results
We evaluate on 2D (MNIST World) and 3D (Dynamic Block World) datasets. We compare FloWM rollouts and quantitative results against a Diffusion Forcing Transformer baseline (DFoT) and a long-context SSM-based baseline (DFoT SSM). We additionally include ablations without velocity channels (VC) and without self-motion equivariance (SME). The first 50 frames are fed into the model as ground-truth context, and the model must predict the next 150 frames (a sketch of this evaluation loop follows the notes below). The ground-truth observations are available for reference.
* Since this world is static (no motion), the velocity channels are redundant and only add noise in this case.
* For fully observable cases, the World View (GT) is the same as the Agent View (GT).
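For concreteness, the evaluation protocol above (50 ground-truth context frames, then 150 autoregressively generated frames) reduces to the following loop. `encode_context` and `predict_next` are hypothetical method names, standing in for whichever conditioning and sampling calls a given model exposes.

```python
import torch

CONTEXT_FRAMES = 50   # ground-truth frames given to the model
ROLLOUT_FRAMES = 150  # frames the model must then predict on its own

@torch.no_grad()
def rollout(model, video):
    """video: (T, C, H, W) ground-truth clip with T >= 200.
    `encode_context` / `predict_next` are hypothetical method names."""
    state = model.encode_context(video[:CONTEXT_FRAMES])
    preds = []
    for _ in range(ROLLOUT_FRAMES):
        frame, state = model.predict_next(state)  # one autoregressive step
        preds.append(frame)
    return torch.stack(preds)  # (150, C, H, W); compared against GT for metrics
```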
We visualize failure cases of our model, in comparison to DFoT and DFoT SSM, by selecting the rollouts with the lowest PSNR:
Results Tables
Validation Metrics on 2D Dynamic Partially Observable MNIST World
Columns report mean metrics (MSE, PSNR, SSIM) over the first 20 generated frames (matching the training distribution) vs. all 150 generated frames (length generalization). 50 frames are passed in as context.
Validation Metrics on 3D Dynamic Block World
Columns report mean metrics (MSE, PSNR, SSIM) over the first 20 generated frames (matching the training distribution) vs. all 150 generated frames (length generalization). 50 frames are passed in as context.
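As a rough sketch of how the per-window means in both tables can be computed: assuming pixel values in [0, 1] (so PSNR reduces to -10 * log10(MSE)) and delegating SSIM to torchmetrics; the authors' exact metric code is not shown on this page.

```python
import torch
from torchmetrics.functional import structural_similarity_index_measure as ssim

def table_metrics(pred, gt, n_frames):
    """Mean MSE / PSNR / SSIM over the first `n_frames` generated frames.
    pred, gt: (T, C, H, W) tensors, pixel values assumed in [0, 1]."""
    p, g = pred[:n_frames], gt[:n_frames]
    mse = ((p - g) ** 2).mean(dim=(1, 2, 3)).clamp_min(1e-10)  # per frame
    psnr = -10.0 * torch.log10(mse)  # PSNR for data range 1.0
    return {
        "MSE": mse.mean().item(),
        "PSNR": psnr.mean().item(),
        "SSIM": ssim(p, g, data_range=1.0).item(),
    }

# In-distribution window vs. length generalization, as in the tables above:
# table_metrics(pred, gt, 20), table_metrics(pred, gt, 150)
```

Sorting rollouts by the PSNR entry of this dictionary is also how the low-PSNR failure cases above can be selected.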