Top Headlines

Forward‑Learned Discrete Diffusion Reduces Sampling Steps

Learnable forward noising replaces fixed Markovian chain – The authors introduce a learnable noising (forward) process for discrete diffusion models, moving away from the traditional fixed Markovian forward chain to better align the model with the target distribution and cut the number of required sampling steps [1].
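To make the idea concrete, here is a minimal sketch of what a learnable forward marginal for discrete data could look like. This is an illustrative assumption, not the paper's actual parameterization: a trainable scalar `theta_t` sets, via a sigmoid, the probability that the clean token survives at step t, with the remaining mass spread uniformly over the vocabulary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_marginal(x0_onehot, theta_t, vocab_size):
    """Hypothetical learnable forward marginal q(x_t | x_0).

    x0_onehot: (vocab_size,) one-hot encoding of the clean token.
    theta_t:   trainable scalar; sigmoid(theta_t) is the learnable
               probability that the clean token survives at step t.
    """
    alpha_t = sigmoid(theta_t)                      # learnable keep-probability
    uniform = np.full(vocab_size, 1.0 / vocab_size)  # stationary noise
    return alpha_t * x0_onehot + (1.0 - alpha_t) * uniform

# Large theta_t keeps q(x_t | x_0) close to the clean token;
# very negative theta_t pushes it toward the uniform distribution.
vocab_size = 5
x0 = np.eye(vocab_size)[2]
q_early = forward_marginal(x0, theta_t=4.0, vocab_size=vocab_size)
q_late = forward_marginal(x0, theta_t=-4.0, vocab_size=vocab_size)
```

Because `theta_t` is a free parameter rather than a fixed schedule, gradient descent can shape how quickly signal is destroyed at each step.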

Non‑Markovian formulation with learnable marginals and posteriors – By adopting a non‑Markovian approach, the paper adds learnable marginal and posterior distributions, allowing the reverse generative process to stay factorized while matching the learned forward noise [1].
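A rough sketch of how a directly parameterized posterior can coexist with a factorized reverse step follows. All names and the interpolation form are assumptions for illustration only: the posterior is written down directly (not derived from a Markov kernel), with a learnable weight `phi_t` mixing the current noisy token and the (predicted) clean token, and the reverse step samples each sequence position independently.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learned_posterior(xt_onehot, x0_onehot, phi_t, vocab_size):
    """Hypothetical non-Markovian posterior q(x_{t-1} | x_t, x_0).

    Rather than being implied by a fixed forward kernel, the posterior is
    parameterized directly: a learnable weight sigmoid(phi_t) interpolates
    between jumping back to x_0 and staying at the noisy token x_t.
    """
    w = sigmoid(phi_t)  # learnable mixing weight
    return w * x0_onehot + (1.0 - w) * xt_onehot

def reverse_step_factorized(xt_seq, x0_pred_seq, phi_t, vocab_size, rng):
    """The reverse step stays factorized over positions: each token is
    sampled independently from its own posterior given the predicted x_0."""
    out = np.empty(len(xt_seq), dtype=int)
    for i, (xt, x0) in enumerate(zip(xt_seq, x0_pred_seq)):
        probs = learned_posterior(np.eye(vocab_size)[xt],
                                  np.eye(vocab_size)[x0],
                                  phi_t, vocab_size)
        out[i] = rng.choice(vocab_size, p=probs)
    return out

# With a large phi_t the posterior concentrates on the predicted x_0.
rng = np.random.default_rng(0)
xt_seq = np.array([0, 4, 2, 1])
x0_pred = np.array([3, 3, 2, 0])
sample = reverse_step_factorized(xt_seq, x0_pred, phi_t=50.0,
                                 vocab_size=5, rng=rng)
```

The key point the bullet makes survives in this toy form: the posterior has its own learnable parameters, yet sampling still factorizes position by position.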

Few‑step generation achieved without losing factorization – This design narrows the gap between target and model distributions, enabling high‑quality generation in a small number of steps and addressing the computational cost of long diffusion sampling procedures [1].

End‑to‑end training under standard variational objective – All components of the forward and reverse processes are optimized jointly using the conventional variational lower‑bound objective, ensuring coherent learning across the diffusion pipeline [1].
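The joint-optimization point can be sketched with one KL term of the variational bound, written so that a single scalar objective depends on both the forward and the reverse parameters. The parameterizations below (`theta_fwd` via a sigmoid mixture, reverse logits via a softmax) are illustrative assumptions; a finite-difference check shows gradients flow into both parameter sets.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_categorical(q, p, eps=1e-12):
    """KL(q || p) between two categorical distributions."""
    q = np.clip(q, eps, 1.0)
    p = np.clip(p, eps, 1.0)
    return float(np.sum(q * (np.log(q) - np.log(p))))

def elbo_term(theta_fwd, logits_rev, x0_onehot, xt_onehot):
    """One KL term of the variational bound, depending on both the
    forward parameter (through q) and the reverse parameters (through p)."""
    w = sigmoid(theta_fwd)
    q = w * x0_onehot + (1.0 - w) * xt_onehot   # learnable forward posterior
    p = softmax(logits_rev)                     # model reverse distribution
    return kl_categorical(q, p)

# Finite-difference check: the same objective moves both parameter sets.
V = 4
x0, xt = np.eye(V)[1], np.eye(V)[3]
logits = np.zeros(V)
h = 1e-5
g_fwd = (elbo_term(0.3 + h, logits, x0, xt)
         - elbo_term(0.3 - h, logits, x0, xt)) / (2 * h)
g_rev = (elbo_term(0.3, logits + h * np.eye(V)[1], x0, xt)
         - elbo_term(0.3, logits, x0, xt)) / h
```

Both numerical gradients are nonzero, which is the substance of the bullet: under the standard variational objective, the forward and reverse processes are trained together rather than the forward process being held fixed.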
