a visual representation of Stable Diffusion neural network weights
transformer self-attention networks prefer to operate in frequency space because the relationship between concepts is smoother in that transform. here is the output of a neural radiance field 3D inverse diffusion model that was asked to visualize this effect. the intuition is conveyed shockingly well; this will likely help with communicating it to human vision systems too.
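a minimal sketch of what "viewing weights in frequency space" could mean in practice, assuming a random matrix as a hypothetical stand-in for real attention weights (no actual Stable Diffusion weights are loaded here):

```python
import numpy as np

# Hypothetical stand-in for a self-attention weight matrix;
# a real visualization would load actual model weights instead.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64))

# A 2D FFT moves the matrix into frequency space; the shifted
# log-magnitude spectrum is the array one would plot to inspect
# how "smooth" the weights look in that domain.
spectrum = np.fft.fftshift(np.fft.fft2(weights))
magnitude = np.log1p(np.abs(spectrum))

print(magnitude.shape)
```

plotting `magnitude` with any image viewer (e.g. `matplotlib.pyplot.imshow`) gives the kind of frequency-space picture the caption alludes to.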