Combining neural-network-generated faces with traditional 3D animation rendering, from Disney Research. The idea is to use the neural network to fill in the parts that are hard to capture with traditional 3D modeling, such as the eyes and the inside of the mouth, producing a fully photorealistic render. The neural rendering of the face is then blended with the 3D model to make the whole face more photorealistic.
As long as it's just still images, the result looks totally photorealistic, but the photorealism breaks down as soon as the animation starts. The traditional 3D models don't perfectly match the movement of real humans, so the animations look like ordinary 3D-movie animations, maybe a little better. We haven't yet reached the point where we can fire all the human actors and generate movies entirely with computers.
They allude to an "optimization technique" but don't spell out what it is, so I checked the paper. Essentially, they don't optimize the neural network, which is StyleGAN2, against the details of the whole face, only against the parts they want to fill in, such as the eyes. The face is therefore approximately right but not exact, and the neural rendering is blended with the traditional 3D rendering. The optimization additionally partitions the parameters, which puts further constraints on them.
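The masked-region idea can be illustrated with a toy sketch (this is my own simplification, not the paper's code): the loss that drives the optimization only counts pixels inside a mask (eyes, mouth interior), so the generator is free to be only approximately right everywhere else, where the traditional 3D render takes over. The array sizes, function name, and mask here are all hypothetical.

```python
import numpy as np

def masked_l2_loss(generated, target, mask):
    """L2 loss restricted to masked pixels (mask == 1).

    Only the masked region contributes, so mismatches outside it
    (handled by the traditional 3D render) are ignored.
    """
    diff = (generated - target) * mask
    return float(np.sum(diff ** 2) / max(np.sum(mask), 1))

# Toy 4x4 "images": the generated image matches the target inside
# the mask but differs outside it, so the masked loss is zero.
target = np.ones((4, 4))
generated = np.ones((4, 4))
generated[0, 0] = 5.0            # mismatch, but outside the mask
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0             # hypothetical "eye" region

print(masked_l2_loss(generated, target, mask))  # → 0.0
```

With a full-face mask the same mismatch would be penalized, which is exactly what the partitioned optimization avoids.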
Rendering with Style: Combining Traditional and Neural Approaches for High-Quality Face Rendering
#solidstatelife #ai #computervision #generativeai #gans