photorealism has never been so close

In our latest editorial on the state of the art of the gaming industry, we tried to understand whether true photorealism in video games can exist. We have never been closer to that milestone thanks to artificial intelligence, and as of today developers have one more weapon at their disposal.
We are talking about Face Trainer, the latest creation of Ziva Dynamics, the team behind another remarkable tool, Ziva VFX, which generates and simulates body muscles in an ultra-realistic way.

What is it?

Ziva Face Trainer, as the name suggests, deals instead with facial expressions, and its results sit somewhere between incredible and shocking.

Welcoming us on the Ziva Dynamics page is a woman’s face so realistic that it seems real. Seems, in fact, since it is actually a face rendered in real time in Unreal Engine 5 at 4K and 60 frames per second, which you can also see in this video. What leaves you breathless, however, are the expressions and faces the virtual subject is able to make. We are used to admirable examples, from The Last of Us Part II to Quantum Break, passing through Detroit: Become Human, but compared with what Ziva demonstrates, we are clearly in another generation. So how does the magic happen?

Ziva Dynamics’ tool is currently available online in an experimental phase and can be integrated into any title. What the product does is take previously captured static material and make it scalable and animated. To do so it draws on a huge library of over 15 terabytes of 4D data, covering a vast range of intelligible facial movements.
To obtain the result, these data are processed through a method we have now come to know well: machine learning. Already widely introduced to the medium by NVIDIA with its DLSS, deep learning is used by Ziva to train its system to recognize over 72,000 different expressions and reproduce them directly on any face modeled in 3D.
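The idea of reproducing learned expressions on an arbitrary face can be pictured with a classic blendshape model, a minimal sketch that is not Ziva’s actual pipeline: each expression is stored as per-vertex offsets from a neutral face, and an animated frame is the neutral face plus a weighted sum of those offsets. All names and numbers here are illustrative assumptions.

```python
# Minimal blendshape sketch (hypothetical; NOT Ziva's real method).
# A face is a list of (x, y, z) vertex positions; each expression
# shape stores per-vertex deltas from the neutral face.

def apply_blendshapes(neutral, deltas, weights):
    """Return vertices = neutral + sum_i weights[i] * deltas[i]."""
    result = [list(v) for v in neutral]
    for delta, w in zip(deltas, weights):
        for vi, (dx, dy, dz) in enumerate(delta):
            result[vi][0] += w * dx
            result[vi][1] += w * dy
            result[vi][2] += w * dz
    return result

# Toy example: two vertices, one "smile" shape lifting the mouth corner.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [[(0.0, 0.0, 0.0), (0.0, 0.5, 0.0)]]  # deltas for one shape
frame = apply_blendshapes(neutral, smile, [0.5])
print(frame)  # second vertex lifted by 0.25 on Y
```

A production system would drive hundreds of such shapes per frame; the appeal of a learned approach like Ziva’s is that the weights and shapes are inferred from captured data rather than sculpted by hand.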

The complete processing of an individual face is possible, directly online, in just 30 minutes. This means that by carefully following Ziva’s instructions for capturing a face, once you have access to the beta you will be able to get your own virtual alter ego in less than a day.
Preparation takes a variable amount of time depending on the skill of the artist, but once the asset, made up of the face meshes we want to animate, has been uploaded, the process up to delivery of the final file lasts one hour at most.

Ziva suggests using Wrap3 or Maya, programs with which we identify the reference points on the face, marked by a series of red dots applied directly to the shape, and then carry out the wrapping, generating the three-dimensional map of the movement areas. Here too, and up to the production of the final mesh, manual intervention is required to smooth out roughness that, the model still being polygons, could make the movements less fluid. Once this phase is complete, Maya is asked to raise the mesh resolution as high as possible, so that Ziva can capture and animate more detail. You can see an example of a mesh animation in this video.
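The "raise the resolution" step can be illustrated with the simplest form of mesh refinement: midpoint subdivision, which splits every triangle into four by inserting a vertex at each edge midpoint. This is a generic sketch under our own assumptions, not the specific smoothing Maya or Ziva performs.

```python
# Hypothetical illustration of increasing mesh resolution:
# one round of midpoint subdivision turns each triangle into four,
# giving a solver more vertices with which to capture fine detail.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(vertices, triangles):
    verts = list(vertices)
    cache = {}  # edge (i, j) -> index of its midpoint vertex

    def mid_index(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            cache[key] = len(verts)
            verts.append(midpoint(verts[i], verts[j]))
        return cache[key]

    new_tris = []
    for a, b, c in triangles:
        ab, bc, ca = mid_index(a, b), mid_index(b, c), mid_index(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

# One triangle becomes four; 3 vertices become 6.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
v2, t2 = subdivide(verts, [(0, 1, 2)])
print(len(v2), len(t2))  # 6 4
```

Real tools use smarter schemes (e.g. Catmull-Clark for quad meshes) that also smooth the surface, which is why the article notes that manual cleanup before subdivision matters.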


The movements that the faces generated with ZRT can make are very precise and difficult to distinguish from a sequence shot in the real world, both in terms of facial expression reproduction and the naturalness of the animation.
Currently the product can be tested completely free of charge on the official website through the interactive beta, accessible by invitation only. After the initial preparation phase, results are obtained in a fully automated way, directly in the cloud. Once it lands on the market, it will be up to developers to integrate it into their engines and bring it to their next IPs, but the first results are already more than encouraging.


After decades in which Moore’s law marked technological progress, and with it, directly and indirectly, videogame progress, artificial intelligence has opened the door to a future nearer than we ever expected. Even the real-time lighting that harnesses ray-tracing algorithms arrived on our machines thanks to deep learning and, to be exact, thanks to NVIDIA’s denoiser, which allows the lighting system to be generated at a very low computational cost. A well-established workflow, which has let us take giant steps in spite of the limits imposed by the raw power and computing capacity of our video cards.


From Forza Horizon 5’s photogrammetry to global illumination, the direction is now quite clear, and Face Trainer is just one more tool, hopefully soon in the hands of developers. It is still too early to say when we will see the wonders of Ziva Face Trainer in the first compatible IP but, as usual, we can’t wait to find out.
