Intel: lifelike video games with deep learning
San Francisco, 5/20/2021
Intel’s Intelligent Systems Lab has developed a machine learning tool that makes computer-generated images look more photorealistic, which could lead to more lifelike video games and applications.
The technique uses deep learning to analyze each frame of a game and generate enhanced frames drawing on the “Cityscapes” dataset, a collection of real street scenes recorded from the perspective of a moving car.
The AI system extracts features from the Cityscapes images that most closely match each frame rendered by the “GTA V” game engine and blends them into a final output frame. The result is an enhanced image that looks far closer to a real photograph.
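Conceptually, the matching step compares a feature description of the rendered frame against feature descriptions of the real reference images. The sketch below illustrates this idea with simple cosine-similarity nearest-neighbor lookup over precomputed feature vectors; the function name and the use of raw cosine similarity are illustrative assumptions, not Intel's actual matching procedure.

```python
import numpy as np

def closest_reference(frame_feat, dataset_feats):
    """Return the index of the reference image whose feature vector is
    most similar (by cosine similarity) to the rendered frame's features.
    Illustrative stand-in for the paper's perceptual feature matching."""
    a = frame_feat / np.linalg.norm(frame_feat)
    b = dataset_feats / np.linalg.norm(dataset_feats, axis=1, keepdims=True)
    return int(np.argmax(b @ a))

# Toy example: 5 reference images with 8-dimensional feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
query = feats[3] + 0.01 * rng.normal(size=8)  # a frame resembling reference 3
print(closest_reference(query, feats))  # 3
```

In the real system the feature vectors would come from a trained convolutional network rather than random data, but the lookup principle is the same.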
The method differs from other “style transfer” approaches in that it integrates G-buffer data – per-pixel rendering information such as depth, surface normals, and material properties – which helps the network more accurately decide which elements of the real imagery it should draw from.
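One way to picture the role of the G-buffer is as extra per-pixel channels stacked onto the rendered RGB frame before it is fed to the enhancement network. The sketch below shows that stacking step only; the function name, the particular choice of buffers (depth, normals, albedo), and the channel layout are assumptions for illustration, not Intel's actual pipeline.

```python
import numpy as np

def build_network_input(rendered_rgb, depth, normals, albedo):
    """Stack a rendered frame with G-buffer channels into one
    conditioning tensor of shape (H, W, C) – the kind of per-pixel
    auxiliary input an enhancement network could consume.
    (Illustrative only; not Intel's actual architecture.)"""
    depth = depth[..., np.newaxis]  # (H, W) -> (H, W, 1)
    return np.concatenate([rendered_rgb, depth, normals, albedo], axis=-1)

# Toy 4x4 frame: 3 RGB + 1 depth + 3 normal + 3 albedo channels = 10.
h, w = 4, 4
x = build_network_input(
    np.zeros((h, w, 3)),  # rendered RGB
    np.zeros((h, w)),     # depth buffer
    np.zeros((h, w, 3)),  # surface normals
    np.zeros((h, w, 3)),  # material albedo
)
print(x.shape)  # (4, 4, 10)
```

Because the game engine produces these buffers for free while rendering, the network gets exact geometry and material information instead of having to guess it from pixels alone.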
While the system is currently too slow to run at interactive frame rates, it could eventually be paired with game engines to speed up development and bring photorealistic images to video games.