Microsoft Is Developing an AI Upscaling Engine to Improve Xbox Visuals

Microsoft is apparently planning its own AI upscaling method, possibly in the same vein as Nvidia’s DLSS and AMD’s FSR. The company hasn’t made any direct announcements, but it has posted two job listings: one for a senior software engineer and the other for a principal software engineer for graphics.

The senior software engineer listing notes that “The Xbox graphics team is seeking an engineer who will implement machine learning algorithms in graphics software to delight millions of gamers. Work closely with partners to develop software for future machine learning hardware.” The principal software engineer job description is more general, but the senior position’s responsibilities include tasks such as “research and prototype machine learning algorithms” and “implement software that incorporates machine learning algorithms for graphics in shipping hardware.”

AI Upscaling Is the Future of Graphics

Microsoft could take this in a couple of directions. A robust AI upscaling method, folded into a future DirectX standard and made generally available to developers everywhere, would reach a larger audience than either AMD’s or Nvidia’s solution does. Alternatively, Microsoft could be looking to develop a proprietary solution for the Xbox Series S|X or future hardware.

But one thing I’m absolutely certain of is that AI upscaling is the future of graphics.

It’s not an accident that Nvidia introduced ray tracing and started working on DLSS at the same time. Ray tracing is difficult and compute-intensive whether you use dedicated hardware (Nvidia) or bulk up with more compute cores (AMD). GPU manufacturers are also up against a wall on power consumption.

This is where upscaling methods can shine. If AMD, Nvidia, or, apparently, Microsoft can master the art of making 1080p look like 4K at half the power consumption or less, the freed-up power budget can be devoted to ray tracing or additional compute.
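To put rough numbers on that idea, here is a back-of-envelope sketch, not a measurement of any vendor’s implementation: native 4K has four times the pixels of 1080p, so rendering internally at 1080p and upscaling means the GPU shades only about a quarter of the frame natively. The 20 percent upscaling overhead below is an illustrative assumption, not a published figure.

```python
# Back-of-envelope sketch: why rendering at 1080p and upscaling to 4K
# saves so much work. Assumes shading cost scales roughly with pixel
# count, which is a simplification (geometry, post-processing, and the
# upscaler itself all add overhead).

NATIVE_4K = 3840 * 2160        # 8,294,400 pixels per frame
INTERNAL_1080P = 1920 * 1080   # 2,073,600 pixels per frame

ratio = NATIVE_4K / INTERNAL_1080P
print(f"Native 4K shades {ratio:.0f}x as many pixels as 1080p")  # -> 4x

# If the upscaling pass itself costs, say, 20% of a native-4K frame
# (an assumed, illustrative figure), the total is still roughly half
# the per-frame work or less:
upscale_overhead = 0.20
relative_cost = (INTERNAL_1080P / NATIVE_4K) + upscale_overhead
print(f"Approximate relative cost vs. native 4K: {relative_cost:.0%}")  # ~45%
```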

Eight years ago, I covered the emerging field of imprecise computing. Researchers found that building deliberately imprecise processors could substantially reduce power consumption. AI upscaling is essentially an application of the same idea: rather than attempting to be perfect, focus on being good enough. Good enough, it turns out, can sometimes boost frame rates by 30 percent or more. That’s typically equivalent to upgrading to a new GPU in a higher price bracket than your current one.

(Chart: DLSS vs. native 4K power draw. Data and image by Igor’s Lab.)

AI upscaling that could convert 1080p into nearly indistinguishable 4K in real time ought to be far more power-efficient than actually rendering the frame natively. Igor’s Lab has measured the impact of using DLSS as opposed to native 4K: with DLSS enabled, the RTX 2080 drew 80W less power. Typically, you only see that kind of improvement when comparing the same GPU built on two different process nodes. Given these kinds of gains, it would be surprising if Sony and Microsoft weren’t working on these ideas.

None of this is to say that rasterization performance won’t continue to improve. It absolutely will. But AI upscaling will also continue to evolve, both in software and hardware. The quality improvements in Topaz Video Enhance AI over the last 16 months have been significant, and the application now runs much faster than it did.

The coolest thing about technologies such as AMD’s FSR and Nvidia’s DLSS is how quickly they’re evolving. AI is still in its infancy. I don’t think it’ll be more than a few years before the quality uplift I’ve given Deep Space Nine is available in real time to anyone watching the show. Currently, it takes hours to pre-process and then upscale a single episode. Five years from now, the same uplift, or better, may be available as a toggle switch on your TV or PC.
