There’s something slightly surreal about watching a 2004 shooter wake up under modern lighting.
That’s what happened with Painkiller RTX. Thousands of low-resolution textures from the early 2000s had to be transformed into proper Physically Based Rendering materials so they could behave correctly under path tracing. Traditionally, that meant artists manually unpacking atlases, removing baked shadows, rebuilding albedo and roughness maps, and hoping everything still matched stylistically. It was slow, meticulous work. Multiply that by 35 levels and you are looking at years.
PBRFusion changed that equation by putting generative AI directly into the asset pipeline.
Instead of simply upscaling textures, the model was fine-tuned to understand what those textures were supposed to represent. It strips out the baked lighting that artists once painted directly into the image. It infers how rough a surface should be. It predicts how metal should respond to light. And it processes whole batches of assets together, so the output shares a consistent visual language across an entire level.
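PBRFusion's actual network isn't public, but the decomposition it performs can be made concrete with a deliberately naive numpy sketch. Everything here is a hypothetical stand-in: `box_blur`, `diffuse_to_pbr`, and the heuristics inside are illustrative placeholders for what the model learns, not its real method.

```python
import numpy as np

def box_blur(x, k=4):
    """Cheap low-pass filter: average over a (2k+1)^2 window (edges wrap)."""
    out = np.zeros_like(x)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / (2 * k + 1) ** 2

def diffuse_to_pbr(diffuse):
    """Toy stand-in for the learned model: split one diffuse texture
    (H, W, 3, values in [0, 1]) into albedo, roughness, and metalness."""
    lum = diffuse.mean(axis=2)
    # "Delight": divide out low-frequency luminance so baked shading
    # flattens while the underlying surface color survives.
    shading = box_blur(lum) / max(lum.mean(), 1e-6)
    albedo = np.clip(diffuse / np.maximum(shading, 1e-6)[..., None], 0.0, 1.0)
    # Heuristic roughness: regions with little high-frequency detail
    # get high roughness here, purely for illustration.
    detail = np.abs(lum - box_blur(lum))
    roughness = np.clip(1.0 - detail / (detail.max() + 1e-6), 0.0, 1.0)
    # Heuristic metalness: bright, desaturated pixels read as metal.
    sat = diffuse.max(axis=2) - diffuse.min(axis=2)
    metalness = np.clip((lum - 0.6) * 4.0, 0.0, 1.0) * (sat < 0.1)
    return albedo, roughness, metalness
```

A learned model replaces each of those heuristics with inference conditioned on what the texture depicts; the sketch only shows the shape of the problem: one legacy image in, three physically meaningful maps out.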
That consistency is the trick. Texture atlases usually break automated tools because they cram walls, floors, and props onto one sheet. PBRFusion uses context-aware generation to separate and reinterpret those surfaces correctly. It is not copying pixels. It is reconstructing material intent.
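The atlas problem is easy to sketch. The region table and `process_region` function below are hypothetical stand-ins (a real pipeline would segment UV islands and run the model per region); the point is only that each surface gets interpreted against its own statistics rather than the whole sheet's.

```python
import numpy as np

# Hypothetical atlas layout: named regions as (top, left, height, width).
# In practice these would come from UV islands or learned segmentation.
REGIONS = {
    "wall":  (0, 0, 128, 128),
    "floor": (0, 128, 128, 128),
    "prop":  (128, 0, 128, 256),
}

def process_region(tile, kind):
    # Stand-in for per-surface inference: normalize each region against
    # its own value range, so a dark floor doesn't inherit the wall's
    # exposure. `kind` is where surface-specific handling would hook in.
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / max(hi - lo, 1e-6)

def process_atlas(atlas):
    """Treat each atlas region as its own material, then reassemble."""
    out = np.zeros_like(atlas)
    for kind, (y, x, h, w) in REGIONS.items():
        out[y:y + h, x:x + w] = process_region(atlas[y:y + h, x:x + w], kind)
    return out
```

Running the whole sheet through one global pass would let the brightest region dictate the exposure for every surface on it; splitting first is what keeps a stone floor and a painted wall from being treated as one material.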
The team behind Painkiller RTX processed thousands of materials this way, then refined the results for hero assets. What used to require hand-built shaders and endless cleanup now starts with AI generating a usable baseline in hours.
This is AI applied where most players never see it: inside the production grind. It does not design better levels or write smarter enemies. It handles volume. It handles repetition. It gives small teams the ability to modernize entire back catalogs without hiring an army.