# Interactive Digital Light
In recent years ray tracing has been hailed as a generational leap in real-time computer graphics.
Modern graphics pipelines in games like [Control]() and [Cyberpunk 2077]() exploit recent hardware to add ray-traced shading passes, delivering photorealistic shadows, reflections and ambient occlusion in real time, far beyond what was possible only a few years ago.
NVIDIA achieved this with dedicated RTX hardware on their graphics cards, built to accelerate the crucial ray-triangle intersection operation at the heart of ray tracing polygonal meshes.
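To make that operation concrete, here is the standard ray-triangle intersection test (the Möller–Trumbore algorithm) sketched in plain Python; a readable reference version with names of my own choosing, not how the hardware actually implements it:

```python
def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def ray_triangle_intersect(orig, ray_dir, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore: distance t along the ray to triangle (v0, v1, v2), or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)   # triangle edges sharing vertex v0
    p = cross(ray_dir, e2)
    det = dot(e1, p)
    if abs(det) < eps:                  # ray is parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv_det             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(ray_dir, q) * inv_det       # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det            # hit distance along the ray
    return t if t > eps else None
```

A tracer spends a huge fraction of its time in tests like this, one per ray per candidate triangle, which is exactly why baking it into silicon pays off.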
Without this key development, ray tracing would still be the domain of render farms, and not at all relevant to real-time, interactive graphics.
Or would it?
As well as having dedicated ray tracing hardware, contemporary graphics cards are *fast*.
Even without the use of vendor-specific hardware acceleration, the general-purpose compute capability of graphics hardware is no slouch.
We can use this compute capability to write our own real-time ray tracers, giving us the fine control and unique capabilities of this rendering technique without being tied to any particular API, game engine, or hardware.
In this blog series I want to explore an alternative view of ray tracing, its quirks and implications, and what can be done in a world of two triangles and a bunch of maths (the full-screen-shader approach popularised by Inigo Quilez and Shadertoy).
We'll start with a look into the fundamentals of ray tracing as a technique - how it works, what it lets us do differently, and the key performance characteristics to look out for in an interactive application.
Later, we'll dive deeper into ideas and techniques for accelerating rendering: minimising the number of trace operations we need, and trading cycles for memory to make the most of each sample.
We'll also consider real-world optics, observing and modelling the behaviour of physical cameras and the differences between real and virtual light.
Along the way, I'll provide links to further reading, resources and tutorials I've found useful or interesting in developing my understanding of ray tracing and the physics of optics.
# What does ray tracing do for us?
Before diving in, it's worth exploring why we want interactive ray tracing in the first place, given its substantial cost over more traditional rasterization.
Traditional games are built out of triangles in Euclidean space, but ray-traced graphics give us an opportunity to draw something else.
We can draw geometry like spheres - or anything with a well-defined intersection function - to an arbitrary level of detail.
We can curve space, or the paths taken by light through it.
In a traditional rasterized application, this would have to be approximated by [distorting geometry in a vertex shader](openrelativity), which moves only the vertices and leaves the triangle edges between them straight, producing substantial distortion on coarse meshes.
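As an illustration of what "a well-defined intersection function" means, here is a ray-sphere test in plain Python; a toy sketch of the standard quadratic solution, not taken from any particular renderer:

```python
import math

def ray_sphere_intersect(orig, ray_dir, center, radius):
    """Nearest positive hit distance along a ray, or None on a miss.
    Assumes ray_dir is normalised."""
    # vector from the sphere's centre to the ray origin
    oc = tuple(o - c for o, c in zip(orig, center))
    # with a unit direction, hits satisfy t^2 + 2*b*t + c = 0
    b = sum(d * o for d, o in zip(ray_dir, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:                      # ray misses the sphere entirely
        return None
    sqrt_disc = math.sqrt(disc)
    t = -b - sqrt_disc                  # near intersection
    if t < 0.0:
        t = -b + sqrt_disc              # ray origin is inside the sphere
    return t if t > 0.0 else None
```

Unlike a triangulated approximation, this sphere stays perfectly round at any zoom level, and the same pattern extends to any shape whose intersection with a line we can solve for.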