start writing rt intro

This commit is contained in:
ktyl 2022-01-02 00:42:00 +00:00
parent 730df72f26
commit 9624dd1600
2 changed files with 78 additions and 4 deletions

rt-intro.md Normal file

@ -0,0 +1,33 @@
# Interactive Digital Light
In recent years ray tracing has been hailed as a generational leap in real-time computer graphics.
Modern graphics pipelines in games like [Control]() and [Cyberpunk 2077]() have used recent hardware to add ray-traced shading passes, delivering stunning photorealistic shadows, reflections and ambient occlusion in real time, far beyond what was possible only a few years earlier.
NVIDIA achieved this with dedicated RTX hardware on its graphics cards, built specifically to accelerate the crucial ray-triangle intersection test needed to ray trace polygonal meshes (sketched in code at the end of this introduction).
Without this key development, ray tracing would still be the domain of render farms, and not at all relevant to real-time, interactive graphics.
Or would it?
As well as having dedicated ray tracing hardware, contemporary graphics cards are *fast*.
Even without the use of vendor-specific hardware acceleration, the general-purpose compute capability of graphics hardware is no slouch.
We can use this compute capability to write our own real-time ray tracers, giving us the fine control and unique capabilities of this rendering technique without being tied to any particular API, game engine, or hardware.
In this blog series I want to explore an alternative view of ray tracing, its quirks and implications, and what can be done in a world built from just two triangles and a bunch of maths (a nod to Inigo Quilez's "Rendering Worlds with Two Triangles").
We'll start with a look into the fundamentals of ray tracing as a technique - how it works, what it lets us do differently, and the key performance characteristics to look out for in an interactive application.
Later, we'll dive deeper into the ideas and techniques used to accelerate rendering: minimising the number of trace operations we need to do, and trading cycles for memory to make the most of each sample.
We'll also consider real-world optics.
We'll observe and model the operation of physical cameras and consider the differences between real and virtual light.
Along the way, I'll provide links to further reading, resources and tutorials I've found useful or interesting in developing my understanding of ray tracing and the physics of optics.
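To make that crucial operation concrete, here is a minimal sketch of the classic Möller–Trumbore ray-triangle test in C. The `vec3` helpers and the function name are hypothetical, chosen for illustration; dedicated hardware implements this differently under the hood, but this is roughly the arithmetic it accelerates for every candidate triangle.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3  sub(vec3 a, vec3 b)   { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3  cross(vec3 a, vec3 b) { return (vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(vec3 a, vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore: returns true and writes the hit distance to *t if the ray
// (origin o, direction d) intersects the triangle (v0, v1, v2).
bool ray_triangle(vec3 o, vec3 d, vec3 v0, vec3 v1, vec3 v2, float *t)
{
    const float EPS = 1e-6f;
    vec3 e1 = sub(v1, v0);
    vec3 e2 = sub(v2, v0);
    vec3 p  = cross(d, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return false;   // ray is parallel to the triangle's plane
    float inv = 1.0f / det;
    vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;            // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;            // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;                // distance along the ray to the hit point
    return *t > EPS;                      // only count hits in front of the origin
}
```

A triangle mesh is, from the ray's point of view, just a long list of these tests, which is exactly why dedicated hardware for this one operation matters so much.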
# What does ray tracing do for us?
Given its substantial cost compared to more traditional rasterization, it's nonetheless worth exploring why we want to do interactive ray tracing in the first place.
Traditional games are built out of triangles in Euclidean space, but ray traced graphics give us an opportunity to draw something else.
We can draw geometry like spheres - or anything with a well-defined intersection function - to an arbitrary level of detail (a sphere-intersection sketch follows at the end of this section).
We can curve space, or the paths taken by light through it.
In a traditional rasterized application, this would have to be done by [distorting geometry in a vertex shader](openrelativity), which can introduce substantial distortion artifacts.
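For contrast, here is a similarly minimal sketch of an analytic ray-sphere intersection, again in C with hypothetical helper names: substitute the ray o + t·d into the sphere equation and solve the resulting quadratic for t. A sphere drawn this way is perfectly round at any distance, with no triangles and no tessellation anywhere.

```c
#include <math.h>
#include <stdbool.h>

// Same minimal helpers as the triangle sketch, repeated so this stands alone.
typedef struct { float x, y, z; } vec3;
static vec3  sub(vec3 a, vec3 b) { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Analytic ray-sphere intersection: substitute the ray o + t*d into the
// sphere equation |p - centre|^2 = radius^2 and solve the quadratic for t.
bool ray_sphere(vec3 o, vec3 d, vec3 centre, float radius, float *t)
{
    vec3 oc = sub(o, centre);
    float a = dot(d, d);
    float b = 2.0f * dot(oc, d);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;            // discriminant: negative means a miss
    if (disc < 0.0f) return false;
    float t0 = (-b - sqrtf(disc)) / (2.0f * a);   // nearer root first
    if (t0 > 1e-4f) { *t = t0; return true; }
    float t1 = (-b + sqrtf(disc)) / (2.0f * a);   // farther root, e.g. when the origin is inside the sphere
    if (t1 > 1e-4f) { *t = t1; return true; }
    return false;                                 // both hits are behind the ray origin
}
```

Any shape with an intersection function like this can be dropped into the same renderer alongside the triangles.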

todo.md

@ -1,4 +1,45 @@
* [ ] relativistic ray tracing
* [ ] c-ship
* [ ] model to describe aberration of light
* [ ] doppler and searchlight effects
* [-] interactive ray tracing
* [x] introduction post
* [x] introduce ray tracing as a topic
* [x] little bitta history
* [x] assert value of non-rtx ray tracing
* [x] break down upcoming chapters, basically a table of contents
* [-] what does ray tracing get us?
* [ ] global illumination
* [ ] real reflection, refraction
* [-] smooth curves
* [ ] camera artifacts
* [-] esoteric modelling - relativistic rendering
* [ ] technique fundamentals
* [ ] introduce other resources
* [ ] ray tracing in one weekend
* [ ] unity gpu compute
* [ ] dan's opengl compute post
* [ ] constraints for interactivity
* [ ] 1spp
* [ ] necessary invalidation of accumulated samples, e.g. when the camera or objects move
* [ ] anatomy of a trace: taking a sample from a scene
* [ ] pinhole camera model
* [ ] screen as view plane
* [ ] constructing rays
* [ ] camera jitter
* [ ] optics
* [ ] camera
* [ ] aperture
* [ ] focus
* [ ] shutter
* [ ]
* [ ] acceleration
* [ ] constructing a g-buffer
* [ ] depth importance sampling
* [ ] linearly blending frames
* [ ] relativistic ray tracing
* [ ] c-ship
* [ ] model to describe aberration of light
* [ ] doppler and searchlight effects
* [ ] colour spaces