diff --git a/blogs/unfinished/chaos-of-the-brain.md b/blogs/unfinished/chaos-of-the-brain.md deleted file mode 100644 index e8581d9..0000000 --- a/blogs/unfinished/chaos-of-the-brain.md +++ /dev/null @@ -1,37 +0,0 @@ -# Chaotic Flow - -I find I meander through tasks quite chaotically. -I often have many tasks on the go at a time, and work through them slowly, flitting between them as and when my interest takes me. -In the past I've considered this nature of being as a disruption of [flow](https://en.wikipedia.org/wiki/Flow_(psychology)), surely a negative pressure on my productivity and ability to focus, which is so crucial to complex undertakings. -However, now I think that the chaos is a good thing, actually, and that a chaotic flow might be the best flow of all, from a certain perspective. - -My thinking is this - deep flow is great, but most of the time, I am distracted. -This is fine in itself; I might be distracted by work, by people, the weather, or anything. -I might just be daydreaming, thinking about future plans, or listening to music. -A wandering mind is curious, and ripe for inspiration and new ideas, in a way that a flowing mind is not. -Though literally distracted, ideas and inspiration are a fundamental currency in a creative domain, so I hesitate to consider time spent distracted as "wasted". -Instead, I frame it as still flowing, but in a softer, looser sense than dedicating one's entire being to a singular focus. - -Of course, this can't apply to everything - in plenty of situations, failing to achieve a deep focus prevents the act from being performed at all. -However, my hobbies and projects are mostly text-based and almost entirely computer-based, and I expect the same is true for most of this blog's audience, so I'll press on regardless. - -Being a text-based kind of person, I can touch-type comfortably, and have generally text- and keyboard-based workflows.
-I also like shell scripting and optimising things, so -it's become easy to open a terminal (2 keypresses), a text editor (4 keypresses, 2 if a terminal is already focused) or a particular project (rarely more than 8 keypresses, even for an arbitrarily named project) before actually thinking about what it is I want to accomplish. -This means it's exceptionally easy to get to the point of expression for an idea - a short key sequence in muscle memory takes less than a second to input, and might take at most five seconds if I need to navigate a little. -On a system which reacts to these inputs as quickly as they are entered, that means I could be starting to write about an idea almost as soon as I have it. - -That said, the wandering mind is flighty, and difficult to wrangle into action. - -I want to start by clarifying that I have nothing against deep, focused flow - I actively encourage it! -There's no substitute for spending good, quality time learning, practising, creating, playing, whatever your verb of choice might be. -You will get better at doing the thing and you'll have a good time doing it. -Go flow! - -What I want to be wary of is precisely the "productivity" of flow. -Engaging in deep focus activities is productive in the short-term, but can be exhausting, especially without taking adequate rest. -I also use my deep focus a lot professionally, and rarely have the energy to put in several hours more once I'm in a position to create for myself. - -So, I'm working on finding ways to make progress on the things I make outside of work, in a way that doesn't feel like working on them. -I want to be able to meander through my personal time doing either nothing, or, if I am doing something, meandering only very relaxedly towards some eventual goal. -This means that anything I want to develop in my personal time has to be really easy to start doing.
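A sketch of what such a low-friction entry point can look like in a shell - the function name, directory layout, and editor fallback here are illustrative assumptions of mine, not the author's actual bindings:

```shell
# Hypothetical helper: `p foo` jumps to ~/projects/foo and opens an
# editor there, so the path from idea to editor is one short,
# muscle-memory command. Layout and names are examples only.
p() {
    cd "$HOME/projects/$1" || return 1
    "${EDITOR:-vi}" .
}
```

Bound like this, a terminal shortcut plus `p blog` reaches a project's editor comfortably inside the few seconds described above.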
diff --git a/blogs/unfinished/corporate-dependence.md b/blogs/unfinished/corporate-dependence.md deleted file mode 100644 index 5233f20..0000000 --- a/blogs/unfinished/corporate-dependence.md +++ /dev/null @@ -1,6 +0,0 @@ -# Corporate Dependence - -* renting my taste in music back to me -* data harvesting ops -* digital files as physical possessions -* community self-sufficiency diff --git a/blogs/unfinished/git/git-server.md b/blogs/unfinished/git/git-server.md deleted file mode 100644 index db68fb4..0000000 --- a/blogs/unfinished/git/git-server.md +++ /dev/null @@ -1,62 +0,0 @@ -# create a git user - -on debian, `sudo adduser git` - -switch to the git user with `sudo su -l git` - -create a `.ssh` dir in the git user's home dir and make it accessible only by the git user - -``` -mkdir ~/.ssh -chmod 700 ~/.ssh -``` - -create an `authorized_keys` file in the `.ssh` folder, and make it accessible only by the git user - -``` -touch .ssh/authorized_keys -chmod 600 .ssh/authorized_keys -``` - -create a public/private key pair locally, to authenticate a user on their machine to the remote server - -``` -ssh-keygen -t rsa -``` - -and finally copy the public key into the (remote) git user's `.ssh/authorized_keys`, for example using `ssh-copy-id` or giving the public key to the server administrator. - -# creating bare git repositories - -create directories within git's home dir (nested paths are allowed) with the `.git` extension, for example `my-projects/my-repo.git` or just `my-repo.git`. - -``` -git init --bare my-repo.git -``` - -there now exists an empty git repository on the server.
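before involving SSH at all, you can convince yourself the bare-repository workflow hangs together by playing both the server and client roles on one machine - the paths and identity below are throwaway examples:

```shell
# Local smoke test: create a bare "server" repo, clone a working copy
# from it, and push a first commit back - the same flow as the real
# server, minus SSH. All paths here are temporary examples.
demo=$(mktemp -d)
git init --bare "$demo/hub.git"
git clone "$demo/hub.git" "$demo/work" 2>/dev/null
cd "$demo/work"
git checkout -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"
git push origin main
```

after the push, `hub.git` holds the commit exactly as the remote server would.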
- -the remote can now be added to a local repository - -``` -git remote add origin git@server:my-repo.git -git push -u origin main -``` - -# connecting using the key - -add an entry to your local `.ssh/config` - -``` -Host myhost - HostName example.com - User git - IdentityFile ~/.ssh/id_rsa -``` - -and connect with - -``` -ssh myhost -``` - diff --git a/blogs/unfinished/orbits/orbits-intro.md b/blogs/unfinished/orbits/orbits-intro.md deleted file mode 100644 index ba68321..0000000 --- a/blogs/unfinished/orbits/orbits-intro.md +++ /dev/null @@ -1,27 +0,0 @@ -# An Interesting Title - -[Kerbal Space Program](https://www.kerbalspaceprogram.com/) (KSP) is my favourite game, ever. -Its physics model of Keplerian orbits makes it easy and fun to learn the basics of astrodynamics - how spacecraft travel between worlds in curvilinear trajectories and orbits. -In KSP one can build space stations, recreate the Apollo missions, and much much more with little green space people known as Kerbals. -The orbital sandbox is, to my mind, a bit of a miracle of physics simulation, but it is not perfect. - -To keep KSP approachable and stable (or at least, as stable as it can be), its physics model uses the [patched conic approximation](https://en.wikipedia.org/wiki/Patched_conic_approximation) of the n-body problem. -Its benefits are that it is relatively cheap to calculate and completely deterministic, which is extremely valuable in a game where the passage of time may be accelerated up to 100,000x to mitigate the boredom of long interplanetary cruise stages. - -For example, gravitational interactions from multiple bodies are not considered - a spacecraft moves under the influence of one and only one gravitational body, determined by the body's [sphere of influence](https://en.wikipedia.org/wiki/Sphere_of_influence_(astrodynamics)).
-This means certain manoeuvres and features of celestial mechanics in real life are simply not available in KSP's stock physics model. -The recently launched [James Webb Space Telescope](https://www.jwst.nasa.gov/), for example, cannot be modelled in KSP, because its position out at the L2 [Lagrange point](https://en.wikipedia.org/wiki/Lagrange_point) is dictated by the force of gravity from the Sun _and_ the Earth. -Similarly, while Hohmann transfer orbits are the bread and butter of moving around KSP's universe, the more esoteric [low-energy transfers](https://en.wikipedia.org/wiki/Low-energy_transfer) available to real mission planners are not available to players of the game. - -From a game design perspective, using a patched conic approximation makes sense, as it keeps the simulation simple and lets the player experiment and learn the basics of navigating with elliptical orbits in a fun and intuitive way. -However, advanced players or budding rocket scientists may want more, and turn to mods such as [Principia](https://github.com/mockingbirdnest/Principia/wiki/A-guide-to-going-to-the-Mun-with-Principia) or entirely separate games and software such as [Orbiter](http://orbit.medphys.ucl.ac.uk/) or [Space Engine](https://spaceengine.org/). - -Personally, I'm a games programmer with an interest in spacecraft, orbital mechanics, and reinventing wheels, so I'd like to make my own. -I'll use these blog posts to document and discuss its development, for the interest of others as well as to keep track of my own journey. -I'll also endeavour to maintain a list of references that have helped my learning, so that hopefully they can be useful to someone else too. - -With that, I'll lead into the first post - [Kepler's Laws of Planetary Motion](#).
- -# References - -* [Unite talk](https://www.youtube.com/watch?v=mXTxQko-JH0) \ No newline at end of file diff --git a/blogs/unfinished/ray-tracing/rt-intro.md b/blogs/unfinished/ray-tracing/rt-intro.md deleted file mode 100644 index 4906cc1..0000000 --- a/blogs/unfinished/ray-tracing/rt-intro.md +++ /dev/null @@ -1,33 +0,0 @@ -# Interactive Digital Light - -In recent years ray tracing has been hailed as a generational leap in real-time computer graphics. -Modern graphics pipelines in games like [Control]() and [Cyberpunk 2077]() have made use of recent hardware to include ray traced shading passes, delivering stunning real-time photorealistic shadows, reflections and ambient occlusion far in excess of what was possible but a few years before. - -NVIDIA achieved this through the implementation of dedicated RTX hardware on their graphics cards, specifically able to accelerate the crucial ray-triangle intersection operation needed to perform ray tracing on polygonal meshes. -Without this key development, ray tracing would still be the domain of render farms, and not at all relevant to real-time, interactive graphics. - -Or would it? - -As well as having dedicated ray tracing hardware, contemporary graphics cards are *fast*. -Even without the use of vendor-specific hardware acceleration, the general-purpose compute capability of graphics hardware is no slouch. -We can use this compute capability to write our own real-time ray tracers, giving us the fine control and unique capabilities of this rendering technique without being tied to any particular API, game engine, or hardware. -In this blog series I want to explore an alternative view of ray tracing, its quirks and implications, and what can be done in a world with two triangles (Inigo Quilez moment?) and a bunch of maths. 
- -We'll start with a look into the fundamentals of ray tracing as a technique - how it works, what it lets us do differently, and the key performance characteristics to look out for in an interactive application. - -Later, we'll dive deeper and begin to explore ideas and techniques important to accelerating rendering, by minimising the number of trace operations we need to do and trading cycles for memory to make the most use of each sample. - -We'll also consider real-world optics. -We'll observe and model the operation of physical cameras and consider the differences between real and virtual light. - -Along the way, I'll provide links to further reading, resources and tutorials I've found useful or interesting in developing my understanding of ray tracing and the physics of optics. - -# What does ray tracing do for us? - -It's nonetheless worth exploring why we want to do interactive ray tracing in the first place, given its substantial cost over more traditional rasterization. - -Traditional games are built out of triangles in Euclidean space, but ray traced graphics give us an opportunity to draw something else. -We can draw geometry like spheres - or anything with a well-defined intersection function - to an arbitrary level of detail. - -We can curve space, or the paths taken by light through it. -In a traditional rasterized application, this would have to be done by [distorting geometry in a vertex shader](openrelativity), which itself introduces substantial distortion. diff --git a/blogs/unfinished/ray-tracing/weekend.md b/blogs/unfinished/ray-tracing/weekend.md deleted file mode 100644 index e6aed0e..0000000 --- a/blogs/unfinished/ray-tracing/weekend.md +++ /dev/null @@ -1,9 +0,0 @@ -# Ray Tracing in One Weekend - -[The book](https://raytracing.github.io/books/RayTracingInOneWeekend.html) - -I became interested in ray tracing as an image synthesis technique some time around summer 2020.
-I'd learned just the year prior about first-principles rasterization and working with graphics APIs in C++, and had discovered my taste for graphics programming. -On the recommendation of a friend I started building a first-principles CPU ray tracer based on [Ray Tracing in One Weekend](https://raytracing.github.io/books/RayTracingInOneWeekend.html) (though it took me longer than a weekend) and started to generate my own first ray traced images. The source code for my final project is [on my GitHub](https://github.com/ktyldev/). - -The book describes a very simple sphere-based ray tracer, though there are later books in the series (available [here](https://raytracing.github.io/)) which expand on the first with forays into rendering more complex shapes, volumetrics, and camera artifacts.