Compare commits

...

53 Commits

Author SHA1 Message Date
Cat Flynn 57cb9a2471 isaac
fix url
2024-10-26 01:33:07 +01:00
ktyl 745cedab0f fix: print correct filename in usage 2024-06-16 16:01:21 +01:00
Cat Flynn 56fa0f9123 typo 2024-04-27 10:49:25 +01:00
Cat Flynn efbce4ec40 fix ending flow 2024-04-27 00:57:16 +01:00
Cat Flynn 3852f2c2d6 update date 2024-04-26 23:38:20 +01:00
Cat Flynn 05b6f83a5c conclude story 2024-04-26 23:16:16 +01:00
Cat Flynn 63f0a91b79 wording 2024-04-26 23:15:35 +01:00
Cat Flynn 0ea2ae92ca improve introduction 2024-04-26 20:34:42 +01:00
Cat Flynn bb92ba25c1 corrections from lee 2024-04-26 19:03:38 +01:00
Cat Flynn 1407bcc3d2 interactive astrodynamics 2024-04-25 23:13:43 +01:00
ktyl 6470541904 typos 2024-02-06 00:13:49 +00:00
ktyl e293f4df1e fix typo, update date 2024-02-05 00:17:49 +00:00
ktyl 0c3a32dacf fiction: we all dream about flying 2024-02-03 16:40:46 +00:00
ktyl 36c50e7965 blog: fix typos 2024-02-03 16:35:37 +00:00
ktyl 5a1de185c3 remove link 2024-01-01 16:15:21 +00:00
ktyl 30dd1e7e01 blog fixes before posting 2024-01-01 15:59:05 +00:00
Cat Flynn 683db46f39 blog: a tidy room is a happy room 2024-01-01 15:51:18 +00:00
ktyl 74f8158196 blog: digital gardens 2023-09-27 14:13:45 +01:00
ktyl 0c0d922afc blog: nonbinary masculinity 2023-09-09 23:23:23 +01:00
ktyl 6ea508c38a fix: typo 2023-08-10 23:46:43 +02:00
ktyl 9d7c004775 blog: doomswiping 2023-08-10 23:01:04 +02:00
ktyl f58187a207 fix: some typos, reword some things 2023-08-10 01:15:16 +02:00
ktyl 6e592660f9 blog: transactionism 2023-08-10 00:48:19 +02:00
ktyl 2d58ddf640 build: include <ul> in text panels 2023-08-10 00:47:54 +02:00
ktyl 358314e4cd fix: include entire date when building rss 2023-08-06 23:51:56 +02:00
Cat Flynn 4976587e59 chore: rename post 2023-08-05 21:04:53 +02:00
ktyl e4cd0c2e83 rename file 2023-08-03 22:29:33 +02:00
ktyl 407c3ac617 croissants are shit after noon 2023-08-03 22:23:31 +02:00
ktyl 5b72735edc post url prefix 2023-03-13 23:34:21 +00:00
ktyl 2ddb5cb04c update directory regex 2023-03-13 23:21:08 +00:00
ktyl 555c2e767a make images 2023-03-13 22:25:42 +00:00
ktyl e0382bfc81 make html 2023-03-13 22:25:17 +00:00
ktyl 00dae2c336 make rss 2023-03-12 23:22:07 +00:00
ktyl 865a2f7ca9 fix typo 2023-02-23 21:04:05 +00:00
ktyl 31a637d358 how not to be wrong 2023-01-08 21:48:15 +00:00
ktyl 6dade5c67d Pi+MPD 2022-12-19 21:13:23 +00:00
ktyl bb633a93c8 start writin words 2022-12-19 19:34:16 +00:00
ktyl 34988c272e add debian package name 2022-12-19 19:33:00 +00:00
ktyl f0eb8bc2a2 gpt game dialogue 2022-12-17 20:36:32 +00:00
ktyl 4a3adcd46a fix bad string 2022-12-15 21:51:46 +00:00
ktyl e3d3ac2df4 use jpg instead 2022-12-15 21:38:15 +00:00
ktyl 6d0715f5b5 more tweaks 2022-12-15 21:14:29 +00:00
ktyl 2d9b6b689a edits before posting 2022-12-15 21:14:29 +00:00
ktyl 1f6c9649a5 automount NFS 2022-12-15 21:14:29 +00:00
ktyl 5f8a0cefe5 stable diffusion 2022-12-15 20:59:36 +00:00
ktyl 89f56103f3 track png files 2022-12-15 20:17:08 +00:00
ktyl b7c2193ba5 merci à ethel 2022-11-15 00:41:09 +00:00
ktyl 9ca4dd6b62 remove author subtitle 2022-11-13 19:03:25 +00:00
ktyl d42897624f un cafe dans l'espace 2022-11-13 13:48:44 +00:00
ktyl c306093453 fix typos 2022-10-19 18:43:47 +01:00
ktyl 40fbdd93c0 the prince of milk 2022-10-18 19:24:00 +01:00
ktyl ef586f654d fix broken link 2022-10-17 20:59:24 +01:00
ktyl f31ebc88f4 add drone ci 2022-10-17 20:39:45 +01:00
36 changed files with 1726 additions and 5 deletions

1
.gitattributes vendored Normal file
View File

@ -0,0 +1 @@
*.png filter=lfs diff=lfs merge=lfs -text

View File

@ -0,0 +1,121 @@
# Drone CI
When it comes to automation, [GitLab CI](https://gitlab.com) has been my go-to for running builds, tests and deployments of projects from static websites to 3D open-world games.
This has generally been on a self-hosted installation, and often makes use of physical runners.
However, I have some gripes: I mostly only use it for the CI, but it comes with an issue tracker and Git hosting solution too - great for some cases, but overkill in so many others.
Because it's such a complete solution, GitLab is a bit of a resource hog, and can often run frustratingly slowly.
Recently I've been playing with a friend's self-hosted instance of [Drone CI](https://drone.io/) as a lightweight alternative, and I much prefer it.
I didn't set up the instance, so that part is out of scope for this post, but in case it's relevant, we're using a self-hosted [Gitea](https://gitea.io/) instance to host the source.
You can find out about configuring Drone with Gitea [here](https://docs.drone.io/server/provider/gitea/).
## Yet Another Yaml Config
Like GitLab, Drone is configured via a YAML file at the project root, called `.drone.yml`.
Drone is configured by creating 'steps' to the pipeline, where GitLab uses 'jobs'.
My first project's automation requirements were small - all I needed for a deployment was to copy all the files in a directory on every push to the `main` branch.
This means I needed secure access to the host, and the ability to copy files to it.
I didn't want to dedicate any permanent resources to such a small project, so opted for the `docker` pipeline option.
My pipeline would contain a single `deploy` step which would configure SSH access to the host, and then use it to copy the relevant files from the checked out version of the project.
I decided to use `ubuntu` as the Docker image for familiarity and accessibility - there are probably better options.
Drone widely supports Docker image registries; I have not used Docker much, but would like to get more experience with it.
```yml
kind: pipeline
type: docker
name: deploy
steps:
- name: deploy
image: ubuntu
when:
branch:
- main
commands:
- echo hello world
```
## Secrets
A hugely important aspect of automation is ensuring the security of one's pipelines.
Automated access between pipelines is a big risk, and should be locked down as much as possible.
For passing around secrets such as passwords and SSH keys, Drone has a concept of secrets.
I created a private key on my local machine for the runner's access to the remote host, and added a [per-repository secret](https://docs.drone.io/secret/repository/) to contain the value.
This is a named string value which can be accessed from within the context of a single pipeline step.
I also created secrets to contain values for the remote host address and the user to login as.
These are less of a security concern than the private SSH key, but we should obfuscate them anyway.
It's also a useful step towards generalising the pipeline for other projects: I can use the same set of commands in multiple CI configurations, and just update the secrets from the project page.
This block was placed in the same step definition as above, below the `image:` entry:
```
environment:
HOST:
from_secret: host
USER:
from_secret: user
SSH_KEY:
from_secret: ssh_key
```
## Connecting
To use the SSH key, we need to spin up `ssh-agent` and load our key into it.
Since it's passed into the job as an environment variable, this involves first writing it to a file.
We also need to disable host key checking (the bit that asks if you're sure you want to connect to a new host) as we're making an automated SSH connection, and therefore won't be there to type 'yes'.
```yml
# configure ssh
- eval $(ssh-agent -s)
- mkdir -p ~/.ssh
- echo "$SSH_KEY" > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- ssh-add
- echo "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
```
Finally, it's time to run some SSH commands.
I had a bit of trouble getting the hang of variable templating here - it took some trial and error to figure out what variables would get expanded and when.
Since my `HOST` and `USER` values are defined in secrets, I had to get them from my evironment variables and into a correctly formatted string for the SSH target.
As I would be running multiple commands, I also wanted to store this in a variable to keep the SSH commands short in the Drone config.
What ended up working for me was this:
```yml
# environment variables get expanded (twice?)
- host="$${USER}@$${HOST}"
# running 'hostname' on the deploy target
- ssh $host "hostname"
```
## Images
It's pretty cool to be able to pass a repository through several Docker images through the pipeline.
I have my website's Makefile set up to build off my local machine, which is on Arch.
It therefore depends on Arch-specific package names.
I didn't want to have to hack around my existing build configuration just to build it automatically, but I also found that the deploy steps I'd already written worked best on Ubuntu.
For Drone, this is no problem - I can simply specify `image: archlinux` in the build stage, and `image: ubuntu` for the deploy step.
My Makefile and local workflow requires no changes at all, but I can still use the more robust deploy steps from Ubuntu.
## Final thoughts
I like Drone's minimalist approach to CI.
There isn't much in terms of configuration, and the interface is much snappier than Gitlab's.
It will take a bit more work to get a full workflow - Gitlab basically has one out the box - but working with more separate components should provide flexibility and resilience in the long run.
I'd like to explore some more features, like [templates](https://docs.drone.io/template/yaml/) for steps shared between repositories, and spend more time tuning exactly when pipelines run.
I also want to try building some more complex projects, such as those using game engines like Godot, and those targeting multiple target platforms.
Those are adventures for another day, though.
That's all for now, thanks for reading and see you next time!
## References
* [GitLab CI config to deploy via SSH](https://medium.com/@hfally/a-gitlab-ci-config-to-deploy-to-your-server-via-ssh-43bf3cf93775)

View File

@ -0,0 +1,30 @@
# The Prince of Milk
The Prince of Milk is a science fiction novel by Exurb1a of YouTube fame.
It follows the story of a fictional village in southern England named Wilthail, which ends up the unwilling venue for the settling of an ancient grudge.
Deities ("Etherics") exist alongside the mundanity of 21st century Wilthail, and engage in absurdity, sodomy and violence with its quaint population.
The books makes reference to a number of popular philosophical debates, and takes inspiration from a number of classical sci-fi authors.
A common theme is the idea that power is relative.
The Etherics are immortal - their grudge has played out across hundreds of 'Corporic' incarnations - and have power and abilities far beyond the comprehension of their human counterparts.
However, they do not necessarily view themselves as gods.
This is particularly true of the character Beomus, who frequently plays down their immortality and returns fire with questions about modern humans' relationship to their primitive ancestors, or with ants.
This relativity of power recurs plenty, and is reminiscient of Arthur C. Clarke's assertion that sufficiently advanced technology is indistinguishable from magic.
As characters in a book, the Etherics are understandably cagey about how any of their abilities work - but broadly refuse to classify them as either magic or technology.
Reincarnation is viewed as a fundamental way of the world - Chalmers' panpsychism, or the Hard Problem of Philosophy.
This goes further than to suggest that people are simply reincarnated as others when they die, rather suggesting that consciousness is a fundamental force of the universe, in just the way electromagnetism is.
It's a recursive thing, from the lowliest atom up through rocks, mice, snakes, cats, people, stars and gods.
It's a neat and satisfying view, and one that has yet to be disproven by neuroscience.
The human characters are invariably damaged - mental health issues, broken relationships, toxic parentage, drug use, suicide, difficult histories.
This paints PoM's world as realistic, and grounds it through the fantastical happenings in the middle act.
It grips the reader with its variety of characters, and follows them all as they confront not only their own personal hells, but the one they now find themselves sharing, in a twisted take on country bumpkinism.
Overall, I thoroughly enjoyed this book, and am looking forward to reading more of Exurb1a's writing.
I am a little biased, as I have already enjoyed the YouTube channel for a number of years.
There is a short glossary at the end naming and exploring some of the particular concepts explored in the novel, which prompt the reader to explore further.
Top marks!

View File

@ -0,0 +1,11 @@
# Un Cafe Dans l'Espace
J'ai acheté ce livre quand j'ai visité la Cité de l'Espace à Toulouse. C'est écrit par Michel Tognini, un astronaute français qui été dans l'espace deux fois. Il a travillé sur la station spatiale de Mir, et sur la navette spatiale pour décoller CHANDRA, une observatoir dans le bas orbite. Depuis, il a selectionné et entrainé de nouveaux astronautes européens.
Ce livre parle de plusiers subjets en relation à l'espace: de l'entrainement de l'auteur à la Cité des Étoiles en Russie, de les échecs et défis dans l'espace, aux réalisations des sociétés privés comme SpaceX, Blue Origin et Virgin Galactic. Comme d'autres astronautes, Tognini a étudié comme pilote de chasse, et puis comme pilote d'essai. Il a rejoint l'agence spatiale française CNES avant la formation de l'ESA, qui existe encore aujourd'hui.
J'ai trouvé que je connaissais déjà beaucoup des histoires dans ce livre, parce que j'ai toujours eu une adoration pour l'espace, et c'est écrit pour une audience générale. Ma première raison de lire ce livre est que c'était mon premier français! Cela m'a pris quelques mois, mais c'etait une experience agréable. Au début, j'avais besoin de rechercer plusiers mots à chaque page, mais à la fin j'ai trouvé que je pouvais lire beaucoup d'aisance.
Je recommende ce livre aux francophones qui sont interessés par l'espace, mais qui sont peut-être moins familiers avec le jargon comme moi.
Encore, merci à mon cher Ethel pour m'aider avec mon français ! <3

View File

@ -0,0 +1,96 @@
# Automounting network drives with NFS
This is the first part of a series of posts about setting up a music server using a NAS and Raspberry Pi. The next part is [here](https://ktyl.dev/blog/2022/12/19/pi-mpd-music-server.md).
---
I have a NAS which supports NFS, which I use to store all of my photos, music and other media on my local network.
This gives me OS-independent to all of these files, and frees up drive space on my laptops and desktop - most of which are dual-booted.
On Windows it's fairly straightforward to establish a network drive, but on Linux-based systems - at least on the Debian- and Arch- based distros I find myself using - the process is a little more involved.
Here I'll use `systemd` to automatically mount a shared folder when they're accessed by a client machine.
There are other ways to do this, but as my machines predimonantly run Debian- or Arch-derived Linux distributions, `systemd` is a choice that works for both.
This post is largely based on the description on the [ArchWiki](https://wiki.archlinux.org/title/NFS#As_systemd_unit).
My NAS' hostname is `sleeper-service`, and I'll be mounting the `Music` shared folder.
You'll need the appropriate package to mount NFS filesytems.
On Arch Linux, `nfs-utils` is what you'll be after.
On Debian, the client pckage is `nfs-common`, which may already be installed.
You may also need to configure security on your NAS to allow NFS connections from your local machine's IP.
## Initial mount
Before doing anything automatically, we first need to create a `systemd` unit to mount the remote filesystem at a path in our local filesystem.
I'll mount the remote folder onto the local path `/sleeper-service/Music`.
When creating this file, pay attention to its name, as it's important for it to correspond to the path of the mountpoint.
The correct name can be determined using `systemd-escape` - pay attention to escape characters in the output, this caught me out several times.
```
$ systemd-escape /sleeper-service/Music
-sleeper\x2dservice-Music
$ sudo touch /etc/systemd/system/sleeper\\x2dservice-Music.mount
```
Don't ask me why `systemd` is like this - I think it's silly too.
After creating the unit file, we then need to edit it and fill out some information, specifying where the remote filesystem is and also when we need to initialise it.
Here I used a name instead of an address for the `What=` part - I have an entry for `sleeper-service` configured in `/etc/hosts`, but you can equally use an IP address just as well.
```
[Unit]
Description=Mount music at boot
[Mount]
What=sleeper-service:/volume1/Music
Where=/sleeper-service/Music
Options=vers=3
Type=nfs
TimeoutSec=30
[Install]
WantedBy=multi-user.target
```
Once we've created this, we can try to manually mount the shared folder by starting the unit:
```
$ sudo systemctl start sleeper\\x2dservice-Music.mount
$ ls /sleeper-service/Music
```
At this stage you ought to see the contents of your shared folder.
Next, we want to set up the automount, so that this remote folder is mounted automatically when we try to access it.
To do that, we need to first stop/disable the unit we just created:
```
$ sudo systemctl disable sleeper\\x2dservice-Music.mount
```
Then, let's create an `.automount` unit with the same name as the `.mount` file we already have.
The automount unit expects the mount unit to exist alongside it - it doesn't replace it.
```
$ sudo touch /etc/systemd/system/sleeper\\x2dservice-Music.automount
```
```
[Unit]
Description=Automount NAS music
[Automount]
Where=/sleeper-service/Music
[Install]
WantedBy=multi-user.target
```
Then, enable the new `.automount` unit to have it run automatically:
```
$ sudo systemctl enable sleeper\\x2dservice-Music.automount
```
The folder should now be automatically mounted at the target location when trying to access it.
As always, thanks for reading and I hope this was helpful.
If I got something wrong, or there's an easier way to do it, or you just want to say hi, please don't hesitate to [get in touch!](mailto:me@ktyl.dev)

Binary file not shown.

After

Width:  |  Height:  |  Size: 214 KiB

BIN
blogs/2022/12/15/astronaut_rides_horse.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,128 @@
# Local Stable Diffusion
![astronaut rides horse](astronaut_rides_horse.jpg)
Stable diffusion (SD) is an AI technique for generating images from text prompts.
Similar to DALL-E, which drives the popular [craiyon](https://www.craiyon.com/), SD is available as an [online tool](https://huggingface.co/spaces/stabilityai/stable-diffusion).
These web tools are amazing, and easy to use, but can be frustrating - they're often under high load, and impose long waiting times.
They use a good chunk of computational resources, specifically GPUs and so have generally been out of reach for even people with powerful personal machines.
Now, however, SD has reached the point it can be run using (admittedly, high-end) consumer video cards.
Stability AI - the model's developers - recently [published a blog post](https://stability.ai/blog/stable-diffusion-v2-release) open-sourcing SD 2.
There's a README for getting started [here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/README.md), but it has a couple of gotchas and assumptions which plenty of people (like myself) won't have known if they're not already familiar with the technologies in use, such as Python and CUDA.
This post is descibes my experience setting up SD 2 on my local workstation.
For hardware, I have an i7-6700k, RTX 2080 Super and 48GB of RAM.
If you have an AMD video card, you won't be able to use CUDA, but you may be able to use GPU acceleration regardless using something ROCm.
In this post I'm using Arch Linux, but I have successfully set it up on Windows too.
Python is an exceedingly portable language, so it should work wherever you're able to get a Python installation.
This post assumes that you already have a working Python installation.
## Install CUDA
CUDA needs to be installed separately from Python dependencies.
It is quite large, and as with all NVIDIA driver installations, can be a bit confusing.
On Linux, it's straightforward to install it from your distribution's package manager.
```bash
sudo pacman -Syu
sudo pacman -S cuda
```
On Windows, you will need to go to NVIDIA's site to download the correct version of CUDA.
At time of writing, the SD 2 script expects CUDA 11.7, and will not work if you install the latest 12.0 version.
To get older versions, go to their [download archive](https://developer.nvidia.com/cuda-toolkit-archive) and select the appropriate one.
## Set up a virtual environment and PyTorch
Python can be installed at a system level, but it's usually a good idea to set up a virtual environment for your project.
This isolates the project dependencies from the wider system, and makes your setup reproducible.
I will use [`pipenv`](https://pipenv.pypa.io/en/latest/index.html) as it's what I'm familiar with.
PyTorch is a deep-learning framework, used to put together machine learning pipelines.
To get a command to install the relevant dependencies, go to [PyTorch's site](https://pytorch.org/get-started/locally/) and choose the options for your setup.
In my case, I replaced `pip3` with `pipenv` as I want to install dependencies to a new virtual environment instead of to the system.
```bash
mkdir stable-diffusion && cd stable-diffusion
pipenv install torch torchvision torchaudio
```
## Install Stable Diffusion
SD 2 is provided by the `diffusers` package.
We can install it in our virtual environment as follows:
```bash
pipenv shell
pip3 install git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
exit
```
We use `pipenv shell` to enter a shell using the virtual environment, before using the `pip3` command described on their README.
After installing dependencies, we can leave the virtual environment shell and return to our original one.
`transformers` and `accelerate` are optional, but used to reduce memory usage and so are recommended.
## Create a Python script
Python does have an interactive envronment, but so save our fingers let's use a `stable-diffusion.py` script to contain and run our Python code.
Here I'll mostly copy the Python included in their README:
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "stabilityai/stable-diffusion-2"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, height=768, width=768).images[0]
image.save("astronaut_rides_horse.png")
```
I've made two additions here.
First, I've added `import torch` at the top - I'm not sure why the code in the README omits this, but it's needed to work.
I've also added `pipe.enable_attention_slicing()` - this is a more memory-efficient running mode, which is less intensive at the cost of taking longer.
If you have a monster video card, this may not be necessary.
At this point, we're done - after running the script successfully, you should have a new picture of an astronaut riding a horse on mars.
## Some nice-to-haves
In this basic script we only have the one, hardcoded prompt.
To change it, we need to update the file itself.
Instead, we can change how `prompt` is set, and have it read from command-line parameters instead.
```python
# at the top of the file
import sys
...
prompt = " ".join(sys.argv[1:])
```
While we're at it, we can also base the filename on the input prompt:
```python
image.save(f'{prompt.replace(" ", "_")}.png')
```
## Wrapping up
And that's it!
Enjoy making some generative art.
My favourites so far have been prefixing "psychedelic" to things.
I've also been enjoying generating descriptions with [ChatGPT](https://chat.openai.com/chat) and plugging them into SD, for some zero-effort creativity.
As always, if anything's out of place of if you'd like to get in touch, please [send me an email!](mailto:me@ktyl.dev).

View File

@ -0,0 +1,58 @@
# Game dialogue with ChatGPT
[ChatGPT](https://chat.openai.com/chat) has become the latest AI application to enjoy viral popularity.
At time of writing it's a closed-source research tool developed by OpenAI, with the only access being via their web portal.
Users have to create an account to interact with the bot, and have no API access, though they no doubt have one internally.
I think given its capabilities, this is probably a good idea for now, but I'd like to outline the impact it can already have in game development, even in its fairly limited form.
However, it can already be made immensely useful for content generation, without any kind of API access.
Generally, characters come in two flavours: main characters, whose motivations and actions shape the story; and generic NPCs, who exist to fill out the world for the player.
For the story to carry the author's intent (which they might not necessarily care about), it would probably be best not to leave ChatGPT to generate a plotline on its own.
Its susceptibilty to bias is a problem - try generating men or women and count how often they're describing as petite, as having chiseled jaws or as wearing form-fitting dresses.
It can be coaxed out of this with enough description, but lots of manual intervention defeats any content generation technique.
The other group of characters, though, I think represents ripe pickings.
Often in a game world, background dialogue quickly becomes stale, as lines are reused.
ChatGPT can already easily be used as a supporting writer to generate a huge amount of less-than-critical dialogue.
Take, for example, a merchant.
![generating merchant dialogue](merchant_prompt.png)
This psuedo-format is instantly combatible with a simple templating system.
It would be trivial to generate variations using perfectly traditional programming techniques.
This prompt took a minute to write, and includes specific about the character's context, as well as a slightly more than default personality.
We've instantly generated 8 perfectly workable dialogue options for our character, from some basic and mostly templated information about their context.
However, we notice that our item choices weren't included in the output, though we described them.
So we ask:
![merchant items](merchant_items.png)
And, instantly, another 8 lines.
We now have, after a modicum of input, 16 possible lines for a background merchant character to respond with when interacted with.
With some templated prompt generation, this could be made even faster than the description given here.
It's also capable of going beyond just lines dialogue.
[Ibralogue](https://github.com/Ibralogue/Ibralogue)'s developer taught it the syntax, had it generate an example and then taught it a new feature:
![sprite prompt](sprite_prompt.png)
![sprite response](sprite_response.png)
All that's left is to copy the output and paste it into a text file for a game to use.
---
This is barely even a scratch on what ChatGPT or systems like it are already capable of.
At present, the website gets overloaded, you can't save and reload conversations, and its content filtering is very much evolving problem.
However, even with those limitations it's an extraordinarily powerful tool, and this is just one very minor example of an application.
That's it from me, but I'd love to read more discussion about use cases and the ethical issues at play.
If you have anything interesting, please [get in touch](mailto:me@ktyl.dev)!

BIN
blogs/2022/12/17/merchant_items.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2022/12/17/merchant_prompt.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2022/12/17/sprite_prompt.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2022/12/17/sprite_response.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,126 @@
# NAS-based music with a Raspberry Pi
This follows on from my [previous post](https://ktyl.dev/blog/2022/12/03/automount-nfs.html) about setting up NFS.
---
I have a large digitised collection of music, and have been experimenting with ways to set up a communal music player in my living room without defaulting to Spotify, or any other such streaming platform.
Thus far I have used an old laptop with as much music as it will fit loaded onto it, running [MPD](https://www.musicpd.org/) and plugged into some speakers.
Then, on the laptop (or usually, another, closer laptop) I can connect to the MPD instance with [ncmpcpp](https://github.com/ncmpcpp/ncmpcpp) to change tunes.
This is an OK solution, but has a few drawbacks: I'm limited to the disk of the laptop, the laptop uses more power than it needs to, and I kind of want that laptop back!
I had the luck to grab a Raspberry Pi from a pop-up store a few weeks ago, and felt that would make a perfect, low-power, unintrusive box to attach to the speakers.
Ostensibly, the Pi is overkill for just playing music, but it's better than a whole laptop and I'm sure I'll find other jobs for it to do as time goes on.
As for requirements, I have a desktop machine from which I often work from home, and would like my music collection available there too.
I also often use my laptop in the living room or kitchen, which is also in earshot of the speakers, and I'd like to be able to control the music from my laptop with ease - no cables.
Ideally, these should be stored in the same place, to save having to manage duplicate files and manually synchronising locations, since I am likely to add to my collection from a variety of locations.
I have spent enough time `rsync`ing albums between machines, life is too short even on a gigabit local network.
I've recently had the good fortune to acquire a Synology NAS, so I'm going to use that to host my music collection.
However, it's more than possible to jerry-rig a NAS using anything with a hard-drive - maybe even a second Pi.
Nothing I'm doing should be specific to Synology's hardware or software, as we'll be using [NFS](https://wiki.archlinux.org/title/NFS) to mount remote drives - but exposing an NFS shared folder to the network is therefore out of scope for this post.
## Set up a shared folder
The first step is to centralise my music storage.
To do this, I created a shared folder from my NAS' web interface, and exposed it to the network.
In my case, I had to specifically add permission for other devices to access the folder via NFS - such as the Pi, my desktop and my laptop.
It was therefore prudent to assign each of these machines a static IP on my network, so that the NAS can continue to recognise them.
I also had to set it to map all users to admin, but this is almost certainly a misconfiguration on my part - don't follow me for security advice, I am just tinkering!
My previous blog post goes into detail regarding setting up the NFS configuration.
## Setting up the Pi
My Pi is a Pi 4 Model B, with 4GB of RAM.
This is more than enough for my needs, and you should be able to get by with much less.
I went through the initial default setup, noticing that it's much, much slicker than it was on my gen 1 Pi, which ultimately landed me on a graphical desktop.
First, I set a hostname and enabled SSH access, since this is to be a headless machine.
For the same reason, I disabled the auto-launch of the graphical user interface.
I would have thought that if it's booted headless, it shouldn't think to launch a graphical session in the first place, but better safe than sorry.
The point of the thing is to sip power!
## MPD
Next, I installed MPD.
By default, MPD sets itself up with a `systemd` unit, so it connects as soon as I run `ncmpcpp` from the Pi itself.
After a reboot, this still seems to be the case, so I'm happy with the default installation.
I pointed it to the automounted music directory by editing `/etc/mpd.conf` and added it to the `audio` group:
```
music_directory "/sleeper-service/Music"
group "audio"
```
Configured an output for ALSA (I was not able to make it work with Pulse):
```
audio_output {
type "alsa"
name "My ALSA Device"
mixer_type "software"
}
```
We also have to add the `mpd` user to the `audio` group to allow it to access sound devices:
```
sudo usermod -G audio -a mpd
```
And enable the driver on boot for the 3.5mm audio jack in `/etc/modules`:
```
sudo echo "snd-bcm2835" >> /etc/modules
```
I found I had errors with MPD failing to create a pid file, so I gave the `mpd` user ownership of the directory it was trying to create it in:
```
sudo chown -R mpd /run/mpd/
```
This was a bit of a weird one, since it didn't have this error to start with.
Nonetheless, after all of that, it works!
I'm able to play music by running `ncmpcpp` on the Pi itself.
## Remote access
The last thing to configure is access from remote machines.
I only intend to access it from the local network, so this is pretty straightforward.
First, to expose MPD to the network, I set its address and port in `/etc/mpd.conf`:
```
bind_to_address "192.168.1.17"
port "6600"
```
Then, I need only specify the location of the Pi on the network in a local machine's `ncmpcpp` config:
```
mpd_host = "pifi"
mpd_port = "6600"
```
Of course, `pifi` is an entry in my remote machine's `/etc/hosts`.
It's possible you have multiple MPD installations - one on your remote machine, such as a laptop, as well as an installation like the Pi.
In that case, recall that `ncmpcpp` can be launched with different configs using the `-c` flag:
```
alias bops="ncmpcpp -c ~/.config/ncmpcpp/config.alt
```
## Wrapping up
That's all for now.
At some point in the future I'll write another post on making this setup more accessible.
I certainly like `ncmpcpp`, but it often garners a scoff from houseguests.
So, I'd like to pursue the ultimate goal of making it as straightforward to use as something like Spotify.
As always, I hope this was helpful and please don't hesitate to [get in touch](mailto:me@ktyl.dev)!

View File

@ -0,0 +1,13 @@
# How Not To Be Wrong: The Hidden Maths of Everyday Life
_How Not To Be Wrong_ by Jordan Ellenberg explores mathematical concepts and ideas which permeate our everyday life.
A broad look at mathematical principles which govern some parts of everyday life, and some parts of the not-so-everyday life.
Generally well-written and approachable, as someone with a maths-adjacent background, there were some parts that I was familiar with, and others less so.
The author has a sense of humour, and writes well about topics he clearly understands deeply, mostly without boring the reader.
I particularly enjoyed the first few chapters, where a difference is established between the "default" view of mathematics as purely a numbers game about finding exact answers to questions, versus the author's view that it's about finding the questions to ask in the first place.
Such questions include those such as "how Swedish is too Swedish?", "does lung cancer cause smoking?" and "can slime mold predict elections?".
The book reminded me a bit of Chaos: Making a New Science which I read at the beginning of 2022, though less dry, and pitched to a more general audience.
I enjoyed some specific parts of the book a lot - particularly those involving geometry and calculus - though could have done without the extensive pieces on statistics, which was always my least favourite sub-discipline at school.

View File

@ -0,0 +1,126 @@
# Doomswiping
## I will profiter
One of the words that's stuck out to me learning French is the verb _profiter_, or _to profit_.
The direct translation is easy - English and French have substantial shared lineage, and this is a word that's unchanged between the two.
The usage and connotation of the word between the two languages does differ, however, and I'd like to explore the many senses of the word(s) for a spell.
In English, 'to profit' is more often than not associated with financial or economic contexts.
If one profits from something, they've made money from it, they've got out more than they put in, they've made a worthwhile exchange.
It is generally used in discussions of wealth, ventures, or commercially applied in business.
In French, _profiter_ means the same thing, but has a much weaker financial connotation.
Rather, it is associated with personal gain in terms of character growth, positive experiences, improved well-being.
For example, « *profite bien de tes vacances* » directly translates to "profit well of your holidays", but the meaning is closer to "enjoy your holidays".
In English, we are unlikely to talk of profiting from a holiday, or of a positive personal experience, although it makes perfect grammatical sense.
We'd understand someone's use of the word in this sense, though we'd think it an odd turn of phrase.
I think there is something of a knife-edge here, an unstable equilibrium where the same concept resolves to fundamentally different meanings depending on one's own native culture and experience.
---
A well-worn idiom in English is that time is money.
This makes perfect sense in a commercial setting: our economonic systems prize cost-efficiency and reward those that make the most with the least.
This is also true in biology; natural selection optimises and specialises organisms to be the best in their niche, and everything else is made extinct.
In the case of the individual, we could apply the same calculus: our lives each have a finite budget of time available, so it follows that we should optimise how to spend it in order to gain the most utility, whatever that may mean for us each individually.
This is the value proposition put forward by industries like match-making (Hinge), ready-made food delivery (HelloFresh), or educational course providers (Udemy).
Generally, they provide a means by which to do something one could already do, but with a much reduced time investment.
There's evidently demand for these industries, and undoubtedly they provide a service that's valued by some segment of the population, so I won't tilt against windmills decrying their existence here.
However, I think there's cause for concern with such time-optimisation.
Take strategies for meeting people to date, for example.
If I use a match-making service, I indicate preference towards some individuals, while they do the same to me and others, and some algorithm tries to match us up with people it thinks we'll like.
If a match is made, we talk, and can arrange to meet up, and from there, perhaps on to form whatever kind of relationship it is we are looking for.
This is straightforward and convenient.
If instead I rely on meeting people by chance, I have to regularly encounter situations in which I am likely to meet people.
I have to additionally hope that those people will be the kinds of people I am likely to get along with, and that they are also looking to meet new people.
I also have to be someone that is interesting enough in a chance encounter that someone I meet would like to see me again.
This is deeply complex, massively daunting, and extremely time-consuming.
It would seem therefore that dating apps are a much better time investment.
Instead of having to figure out things to do or places to go, presumably spend money to enable the ordeal, I can instead look for a date while in the midst of the rest of my daily life.
I know that the people I see there are interested, broadly speaking, in the same thing as me, and can precisely tune my preferences.
It should work out that not only do I spend less time looking for someone, I also find someone that is likely to closely match myself.
Therefore, using a dating application is a much better use of my time!
Or is it?
I think there's a flaw here in how we've valued our time.
Time we've put into our app is time spent we've spent directly pursuing a goal: "I want to find a relationship".
We've done this efficiently, as the application should optimise our time spent by matching us directly with people, and we're free to spend as much or as little time as we would like.
But there are several problems with this thinking: we're trusting the application's ability to find something we value; we're assuming our goal can be directly approached; and we're valuing our time in relation to having achieved this goal.
We'll examine the application first.
## Profiles aren't people
First, it should be re-iterated that when you're using a dating app, you are not looking at people.
You are looking at people's *profiles*, which I would argue are actually very poor indications of what the people behind them are like.
It's well-documented that people don't represent themselves honestly on online platforms, and it would be unreasonable to expect them to.
A profile also acts as a filter designed by whatever particular platform you happen to be using, restricting someone to share themselves in a specific format, which further limits any genuine self-expression someone can display.
This will probably act to negate some of the platform's matching ability.
We also should consider our own biases; applications will allow you to set an age range, political preferences, drink and drug tolerance, religious view, et cetera.
In plenty of cases this is perfectly reasonable, but isn't it also easy to see how this enables a user to set their own expectations unreasonably high?
This too, will reduce the algorithm's ability to match effectively.
Humans did not evolve for a digital existence, and relationships are comprehensively _not_ digital.
We evolved to have rich and complex social interactions, as our survival on the savannah depended on it.
We track each others' posture, tone, facial expressions, and keep tabs on the interactions between others that aren't ourselves, almost entirely automatically.
None of these values can be meaningfully put into a dating profile.
Even though they're perfectly available from the first date onwards, at that point you've committed your time and energy to something with a pretty low chance of working out - exactly what you wanted to avoid in the first place!
We also run the risk of cognitive exhaustion.
Thought profiles aren't people, the parts of our brains that deal with faces don't know that, and will still be running full-tilt as we swipe onwards.
This processing itself takes energy, and is the social equivalent of junk food, because there's no actual socialising backing it up.
Instead, by increasing the number of people we're likely to meet day-to-day, we give our honed social instincts more opportunity to do what they're there to do.
By training them on lots of people, we'll get a better sense of what it is we're after in the first place.
It's more effort to organise and to engage in, but it's certainly better for us overall.
## Pursue goals indirectly
Let's re-examine the things we have to do to meet people by chance: encounter new situations with new pople in them, go somewhere that I'd like to be, and be approachable and charming.
Put this way, don't these maybe sound like goals on their own?
We could directly pursue those other goals, which don't require any chance.
We each know where we could go to encounter new situations, and if we don't, we could probably find out if we applied ourselves to the problem.
We each have our insecurities we'd like to work on, to become more confident and outgoing.
Especially after an isolating pandemic we likely all need the face-to-face practice of being where people are anyway.
I think that directly pursuing a goal like "I want to find a relationship" is something of a façade.
Achieving it inherently depends on another person (who cannot be controlled) and the circumstances under which we find ourselves together (even in the best case, we need to be lucky).
That chance aspect is what makes dating so difficult, but also what makes it so rewarding.
## Saving time
Finally, let's examine the time spent on the application itself.
It's true that, like any number of modern mobile apps, the minimum time investement is very low.
You can set up a profile in minutes, and from there you can view profiles on the train, in the coffee queue, or taking a dump.
Because it's so easy to do, it means that *you do it easily*.
Most of us are already chronic smartphone users, and I absolutely count myself among them.
It's devastatingly easy to fall into a habit, and once a habit is dug in it will begin to effect how you think.
What started as a canny time saving becomes a time sink in itself.
Not only that, but it also expends our valuable energy making what are ultimately low-value decisions, culminating in decision fatigue.
A decision-fatigued person no longer has the energy to make energy choices, and so will but succumb to their habits more, reinforcing a vicious, energy-sapping cycle.
I've seen plenty of people find meaningful relationships through dating apps, and I think that's great: it's always nice to see technology bringing people together.
But any technology has a dark side, and in my personal experience I've seen more of that than the positive with match-makers and other time-saving propositions.
As a result of that, I tend to prefer old-fi approaches when they're available.
One could argue quite reasonably that the slow path is wasting time, but I think that's a matter of how one frames it.
Any time you enjoyed, and look back on after as having enjoyed, surely wasn't wasted.
If I'm taken the "optimised" path, I might be missing out on enjoying the thing in the first place, and if I'm not enjoying it, I can only wonder if I'm missing the point entirely.
---
I didn't reference anything directly, but there are a few books I've been thinking about which motivated this point of view. They are:
* [You Are Not A Gadget](https://www.goodreads.com/en/book/show/6683549-you-are-not-a-gadget)
* [Homo Deus](https://www.goodreads.com/book/show/31138556-homo-deus)
* [Zen and The Art of Motorcycle Maintenance](https://www.goodreads.com/book/show/19438058-zen-and-the-art-of-motorcycle-maintenance)

View File

@ -0,0 +1,111 @@
# Croissants are Shit Past Noon
![from the window](window.jpg)
I've been living in Paris for the past couple of months and I thought I'd share some of my observations on the place, the language, and the adventure as a whole.
I've spent most of my adult life living in London, so I'd primarily like to draw some comparisons between the two cities.
Though there are two [classic](https://en.wikipedia.org/wiki/A_Tale_of_Two_Cities) [books](https://en.wikipedia.org/wiki/Down_and_Out_in_Paris_and_London) comparing them, I'll open this post by admitting I haven't read them, and that's only the start of my ignorance.
## French
I spoke French as a 6 or 7 year old, and went to an international school in the Paris.
This means I have the [phonemes](https://en.wikipedia.org/wiki/Phoneme) associated with French, for example the trilled 'r' in F**rrr**ance, or the guttural 'yeugh' in meill**eur**.
However, this is about where my advantages end, much to my chagrin.
In the years since leaving Paris, my French had atrophied, virtually to completion.
Children are great at picking up accents and languages, but they're great at losing them too, it would seem!
After a few years of clawing it back - I'll save the details of my approach for another post - I felt comfortable visting France on holiday, ordering things in restaurants and navigating within or between cities.
I was not prepared for the intensity of spending an entire evening or day speaking nothing but French.
As it turns out, using a language you're not fluent in is **taxing**.
Trying to keep up with a conversation between natives is Sisyphean, as they'll speak to each other faster than I can parse what's said, let alone try to form a response.
This isn't the worst thing in the world for myself - I quite enjoy just watching and listening, rather than always taking a vocal part - but I can imagine the dynamic is strange for those I've spent time with in groups.
One-on-one, the situation is a little better.
I can't be more than a phrase or two behind in context, and if I've not understood something or make a nonsensical reply, it's an opportunity to check in and get myself back on firm ground.
The flipside is that I've no chance to recuperate.
On one evening, I went to a friend's house, had a beer, and played some chess.
We spoke in French the whole time, and it was a pleasant evening.
However, as it started to get late, I started to flag - I was slow to understand what he said, and even slower to put together a response.
As he walked me back to the metro station, he asked if I'd get home OK, to which I could barely manage a 'oui'!
Once on the train, the language-parsing part of my brain no longer in demand, I felt almost immediately more energetic.
I hadn't expected the impact of speaking another language for an extended period to be quite so physical, so visceral.
I'm very grateful to the few friends I've made here for putting up with me, though they all speak better English than I do French.
I'm also humbled.
The UK has a large immigrant population, all of whom have had to learn English.
People speaking accented English is so normal and widespread that it's become utterly unremarkable, though it very much is.
At some point every single one of them has gone through having to spend hours, days or weeks communicating in a language other than their mother tongue, often as a necessity for a job, without even having the fallback that I've had, being an anglophone in Paris.
Overall, my immersion strategy has been successful: I speak far better French than I did at the start of the summer.
It's developed primarily my ability to speak and listen, rather than to read and write.
Even then, my command of the grammar and vocubalary hasn't advanced so much as my confidence.
I think this has been driven in part by necessity.
In a conversation, you don't have time to translate completely what someone's said, so you draw on context and what little you've parsed in the split-second after the other has spoken.
This, coupled with a slapped-together reply, has been the unit of practice I've been trying to encounter as much as possible.
Then, it's been driven by having confirmation.
After putting together some phrase and speaking it, and the next response comes, it's brilliant: I've been understood!
Though a totally normal thing, every time I'm understood in French is a moment of magic for me.
Being able to carry and continue conversations on a wide range of topics, without constant faux pas or speaking gibberish, has bolstered my confidence like nothing else.
I feel that on my return to the UK, my continued French learning will be all the more effective as a result of this experience.
## The French
Parisians have a reputation for being rude to outsiders.
In my experience this hasn't been the case at all, I've found them to be accommodating and (almost overly) polite.
In London it's rare to greet others on the street, or even to look them in the eye.
That's not the case at all in Paris, though it's a denser city, and just as metropolitan.
Walking past people on the stairs in my building, or navigating a shop, or public transport, people are always sure to say hello, and to wish each other a nice day on departure.
There are more smiles in Paris than London.
They're very ready to talk about politics, language, culture, France itself of course and are very open on a number of topics I'm too British to risk mentioning here.
I think the notion of their rudeness is a misinterpretation of what is actually directness.
The British operate a social culture of subterfuge and doublespeak, whereby it's common to express your displeasure to someone and for them to receive it with a smile, left to understand the reproach only later, or never.
The French play no such game: they say what they think.
Personally, I'm a fan of this direct approach.
I find it hard enough to determine what someone means even when they're not trying to hoodwink me.
With the French, I'm much less worried that someone may be or have been duplicitous.
The stereotype of a smoking, drinking Frenchie, I'm sad to say, holds no water at all. Of the friends and acquiantances I've made, not a single one has smoked, only a handful have had alcohol and a good number of them have even been vegan.
## The Lifestyle
Parisian authorities have a concept of a ['15 minute city'](https://www.thelocal.fr/20230215/what-is-a-15-minute-city-and-how-is-it-working-in-paris).
The idea is that one's daily needs should be within 15 minutes of where they live.
For my flat in central Paris, this is absolutely true.
This is true even to the extent that Gare du Nord, my link back to London, is but 10 minutes on the metro from where I sleep.
Combined with the fact that metro lines run quite comfortably in to the early hours makes travel in and around the city faster, cheaper and dare I say even more convenient than that in London.
The French prioritisation of lifestyle over personal assiduity is clear as every night, weekend or otherwise, the eateries and bars are packed to the brim.
Even walking around to find bites for lunch, the brasseries are packed, and on any evening even approaching warmth the shores of the Seine are shoulder-to-shoulder.
The necessary ingredients for such a lifestyle are easily accessible, too; a coffee, croissant and a baguette - breakfast and most of the way to lunch - together cost less than a coffee on its own would in London.
Wine is readily available in almost every building with a door and far cheaper than a London pub.
Beer is the only loser here, which is still about blow-for-blow for when you're out in Soho.
The emphasis on freshness is palpable in a way one doesn't find in the UK - indeed, 'fresh' is a bit of a stretch for any food item in the country at the moment.
For most of my working life I've tended to take lunches quite late, but am having to adapt my strategies here.
If I fancy a pastry, I'm sure to get them in the morning.
From the same chain boulangerie, I made the mistake the other day of buying a croissant at 16h, or four in the afternoon, and it was horrible.
The same bakery has served me plenty of delicious ones both before and after, but at that time in the afternoon it was dry as a bone, and the butter in it tasted almost salty.
## Isolation
Though a fascinating, enjoyable and productive linguistic and cultural experience, I have experienced more homesickness than I expected to.
The last time I moved to a city alone was to the Midlands for university, but I quickly made friends on my course and made use of university societies.
It is much more difficult here.
Not only is it a new city and a new language, but as an adult in full-time work wanting to make adult friends who'll also be in full-time work, my opportunities for making friends are painfully finite.
When I do have time to find and make new friends, I still have to contend with plain old exhaustion.
This is also my first experience living alone for an extended stretch.
Again, though informative and an experience I'm grateful for, I think I'd prefer to live with others.
I've enjoyed my time in France, and plan to make the most of my last few weeks, but there is a part of me that I didn't expect to pine as much as it has for home.
À la prochaine!
![jacques](jacques.jpg)

BIN
blogs/2023/8/3/jacques.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.1 MiB

BIN
blogs/2023/8/3/window.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.6 MiB

View File

@ -0,0 +1,54 @@
# Digital Gardening
*“The black bird Gravious made to settle on her shoulder; she shooed it off and it landed flapping uproariously on the edge of the opened trap door. My tree! it screamed, hopping from leg to leg. My tree! Theyve - I - my - its gone! Too bad, she said. The sound of another great tumble of falling rock split the skies. Stay wherever it puts me, she told the bird. If itll let you. Now get out of my way. But my food for the winter! Its gone! Winter has gone, you stupid bird, she told it.”*
---
When I first designed the index page for my blog, I approached it in the simplest way I could think of.
A blog post is a web page, with a title, which was published at some point in time.
It wasn't - and, at time of writing, still isn't - very pretty, its form very much follows its function.
But what else is there to do?
I could style the links better, perhaps split them up to be grouped into months or years.
Or I could implement some kind of tagging system, and provide methods to organise posts by tags instead of chronology, but that would only be polishing the proverbial turd.
I felt, even when I first designed it, that something was wrong at its core, but I haven't yet been able to put a finger on it.
## Thinking doesn't happen in order
The grain of uncertainty started to develop into something more concrete once I started writing about technical arrangements for computing across languages and setting up a personal music system.
Though I was writing to my best knowledge at the time, one's knowledge is hardly static and my thoughts and experiences have developed over time.
So, confronted with blog posts that I'm generally happy with, but which are becoming somewhat aged, what should the protocol be?
I could publish a new post, with amendments to the previous post, which would be linked at the beginning.
This preserves chronology, but leaves me with just a filtered version of the same problem.
If I publish something, and a month later think something different, does that justify a new post?
Do I now require the reader to trek backwards through the previous posts to fully understand the piece?
Do I need to go back and update all the previous posts in the series to point to the new 'latest' thinking?
I could equally edit or expand the original post with new ideas, perhaps keeping a list of changes somewhere obvious on the page.
This avoids falling into hyperlink hell, as everything stays on the one page.
My RSS feed would dutifully rebuild and re-publish the modified blog post, so anyone following along would be notified.
But this would break the chronology of the thing - if I originally published the post in 2021, and then edit it in 2024, that should surely be communicated somewhere obvious, like the blog index.
But the blog index only has one date in it, and indeed the blog is organised in folders by date.
So should this date be the original published date, or most recently edited?
Thinking through these potential solutions exposes my problem with the format: my thinking is fluid, but blog posts aren't.
I might be able to capture a snapshot of how I think of something in the moment, but it'll be out of date almost immediately if I retain any interest at all in the topic, as my views and experience develop.
In both cases, the problem we encounter is chronology, specifically the maintenance thereof.
This returns us to the most basic format of a blog, indeed its etymology - (web)log.
This suggests that for what I write, and for how I approach writing, a blog isn't actually what I want at all.
## Foraging for a new structure
[qntm's writing on short URLs](https://qntm.org/urls) was a hint to ways of doing things without date slugs.
Similarly, [Hundred Rabbits](https://100r.co/) regularly publish and update pieces on all sorts of topics, which they document in a monthly RSS post, with the vast majority having no obvious chronology.
After discovering the term [digital garden](https://joelhooks.com/digital-garden/), I think Hundred Rabbits' site fits that description better than it does a blog, though it does have some blog components.
I'd like to convert at least part of the blog I maintain here to something more like a digital garden format.
I think some of what I've written probably is best presented as snapshots, and so I don't want to get rid of the blog completely.
However, other things - for example technical descriptions of multilingual or musical computing - are topics where I regularly develop my thinking, so would be better suited to a looser, garden-ier presentation.
I have some technical problems to solve in how my website is put together before it can support a garden, so I expect it will take a moment to materialise.
For the time being I expect I'll continue to make posts here, but as mentioned in the index page, content is very much subject to change.
I think some of those technical problems are interesting, so I think I'll post about them too, but maybe I'll just plant a seed and see how it grows instead.

View File

@ -0,0 +1,121 @@
# Nonbinary Masculinity
I've often been told that I've a masculine brain.
I'm not completely sold on the idea that brains can be masculine, or that if they can be, that mine is, but I suppose I'm willing to entertain the notion.
I think this is because of my tendencies to be analytical and competitive, and my traditionally "masculine" interests, such as maths, science or engineering.
These are fair observations - I do like those things!
However, I don't feel terribly like a man.
I'm comfortable with the body I inhabit, but as time has gone by the label of 'man' has seemed less and less fitting.
It's not that I definitely _am_ something else - at least, there's nothing I've yet found that fits any better - but rather that I feel a lack of accord with whatever it is that a 'man' is.
Over time, I've come to the position that the piece which doesn't fit isn't so much myself - my thinking as an angsty teenager - as the concept of a 'man' in the first place.
## The Male Caricature
Our media does not show us real people: it shows us fictions.
In our entertainment - our sitcoms, dramas and adverts - we show not real people, but archetypical characters used to communicate the story.
This is also true of non-fiction: our news, our reality TV, our documentaries also summarise; the only difference is that somewhere, somehow, the events we see are happening to real people.
Archetypes have been around since we started telling stories.
As storytellers, we strive to tell only that which is relevant to the story being told: to tell more would be unnecessary, and less wouldn't suffice.
So it follows naturally that our stories, all stories, draw from and reinforce existing archetypes.
They must, lest we bore or confuse our audience.
There is and always will be a place in our societies for stories.
Storytelling may well be our most human trait!
However, in my own Western culture I think we've a hyperactive media culture which has replaced much of our experience of real men with archetypes thereof, to the detriment of all.
As our idea of what a man is becomes increasingly based on fictions, and new representations on previous ones, archetypes become caricatures.
We exaggerate those traits pertaining to the story and the archetype while minimising the others.
This cycle continues with each successive generation of media.
Any number of reasons (social media; higher divorce rates; parents working longer hours; fear of strangers) mean children encounter examples of men more frequently in media, and less frequently in real life.
I think the term 'man' is more likely to conjure images of Homer Simpson, Walter White, or Paul Anderson for many people than it is to conjure images of one's own father, friend or teacher.
It is this idea of a man that I reject: an amalgamation of some supposed set of masculine traits.
## An Illusion of Choice
By and large, people who have gender-reaffirming care do not regret it, and it should be available to those who need it.
I am not interested in harping on the individual to demand they respect what God gave them; heavens, no.
Biology is not sacrosanct; it is a technical problem - this is far from the first time we've interfered with our "natural" destinies.
I am however deeply interested in the reasons someone doesn't feel comfortable in their own skin.
The dissonance between one's lived and preferred biological realities is a distressing thing, and babies are not born with any knowledge of their body or gender to be distressed by.
I think therefore that gender dysphoria generally has real causes, rather than people having been born in the "wrong" bodies.
Such causes will be plentiful and complex, but I think one can be identified as the caricatures we find in media.
Someone might not identify with the gender representation they're supposed or expected to, and so feel pushed towards the other.
Even in the liberal West, explorations of the true and vibrant diversities in human gender and sexuality remain under-explored and often taboo.
Without a developed understanding of such complexities, it can be difficult to form a narrative for oneself more nuanced than a binary choice between male/man and female/woman.
This binary is a horrific oversimplification, and an unhelpful false dichotomy reinforced by the hypertrophied archetypes in mass media.
This is at least what I felt I experienced when starting my trans journey nearly a decade ago.
Having had my own Gender Trouble, and watching many friends struggle through their own, I wonder how much pain and heartache could be alleviated if people weren't made to feel uncomfortable in their own bodies in the first place.
Storytelling and archetypes aren't problems in and of themselves, but they can easily become so as they take up more than their fair share of social experiences.
## Boys Don't Cry
The good man is traditionally stoic.
He has emotions, which he feels as strongly as anyone else, but he does not act on them.
He is strong for himself, his family and his friends, putting them before himself.
The idea of the stoic is pervasive enough that we demonise emotional men - what is it that draws Anakin to the Dark Side, and why is the Dark Side evil in the first place?
The stoic is ever-present in our media, but as a character in a story he gets to freely omit a crucial part of the process: actually having to deal with his emotions.
In reality, we don't get to stare broodingly into the sunset before flying off on a space adventure.
When we encounter hardships, we actually have to deal with them, and we have to manage the impact they have on us.
There is always an impact, though often a small one, and we need to recognise it before it can be dealt with.
I'm lucky to have a support network of friends and family who help me navigate my own emotions.
Plenty of times I've pretended not to have emotional reactions to things, to have been okay, actually.
That is, until, with encouragement from those I'm close to, I'm able to dig deeper and outline exactly what my feelings are: upon closer inspection, they are often different from what I'd originally thought.
I find that automatically, I try to avoid such introspection, but it's always been beneficial in the end, even (or perhaps especially) if it's difficult.
I wonder if my initial aversion is a natural instinct, or a behaviour learned from growing up in a media culture.
Probably it's a bit of both.
## The Call of the Wild
It is very difficult to deny the existence of a largely binary biological reality.
There are exceptions, as there must always be with something as complex as mammalian evolution, and a conscientious society should recognise them, not erase them.
However, we are geared for sexual reproduction as a matter of biological and (current) technological fact.
In Western culture, whether or not one wants to engage in reproduction is now an open, individual question, rather than a biological imperative.
In plenty of other cultures and in every other species, this privilege does not exist, and in humans this is often tied to the continued existence of men's and women's gender roles.
I think that this is not something to erase, to be embarrassed by or ashamed of.
Humans are ultimately and forever still just animals.
We are good with our hands and have a knack for getting things done, but we still are subject to the same basic needs as everything else that breathes and moves around.
To some extent, we have a lot of control over these natural forces; we can delay eating, drinking or relieving ourselves until the time is right and proper.
We can repress our murderous urges towards those around us, even if they have left the seat up.
However, on some level our brains and bodies are still the primitive things we evolved on the savannah, and for better or for worse we're stuck with them for now.
The brain has mechanisms to reward behaviours it thinks will help us survive, and mechanisms to discourage things it thinks will not.
Even when we think or know better, we remain subject to these forces.
We would all do well - as individuals as well as a collective - to find ways to better communicate with the human animal, for it holds a power over us which we cannot yet fully control.
## She's My Man
The last thing I'd like to talk about is the idea of masculinity and femininity as a spectrum.
I think this view is reductive at best, and that dropping it opens up a much deeper and more complex view of people and culture.
Take make-up, for example.
Make-up - at least in my culture - is traditionally associated with femininity.
It is women who are expected to dress themselves up, to look a certain way, while men are free to roll out of bed and out the door without so much as a shower.
Bravery is a traditionally masculine thing: this is why it is men who fight, who defend, who provide, who trap spiders and throw them out the window.
So if a man wears make-up, which is he?
Is he a feminine man, and less masculine, for having engaged in a feminine activity?
Or is he masculine, for having the bravery to break the mould?
Then take the maternal instinct to protect one's child.
This is something experienced much more strongly in women than men, for any number of reasons.
A threatened child's mother will endure truly extreme hardships to come to their aid.
This is bravery, it's protection: masculine traits.
But she is a mother: surely the most feminine thing there is?
So again, which is she?
In such instances we can't simply view 'masculine' as the antipode of 'feminine'.
We are forced to accept that actions, objects and people can't be inherently one or the other, and that it depends on perspective and context.
They are qualitatively different ideas with no intrinsic relation to one another.
It follows then that being a man, or a woman, or whatever, doesn't have much to do with having masculine or feminine traits at all.
I am not a man.
I am not a woman.
I am not one of a binary pair.
I am an animal.
I am feminine!
But I am masculine, too.

View File

@ -0,0 +1,103 @@
# A Tidy Room is a Happy Room
![room-grass.jpg](./room-grass.jpg)
In mid-December I attended a hackathon on Meta's campus in central London.
It was something of a novel experience, as I'm much more used to the kind of events put on by university student bodies.
I made some great new friends and enjoyed working with the Quest 3, but more importantly I put some cool wavy grass on the floor of a real room.
This post is a technical breakdown of the graphical components that went into the effect.
---
To begin with, I'll be upfront and say that I did not make the original implementation of the grass - that credit goes to [Acerola](https://www.youtube.com/watch?v=jw00MbIJcrk).
Thanks, Acerola!
It generates chunks of grass positions using a compute shader, which are then used to draw a large number of meshes with [`Graphics.DrawMeshInstancedIndirect()`](https://docs.unity3d.com/ScriptReference/Graphics.DrawMeshInstancedIndirect.html).
The grass mesh is drawn with a vertex shader which lets it move in the wind, and a gradient is calculated along the blades' length to give an impression of 3D lighting.
Chunks outside the field of view are culled, so performance is only spent on chunks which are visible.
Our application has the user interacting with the grass, so we first needed to fit the grass to the physical room, regardless of its size or shape.
For simplicity, I first reduced the grass' footprint to 10x10m, which should be just bigger than most reasonable rooms, and is significantly smaller than the terrain it was covering originally.
My approach would then be to scale and translate the generated grass positions to get them all inside the limits of the room.
The Quest 3 provides access to the generated mesh of the room at runtime, of which we can get an axis-aligned bounding box with [`Mesh.bounds`](https://docs.unity3d.com/ScriptReference/Mesh-bounds.html).
This gives the actual size of the room, and so this information needs to be passed into the compute shader responsible for generating grass positions.
By using the maximum and minimum limits on the X and Z axes, the required information can all be passed into the shader with a single `Vector4`.
```
// GrassChunkPoint.compute
// Original implementation: lay grass positions out on a square grid of chunks
pos.x = (id.x - (chunkDimension * 0.5f * _NumChunks)) + chunkDimension * _XOffset;
pos.z = (id.y - (chunkDimension * 0.5f * _NumChunks)) + chunkDimension * _YOffset;

// Scale to fit the aspect of the room; dimension is the full width of the
// square grid (assumed here to be chunkDimension * _NumChunks)
float width = _Right - _Left;
float length = _Front - _Back;
pos.x *= width / dimension;
pos.z *= length / dimension;
pos.xz *= (1.0f / scale);
```
A world UV for each blade of grass is generated at this time, too.
In the original implementation this is used for sampling the wind texture and a heightmap.
We don't need a heightmap, and the resulting scale artifacts in the wind texture aren't perceptible.
However, we would need to use the UV to interact with the grass later, so I needed to transform the generated UV appropriately too.
```
float uvX = pos.x;
float uvY = pos.z;

// Scale UV to room by inverting the earlier aspect scaling
uvX *= dimension / width;
uvY *= dimension / length;

// Apply translation after scaling - the offset simplifies to dimension * 0.5f
float offset = dimension * 0.5f * _NumChunks * (1.0f / _NumChunks);
uvX += offset;
uvY += offset;

// Divide through for a normalised UV across the room
float2 uv = float2(uvX, uvY) / dimension;
```
With this I was able to fit the grass to the bounds of the room.
It is not an ideal solution, since it is always the same amount of grass scaled to fit into the room.
As a result, smaller rooms have denser grass.
However, most rooms are about the same order of magnitude in terms of area, so this was functional for our prototype.
The next problem to solve was interaction.
Our mechanic involved cutting the grass.
Since the grass blades were not GameObjects, there was no object to destroy and no transform to compare, so we needed another way to relate a position within the room to something the grass could understand.
Moritz had created an array of points covering the floor, taking into account raised parts of the scene mesh to determine where grass ought to be.
![free-spots.png](./free-spots.png)
We opted to use a render texture to communicate this information to the grass' vertex shader.
This approach meant we could write in information about the pre-existing furniture at startup, and use the same technique to update the grass at runtime.
UVs are generated for points and used to write into an initially black render texture.
Green points should have grass on startup, and so they write a white pixel into the render texture.
Everywhere else is initially black, which means the grass should be cut at that location.
When points are collected, they write black into the texture, cutting the grass at that location.
![room-rt.png](./room-rt.png)
![grass-rt.png](./grass-rt.png)
The last step was to sample the render texture and use it to remove grass at a particular location.
We can sample using the UV which was scaled to the room during position generation.
Then we clip the grass based on the read value.
```
// Sample the texture in the vertex stage to reduce texture lookups
o.grassMap = tex2Dlod(_GrassMap, worldUV);
// ...
// Clip pixels in the fragment stage if the read value is less than white
clip(i.grassMap.x - .99);
```
That's it for the main moving parts we implemented on top of Acerola's grass for the hackathon.
Thanks for reading, and please [get in touch](mailto:me@ktyl.dev) if you have any questions!
![team.jpg](./team.jpg)

BIN
blogs/2024/1/1/free-spots.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2024/1/1/grass-rt.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2024/1/1/room-grass.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 220 KiB

BIN
blogs/2024/1/1/room-rt.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2024/1/1/team.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 136 KiB

View File

@ -0,0 +1,77 @@
# We All Dream About Flying
It's turning again.
Hmm?
The tide. It's turning.
Oh. Of course it's turning. It's always turning.
Earlier than yesterday.
It's always earlier than yesterday, too.
You don't know that.
Yes I do. I'm going back to sleep.
No you don't. How could you know that? You haven't seen every tide.
Those I haven't, you've told me about.
Well, I haven't seen them all either.
...
I just thought you'd want to know.
Thanks for sharing. I also want to sleep. Go away.
Sleep? How can you sleep at a time like this?
Mid-afternoon? Completely without issue.
But there's so much to see! Like, for instance, take that sand over there!
Sand? It's sand. You are winding me up on purpose.
Am not! Sand is cool! See how it's layered?
Layered?
Yeah! The sand higher up the slope is dry, while the area closer to the water is wet.
It's always like that. The wet bit has more worms in it. What's your point?
The proportion of dry to wet sand changes! Isn't that cool? Look closely, you can see the water running down the wet bit. It cuts little wavy channels, like tiny rivers. The water in the sand is higher than sea level, so it's doing what water always does and flowing down towards it. Then when the tide comes back in, the cycle starts again!
Riveting.
It's hard to get worms by the water, for the waves. And the sand is so wet it's hard to stand on. But the dry bit doesn't have any worms at all. How far up should we go to get the most worms the most easily, then?
I dunno. The middle?
Is there some part which has juicier worms? What about other beaches? I don't know either! Isn't it wonderful?
How is sand wonderful?
How isn't it?
It's mundane.
So? What do you think about all day, then? You like sleeping so much, what do you dream about?
Flying, usually.
Well of course you dream about flying. We all dream about flying. Isn't there anything else?
I dunno. Worms I guess? Going back to sleep?
You dream about sleeping while you're asleep? There's an irony in there somewhere, I'm sure of it.
Sleep's important. Speaking of, don't let me keep you from your statistical analysis of the distribution of worms in sand. It sounds like you've got a lot of work to do.
Suit yourself. I won't share my results.
I cannot fathom how I shall cope. Goodnight.

View File

@ -0,0 +1,191 @@
# Interactive Astrodynamics
_“Space is big.
You just won't believe how vastly, hugely, mind-bogglingly big it is.
I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.”_
---
Say you're in a simulation, and you want to throw a ball.
Easy enough, you take what you know about classical mechanics and impart some force to your ball to give it an initial velocity.
Then you take little steps forward in time, applying some force due to gravity at every one.
You consider applying forces from air resistance too, but then remember you're a physicist, and think better of it.
Next, as you're in a simulation, you want to have some fun.
You decide to throw it a bit harder - let's put this thing into orbit!
You add a few zeroes to your initial throw and throw it straight ahead.
To your dismay, once the ball gets more than a few kilometres away, it starts misbehaving.
It stops moving smoothly and instead begins to jitter.
Before long it's a complete mess, jumping from place to place and not really looking like it's been thrown at all.
Digging into your simulation, you find you've been using single-precision floating points.
A rookie mistake!
You change them all to double-precision, and throw again.
This time, the ball sails smoothly away, and you watch it disappear over the virtual horizon.
Satisfied, you pivot on the spot to await the ball's return.
You estimate it should take 90 minutes or so, but dinner is in an hour, so you decide to hurry up and wait by speeding up time by a factor of a hundred.
At this rate, the ball will only be gone for a few seconds.
You take these few seconds to recollect what you know about objects in orbit.
Accelerating only due to the force of gravity, such objects move along elliptical trajectories.
Every orbit, they pass through the same points, and every orbit, they take the same amount of time to do so.
Checking your watch, you predict to the second when the ball will arrive, and slow things back to normal just in time to watch the ball appear in the distance.
You hold your hand out in exactly the position you released the ball, since that position is on its orbit.
You smile smugly, proud of your creation, and are just thinking of dinner when the ball arrives, hitting you squarely in the face.
---
It is straightforward enough to [implement an _n_-body simulation](https://youtu.be/Ouu3D_VHx9o?t=403) of gravity based on Newton's law of gravitation.
Unfortunately, physics calculations in games are done with floating-point arithmetic, of which a limitation is that [numbers may only be approximately represented](https://www.youtube.com/watch?v=PZRI1IfStY0).
Besides scale, limited precision presents a subtle, hard-to-solve problem in any iterative algorithm, such as an _n_-body simulation based on universal gravitation.
As part of its operation, the state at the end of one discrete [frame](https://gameprogrammingpatterns.com/game-loop.html) forms some or all of the input for the next.
Due to the missing precision, error accumulates over time, resulting in larger discrepancies from reality the longer the simulation is run for.
Floating points are a fact of life in game physics, and we are not going to escape them here.
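To make the accumulation concrete, here is a minimal sketch - plain NumPy with made-up initial conditions, not any particular engine's code - which integrates the same orbit in single and double precision and measures how far apart the results end up:

```python
import numpy as np

def integrate(dtype, steps=100_000, dt=1.0):
    # Semi-implicit Euler around a point mass: each frame's output state
    # is the next frame's input, so rounding error compounds every step.
    mu = np.dtype(dtype).type(3.986e14)        # Earth's GM in m^3/s^2
    pos = np.array([7.0e6, 0.0], dtype=dtype)  # ~7,000 km from the centre
    vel = np.array([0.0, 7.5e3], dtype=dtype)  # roughly circular orbit
    dt = np.dtype(dtype).type(dt)
    for _ in range(steps):
        r = np.sqrt(pos @ pos)
        vel = vel - mu * pos / r**3 * dt
        pos = pos + vel * dt
    return pos

# Identical maths, identical inputs; only the precision differs.
drift = np.linalg.norm(integrate(np.float32) - integrate(np.float64))
print(f"single vs double precision disagreement: {drift:.0f} m")
```

The exact figure depends on the machine and the step count, but the two runs part ways quickly, the gap only grows the longer the simulation runs - and neither of them is the true orbit.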
So how does Kerbal Space Program do it?
![ksp.jpg](ksp.jpg)
Kepler's laws of planetary motion make a simplification known as the two-body problem: they describe the elliptical path of an object in a stable orbit.
This includes the paths of objects like the Moon and the International Space Station around the Earth, or the path of the Earth around the Sun.
More complex systems, such as Sun-Earth-Moon or the entire Solar System can be modelled as a composition of several two-body systems, rather than the _n_-body systems they really are.
This conveniently sidesteps the requirement to use approximate forces and therefore the accumulation of error over time.
The two-body problem describes the motion of one body around another - or, more precisely, around a common barycentre.
In the case of one massive and one tiny object, the barycentre is approximately in the same position as the centre of the larger object and can therefore be ignored.
In a two-body system, an orbit is perfectly stable and cyclical.
A body's position can thus be calculated directly as a function of time.
The ability to calculate positions directly as a function of time has some extremely valuable properties.
One of the many challenges faced in deep space exploration is the sheer duration required by such voyages - the Voyager probes were launched nearly half a century ago, and even reaching Mars takes the better part of a year.
To be of any real use, then, interactive astrodynamic simulations need a level of time control.
Kerbal Space Program has a time warp feature, allowing players to navigate missions taking years of simulated time in an evening of real time.
This would not be possible with an iterative physics simulation, as the accuracy of the result is directly dependent on the resolution of the time step.
Games have to produce several frames per second to both sell the illusion of motion and stay responsive, meaning each frame must be computed in a small fraction of a second.
Depending on the platform, target frame rates can vary from as low as 30Hz, for a mobile device preserving its battery, to 120Hz and beyond for competitive or virtual reality applications.
This means most games have about 10ms to compute each frame.
Processing limitations are of particular interest in games, which are severely constrained in the amount of processing they are able to complete in a single rendered frame.
In practice, the physics sub-system in a game may have significantly less than 10ms to work with, since other sub-systems could be competing for the same computational resources.
An astrodynamics simulation may have a single millisecond or less in which to make its calculations.
Simulating physics to a high degree of accuracy with an iterative model requires high-resolution time steps.
Interactive physics simulations will often run at many times the rendered framerate to maintain stability.
For example, simulating an [internal combustion engine](https://www.youtube.com/watch?v=RKT-sKtR970) requires extremely high simulation frequencies to be able to accurately produce sound.
To maintain accuracy at a higher rate of passage of time, the simulation frequency has to increase with it.
Kerbal Space Program's highest level of time warp is 100,000x normal speed, at which a physics simulation would require computation of 100,000x the time steps per rendered frame to remain stable.
If we instead calculate an object's position from a timestamp, the calculus changes completely.
The simulation is no longer constrained by the amount of simulated time the CPU can process per unit real-time.
Instead, we are able to jump directly to the time of our choosing, at whatever rate we like, and determine the position of the object afresh every time.
The object's position ceases to be dependent on its previous position, and the cost of our calculation becomes unbound from the rate of passage of simulated time, so long as we maintain a constant frame rate.
We can even run time backwards - from the simulation's perspective, it does not care a jot.
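The whole idea fits in a few lines - a toy sketch for a circular orbit, not Kepler's full machinery:

```python
import math

def position(t, radius, period):
    # A pure function of time: no state carries over between frames, so
    # there is no error to accumulate, and nothing stops t from being
    # enormous, sparse, or negative.
    angle = 2.0 * math.pi * t / period
    return (radius * math.cos(angle), radius * math.sin(angle))

orbit = dict(radius=6.8e6, period=90.0 * 60.0)
print(position(90.0 * 60.0, **orbit))  # one orbit from now
print(position(-1.0e9, **orbit))       # about thirty years ago
```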
---
Planets move in elliptical paths, and [ellipses are surprisingly complex](https://www.chrisrackauckas.com/assets/Papers/ChrisRackauckas-The_Circumference_of_an_Ellipse.pdf).
Part of the determination of a planet's position as a function of time is the calculation of the [eccentric anomaly](https://en.wikipedia.org/wiki/Eccentric_anomaly) from the mean anomaly, which is related by Kepler's equation.
![keplers-equation.png](keplers-equation.png)
In this equation, _E_ is the eccentric anomaly, the goal of the computation.
_M_ is the mean anomaly, which increases linearly over time.
The mean anomaly of Earth's orbit increases by pi every six months.
_e_ is the eccentricity of the orbit.
Since the equation has no closed-form solution for _E_, it must be solved with numerical methods.
A common choice is [Newton-Raphson](https://en.wikipedia.org/wiki/Newton%27s_method), a general-purpose root-finding algorithm which converges on successively better approximations through an arbitrary number of iterations.
It is straightforward to understand and implement, but it has some major flaws for real-time physics simulation.
For low-eccentricity orbits - those that are nearly circular - Newton-Raphson converges quickly in just a handful of iterations.
However, when presented with an eccentric orbit, such as those of the [Juno probe](https://science.nasa.gov/mission/juno/), it takes exponentially more iterations to resolve to a high-accuracy solution.
With a lower number of iterations, the calculated value for _E_ is erratic and inaccurate, completely unsuitable for a stable simulation.
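For reference, the iteration itself is compact - a sketch of Newton-Raphson applied to Kepler's equation, using the update E ← E - f(E)/f'(E) for f(E) = E - e·sin(E) - M:

```python
import math

def eccentric_anomaly_newton(M, e, tol=1e-12, max_iterations=100):
    E = M  # natural first guess: for e = 0, E = M exactly
    for i in range(max_iterations):
        f = E - e * math.sin(E) - M
        # The derivative 1 - e*cos(E) approaches zero as e -> 1 near E = 0,
        # which is what makes highly eccentric orbits slow to converge.
        step = f / (1.0 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            return E, i + 1  # solution and iteration count
    return E, max_iterations
```

Printing the iteration count for a near-circular orbit against a Juno-shaped one makes the asymmetry plain.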
This presents a massive problem in a real-time system like a game.
To be fit for purpose a physics simulation needs to be stable in all reasonable cases the player might encounter, so a Newton-Raphson-based approach would need to be given the resources to run as many iterations as needed to determine a suitably accurate value for _E_.
Since computation increases exponentially as orbits become more eccentric, the worst-case performance is many orders of magnitude worse than the best- or even average-case performance.
Even if the worst case could be computed accurately while maintaining the target frame rate, in the majority of cases the simulation would use but a fraction of the resources allocated to it.
This inefficiency comes at the cost of taking resources away from other sub-systems that could do with them, impacting the quality of the whole application.
---
There is, thankfully, a solution.
While Newton-Raphson is likely to have the best best-case performance - scenarios with nearly-circular orbits - it has appalling worst-case performance.
As soft real-time systems, games need to be optimised for the worst-case, such that framerates are consistently high.
We also want to make the most of our available resources on every frame, rather than leave most of them unused most of the time.
For this, we can turn to that old favourite of computer scientists: a binary search.
```c
// Meeus' scheme: normalise M, then halve a bracketing interval
// a fixed number of times.
Real M = meanAnomaly;
Real e = eccentricity;
Real E;
Real D;
Real F = sign(M);

// Reduce M to the range [0, 2*pi)
M = abs(M) / (2.0 * pi);
M = (M - (int)M) * 2.0 * pi * F;
if (M < 0)
{
    M = M + 2.0 * pi;
}

// Exploit symmetry: solve in [0, pi] and restore the sign afterwards
F = 1.0;
if (M > pi)
{
    F = -1.0;
    M = 2.0 * pi - M;
}

// Start in the middle of the interval and halve the step each iteration
E = pi * 0.5;
D = pi * 0.25;
for (int J = 0; J < iterations; J++)
{
    // Step towards the E whose mean anomaly M1 matches the target M
    Real M1 = E - e * sin(E);
    Real val = M - M1;
    E = E + D * sign(val);
    D *= 0.5;
}

E *= F;
return E;
```
Eccentric anomaly determination from _Meeus, J. Astronomical Algorithms. Second Edition, Willmann-Bell, Inc. 1991._
Translated to pseudo-C from BASIC source.
---
This approach, while obtuse and archaic, has the property of running in a predictable, constant amount of time, and computes a result to a particular level of precision determined by the number of iterations.
On average, it performs worse than Newton-Raphson, taking more runtime to arrive at a result of similar accuracy.
However, in the worst-case, it performs just as well in terms of accuracy and runtime as the best-case, which makes it suitable for real-time applications in a way that Newton-Raphson is not.
Instead of having to pad the physics budget to account for a worst-case scenario, we can make much more efficient use of a smaller budget, despite the algorithm almost always costing more.
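For symmetry, here is a Python port of the scheme above; note that the cost is exactly `iterations` passes through the loop, no matter the inputs:

```python
import math

def eccentric_anomaly_bisect(M, e, iterations=32):
    # Normalise M into [0, 2*pi), then fold into [0, pi], remembering the sign.
    sign = -1.0 if M < 0 else 1.0
    M = math.fmod(abs(M), 2.0 * math.pi)
    if M > math.pi:
        M = 2.0 * math.pi - M
        sign = -sign
    # E - e*sin(E) increases monotonically with E, so we can bisect:
    # halve the step every pass, moving towards the target mean anomaly.
    E = math.pi * 0.5
    D = math.pi * 0.25
    for _ in range(iterations):
        E += D if M > E - e * math.sin(E) else -D
        D *= 0.5
    return sign * E

# 32 iterations pin E down to ~2e-10 radians whether e is 0.0 or 0.99.
```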
---
Cheek still smarting, you weigh the ball in your hand.
Looking out past the horizon, you think of the ground below your feet.
You think through the core of your perfectly spherical virtual planet, and out into the space beyond.
You think as far as double-precision floating points can take you - which is, wait, how far?
The distance doesn't really matter anymore, you realise, since the precision is the same for an ellipse a metre across as it is for an astronomical unit.
The limit now is double-precision time.
Idly bouncing the ball in one hand, you work out what that means.
To be accurate to a single frame, you want to be able to represent units of time as short as a frame, or 0.01 seconds.
There are about thirty million seconds in a year, or a few billion frames: looking good so far, that's only around 10 significant figures, and a double holds nearly 16.
In a thousand years, the smallest step we can represent is still just a few microseconds, so you keep going.
After a million years, the minimum increment finally creeps up to a few milliseconds, which sounds about right.
With a grin, you add a healthy number of zeroes, aim, and throw.
You watch the ball disappear over the horizon - it does so much more quickly than last time - and turn around to await its return.
Sitting down, you look at your watch and start to crank up the passage of time itself.
You watch as the ball comes back around... and around, and around, and around again, until it becomes a steady blur over your head, and your mind starts to wander.
You start to think about double doubles.
You've come a long way, but what's a million years, when you're a planet?
---
I don't know how Kerbal Space Program computes positions on elliptical orbits - if you do, [please get in touch!](mailto:me@ktyl.dev) - but I would be surprised if they used Newton-Raphson.
In practice, Kepler's laws are limited in their utility for complex, real-life astrodynamical systems, as they don't take into account tidal effects, irregular gravitational fields, atmospheric drag, solar pressure or general relativity.
They also cannot model Lagrange points, which are fundamental to many modern spacecraft such as JWST, or Weak Stability Boundary transfers as used by modern robotic lunar probes such as SLIM.
Nonetheless, game developers are a resourceful bunch, so I'm still holding my breath for a game about the [Interplanetary Transport Network](https://en.wikipedia.org/wiki/Interplanetary_Transport_Network).

BIN
blogs/2024/4/27/keplers-equation.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
blogs/2024/4/27/ksp.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 117 KiB

64
blogs/isaac.md Normal file
View File

@ -0,0 +1,64 @@
---
created: 2024-08-22T17:43:52+01:00
modified: 2024-08-22T17:45:52+01:00
---
# Isaac
"Thank you, and if I could just confirm your type?"
"O negative."
"And lastly, how much were you looking to provide this evening, sir?"
"One-point-two litres."
"Very good. if you'll just wait a moment..."
Isaac let his eyes wander as the clerk busied herself with the terminal. It wasn't the worst waiting lounge he'd seen, all of the lights were working and the chairs seemed unsoiled. Donors sat dotted around, largely occupying themselves with their personal devices or reading material. Mostly younger than himself, he noted idly. One or two of the older folk weren't reading but gazing, glassy-eyed ahead of them. There was no conversation.
"Thank you for your patience, sir," he focused again on the clerk, "we can exchange 17,205 credits for that volume."
"Seventeen thousand? It was twenty just last month!"
"I'm afraid I can't control the price, sir. If you'd like, I can increase the volume to twenty thousand credits."
"I-", Isaac closed his eyes and bowed his head a little. He breathed out. "No. No thank you, 17 thousand is fine."
"Excellent. That's all gone through, your number is four-thirteen. If you'd like to take a seat a technician will call you when they're ready."
"Thank you."
It would still be enough. Annoying, but not disastrous. He took a seat against the wall, and looked down at the magazines on the low table in front of him. He picked up one with a smiling cocktail-wielding woman in a swimsuit on the front, and started idly leafing through pictures and descriptions of exotic package holidays, each more breathlessly unique and enticing than the last. No prices, he noted. They each implored the reader to book a consultation call.
He'd dozed off by the time his number was called, and came to with a start. Waving to acknowledge his technician at the front of the room, he got to his feet. His momentarily forgotten magazine slipped off his legs and fell onto the floor in a clumsy heap as he stood. He let out a little grunt of exertion as he bent to pick up and replace it upon the table, and started to make his way over to the tech. As he neared, they turned on a heel and started down the corridor behind them. Isaac didn't mind the efficiency; time spent on pleasantries would only eat into the price more.
Turning to follow the tech into a room some distance down the corridor, he found another tech adjusting the chair in the middle.
"You can hang your coat by the door", she said tersely, gesturing over his shoulder.
"Thanks," he did so and she turned away wordlessly, busying herself with some equipment on the bench at the side of the room. The tech he'd followed from the lounge was replacing their gloves. Not needing more instructions, he set himself down in the central chair and rolled up his right sleeve. His left was still bruised from the week before. The freshly-gloved tech turned back round to him, now clutching a needle.
"Are you comfortable?" they asked, and started to swab his arm without waiting for an answer. He didn't give one, and they continued in silence. "You'll feel a sharp prick," and he did, though he saw nothing as he'd already averted his gaze.
He bit thoughtfully into the recovery cookie as he stepped out of the clinic and onto the street. He'd usually not have bothered, but the clerk had explained they were complimentary at this clinic as she transferred his funds. It hardly made up for the poor take, but silver linings, he supposed. He stood and ate the whole thing before starting on his way, wary that walking would drive his heart rate up. Nonetheless, so would standing, and they charged extra for recovery chairs. He tried not to look at the figure slumped against the clinic's façade and set off, slowly, for the metro.
He leaned against a pillar on the platform when he arrived and closed his eyes, breathing deeply and slowly. He wished they hadn't removed the benches the previous year, but dared not seat himself on the platform floor. He couldn't afford a loitering charge, and so steeled himself against the pillar with gritted teeth, listening for the sounds of an approaching train. He hoped it wouldn't be full.
What must have been minutes felt like hours, but eventually the rails began to sing and he opened his eyes to the welcoming illumination of the front car's headlights on the slick tunnel walls. He watched it pull in, with its mercifully off-peak complement of passengers, and waited until the last moment to push himself from his pillar and make his way to the nearest door. It opened before him - he'd timed it perfectly not to break step - and he collapsed into a chair just as the doors closed. He let out a sigh of relief and the train began to move.
Lights on the inside of the tunnel wall flicked by hypnotically as the train rattled onwards. Isaac looked instead at the adverts above and below the windows, trying to keep his focus engaged and his mind off his light-headedness. A pair of smiling pensioners gleamed down at him from a particularly colourful spread. "Try plasma today," read the tagline, "after all, age is just a number!" Annie and Joe, the advert read, gave a testimonial of celebrating their fifteenth decade together. He imagined them speaking animatedly to an agent about one of the holidays in the magazine. "Why, of course!" Annie would exclaim, "we wouldn't dream of missing Istanbul!" Joe is quieter, Isaac imagined, but smiles and nods along with his partner. They hold hands under the table as the agent successfully upsells them city tour after city tour. Age is just a number, Isaac thought, much like a bank balance. With that, he promptly passed out.
The train was stopped when he was prodded awake by the guard.
"End of the line, sir," he said unhelpfully, "train's going out of service."
"Eugh..." groaned Isaac as he righted himself. He felt a bit better for having slept, but his neck was stiff and his mouth dry. "Thanks," he rasped to the guard, who took a step away but remained watching him, clearly awaiting Isaac's departure. Isaac pulled himself up with the help of a pole, and noticed only then the breeze on his feet. He looked down, and saw his own socked feet. His boots were nowhere to be seen. "Aw, fuck. My shoes..." The guard remained stalwart and unmoved, volunteering nothing in the way of aid. Isaac looked both ways along the train, but there was no one else. He looked back at the guard, hoping for a sign of compassion, perhaps, but received only a nod towards the open door. The platform outside was wet.
He made his way out of the station and looked over at the bus stop, which held neither buses nor awaiting passengers. He fished for his phone in his coat pocket. What had been idle frustration at losing his shoes rapidly escalated into blind panic; his phone wasn't there either. He tried his other pockets, but quickly gave up; it was gone. He swore, with feeling this time, and stamped his sodden socked feet in a puddle. A teller in the station looked up momentarily to raise an eyebrow in Isaac's direction, but went back to closing up almost immediately. Isaac calmed down, defeated, and studied the map out the front of the station. It was a long way home from here.
He'd all but fallen asleep on his feet as he dragged himself up the stairs to his unit. The lock was biometric, thankfully - too far from the city centre to be integrated into the phone network - and he let himself in, closing the curtains against the rising sun. He had a shower, grateful to be free of his now ruined socks, and fell, finally, comfortably asleep.
He allowed himself a lie-in the following morning. It was his weekend, and it was past noon when he finally rose. After breakfast he poured himself a cup of coffee and turned his attention back to the papers scattered over the kitchen, where they'd been before he'd left the previous morning. He sighed, and scribbled a few new numbers into a notepad. He dug out his spare phone - a horrid, ancient thing - and went through the online process of activating it with his details. At least the stolen phone wouldn't compromise him - these things were pretty secure now, and it would have wiped itself as soon as someone failed the biometric checks. The spare came to life with a sickly chime, which he made a note to disable later. First, though, he punched what had become a familiar number.
"Hi, good afternoon... yes... yes.. next weekend would be fine... One-point-three litres, please..."

68
build/index.py Normal file
View File

@ -0,0 +1,68 @@
#!/usr/bin/env python3
# should this operate on the same basic files as the rss script?
import sys
import re

# we expect the arguments to be filepaths to each blog post
def print_usage():
    print(f"\nusage: python {sys.argv[0]} POSTS\n")
    print("\n")
    print("\t\tPOSTS\tfilepaths of blog posts")

# check args for at least one file path
if len(sys.argv) < 2:
    print_usage()
    sys.exit(1)

# posts are arguments from index 1 onwards
posts = sys.argv[1:]

dir_pattern = re.compile(r"(.+)\/(\d{4}\/\d+\/\d+\/.+\.html)")
path_pattern = re.compile(r"(.+)\/(\d{4})\/(\d{1,2})\/(\d{1,2})\/(.+).html")
title_pattern = re.compile("<h1>(.+)</h1>")

posts.reverse()
links = []

# for each file we want to output an <a> tag with a relative href to the site root
for path in posts:
    m = re.match(path_pattern, path)
    if not m:
        # path/to/file.ext -> file
        title = path.split('/')[-1].split(".")[0]
        date = '<span class="post-date">0000-00-00</span>'
        url = path.split("/")[-1]
    else:
        year = m.group(2)
        month = m.group(3).rjust(2, '0')
        day = m.group(4).rjust(2, '0')
        date = f'<span class="post-date">{year}-{month}-{day}</span>'
        title = ""
        with open(path) as f:
            for line in f:
                if title_pattern.match(line):
                    title = re.sub(title_pattern, r'<span class="post-title">\1</span>', line).strip()
                    break
        # clean leading directories to get the relative path we'll use for the link
        url = re.sub(dir_pattern, r"\2", path)
    item = (date, f'<li><a href="blog/{url}">{date}{title}</a></li>')
    links.append(item)

# make sure we're properly ordered in reverse date order lol
links = sorted(links, key=lambda x: x[0])
links.reverse()

for l in links:
    print(l[1])

91
build/page.py Normal file
View File

@ -0,0 +1,91 @@
#!/usr/bin/env python
import os
import sys
import markdown
import re

# SRC
# +-2022/
# | +-10/
# |   +-12/
# |   +-25/
# +-2023/
# | +-1/
# |   +-26/
# | +-3/
# ...

def print_usage():
    print(f"\nusage: python {sys.argv[0]} SRC DEST\n")
    print("\n")
    print("\t\tSRC\tinput markdown file")
    print("\t\tDEST\tdestination html file")

# check args
if len(sys.argv) != 3:
    print_usage()
    sys.exit(1)

src_file = sys.argv[1]
dest_file = sys.argv[2]

# check the source file exists
if not os.path.isfile(src_file):
    print(f"{src_file} doesn't exist")
    sys.exit(1)

# make dest dir if it doesnt exist
dest_dir = os.path.dirname(dest_file)
print(dest_dir)
if not os.path.isdir(dest_dir):
    os.makedirs(dest_dir)

# write markdown into a dummy file first so that we can add lines before it in the final output
dummy_file = f"{dest_file}.bak"
open(dummy_file, 'w').close()
print(f"{src_file} -> {dummy_file}")
markdown.markdownFromFile(input=src_file, output=dummy_file, extensions=["fenced_code"])

# TODO: a lot of this templating work is specific to the ktyl.dev blog - ideally, that stuff should
# be in *that* repo, not this one
print(f"{dummy_file} -> {dest_file}")
with open(dummy_file, 'r') as read_file, open(dest_file, 'w') as write_file:
    write_file.write("#include blogstart.html\n")

    # modify the basic html to make it nicer for styling later
    html = read_file.read()

    # extract images from their enclosing <p> tags and put them in img panels
    html = re.sub('(<p>(<img(?:.+)/>)</p>)', r'<div class="img-panel">\2</div>', html)
    # wrap <ul> elements with <p> to get them into blocks
    html = re.sub('(<ul>(\n|.)*</ul>)', r'<p>\1</p>', html)
    # insert text-panel start between non-<p> and <p> elements
    html = re.sub('((?<!</p>)\n)(<p>)', r'\1<div class="text-panel">\n\2', html)
    # insert para-block end between <p> and non-<p> elements
    html = re.sub('(</p>\n)((?!<p>))', r'\1</div>\n\2', html)
    # insert code-panel start before <pre> elements
    html = re.sub('(<pre>)', r'<div class="code-panel">\n\1', html)
    # insert code-panel end after </pre> elements
    html = re.sub('(</pre>)', r'\1\n</div>', html)
    # replace horizontal rules with nice separator dot
    html = re.sub('<hr />', r'<div class="separator"></div>', html)

    lines = html.split("\n")
    # tack on a closing div because we will have opened one without closing it on the final <p>
    lines.append("</div>")
    for line in lines:
        write_file.write(line + "\n")

    write_file.write("\n#include blogend.html\n")

os.remove(dummy_file)

83
build/rss.py Normal file
View File

@ -0,0 +1,83 @@
#!/usr/bin/env python3
import markdown
import pathlib
import sys
import re
from datetime import datetime

def print_usage():
    print(f"\nusage: python {sys.argv[0]} POSTS\n")
    print("\n")
    print("\t\tPOSTS\tfilepaths of blog posts")

# check args for at least one file path
if len(sys.argv) < 2:
    print_usage()
    sys.exit(1)

# posts are arguments from index 1 onwards
posts = sys.argv[1:]

# header and footer to enclose feed items
header = """<?xml version="1.0" encoding="utf-8" ?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>ktyl.dev</title>
<link>https://ktyl.dev/blog/index.html</link>
<description>mostly computer stuff!</description>
<atom:link href="https://ktyl.dev/blog/index.xml" rel="self" type="application/rss+xml"/>
"""
footer = "</channel></rss>"

# regex patterns
title_pattern = re.compile("<h1>(.+)</h1>")
path_pattern = re.compile(r"(.+)\/(\d{4})\/(\d{1,2})\/(\d{1,2})\/(.+).md")

def make_item(path):
    item = "<item>\n"

    # get the HTML version of the file
    text = ""
    with open(path) as f:
        text = f.read()
    html = markdown.markdown(text, extensions=["fenced_code"])

    # title
    title = None
    m = title_pattern.match(html)
    if m is None:
        title = path.split('/')[-1]
    else:
        title = m.group(1)
    item += f"<title>{title}</title>\n"

    # link
    url = "/".join(pathlib.Path(path).parts[1:])
    url = url.replace(".md", ".html")
    link = f"https://ktyl.dev/blog/{url}"
    item += f"<link>{link}</link>\n"

    # content
    description = html
    description = re.sub('<', '&lt;', description)
    description = re.sub('>', '&gt;', description)
    item += f"<description>{description}</description>\n"

    # does the path have a date in it?
    if path_pattern.match(path):
        date = re.sub(path_pattern, r'\2-\3-\4', path)
    else:
        date = datetime.now()
    item += f"<pubDate>{date}</pubDate>\n"

    item += "</item>"
    return item

# print everything!
print(header)
for p in posts:
    print(make_item(p))
print(footer)

View File

@ -3,21 +3,42 @@ OUT_DIR = out/
HTML_DIR = $(OUT_DIR)html
GEMINI_DIR = $(OUT_DIR)gemini

MAKE_GEMINI = build/markdown2gemini.py
MAKE_RSS = build/rss.py

PAGES = $(shell find $(SRC_DIR) -wholename "$(BLOG_SRC_DIR)*.md")
HTML_TARGETS = $(PAGES:$(SRC_DIR)/%.md=$(HTML_DIR)/%.html)
HTML_RSS = $(HTML_DIR)/index.xml
HTML_INDEX = $(HTML_DIR)/index.html
GEMINI_TARGETS = $(PAGES:$(SRC_DIR)/%.md=$(GEMINI_DIR)/%.gmi)

IMAGES = $(shell find $(SRC_DIR) -wholename "$(SRC_DIR)*.png" -o -wholename "$(SRC_DIR)*.jpg")
PNG_TARGETS = $(IMAGES:$(SRC_DIR)/%.png=$(HTML_DIR)/%.png)
JPG_TARGETS = $(IMAGES:$(SRC_DIR)/%.jpg=$(HTML_DIR)/%.jpg)
IMAGE_TARGETS = $(PNG_TARGETS) $(JPG_TARGETS)

_dummy := $(shell mkdir -p $(HTML_DIR) $(GEMINI_DIR))

$(HTML_DIR)/%.html: $(SRC_DIR)/%.md
	pipenv run python build/page.py $< $@

$(HTML_RSS): $(PAGES)
	pipenv run python $(MAKE_RSS) $(PAGES) > $@

$(HTML_INDEX): $(HTML_TARGETS)
	pipenv run python build/index.py $(HTML_TARGETS) > $@

$(HTML_DIR)/%.png: $(SRC_DIR)/%.png
	mkdir -p $(shell dirname $@)
	cp $< $@

$(HTML_DIR)/%.jpg: $(SRC_DIR)/%.jpg
	mkdir -p $(shell dirname $@)
	cp $< $@

html: $(HTML_TARGETS) $(HTML_RSS) $(HTML_INDEX) $(IMAGE_TARGETS)

gemini: