Compare commits
9 Commits
e4b7d7af2b
...
7271899b78
Author | SHA1 | Date |
---|---|---|
ktyl | 7271899b78 | |
ktyl | 3372bb108d | |
ktyl | 7a29773169 | |
ktyl | 89bdaf95b9 | |
ktyl | 2d0fb9ed84 | |
ktyl | 623a463d9f | |
ktyl | f72c3e7f35 | |
ktyl | 1cc66385ab | |
ktyl | 17c12cadf4 | |
@@ -4,3 +4,12 @@
 [submodule "blog"]
 	path = blog
 	url = https://sauce.pizzawednes.day/ktyl/blog.git
+[submodule "journal"]
+	path = sr/garden/journal
+	url = git@sauce.pizzawednes.day:ktyl/journal.git
+[submodule "src/garden/period3.xyz"]
+	path = src/garden/period3.xyz
+	url = git@sauce.pizzawedney.day:ktyl/period3.xyz
+[submodule "src/garden/journal"]
+	path = src/garden/journal
+	url = git@sauce.pizzawednes.day:ktyl/journal
makefile (2 changed lines)
@@ -67,7 +67,7 @@ blog: $(HTML_INCLUDES) $(CSS_TARGETS)
 	done
 
 garden:
-	make --directory $(GARDEN_BASE_DIR) html
+	make --directory $(GARDEN_BASE_DIR) site
 
 clean:
 	make --directory $(GARDEN_BASE_DIR) clean
@@ -0,0 +1,2 @@
__pycache__
*.html
@@ -1,10 +1,19 @@
-html: feed.py Makefile
+py = feed.py books.py
+poetry = journal/poetry/fallen-leaves.md
+journal-images = journal/poetry/tree.jpg
+md = rss.md book-collecting.md gardens.md $(poetry)
+html = $(md:%.md=%.html)
+
+site: Makefile $(md) $(py) $(journal-images)
 	mkdir html
-	cp feed.py Makefile html
-	cp *.md html
+	python md2html.py $(md)
+	cp -R $(html) $(py) $(journal-images) Makefile html
 
+journal: $(poetry)
+	python journal.py $<
+
 clean-html:
-	[[ -d html ]] && rm -r html
+	rm -r html
 
 .PHONY: clean-html
 

@@ -15,4 +24,4 @@ rss: feed
 
 clean: clean-html
 
-.PHONY: feed clean
+.PHONY: feed clean journal
@@ -0,0 +1,106 @@
how do you define a book collection?
my book collection is the set of all books.

i prefer physical books to e-readers.
unfortunately i have quite a few of them these days.

i want to read them all eventually!
i also tend to live in quite small places
and i want to be able to move city easily!

so here's my system for organising my physical book collection.

i want to:
* read books i already have
* read as many different books as possible
* minimise physical storage requirements
* keep track of books i've read
* gather books i don't already have

constraints:
* i don't know for sure what book i will want to read next

for every book in the world:
* i either have or have not read it
* i have access to it or i don't

so i sort my book collection with 4 categories

*-------------------*-----------------------*
|                   |                       |
| unread            | read                  |
| have              | have                  |
|                   |                       |
| 37°2 le matin     | L'Homme des Jeux      |
|                   |                       |
*-------------------*-----------------------*
|                   |                       |
| unread            | read                  |
| haven't           | haven't               |
|                   |                       |
| Das Kapital       | Frankissstein         |
|                   |                       |
*-------------------*-----------------------*

i can then begin to optimise my collection.

* i do not have this book, but i have read it.
* i have this book, but i have not read it.
* i have this book, and i have read it.
* i do not have this book, and i have not read it.

the books i am most interested in having nearby are unread ones, as i would like to read as many different books as possible.

books i have already read i don't need nearby anymore.
i might pass them on, or store them somewhere with less of a premium on space.
i could also attempt to track where they are!

based on my requirements and my categories, i create four lists for the books (see the sketch after the list):

* ready
* all done
* read and gone
* hunted

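roughly, the list names line up with the have/read quadrants like this (a sketch only; the function here is illustrative and not part of the scripts in this change):

```
# illustrative mapping from the have/read quadrants to the four list names
def which_list(have: bool, read: bool) -> str:
    if have and not read:
        return "ready"          # unread, have: next up
    if have and read:
        return "all done"       # read, have: finished, still on the shelf
    if read:
        return "read and gone"  # read, haven't: finished and passed on
    return "hunted"             # unread, haven't: still looking for it
```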

that looks like a decent start to the system, so i suppose now i'll start collecting!

so i'll use markdown lists in the format:

```
* [x] author - title # reading
* [ ] author - title # nearby
```

when collecting music i use `artist - year - name`
however, publication year is an extra step that will slow data entry, so i won't use this to start with - i have a lot of books

i realised i had a gpt-4 subscription and that it could look at pictures now, so i gave it a go
i fed it some photos and some formatting preferences and i got out perfect markdown lists

```
- [ ] Doctorow, Cory - Walkaway
- [ ] Ferreira, Pedro G. - The Perfect Theory
- [ ] Hadfield, Chris - An Astronaut's Guide to Life on Earth
- [ ] Heinlein, Robert A. - Beyond This Horizon
```

books are lovely and great to look at, but the mishmash of fonts and presentation is a nightmare for indexing.
now we have some good and lovely metadata :)
this is an imperfect method, as the only way i can check it is still by combing through the physical books manually
but it does let me target my combing after identifying problems in the index
and in the meantime it gives us a bunch of data to play with

markdown lists also allow me to mark some items in a list
this looks flexible, so i think in my 'ready' list i will mark which book(s) i am currently reading

```
- [ ] Doctorow, Cory - Walkaway
- [ ] Ferreira, Pedro G. - The Perfect Theory
- [ ] Hadfield, Chris - An Astronaut's Guide to Life on Earth
- [ ] Heinlein, Robert A. - Beyond This Horizon
```

the other lists i will leave unmarked for now, until i think of something to do with them.

as for doing things with them, i wrote a [python script](#) which processes the data in the now-populated all-done and ready lists to yield some interesting (?) and fun (?) results.

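the gist of it is one regular expression per list line; a minimal sketch, reusing the pattern from the list-parsing script further down this compare and an entry from the list above:

```
import re

# mark ([x] or [ ]), then "author - title"
entry_pattern = re.compile(r"^[*-] \[([ *x])\] (.+) - (.+)")

m = entry_pattern.match("- [x] Doctorow, Cory - Walkaway")
if m is not None:
    mark = m.group(1) != " "              # True: marked as currently reading
    author, title = m.group(2), m.group(3)
    print(mark, author, title)            # True Doctorow, Cory Walkaway
```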

[example output](#)
@@ -0,0 +1,87 @@
import sys
import os
import re


def print_usage():
    print(f"usage: python {sys.argv[0]} DIR")
    print(f"")
    print(f"\tDIR\tdirectory containing markdown lists in files.")


if len(sys.argv) != 2:
    print_usage()
    exit(1)

base_path = os.path.abspath(sys.argv[1])
ready_list_name = "ready.md"
done_list_name = "all-done.md"


def get_path(list_name : str) -> str:
    return os.path.join(base_path, list_name)


def get_matches(list_name : str) -> list[re.Match]:
    # Matches a markdown list item: mark, then "author - title"
    entry_pattern = re.compile(r"^[*-] \[([ *x])\] (.+) - (.+)")

    matches = []
    with open(get_path(list_name)) as f:
        matches = [entry_pattern.match(l) for l in f.readlines()]
    return [m for m in matches if m is not None]


class Book:
    def __init__(self, match : re.Match):
        self.mark = match.group(1) != " "
        self.author = match.group(2)
        self.title = match.group(3)

    def is_metadata_complete(self):
        if not self.title or not self.author:
            return False

        if self.title == "???" or self.author == "???":
            return False

        return True

    @staticmethod
    def get_list(list_name : str, filter_partial_metadata = True) -> []:
        books = [Book(m) for m in get_matches(list_name)]

        if filter_partial_metadata:
            books = [b for b in books if b.is_metadata_complete()]

        return books


def print_section(title : str, books : list[Book]):
    print(f"# {title} ({len(books)})\n")

    longest_title = max([len(b.title) for b in books])
    title_column_width = longest_title + 2

    for book in books:
        row = [book.title, book.author]
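        # left-align the title in a column two characters wider than the
        # longest title, then the author in a 20-character column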
        format_str = "- {: <" + str(title_column_width) + "} {: <20}"
        print(format_str.format(*row))
    print()


def print_in_progress():
    books = [b for b in Book.get_list(ready_list_name, False) if b.mark]
    print_section("in progress", books)


def print_completed():
    books = Book.get_list(done_list_name)
    print_section("up for borrowing", books)


def print_partial_metadata():
    books = Book.get_list(ready_list_name, False)
    books += Book.get_list(done_list_name, False)
    books = [b for b in books if not b.is_metadata_complete()]

    print_section("metadata incomplete", books)


print_in_progress()
print_completed()
print_partial_metadata()
@@ -5,21 +5,20 @@ import pathlib
 import sys
 import re
 import glob
+import os
 
 def print_usage():
     print("\nusage: python feed.py ROOT\n")
     print("\n")
     print("\t\ROOT\tbase folder")
 
-def validate():
-    # check args for at least one file path
-    if len(sys.argv) < 2:
-        print_usage()
-        sys.exit(1)
+# check args for at most one file paths
+if len(sys.argv) > 2:
+    print_usage()
+    sys.exit(1)
 
 
-validate()
-
-base_folder = sys.argv[1]
+base_folder = sys.argv[1] if len(sys.argv) == 2 else os.getcwd()
+print(base_folder)
 
 

@@ -31,14 +30,20 @@ def get_text(path)
 #def to_html(md : str) -> str:
 #    return markdown.markdown(md, extensions=["fenced_code"])
 
+def get_title(md):
+    m = re.compile(r"^# (.+)\n").match(md)
+    if m is not None:
+        return m.groups(1)[0]
+
+    # truncated first line of file for auto-title
+    return md.splitlines()[0][0:30]
+
 def get_entry(path):
     return get_title(get_text(path))
 
-def get_title(md):
-    return re.compile(r"^# (.+)\n").match(md).group(1)
-
 def get_entries() -> [str]:
-    return "\n\n".join([get_entry(p) for p in get_paths()])
+    entries = [get_entry(p) for p in get_paths()]
+    return "\n\n".join(entries)
 
 def get_header() -> str:
     return """<?xml version="1.0" encoding="utf-8" ?>
@@ -0,0 +1 @@
Subproject commit 170fb442a8c4a0c06b47e28821ab5fb475e35be1
@@ -0,0 +1,30 @@
#!/usr/bin/env python

import sys
import os

import md2html

def print_usage():
    print(f"usage: python {sys.argv[0]} PATHS")
    print("")
    print("\tPATHS\tpaths of input markdown files")


if len(sys.argv) < 2:
    print_usage()
    exit(1)

# we don't want to publish *everything* in the journal, so for now let's just
# hardcode the files we want.
files = sys.argv[1:]

# TODO: copy images
# TODO: separate md from images

for f in files:
    md2html.write_html(f)
    html_path = f.replace(".md", ".html")
    print(html_path)
@@ -0,0 +1,35 @@
#!/usr/bin/env python

import sys
import markdown

def print_usage():
    print(f"usage: python {sys.argv[0]} PATHS")
    print("")
    print("\tPATHS\tpaths of input markdown files")


def write_html(src : str):
    with open(src) as md:
        dest = src.replace(".md", ".html")
        with open(dest, "w") as html:
            html.write(markdown.markdown(md.read()))


if __name__ == "__main__":

    if len(sys.argv) < 2:
        print_usage()
        sys.exit(1)

    paths = sys.argv[1:]

    bad_paths = [p for p in paths if not p.endswith(".md")]
    if len(bad_paths) != 0:
        for p in bad_paths:
            print(f"Not a markdown file: {p}")

        exit(1)

    for p in paths:
        write_html(p)