unrelatedly, i AM working on building a unified RDF/OWL ontology for my projects using ⛩️📰 书社 and it IS definitely stress-testing its capacity to handle “actual workloads”
jury is still out on whether something complexly interconnected like that or something with Just A Lot Of Data (e·g a dictionary) will bog it down more
but i’m pretty sure it’s still faster than Any Static Site Generator
the thing with the approach of “treat everything as an include, create a conceptual archive containing the entire site, and then extract that into public” is that (1) being able to just xpath into any file at any time is actually extremely powerful, and (2) this does just mean loading your entire site into memory and doing every transformation on it all in one go
the approach is to define a
<x:wrapper>
	<书社:link xlink:href="about:shushe?include=things/" xlink:show="embed"/>
</x:wrapper>
which then pulls in all the files in $(INCLUDEDIR)/things/ and gets transformed into a
<书社:archive 书社:expanded="">
	<html:article 书社:archived-as="index.html"><!--…--></html:article>
	<!--…-->
</书社:archive>
not hard to do at all
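and once the whole site is sitting in one tree like that, point (1) basically falls out for free: you can select across every archived file at once. rough sketch of what i mean below, a little stylesheet that builds a nav out of that archive. (to be clear, this is illustrative rather than the actual machinery: the 书社: namespace URI and the /things/ output path are placeholders, the real values depend on your setup.)

<!-- hypothetical stylesheet: build a nav from the archive shown above -->
<!-- NOTE: the 书社 namespace URI below is a placeholder, not the real one -->
<xsl:transform version="1.0"
	xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
	xmlns:html="http://www.w3.org/1999/xhtml"
	xmlns:书社="urn:example:shushe">
	<xsl:template match="书社:archive">
		<html:nav>
			<html:ul>
				<!-- one list item per archived article, pointing at wherever it will land once extracted into public/ (path assumed here) -->
				<xsl:for-each select="html:article[@书社:archived-as]">
					<html:li>
						<html:a href="/things/{@书社:archived-as}">
							<!-- use the article’s first heading as the link text -->
							<xsl:value-of select="(.//html:h1)[1]"/>
						</html:a>
					</html:li>
				</xsl:for-each>
			</html:ul>
		</html:nav>
	</xsl:template>
</xsl:transform>

and because it all happens in the same single pass, that nav can never drift out of sync with what’s actually in things/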
one time someone in fandom coders was complaining because their static site generator took like 10 seconds to rebuild their very simple site, and i was just like “…what are they doing in there?”