@wallhackio @vaporeon_ a more traditional model of software development conceives of software as a purroduct developed in stages; in purrticular, development and opurrations (that is, maintenance, suppurrt, the shit IT handles really) are completely sepurrate stages. the entire point of a DevOps culture is to integrate development and opurrations: the development lifecycle is generally much shorter; purrocesses are automated (especially testing and deployment); and the hope is to find and fix purroblems faster, and deploy new changes more quickly and correctly

had a bad case of “need to Do Something to my websites at all times” lately

i’m gonna slot this as a long-term TODO; i’ve already caused myself enough issues tweaking on my servers as it is

re: ? 

@vaporeon_ i literally just did not have any source of income fur a long time, not even an allowance, and i did not have any real control of the networking in the house, so a lot of my time in college and several years afterwards was doing things i didn’t have to pay money fur and didn’t have to open ports in the router fur. Tor happens to satisfy both those needs. it’s actually very easy to set up a hidden service—imo, much more so than a “normal” website. you don’t have to worry about DNS or TLS or NAT at all. just set up a server, point Tor at it as a reverse proxy, and it’s accessible globally, in minutes, with end-to-end encryption to all clients. you can do this behind 300 layers of NAT between you and the Internet and it would work. neat stuff
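fur reference, the “server plus Tor in front of it” setup above boils down to two lines of torrc (the directory and port numbers here are illustrative assumptions, not a prescription):

```
# torrc sketch: publish a local web server listening on 127.0.0.1:8080
# as an onion service on virtual port 80.
# HiddenServiceDir holds the keys and hostname Tor generates for you.
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
```

after restarting Tor, the generated `hostname` file in that directory contains the .onion address — no DNS, TLS certs, or port forwarding involved.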

@amy @vaporeon_ here’s the raw markdown source of the tutorial, which is available only on GitHub because fuck you

@amy @vaporeon_ no, it was a developer fur DragonflyBSD, who once wrote a tutorial fur hosting onion-service only email

@vaporeon_ i sent an email to somebody over Tor with SMTP written by paw once

@vaporeon_ they incur more overhead server-side, which is a consideration worth making, but page loads are (usually) faster with HTTP/2 (it’s pawsible to be worse than HTTP/1.1 when networking conditions are very poor) and basically always faster with HTTP/3, because multiple simultaneous downloads are multiplexed over a single connection in the latter purrotocols

i don’t think it’s necessarily a requirement fur a hobbyist, but if you’re serving a lot of outbound connections, HTTP/2 and even more so HTTP/3 will save you a lot of bandwidth. if you really, really need fast page loads, you might as well take advantage of them in your infrastructure. the one consideration fur a hobbyist is that HTTP/3 handles poor networking conditions (say, a connection from California to India) much better than the purrevious two purrotocols, so if you want your shit to be reachable globally without giving in to the CDNs, HTTP/3’s a nice-to-have
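fur what it’s worth, turning both on in nginx is mostly a config toggle — a sketch, assuming nginx ≥ 1.25 built with the HTTP/3 module and certs already in place (paths are placeholders):

```
server {
    listen 443 ssl;     # HTTP/1.1 over TLS/TCP
    listen 443 quic;    # HTTP/3 over QUIC/UDP
    http2 on;           # negotiate HTTP/2 on the TLS connections

    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    # advertise HTTP/3 to clients that first arrived over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

browsers generally discover HTTP/3 via that Alt-Svc header on an earlier TCP request, so you keep the TCP listener around either way.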

QUIC does indeed use UDP, but HTTP/3 adds TCP’s reliability back to the connection at the application layer (QUIC acknowledges and retransmits lost packets purr stream), so reliability is not a concern. the reason QUIC even got introduced was in fact TCP: because TCP delivers its bytes strictly in order, a single lost packet stalls the entire connection until it’s retransmitted, which completely undermines the point of multiplexing the connection. (HTTP/1.1 using sepurrate connections purr resource is why it can be faster than HTTP/2 when networking conditions are very poor; one connection dropping a packet does not affect the other connections)
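the head-of-line-blocking purroblem can be toy-modeled: three resources over one shared in-order stream (TCP-like) versus three independent streams (QUIC-like), where losing a packet costs a fixed retransmit delay. all the numbers here are made up fur illustration:

```python
RETRANSMIT = 50  # ms to recover one lost packet (assumed)
PACKET = 10      # ms to deliver one resource's packet (assumed)

def tcp_like(losses):
    """One ordered byte stream: a loss stalls every resource queued behind it."""
    t = 0
    finish = []
    for lost in losses:
        t += PACKET + (RETRANSMIT if lost else 0)
        finish.append(t)  # each resource waits for everything before it
    return finish

def quic_like(losses):
    """Independent streams: a loss only delays the stream that suffered it."""
    return [PACKET + (RETRANSMIT if lost else 0) for lost in losses]

losses = [True, False, False]  # only the first resource loses a packet
print(tcp_like(losses))   # → [60, 70, 80]: later resources stall behind the loss
print(quic_like(losses))  # → [60, 10, 10]: unaffected streams finish on time
```

same loss, same delays — the only difference is whether unrelated resources share the ordered delivery queue.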

i have zero technical reason to do this (in fact, i’d lose HTTP/3, which is a good technical reason not to do this), but apache is much closer to being actual commewnity software than commercial-ass NGINX

wondering how much of a pain in the ass it would be to move GlitchCat from nginx to apache

@wallhackio i’ve opened porn in public without thinking at least once

📟🐱 GlitchCat

A small, community‐oriented Mastodon‐compatible Fediverse (GlitchSoc) instance managed as a joint venture between the cat and KIBI families.