why are there so many youtube videos about deploying a dockerized version of nginx. why do you need docker to deploy fucking nginx


is using your fucking package manager too old school, now.

in one video somebody had listed their docker images and their nginx image was over a hundred fucking megabytes. nginx is tiny!!! what the fuck do you need a hundred megabytes FUR???

nginx uses less than 10 megs of ram on my servers!!! it’s small!!!!! it has few dependencies!!!! just run it!!!!!!

@aescling Stupid question: How to measure how much RAM a program uses?

@vaporeon_ that is hard to do well because purrograms commonly allocate way more memory than they actually ever use (idk how this really works tbh but the kernel does a lot of magic here). htop will give you reasonable enough answers. if your purrogram is being supervised by systemd, systemctl status will tell you current and peak memory usage but idk what exactly it is calculating

@aescling > idk how this really works tbh but the kernel does a lot of magic here

In a lecture that I attended, I was told that it uses something called "first touch policy", where only once the program actually accesses a page of memory, the kernel actually backs it by a page of physical memory (and for a system with NUMA and multiple cores, it'll choose to locate the page closest to the processor that touched it!)

For a practical test, I ran this program (after getting bored of trying to find the maximum value manually; the size of memory + swap on my system is 12'713'656'320 bytes):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    size_t i = 12713656300;
    char *p;
    while ((p = malloc(i))) { free(p); i++; }
    printf("%zu\n", i-1);
    return 0;
}

And I can allocate up to MEM+SWAP-21 bytes before malloc() starts returning a NULL pointer!

If I sleep for 60 seconds with all that memory allocated, still, it doesn't affect my system at all, since I didn't actually touch that memory, so it doesn't need to be backed by a physical page

@aescling Also, you should be already aware at this point that Vaporeon doesn't use systemd LOL

Especially since the Apache process that I wanted to measure runs on NetBSD

@vaporeon_ yeah i know, i’m just saying how i specifically know how much memory nginx uses on “my” servers

@aescling I wonder what's up with those 24 bytes, why it doesn't let me allocate all available memory...
(I found a stupid mistake where it doesn't check whether the allocation already fails at the starting value, real maximum is 12713656296 bytes on my systems, 24 bytes less than all available swap and memory, not 21 bytes less)

I could imagine that some kernel parts always need to stay in memory, but if that were the reason, I think they would take up more than just 24 bytes; they would require at least one full page to not be allocated to other programs, since the granularity that the kernel works with is pages

@aescling If you've access to a C compiler right now, I'm curious what the following will say on your system (set the initial value of i to slightly less than the total amount of memory + swap installed in your system)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    size_t i = 12713656000, i_start;
    char *p;
    i_start = i;
    while ((p = malloc(i))) { free(p); i++; }
    if (i_start == i)
        printf("Too big start value.\n");
    else
        printf("%zu\n", i-1);
    return 0;
}

I wonder whether it'll also be 24 bytes less than the maximum or not

@aescling people like to learn one single tool and then act like that's the only possible way to do everything instead of ever learning anything else at all

only have a hammer everything's a nail etc etc

📟🐱 GlitchCat

A small, community‐oriented Mastodon‐compatible Fediverse (GlitchSoc) instance managed as a joint venture between the cat and KIBI families.