Self Proclaimed Internet user and Administrator of Reddthat
2nd best reporting in.
A faster db. Just the regular performance benefits, https://www.postgresql.org/about/news/postgresql-16-released-2715/
Also, Lemmy is built against v16 (now) so at some point it will eventually no longer JustWork
The script will be useless to you, except for referencing what to do.
Export, remove pg15, install pg16, import. I think you can streamline it with both installed at once, as they version the clusters correctly. You could also use the in-place upgrade, aptly named: pg_upgradecluster
But updating to 0.19.4, you do not need to go to pg16… but… you should, because of the benefits!
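For reference, a minimal dry-run sketch of that flow (paths, users, and the non-docker setup are my assumptions, not the actual script; swap the `run` wrapper body for real execution when you mean it):

```shell
#!/bin/sh
# Dry-run sketch of: export -> remove pg15 -> install pg16 -> import.
# Everything here is illustrative; adjust paths/users for your own setup.
run() { echo "+ $*"; }   # replace the body with "$@" to actually execute

run pg_dumpall -U postgres -f /backup/pg15-all.sql   # export everything (roles + dbs)
run apt-get remove postgresql-15                     # out with the old
run apt-get install postgresql-16                    # in with the new
run psql -U postgres -f /backup/pg15-all.sql         # import into the fresh cluster

# Or, with both versions installed side by side, the Debian in-place route:
run pg_upgradecluster 15 main
```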
Congratulations! 👏 Happy B-day and here’s to many more to my across the river friends 🎉
Ps. The video works and is great!
That awkward moment when you are the person they are talking about when running beta in production!
Since the 11th @ 9am UTC, LW has seen a two-fold increase in activities. If my insider knowledge (and maths) is right, it's 7 req/s on average, up from 3 req/s.
Lucky for both of us we are not subbed to every community on LW but I think we are subbed just enough to be affected.
Relevant: https://reddthat.com/comment/8316861 tl;dr: the current centralisation results in a lemmy-verse theoretical maximum of 1 activity per 0.3 seconds, or 200 activities per minute, as the total transfer of packets between EU -> AU and back is just under 0.3 seconds.
Edit: can’t math when sleepy
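Back-of-envelope version of that maths (the 0.3 s round trip is the assumption, and sends are strictly sequential):

```shell
# one activity per 0.3 s round trip => 60 / 0.3 activities per minute
awk 'BEGIN { rtt = 0.3; printf "%.0f activities/min\n", 60 / rtt }'
# -> 200 activities/min
```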
Bah! I totally forgot that they have the new “efficiency” cores…
Performance Cores: 6 Cores, 12 Threads, 2.5 GHz Base, 4.8 GHz Turbo
Efficient Cores: 8 Cores, 8 Threads, 1.8 GHz Base, 3.5 GHz Turbo
Hmmm, I'd still say it's totally worth it, because the 12500 only has 6 cores (12 threads) total. You are getting 8 extra cores/threads.
Linux/docker/anyOS will make use of 8 extra cores regardless of the workload. Sure, they might not be as performant on the lower end, but a process that can run 20 threads will always beat one limited to 12.
I always look at ongoing costs rather than upfront, and mostly that's the TDP, which is exactly the same. So I would agree with your sentiment. The major cost is performing the upgrade.
Single-thread performance has a small increase, 5% or so, but you have double the number of threads. So your two dozen (24) docker containers could have a thread per container! This could benefit you a lot if you were running anywhere near 100%, or have long-running multithreaded jobs.
If I had the disposable money and I thought I could sell the 12th-gen CPU, then maybe. But I'm still rocking some old E3-12xx v3 Xeons, which probably cost me more per year than what you will pay to upgrade!
Make sure to read the side bar. Support questions go to !askandroid@lemdro.id
See my PR for a new backup script. https://github.com/LemmyNet/lemmy-ansible/pull/210
I’ll get to adding it to the main docs on the weekend.
Tl;dr: piping your backups via docker is CPU expensive. Writing directly to the filesystem in a postgres-compatible format with compression is faster and more efficient on the CPU.
My 90GB+ (on filesystem) db compresses to 6GB and takes less than 15 mins.
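As a dry-run sketch of the difference (container name, db name, and paths are made-up examples, not the PR's actual script; `-Fc` is postgres' compressed custom format):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run wrapper; swap the body for "$@" to execute

# Slow: the whole dump streams through docker's stdio, and gzip burns host CPU
run sh -c 'docker exec postgres pg_dumpall -U postgres | gzip > backup.sql.gz'

# Faster: pg_dump's custom format (-Fc) is already compressed and is written
# straight to a bind-mounted path, skipping the docker pipe entirely
run docker exec postgres pg_dump -U postgres -Fc \
    -f /var/lib/postgresql/backups/lemmy.dump lemmy
```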
Been running this since -rc1 & 0.19.1 for the past 13 hours. No issues related to federation since! Just higher CPU load compared to the 0.18.x releases.
Thanks for another great release. Suppose I should go fix our ansible, eh?
Should be fixed. (It already got merged: https://github.com/LemmyNet/lemmy/pull/4213)
No worries! I hope the community comes back around!
Yeah funding is good. Completely community funded :) Ours is on open collective if you want to see it 😉 (we are completely transparent)
We (reddthat) would welcome your community. 😉
If you don't see Create Community at the top next to Create Post, then your home server doesn't allow users to create communities.
No. You have to have an account on that server. (And have to use that account regularly as well, otherwise you won’t see reports about your community)
You make posts.
Don’t forget & in community names and sidebars.
Constantly getting trolled by &
that's only an issue if you're telling nginx the internal IP of the container instead of the container name
Oh, how naive. I thought so too. Nope.
If you have an nginx container (swag) that is inside the docker network, without a `resolver 127...` configuration line, then upon initial loading of the container it will resolve all upstreams. In this case yours are `sab` and `sonarr`. These resolve to 127.99.99.1 and 127.99.99.2 respectively (for example purposes). These are kept in memory, and are not resolved again until a reload happens on the container.
Let's say `sab` was a service that could scale out to multiple containers. You would now have two containers called `sab` and one `sonarr`. The IP resolutions are 127.99.99.1 (sab), 127.99.99.2 (sonarr), 127.99.99.3 (sab).
Nginx will never forward a packet to 127.99.99.3, because as far as nginx is concerned the hostname `sab` only resolves to 127.99.99.1. Thus, the 2nd `sab` container will never get any traffic.
Of course this wouldn't matter in your use case, as sab and sonarr are not able to have high availability. BUT, let's say your two containers restarted/crashed at the same time and they swapped IPs/got new IPs because docker decided the old ones were still in use.
Swag thinks sab = 127.99.99.1, and sonarr = 127.99.99.2. In reality, sonarr is now 127.99.99.3 and sab is 127.99.99.4
So you launch http://sonarr.local and get greeted with a "sonarr is down" message. That is why the resolver lines around the web say to set a short TTL (`valid=5s`), to enforce an always-updating DNS name.
This issue is exactly what happened here: https://reddthat.com/comment/1853904
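For completeness, the shape of the fix looks something like this (127.0.0.11 is docker's embedded DNS; the server name, upstream name, port, and 5 s TTL are illustrative):

```nginx
# Re-resolve container names instead of trusting nginx's startup-time cache
resolver 127.0.0.11 valid=5s;

server {
    listen 80;
    server_name sonarr.local;

    # Using a variable forces nginx to resolve the name per-request
    # (honouring valid=), rather than once at config load
    set $upstream_sonarr http://sonarr:8989;

    location / {
        proxy_pass $upstream_sonarr;
    }
}
```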
I know nginx
Oh, don't get me wrong, nginx/Swag/NPM are all great! I've been trialling NPM myself. But the more I use nginx with docker, the more I think maybe I should look into this k8s or k3s thing, as with the number of networking issues I end up getting and the hours I spend dealing with them… it might just be worth it in the end :D
/rant
Ah. I see you too enjoy the debian approach