While you certainly can run AI models that require such a beefy GPU, there are plenty of models that run fine even on a CPU-only system. So it really depends on what exactly Ollama is going to be used for.
I wouldn’t run it as a router due to its high power consumption, but it would be a fine machine for retro gaming (games up until roughly 2005) if you add a graphics card.
Edit: 75 LXC containers, 22 VMs.
That’s a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?
I found both whoBIRD and Birdnet-Pi to give good results, as long as you dismiss the low-confidence ones. At a confidence of 80% or higher I very rarely get a wrong identification. Every once in a while it confuses one kind of thrush with another, but they do sound similar to my human ears as well.
The guide mentions:
This isn’t correct. While some ISPs do hand out just a single /64 prefix, this isn’t recommended and not terribly common either. An ISP should give its users prefixes shorter than 64 bits: typically a residential user gets a /56 and commercial users usually get a /48. With such a prefix the user can then carve out multiple /64 networks and use them on the local network as desired.
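To illustrate the math, here's a minimal sketch using Python's standard ipaddress module; the delegated prefix shown is a documentation address (2001:db8::/32), not a real allocation, standing in for whatever the ISP actually hands out:

```python
import ipaddress

# Example delegated prefix (documentation address, assumed /56 delegation).
delegated = ipaddress.ip_network("2001:db8:abcd::/56")

# A /56 contains 2**(64 - 56) = 256 possible /64 networks.
subnets = list(delegated.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[0])     # 2001:db8:abcd::/64   e.g. main LAN
print(subnets[1])     # 2001:db8:abcd:1::/64 e.g. guest or IoT VLAN
```

With a /48 the same calculation yields 2**16 = 65536 /64 networks, which is why it's the usual size for commercial delegations.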