I've been using Warewulf (& co.) for provisioning bare-metal clusters for decades, back into the Perceus days between Warewulf 1 and 2. It's a solid, easy-to-comprehend tool that does things transparently: built from generic [u/li]nux tools enough that they're not hard to reason about when needed, but automated enough that you usually don't have to.
Definitely shows its research roots: best tested with RHEL-alikes, reasonably well tested with SUSE and Debian, and you may be in for some extra work if you need to provision something else. But that pretty much covers the common cases (and it integrates with containerization tools if you need a specific environment on the nodes).
It's nice to have when you need to spin up many nodes.
It’s that old? I can’t believe it took me this long to find Warewulf! I’ve tried the more complex solutions and this looks like what I’ve always dreamed of
Always feels like it will be simpler... you start with some iPXE, start building, and 6 months later you have a poor imitation of a product like this that works only for your specific use cases and causes you a headache if the company pivots and you have to make it do something new.
Been there, built that. Next time I'm using something with a community, and if it doesn't do what I need, I'm contributing upstream until it does.
Where does this fit on the Ansible + PXE boot vs Terraform vs NixOS scale? It seems to be in that space, but from before the "infrastructure as code" phrase was coined.
So with the overlay you can make node-specific changes? I was reading through the initial setup guide but I couldn't figure out how you actually specify which node gets which overlays (or one-off edits of whatever kind).
Right now I'm searching for the optimal solution to host containers in a small but cool way. Thinking about just using a plain Linux host, configured via ansible and compose.
Used Portainer so far, but that's a bit bloated for my simple use (one host, no-HA, lab env). Kubernetes is way too complex as well.
Warewulf sounds fun to try :D all of my profiles would probably only have one node. Does Warewulf make a fraction of sense when having a tiny, quasi-local environment?
EDIT: ah, nevermind. stateless and temporary makes no sense for my usecase, as my containers will run 24/7 with rare changes. But I will think about Warewulf if I ever dive into large-scale containerization :)
If you want small and cool, Wasmtime should be mentioned. It's a WebAssembly runtime, so it can only run WebAssembly programs, which narrows the set of programs that can be executed. However, if you can get your program into WebAssembly, then Wasmtime is very cool. Startup time is a few ms versus hundreds of ms for Docker. Memory usage is also orders of magnitude less. And the security guarantees are similar.
If you really want a lightweight experience, go with Alpine and run all the daemon processes under supervisor [1]. For HA, you can use keepalived, which uses VRRP.
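To give a sense of how little config that takes, here is a minimal sketch of both pieces (the program name, paths, addresses, and interface are hypothetical, so adjust them to your setup):

```
; supervisord: keep one daemon running and restarted on failure
[program:myapp]
command=/usr/local/bin/myapp --port 8080
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp.log
```

```
# keepalived: two hosts share the virtual IP 192.0.2.10 via VRRP
vrrp_instance VI_1 {
    state MASTER          # BACKUP on the peer
    interface eth0
    virtual_router_id 51
    priority 100          # lower on the peer
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

Whichever host holds the higher priority answers for the virtual IP; if it goes down, the peer takes over within a few seconds.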
> Thinking about just using a plain Linux host, configured via ansible and compose.
That's what I've been using since 2019 (plus Caddy as a reverse proxy for various web services), and in general I've been happy. Upgrading to docker-compose v2 caused me some headaches recently though, and I'll soon have to upgrade the underlying Ubuntu server, which I am dreading.
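For anyone curious, that whole setup can fit in one compose file; a minimal sketch (service names, image, and domain are hypothetical):

```yaml
# docker-compose.yml: Caddy reverse-proxying a single app
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data   # persists TLS certificates across restarts
  app:
    image: ghcr.io/example/app:latest
volumes:
  caddy_data:
```

with a one-stanza Caddyfile like `app.example.com { reverse_proxy app:8080 }`; Caddy handles the TLS certificates automatically.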
We're always looking towards IPv6 support! And you _can_ do it today, with a little bit of work. But it's been difficult to prioritize in the main project when so few of our users (read: maybe one) have expressed interest.
Existing users, I suppose. New users are looking for exactly this feature and will walk away. So you can count it up to three now.
IPv4 has gotten quite expensive. A newer company I'm working with doesn't even have IPv4 access past the edge. There is just a little proxy that handles IPv4 translation on the edge; it barely gets any traffic.
True, but on the other hand it might be that all private use blocks are already in use (10.0.0.0/8 is totally in use in our internal LAN), so if I want the nodes to reach those private IPs, I can't assign the same block. And we do have services on IPv6.
Really depends on who you ask. You still need v4 to be "globally reachable", but v6 is optional.
AWS seems to finally be feeling the pinch of IPv4 exhaustion and is pushing v6 support everywhere now, and starting to charge for v4.
Mobile networks already have, and many are natively IPv6, with NAT64/464XLAT or other tech for bridging to v4. Apple's App store requires apps to support IPv6-only networks.
CDNs and clouds etc mean that websites don't even really need to worry about their own IP allocation, and just let their provider figure out exposing things worldwide.
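For a concrete sense of how NAT64 bridges the two worlds: the well-known prefix 64:ff9b::/96 (RFC 6052) simply embeds the 32-bit IPv4 address in the low bits of an IPv6 address, so a v6-only client can be pointed at a synthesized address for any v4 host. A quick sketch:

```python
import ipaddress

def nat64_synthesize(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the NAT64 well-known prefix 64:ff9b::/96."""
    prefix = ipaddress.IPv6Network("64:ff9b::/96")
    return ipaddress.IPv6Address(
        int(prefix.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(nat64_synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```

A DNS64 resolver does exactly this synthesis when a name has no AAAA record, and the NAT64 gateway reverses it on the wire.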
> Apple's App store requires apps to support IPv6-only networks.
I read that and thought "huh, is that recent?" and found posts that were 9 years old about it. I guess apps just have to work on an IPv6-only network but I'm honestly surprised my apps do. I don't test in IPv6, my home network has it disabled, most of my servers don't have anything for IPv6 that I know of. Odd.
For longer, I expect. Email has been partially centralised for a long time: for most real people and a lot of systems, mail goes out through a specific host (or a small number of hosts) on the edge of their network, or entirely outside it (individuals sending via services like Gmail, systems sending via services like SendGrid, and so forth). So the need to push for IPv6 is less apparent for mail than for a number of other things; there are orders of magnitude fewer hosts sending mail than, say, making HTTP(S) requests.
Its main thing is taking a system that’s just powered on and giving it an operating system. Said system can then run HPC jobs (or shell sessions, or web sites, or data transfer) until it’s rebooted, and the cycle begins again.
I don’t know how well it would work in IoT: The device needs to PXE-boot, which requires support from the DHCP server and the hardware boot environment (UEFI).
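For reference, the DHCP/TFTP side of a PXE boot can be handled by something as small as dnsmasq; a minimal sketch (the interface, address range, paths, and boot filename are hypothetical, and a provisioner like Warewulf normally manages this for you):

```
# dnsmasq: answer DHCP and serve a UEFI boot binary over TFTP
interface=eth0
dhcp-range=10.0.0.100,10.0.0.200,12h
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=ipxe.efi        # filename handed to PXE clients
```

The firmware asks DHCP for an address plus a boot filename, fetches it over TFTP, and hands off to it; everything after that (kernel, image, overlays) is up to the provisioning tool.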
The words in blue that are underlined are called hyperlinks. You can click them ;)
This readme howto is truly way too long and excessively prescriptive, and the author goes too far in inserting his opinions (e.g., don’t use GitHub, don’t use Discord, etc.). I couldn’t possibly recommend this howto.