After reading some comments: this probably goes without saying, but one should be very careful about what one exposes to the internet. It sounds like the analytics service could perhaps have been made available only over a VPN (or something similar, like mTLS).
And for basic websites, it's much better if they require no back-end at all.
Every service exposed increases risk and requires additional vigilance to maintain. Which means more effort.
The IETF standardization was irrelevant, so I would give them some slack. ISPs were already using CGNAT in a widespread fashion. The IETF just said, "if we're gonna do this shit, at least stay out of the blocks used by private networks".
It has been a non-existent problem for roughly 20 years now. Why do people still keep pulling out "uniquely identified down to the device" as an argument?
Windows, macOS, and most Linux distros enable IPv6 privacy extensions by default and rotate the SLAAC temporary address roughly every 24 hours.
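If you want to check on a Linux box, here is a rough sketch (assuming iproute2 is installed; the exact `ip` output format varies a bit between versions):

    # Rough sketch: list IPv6 addresses flagged "temporary" (privacy
    # extensions, RFC 4941) on a Linux host with iproute2 installed.
    import subprocess

    def temporary_ipv6_addresses():
        out = subprocess.run(
            ["ip", "-6", "addr", "show"],
            capture_output=True, text=True, check=True,
        ).stdout
        addrs = []
        for line in out.splitlines():
            line = line.strip()
            # Privacy addresses carry the "temporary" flag in iproute2 output.
            if line.startswith("inet6") and "temporary" in line:
                addrs.append(line.split()[1])  # address/prefix
        return addrs

    if __name__ == "__main__":
        for addr in temporary_ipv6_addresses():
            print(addr)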
For learning vim, I recommend searching for a "vim cheat sheet" that has an image of a keyboard layout with vim commands in it and printing that. Makes it easier to check and learn more, little by little.
Another option is online tutorials that make you practice interactively. I haven't used those much, but the little I did was helpful.
I'm running NixOS on some of my hosts, but I still don't fully commit to configuring everything with Nix: I only do the base system, and I prefer docker-compose for the actual services. I do something similar with Debian hosts using cloud-init (Nix is a lot better, though).
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
Ditto on having services expressed in more portable, cross-distro containers. With NixOS in particular, I've found the best of both worlds by using Podman quadlets via this flake: https://github.com/SEIAROTg/quadlet-nix
If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
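For what it's worth, the pull-and-recreate step is small enough to script. A rough sketch with the Docker SDK for Python (pip install docker); the image, tag, and container settings below are placeholders, not a recommendation:

    import docker
    from docker.errors import NotFound

    IMAGE = "nginx"    # placeholder image name
    TAG = "1.27"       # the version you just bumped to
    NAME = "my-nginx"  # placeholder container name

    client = docker.from_env()

    # Pull the newer image.
    client.images.pull(IMAGE, tag=TAG)

    # Remove the old container (if any), then re-create it from the new image.
    try:
        old = client.containers.get(NAME)
        old.stop()
        old.remove()
    except NotFound:
        pass

    client.containers.run(
        f"{IMAGE}:{TAG}",
        name=NAME,
        detach=True,
        ports={"80/tcp": 8080},  # container port 80 -> host port 8080
        restart_policy={"Name": "unless-stopped"},
    )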
How does one do it on Nix? Bump the version in a config and install? Seems similar.
Now do that for 30 services plus system config such as the firewall, routing (if you do that), DNS, and so on and so forth. Nix is a one-stop shop for getting everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise just stems from the fact that even I, a non-CS person, know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports businesses in my metro area. I've never used Docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail, and brick-and-mortar retail. Virtualization here is Hyper-V.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing: using just one browser, and that one without containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote" without caring what language it's in, hunting for RPMs, etc.
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course, there need to be some provisions for when the state (i.e., the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
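A minimal sketch of what that looks like in practice (Docker SDK for Python; the image, host path, and password are placeholders):

    import docker

    client = docker.from_env()

    client.containers.run(
        "postgres:16",  # placeholder image
        name="db",
        detach=True,
        environment={"POSTGRES_PASSWORD": "change-me"},
        volumes={
            # host path: this is where the data survives container removal
            "/srv/db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"},
        },
    )
    # Removing and re-creating the container leaves /srv/db-data on the host
    # untouched; a fresh container picks the same data right back up.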
I'm a bit surprised this has to be explained in 2025, what field do you work in?
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and deal with all the potential issues of the software build ...
Yes, in the happy path it is just a "docker build" that updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, since everyone writes their Dockerfiles differently, handles build steps differently, uses different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up-to-N images + N services.
It's just that state _can_ be outside the container, and in most cases should be; it doesn't have to be. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that data is basically gone, which is why the state usually does live outside, like you're saying.
Your understanding of not-containers is incorrect.
In non-containerized applications, the data and state live outside the application, stored in files, a database, a cache, S3, etc.
In fact, that is the only way containers can decouple programs from state: the application has to have done it already. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
I don't think that makes 100 / 100 the most likely result if you flip a coin 200 times. It's not about 100 / 100 vs. another single possible result. It's about 100 / 100 vs. NOT 100 / 100, which includes all other possible results other than 100 / 100.
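Putting rough numbers on it (a quick binomial calculation, not figures from the thread): exactly 100 heads in 200 fair flips comes up only about 5.6% of the time, so "NOT 100 / 100" covers the remaining ~94%.

    # Probability of exactly 100 heads (and 100 tails) in 200 fair coin flips.
    from math import comb

    p_exact = comb(200, 100) / 2 ** 200
    print(f"P(exactly 100/100) = {p_exact:.4f}")      # ~0.0563
    print(f"P(anything else)   = {1 - p_exact:.4f}")  # ~0.9437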
In statistics, various examples (e.g., coin flips) often stand in for other activities which might prove expensive or infeasible to make repeated tries of.
For "coin flips", read: human lives, financial investments, scientific observations, historical observations (how many distinct historical analogues are available to you), dating (see, e.g., "the secretary problem" or similar optimal stopping / search bounding problems).
With sufficiently low-numbered trial phenomena, statistics gets weird. A classic example would be the anthropic principle: how is it that the Universe is so perfectly suited for human beings, a life-form which can contemplate why the Universe is so perfectly suited for it? Well, if the Universe were not so suited ... we wouldn't be here to ponder that question. The US judge Richard Posner made a similar observation in his book "Catastrophe: Risk and Response", which tackles the common objection to doomsday predictions: that all of them have so far proved false. But then, in all the worlds in which a mass extinction event wiped out all life prior to the emergence of a technologically advanced species, there would be no (indigenous) witnesses to the fact. We are only here to ponder that question because utter annihilation did not occur. As Posner writes:
By definition, all but the last doomsday prediction is false. Yet it does not follow, as many seem to think, that all doomsday predictions must be false; what follows is only that all such predictions but one are false.
-Richard A. Posner, Catastrophe: Risk and Response, p. 13.
I'm not sure where you're going with this, but since they have actually researched how it grows, I think it's more likely your calculations/assumptions are incomplete.
For example:
> Energy needed to grow 1g of microbial biomaterial
based on what?
Edit: Maybe you meant that radiation alone wouldn't be enough for that growth, so there'd be other components that it's helping with.
Don't do this, and don't then share the resulting numbers as fact publicly without disclosing you just asked a chatbot to make up something reasonable sounding.
If the chatbot refers to a source, read the source yourself and confirm it didn't make it up. If the chatbot did not refer to a source, you cannot be sure it didn't make something up.
The property measured in the source you linked, "enthalpy of formation", is not the same as the energy required to grow 1g of biomatter. One clue of this is that the number in the paper is negative, which would be very strange in the context you requested (but not in the context of the paper). For the curious: "A negative enthalpy of formation indicates that a compound is more stable than its constituent elements, as the process of forming it from the elements releases energy"
You're feeding yourself (and others) potentially inaccurate information due to overconfidence in the abilities of LLMs.
> If I understand that correctly, the "energy required to grow" would be bigger than the "enthalpy of formation"?
They are almost completely unrelated concepts. The enthalpy of formation from the paper is the free usable energy that would be generated if you assembled all the molecules in the biomatter from their constituent elements, e.g. the energy that would be released if you took pure hydrogen and pure oxygen and combined them into 1 gram of water. But the fungus takes in water from the environment to grow; it does not make its own water from pure hydrogen, and it certainly does not generate any free energy by growing larger. With some margin for error in my understanding, since I'm not a chemist (but neither are you, and neither is the chatbot).
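To put a rough number on that water example (standard textbook values, not figures from the linked paper):

    # Forming liquid water from H2 and O2 *releases* energy; a negative
    # enthalpy of formation is not an energy cost of growth.
    DELTA_H_F_WATER = -285.8   # kJ/mol, standard enthalpy of formation of H2O(l)
    MOLAR_MASS_WATER = 18.02   # g/mol

    per_gram = DELTA_H_F_WATER / MOLAR_MASS_WATER
    print(f"{per_gram:.1f} kJ per gram of water formed")  # ~ -15.9, i.e. released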
> It was really just food for thought.
It was more poison than food, since you just parroted randomly generated misinformation from the chatbot and passed it off as authentic insight.
Um, right, I did not think of that: if you burn an organism you get its core components, but the organism was not originally made from those core components.
The core idea was not generated by a chatbot. Neither was the article I gave (that was my own googling).
The core idea (that the energy required and the energy available may differ) came from my own brain; not that I personally think the origin of an idea matters to its value.
General rule of thumb: If you're going to ask an LLM and then make a post based on that, simply don't post it. If we wanted a randomly generated take on this, we would just ask an LLM ourselves.