
> it will scale up on a single machine to make use of all the cores (unlike Nodejs, Python, or Ruby)

Python definitely does "use all cores" on a machine with the multiprocessing package, not sure what you mean?
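For reference, a minimal sketch of what "use all cores" means with `multiprocessing` (the function name is illustrative): each worker is a separate OS process, so the GIL does not prevent parallel CPU-bound work.

```python
# Each task runs in its own OS process, sidestepping the GIL, so with
# enough work items all cores can be busy at once.
from multiprocessing import Pool
import os

def burn(n):
    """CPU-bound stand-in: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(burn, [100_000] * 8)
    print(len(results))  # 8 results, computed across the available cores
```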



BEAM has a preemptive scheduler, and being able to use all available cores is part of the standard runtime and language primitives; it does not require a separate library. The standard library, OTP, builds on top of those language primitives. The whole language and runtime is designed from its foundations for massive concurrency.

This is one of the reasons I think Elixir isn't as popular as it could be: people think that Nodejs or Python can do what Elixir or Erlang do, but they can't.


I assume they mean Python doesn't do so by default (nor in a lightweight fashion). You can certainly use all cores with any programming runtime if you just run multiple OS processes. Indeed, that's how you implement multi-core on Ruby and Node as well. Although even then, the cores themselves aren't necessarily being fully utilized, even if you're ostensibly using all cores.


Because it is not the default, in practice you don't really use all the cores either.


Python 3.13 (beta 1) can be compiled in free-threaded (no-GIL) mode, which allows threads to run in parallel and could perhaps use all cores.
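A sketch of what that could look like; the code is ordinary `threading`, and on a standard (GIL) build it still runs, just without CPU-bound parallelism.

```python
# On a free-threaded (no-GIL) build these threads can burn CPU on
# separate cores in parallel; on a standard build the GIL serializes
# them, but the code is identical either way.
import threading

def burn(n, out, i):
    """CPU-bound stand-in: write sum of squares below n into out[i]."""
    out[i] = sum(k * k for k in range(n))

results = [0] * 4
threads = [threading.Thread(target=burn, args=(50_000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(r > 0 for r in results))  # True
```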


Let’s say we do that. How many threads can we create?

On the BEAM, running 10,000 lightweight processes is normal. Phoenix is designed so that every incoming HTTP request gets its own lightweight process.

How does one manage that number of lightweight processes? The runtime’s scheduler keeps track of every single process so nothing is orphaned. It is preemptive, so no lightweight process can starve out another (though there are some exceptions).

They can also suspend cheaply, and as such they work well with async IO.

The closest thing to this is the new virtual threads feature in recent Java. I don’t think Java has the same properties that will allow it to manage them as well as the BEAM does. There is a lot more to Elixir than being able to use all the cores.


Oh certainly; I never meant to imply that Python competes with Elixir in this regard. It doesn't.


And if you have 4 cores and 3 of them are blocked on IO, you only have 1 core left to answer requests. What happens if that core also blocks on IO? Your service stops.

That can NEVER happen in the Erlang virtual machine (the BEAM), because of the preemptive scheduler. This is only one of hundreds of examples of why the BEAM is the right choice for web systems where real-time can mean soft real-time.
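The failure mode is easy to reproduce with a fixed pool of workers; a hedged Python sketch (the 0.5 s sleep stands in for a blocking network call):

```python
# With a fixed pool of workers, blocking IO in every worker stalls the
# whole service: the quick task cannot even start until a slot frees up.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_io():
    time.sleep(0.5)  # stand-in for a blocking network call
    return "slow"

def quick():
    return "quick"

with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(2):
        pool.submit(slow_io)              # both workers now blocked on "IO"
    t0 = time.monotonic()
    result = pool.submit(quick).result()  # queued behind the blocked workers
    waited = time.monotonic() - t0

print(result, waited >= 0.4)  # the quick task had to wait for a free worker
```

A preemptive scheduler with lightweight processes avoids this: blocked work doesn't occupy the only execution slots.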


Nodejs will also move on if IO is blocked.

However, since Nodejs queues bits of execution rather than messages, errors can get lost. So then you have unhandled promise rejections… and then you rely on a linter to find the places where you did not write something to catch the error. I was told this is a good feature of the linter.
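Python's asyncio has the same hazard, which makes for a runnable analogue of the "errors can get lost" problem: an exception in a task that nobody awaits is silently parked in the Task object while the program carries on.

```python
# Fire-and-forget a failing task: the exception does not propagate
# anywhere by default; it sits inside the Task until someone asks.
import asyncio

async def fails():
    raise ValueError("boom")

async def main():
    task = asyncio.create_task(fails())  # fire and forget: nobody awaits it
    await asyncio.sleep(0)               # yield so the task runs and fails
    # The error is trapped inside the Task; execution continues as normal.
    assert task.done() and isinstance(task.exception(), ValueError)
    return "still running"

print(asyncio.run(main()))  # still running
```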

Contrast that with the BEAM languages. An unhandled error crashes the lightweight process. Any linked process (such as a supervisor) can decide what to do about the crash (restart, or crash as well). We don’t even need a liveness probe (like in Kubernetes) because the supervisor is informed immediately (push, not pull).

You don’t need a linter to make sure errors are handled with a sensible default, because this is handled by design.

Nodejs, on the other hand, is error prone, _by design_, even though it does not block on IO either.

No amount of typespecing is going to fix that.


It's self-evidently false because you depend on multiple processes



