That's fine, but it's exactly the opposite of the attitude expressed in the post.
"The ratio of bytes loaded to load time should be very close to the I/O throughput of the machine."
How do you think the bytes served for a few hundred requests a minute compares to the theoretical I/O throughput of the relevant machines?
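A quick back-of-envelope sketch makes the gap concrete. The traffic and page-size numbers here are illustrative assumptions (a few hundred requests a minute, a ~1 MiB page, a modest SSD), not measurements from the post:

```python
# Back-of-envelope: bytes served vs. raw disk throughput.
# All numbers below are illustrative assumptions, not measurements.
requests_per_minute = 300           # "a few hundred requests a minute"
page_size_bytes = 1 * 1024**2       # assume ~1 MiB per page, assets included
ssd_throughput = 500 * 1024**2      # a modest SATA SSD, ~500 MiB/s sequential

served_per_second = requests_per_minute * page_size_bytes / 60
utilization = served_per_second / ssd_throughput
print(f"{served_per_second / 1024**2:.1f} MiB/s served, "
      f"{utilization:.2%} of disk throughput")
```

Even with generous assumptions, a typical blog's serving rate is a rounding error next to what the hardware can sustain, which is the point: the ratio in the quote is nowhere near 1 for almost any personal site.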
And because no one can see my tongue in my cheek across the internet, I'll be explicit: I'm sure he is a user of his publishing environment, not its author, so it's fair to view this as just another symptom of the problem he's discussing.
But it will be unreadable precisely when most people want to read it. Sure, the 200 readers who normally visit your blog over the course of a month won't be much disturbed. It's the 20,000 readers who tried to reach it in a couple of minutes that you'll lose.
I wouldn't dream of keeping a blog that couldn't handle a HN traffic spike. That's the whole point of having a blog.
Failing at this is like launching your startup's MVP to an audience of the first fifty people who were able to load your product launch page before it crashed under the load. In other words, a giant, easily-avoidable waste of effort.