Good article but I found this remark at the start interesting:
"The first of these are as easy to draw as they are easy to make a computer draw. Give a computer the first and last point in the line, and BAM! straight line. No questions asked."
For some reason that comment brings back memories of writing BASIC on an Apple II when I was a kid, where it only had "HLIN" and "VLIN" commands - only axis-aligned lines, no function for diagonal lines.
Yeah the API counts as a hood, I would say modern GPUs and CPUs have multiple hoods… multiple API layers… hoods under hoods. It’s hoods all the way down. :P
Sure, but Bresenham's algorithm is a highly optimized form of "drawing a line", in the same way that we could write a highly optimized forward-prediction algorithm for Bezier curves, and it would be more work than Bresenham's algorithm is for lines.
The basic "draw a line" is the incredibly simple "single lerp to get the next on-line pixel coordinate" (which, for naive drawing, may be the same coordinate as the previous one, of course).
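To make that concrete, here's a minimal sketch (my own, not from the comment) of the naive lerp approach: step a parameter t from 0 to 1, interpolate, and round to the nearest pixel. As noted, consecutive steps can land on the same pixel, so duplicates get dropped.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def naive_line(x0, y0, x1, y1):
    # Take at least as many steps as the longer axis span so no pixel is skipped.
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    pixels = []
    for i in range(steps + 1):
        t = i / steps
        p = (round(lerp(x0, x1, t)), round(lerp(y0, y1, t)))
        if not pixels or pixels[-1] != p:  # naive stepping revisits pixels; skip repeats
            pixels.append(p)
    return pixels
```

This works, but it does a floating-point multiply and round per step, which is exactly the cost Bresenham's algorithm was designed to avoid.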
Bresenham's algorithm is really all about doing integer division incrementally (tracking the remainder in an error term instead of actually dividing), and its applications are far broader than just drawing a line. When I was still doing a lot of CAD work, Bresenham's was the proverbial Swiss Army Knife, from stepper control across five axes (including a rotation axis for thread cutting) to 3D visualization and all kinds of other odds and ends.
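For reference, here is the standard all-quadrant integer form of the algorithm. Note there's no division or floating point in the loop at all; the ideal line's fractional position is carried in the integer error term and updated by addition only, which is why it maps so well onto things like stepper control.

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer-only Bresenham line, handling all octants."""
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # accumulated error term; pure integer arithmetic
    pixels = []
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # error says: step along x
            err += dy
            x0 += sx
        if e2 <= dx:   # error says: step along y
            err += dx
            y0 += sy
    return pixels
```

The same error-accumulation trick is what gets reused outside graphics: "emit n events evenly over m ticks" is the stepper-control version of the same loop.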
Over the past couple of weeks I've been diving into WebGL, and even at that level of abstraction (several layers higher than the GPU itself), the incredible complexity involved in rendering stuff to the screen becomes apparent. It's truly amazing.
Ha! Of course every graphics API makes it easy, but there is much going on under the hood. (Um, do GPUs have hoods?) See "Bresenham's algorithm": https://en.wikipedia.org/wiki/Bresenham's_line_algorithm