> The solution required modern techniques and computer algorithms. “Twenty years ago, [this question] was absolutely out of reach; 10 years ago it would require an enormous effort of writing all necessary software, so only now all the factors came together,” wrote Anton Zorich, of the Institute of Mathematics of Jussieu in Paris, in an email.
Right here is the most interesting part of the article for me. It's definitely cool that we're still making these sorts of discoveries, but it's more interesting how many more problems are now being solved via computer. Had I gone all the way through my PhD (in math), I'd be some combination of worried about whether my computing skills were up to snuff, and intrigued by the increasing role of computers in mathematics.
The fact that these mathematical problems can be solved by computer doesn't mean that any computer scientist or teenage hacker can go around solving math problems. The solution to these problems is more a matter of mathematical skill combined with some programming than of pure programming effort. So I don't think that mathematicians have any reason to fear for their jobs; they just have another tool to learn to be good at what they do.
I think of computers as being telescopes for math. They can help you see deeper than you could before, but you still need to know where to look, make sure you have enough resolving power, etc.
Any computer scientist or teenage hacker could always go around solving math problems. I actually think that needing access to computing power makes the problems amenable to it *less* accessible to outsiders. All outsiders needed before was access to a good library, and now all they need is access to archive.org, Library Genesis and Sci-Hub.
It's worth noting that computer-assisted proofs date back to 1976 with the proof of the four-color theorem (although it turned out that it was another thirteen years before all the flaws and gaps were resolved).
They date back much further. McCarthy in the 1960s, and over a half dozen groups by 1970, had developed computer assisted proof tools that were proving theorems.
The 4 color theorem in 1976 was just a very famous example.
I think Turing (30s? 40s?) and later von Neumann (50s) also did some computer assisted theorem proving, but I’d have to dig for precise references.
Given most of us carry a computer with us at all times that by itself can do billions of calculations a second, I'd be more concerned/intrigued if this kind of stuff WASN'T being solved by computers these days. Even if it's just for brute forcing, that's obviously a lot more practical with computers than if we were doing stuff by hand or slide rule.
Given incompleteness and the difficulty of problems like the busy beaver function, it's pretty clear why brute forcing isn't really effective for much of math. Plus, the results it gives are maybe a bit less interesting because they don't provide as much insight into a clean and simple structure. How math works is cool and all, but why math works can open many new insights.
Do you really think humans have some magic powers for doing computationally intractable proof search??
Do you think human-written proofs, versus machine-written proofs, magically overcome what Gödel points out is a limitation of the very language both kinds of proofs are written in (or, in principle, "compile to")??
Why so much of this human skills mysticism on HN of all places... :/
Because many of us deal with computers all day and realize deep down there is a fundamental difference between humans and computers that isn't going anywhere?
It's not going anywhere until it does. Look at the history of AI in various tasks and fields. It only ever goes one direction, and there does not seem to be a fundamental block.
We appear to exist in a mechanical universe of laws. There's really no way it can be both that humans exist and that we can't replicate or approximate enough of the human brain to match or better human reasoning.
If the human part bothers you, substitute animals instead. They do much of what we need to various, usually somewhat lesser levels.
This line of thinking is reinforced by the results we've had so far. Why should we assume there's a limit if we haven't hit one yet, and see no fundamental block? Where would this limit be? Even our extremely limited understanding so far has provided some pretty amazing results.
The only question is performance and difficulty at various levels of technology and understanding. It's a matter of time. It won't be tomorrow, but unless we cease to exist first, it will happen. And once it's a matter of engineering that can be iterated upon, things get really interesting.
On the flip side, the computer proof at least resolves the question of whether a given conjecture is true. The proof of the four color theorem by computer in 1976–89 opened the door to simplifications and clarifications of the proof in the decade following. This is not unlike a lot of the difficult proofs of the last century. The recommended path to understanding Wiles's proof of Fermat's Last Theorem begins with Yves Hellegouarch's book, Invitation to the Mathematics of Fermat-Wiles (http://www.amazon.com/exec/obidos/ASIN/0123392519/donhosek), and then Gary Cornell's Modular Forms and Fermat's Last Theorem (http://www.amazon.com/exec/obidos/ASIN/0387946098/donhosek), which assumes a graduate student's understanding of number theory; only then is the reader ready to actually read Wiles.
When I was in grad school, I remember reading a comment from someone talking about looking at her roommate's materials for her PhD in English and realizing that while what she had was comprehensible to her even if she didn't necessarily understand any of it, her own materials for her math PhD were completely unintelligible to outsiders.
An arbitrary problem is most likely to be either trivial or many orders of magnitude too hard for a computer, not solvable within the narrow range of scale that a computer can handle. No matter how many computations your computer can do in a decade, almost every calculation is bigger than that.
Sure, but as far as I know, for neither the busy beaver problem nor Gödel's incompleteness are the capabilities that humans bring to the table any different from a computer's.
While computer proofs are better than no proofs at all, they aren't as exciting as "classic" proofs, which typically build new theory and inspire new questions to be solved.
The 4 color theorem was a boring case analysis, but that doesn't mean that all computer proofs are boring and tedious.
We have proof assistants using many beautiful type-theoretic languages, and tools which can (attempt to) write proofs in these languages.
In fact, this is one of the best uses of ML, because we hardly care what impenetrable bullshit goes into the proof search when the proof must be self-justifying, and, just like an ML Go game log, it can potentially lead to new techniques for human mathematicians.
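The "self-justifying" part is exactly what a proof assistant's kernel provides. A minimal Lean 4 sketch (my own illustration, using only the core library, no Mathlib) of what a machine-checkable proof looks like:

```lean
-- Lean's kernel checks each proof term independently of how it was
-- found, so a proof produced by ML-guided search is as trustworthy
-- as a handwritten one.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Proofs by computation are also just terms the kernel can check:
example : 2 ^ 10 = 1024 := rfl
```

Whatever search process produces the term, the kernel's acceptance is the only thing that counts, which is why the search itself is allowed to be impenetrable.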
If you watch many mathematicians on YouTube a lot of them are just hunt and peck Pythonistas. It’s perfectly fine when the code is the tool, not the product.
The real curse seems to be MATLAB in my experience. No engineering book is complete without an untested, uncommented, unindented MATLAB code listing at the back. I don't like Python, but at least it forces you to write somewhat legible code. Luckily it usually doesn't matter, but when you have things like COVID-19 models written in this manner I fear for the worst.
This has not been a concern in mathematics up to relatively recently, mostly because computers were generally too puny to be of much use. By "recently," I mean from the birth of the computer up through probably 15-20 years ago. Since then, there's been a real convergence of tools and computing power that's going to send mathematics in new and interesting directions over the next decades.
Unfortunately, the mathematics curriculum doesn't seem to have caught up. (Please correct me if I'm wrong here -- and I'd be happy to be wrong.) If you wanted to learn these skills as a grad student even 10 years ago, you would have to go to the CS department, most likely, unless you were at one of the handful of departments doing formalized mathematics. I believe that's still the case.
When I got my undergrad about a decade ago, one of the core freshman classes was a lab course on Mathematica. Not CS-style programming, but it was immensely helpful in the rest of my classes for tedious calculations, e.g. calculating a Gröbner basis or Taylor expansions of a Lie group.
> Unfortunately, the mathematics curriculum doesn't seem to have caught up. (Please correct me if I'm wrong here -- and I'd be happy to be wrong.) If you wanted to learn these skills as a grad student even 10 years ago, you would have to go to the CS department, most likely, unless you were at one of the handful of departments doing formalized mathematics. I believe that's still the case.
I would guess that by "these skills" you mean something beyond just programming in the general sense, or even numerical computation, as there shouldn't be any harm in leaning on the C.S department for that. No?
Are you instead proposing that a new course be added to the mathematics curriculum, that might be titled something like "Computational Mathematics" or "Experimental Mathematics" or something along those lines? If so, what kinds of things would go in such a class in your eyes?
> Since then, there's been a real convergence of tools and computing power that's going to send mathematics in new and interesting directions over the next decades.
Indeed. One of the things I'm interested in is learning more about interactive theorem provers. I just bought a book on Coq and intend to start diving down that rabbit hole. Interestingly, I'm only just now starting to learn to do proof-based math at all, so I'm especially (but not exclusively) curious about whether there's any pedagogical value in learning to use these proof-assistant-type tools right alongside learning to do proofs. It may well turn out that the answer is "no", but it should be fun to explore all of this in either case.
There could be more courses that teach the necessary programming techniques but also case studies of problems that have been solved this way. For example, the course "Experimental Mathematics" given by Doron Zeilberger.
Counterexample here, I started working with William Stein in 2006, contributing to Sage Math. As a math student, I actually learned these skills (including source control, unit testing, peer review, etc in addition to high-performance algorithms) as an undergrad and continued to sharpen them through my grad career. True, I was seen as an oddity, but my skills were valued by colleagues and I took every opportunity I could to teach in computational summer programs.
You raise an important point. These skills are very important, but they are barely raised when we teach scientists how to copy and paste Python snippets. In a British university it's even harder to acquire these skills because the course is often set in stone (e.g. I wrote my first - not very good! - compiler several years ago now, but I still have to take Introduction to Python, in effect, twice).
When I was 15/16, I was very lucky to have a computing teacher who - although he didn't teach us much actual programming - absolutely drilled into us the importance of a program's quality and aims. The absolute basics go a very long way: Should this function be smaller, is this logic simple etc.
More or less no, in my experience, unless you're specializing in such a way that it would be useful. And, in that case, you're probably better off doing your degree in a CS department, anyway.
Well, GitHub archived its 2020 snapshot in Svalbard. This probably won't be a one-off thing, so if you're concerned, upload it to a GitHub repo and they'll probably do it again.
We have the word of a world-renowned mathematician (Fermat). But yeah, if he had written down the proof he had in mind somewhere, maybe it would have turned out to have mistakes in it, and it would never have become the large story/theme that it is now.
Anyways, it's important to archive the knowledge in a way that it can be understood by future humans as well. Some random website under an .edu can disappear at any moment.
What's incredible is that the most basic statement can be proven with just a line: they simply exhibit a single, simple path that returns to the vertex without touching any other vertex. It's kind of amazing that we are proving this fact now, 2370 years after a dodecahedron was first described.
I know they answered more than just the yes/no question but looking at that 2d diagram I find myself unable to believe a computer was needed to answer just the yes/no question.
Or even with a computer, this should have been trivial to brute-force, no? I mean, you wouldn't know that it would work ahead of time, but I find it hard to believe that no one tried it? Or am I missing some deep complexity of this problem?
I am wondering this as well. Disclaimer: I did not read the paper.
On the one hand, there are infinitely many geodesics on a dodecahedron, of unbounded length, so you can't really brute force all of them.
On the other hand, the actual solution only goes through each face at most once. There are infinitely many unfoldings to try, but for each unfolding there are only finitely many paths to try, and you can get the solution on one of the minimal unfoldings.
Yeah, I mean 12! is ~479 million, and there are 20 vertices, so that is very much within brute-force-on-a-laptop range. For a given net, I think that given a vertex plus a list of faces, there is a simple-to-compute yes/no answer as to whether a ray passing through those faces and returning to the vertex exists. I wonder if you can just write a Python script to do it...
Edit: Even simpler - there are 43,380 nets of a dodecahedron, and each has 20 vertices. So if you just draw ~8 million straight lines and see if any are both on the net and do not intersect another vertex, you'll find (at least) one that works!
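The geometric core of the suggested script is easy to sketch. Below is a toy Python version of just the test itself: does a straight segment between two planar copies of the start vertex stay clear of every other vertex of the net? The points here are made up for illustration; actually generating the 43,380 dodecahedron nets is the part this sketch skips.

```python
# Toy check: on an unfolded net, a candidate geodesic from a vertex back
# to itself is a straight segment between two planar copies of that
# vertex, and it "works" iff it avoids every other vertex of the net.

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b in the plane."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:                      # degenerate segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the line through a-b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def segment_clears(a, b, obstacles, eps=1e-9):
    """True if segment a-b passes no closer than eps to any obstacle."""
    return all(point_segment_dist(p, a, b) > eps for p in obstacles)

# Hypothetical unfolded coordinates: two copies of the start vertex,
# with the net's other vertices as obstacles.
start, end = (0.0, 0.0), (6.0, 2.0)
print(segment_clears(start, end, [(3.0, 1.0)]))             # vertex on the line
print(segment_clears(start, end, [(3.0, 1.5), (1.0, -1.0)]))  # candidate survives
```

Looping this over all nets and all pairs of vertex copies would be the brute force the comment describes; the `eps` tolerance matters, since the interesting vertices sit at exact algebraic coordinates.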
Agreed. This must have been known for centuries, because anybody could cut one open and draw the line, and that's the natural way to start looking at the problem. More believable is that everybody considered that one too trivial to publish. The claim that they needed sophisticated maths to prove it works is also obvious nonsense. Therefore, something else, not explained, was the actual point. I wonder what it was...
Anyway the real work that made it worth publishing has to be the other paths.
What is the exact rule here for how to follow a geodesic along a polyhedron? If we pick a vertex on a polyhedron and start marching across a face, in order for that to be a geodesic, how do we continue when we cross an edge onto an adjacent face? Is it simply that the there must be an equal and opposite angle of incidence?
(Like, suppose we imagine ascending an A-frame roof on an angle, and then, passing the ridge, descend down the other side, following a path that matches the ascent such that there is a 180 degree of rotational symmetry around the intersection. Both segments of the path lie in a plane that makes a vertical cut through the roof. Is that it?)
This sounds complicated. Couldn't one just flatten (temporarily) the two adjacent faces and draw a straight line across the edge? (I would imagine also that a geodesic crossing more than two faces or faces that just have a vertex in common might have to follow some edges or cross a vertex.)
It is not merely simpler; it also contains an explanation of why it works as a definition.
(Un)flattening is an isometry, it doesn't change lengths of lines. If you flatten a polyhedron and draw a straight line between two points, it is the shortest path between them (property of Euclidean space). Now fold the surface back, it remains the shortest path (and the geodesic).
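The unfolding argument can be checked numerically. The sketch below is my own toy setup (not from the article): an A-frame roof modeled as two faces meeting at a ridge, with one face folded down by an arbitrary dihedral angle. It confirms that the crossing point minimizing the folded path length is exactly where the unfolded straight line crosses the ridge, which is also the point where the angles of incidence on the two sides are equal.

```python
import math

# Ridge along the x-axis.  Face 1 is the half-plane y >= 0, z = 0.
# Face 2 is folded down about the ridge: a point at unfolded
# coordinates (x, -d) maps to (x, d*cos(theta), -d*sin(theta)).
THETA = math.radians(70.0)   # arbitrary fold angle (assumption)

def fold(p):
    x, y = p
    if y >= 0:
        return (x, y, 0.0)
    d = -y
    return (x, d * math.cos(THETA), -d * math.sin(THETA))

A = (0.2, 0.8)    # point on face 1
B = (1.0, -0.6)   # point on face 2 (unfolded coordinates)

def path_len(x):
    """Length on the folded roof of the path crossing the ridge at (x, 0)."""
    c = (x, 0.0, 0.0)
    return math.dist(fold(A), c) + math.dist(fold(B), c)

# Crossing point predicted by "unfold and draw a straight line":
t = A[1] / (A[1] - B[1])
x_straight = A[0] + t * (B[0] - A[0])

# Brute-force minimization of the folded path length over crossings:
x_best = min((i / 10000 for i in range(-10000, 20001)), key=path_len)

print(round(x_straight, 4), round(x_best, 4))        # they agree
print(round(path_len(x_best), 6))                    # equals the unfolded
print(round(math.dist(A, B), 6))                     # straight-line length
```

Since folding is an isometry on each face, the folded path length equals the unfolded one, so minimizing over the surface is the same as drawing the straight line in the plane.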
Even if we are discussing a pair of points on adjacent polygons? Wouldn't the shortest path go across the one and only edge the polygons have in common?
I'm looking at the diagrams on that Wikipedia page. It looks as if we require the path to visit certain polygons; then it is in fact the shortest path which visits those polygons.
In both solutions, the path traced along the unfolded jacket of the box is in a straight line, obeying the principle we have been discussing.
I don't think that what we are looking at there are simply two different unfoldings of a polyhedron; they are different choices about which configuration of edges to separate in forming what is evidently called the net of the polyhedron.
”Even if we are discussing a pair of points on adjacent polygons?”
Yes. Consider a pointy triangular wedge of two 1 by 100 rectangles and a third one that’s 1 by 1, and pick points on the two large rectangles that are close to the edge they share with the small one. The shortest path will visit all three rectangles.
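Putting rough numbers on the wedge (with assumed placements: each point 0.1 from the edge it shares with the base face, and laterally aligned), a quick comparison of the two relevant unfoldings:

```python
H = 100.0    # slant height of the two large 1 x 100 faces
eps = 0.1    # assumed distance of each point from the base edge

# Unfolding 1: flatten the two large faces about the apex edge they
# share.  Each point sits H - eps from the fold, laterally aligned, so
# the straight segment through the apex has length:
over_the_top = 2 * (H - eps)

# Unfolding 2: lay out large face, 1 x 1 base face, large face in a
# strip.  Each point sits eps from the base, so the straight segment
# crossing the base has length:
around_the_base = 2 * eps + 1.0

print(over_the_top, around_the_base)   # 199.8 vs 1.2
```

So the shortest path ignores the "direct" fold over the apex entirely and detours through the small third face, which is the point of the counterexample.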
Also: I was only refuting the claim that such a line is the shortest line between the two points. I happened to know an (IMO) interesting counterexample, so I mentioned it. I think the conclusion that such a line is a geodesic is valid, but not because of the argument given.
There are almost certainly no real world applications. Most pure mathematical research never has real world applications.
On the other hand, every once in while it gives you the foundations of e-commerce and is worth a trillion dollars. And it doesn't cost much, since all it requires is coffee and computers. So all told it's pretty cost effective.
And no, we never have the faintest idea which fundamental research will turn out to be insanely profitable.
Is it the case this time? No, probably not. But it cost very little to do, and maybe it enables something else.
To quote Feynman: "Physics is like sex: sure, it may give some practical results, but that's not why we do it." - this applies to all research.
Mathematics especially so, because it is not science, but an art form. The only difference between what people consider "good mathematics" and what isn't is what people consider beautiful, interesting, intriguing (usually, being correct is necessary for being beautiful, but not always: Fermat's Last Theorem has been on the minds of mathematicians for a very long time despite its proof being incorrect; and interesting conjectures are the driving force of this art).
It's a very humane experience, this mathematics of ours.
New vistas of mathematics open up when what was once ugly comes to be appreciated, usually two or more generations after its first exploration.
The history shows up in terminology: "irrational" numbers were once anathema, but had to be accepted (although the feelings about the word might have arisen in the other direction). "Negative" numbers came a lot later, and only after zero, which was disliked. "Imaginary" is another. After the bad numbers had to be accepted, we needed a name for the good ones: thus, "natural". Transcendentals got a free pass because of pi, so got a nice name.
Infinities are still causing trouble. Like irrationals, they are named for what they are not.
>Modern mathematics, like much of science, often isn't directed at real world problems.
So was ancient mathematics. And non-ancient mathematics.
A lot of good mathematics is inspired or comes from real-world problems, but even more does not.
At a very fundamental level, mathematics is a curious exploration of an interesting "What if...", where you get to make up whatever you want in that "...", and see where it gets you.