Hacker News | nromiun's comments

Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.

It is always the eternal tomorrow with AI.


Remember, gen AI produces so much value that companies like Microsoft are scaling back their expectations and struggling to find a valid use case for their AI products. In fact Gen AI is so useful people are complaining about all of the ways it's pushed upon them. After all, if something is truly useful nobody will use it unless the software they use imposes it upon them everywhere. Also look how it's affecting the economy - the same few companies keep trading the same few hundred billion around and you know that's an excellent marker for value.

Unfortunately, it's also apparently so useful that numerous companies here in Europe are replacing entire departments of people, such as copywriters, with one person and an AI system.

Large LANGUAGE models being good at copywriting is crazy...

Examples: translations and content creation for company CMS systems.

> On the other hand I am trying hard to remember anything famous created by any LLM.

That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.


Do you have any evidence that an LLM created something massive, but the person using it received all the praise?

Hey now, someone engineered a prompt. Credit where it's due! Subscription renews on the first.

Maybe not autonomously (that would be very close to economic AGI).

But I don't think the big companies are lying about how much of their code is being written by AI. I think back-of-the-napkin math will show the economic value of the output is already, by some definition, massive. And those companies are 100% taking the credit (and the money).

Also, almost by definition, every incentive is aligned for people in charge to deny this.

I hate to make this analogy but I think it's absurd to think "successful" slaveowners would defer the credit to their slaves. You can see where this would fall apart.


I will ask again because you have not given us an answer.

Do you have any evidence that an LLM created something massive?


You wish. AI has no shortage of people like you trying so hard to give it credit for anything. I mean, just ask yourself: you had to try so hard that, in your other comment, you ended up hallucinating achievements of a degree that Rob Pike can only dream of, yet so vague that you can't describe them in any detail whatsoever.

> But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did

Other people see that kind of statement for what it is and don't buy any of it.


So who has used LLMs to create anything as impressive as Rob Pike?


I would never talk down on Rob Pike.

But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".

Am I wrong on this? Or if this "doesn't count", why?

I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.

I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".

Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.


The signal-to-noise ratio cannot be ignored. If I ask for a list of my friends' phone numbers, and a significant other can provide half of them, and a computer can provide every one of them by listing every possible phone number, the computer's output is not something we should value for being more complete.

He's also in his late 60s. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. I hope to have even 1% of that energy when/if I get to that age.

> It is always the eternal tomorrow with AI.

ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.

I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.

Your phrasing seems overly pessimistic and premature.



Argument from authority is an informal fallacy. But humans rarely use pure deductive reasoning in our lives. When I go to a doctor and ask for their advice on a medical issue, nobody says "ugh, look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."

> But humans rarely use pure deductive reasoning in our lives

The sensible ones do.

> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."

I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments and I know many people who have been burned from accepting assertions at face value without a second opinion (especially for serious medical concerns).



> I am trying hard to remember anything famous created by any LLM.

not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...


You're absolutely right!

If you think about economic value, you're comparing a few large-impact projects (and the impact of Plan 9 is debatable) versus a multitude of useful but low-impact projects (edit: low-impact because their scope is often local to some company).

I did code a few internal tools with the aid of LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.


One difference is that Rob Pike did it without all the negative externalities of gen ai.

But more broadly, this is like a version of the negligibility problem. If you give every company one second of additional productivity, the summation might appear significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.


If ChatGPT deserves credit for things it is used to write, then every good thing ever done in Go accrues partly to Rob.

> If you think about economic value

I don't, and the fact you do hints to what's wrong with the world.


All those amazing tools are internal and nobody can check them out. How convenient.

And guys don't forget that nobody created one off internal tools before GPT.


>On the other hand I am trying hard to remember anything famous created by any LLM.

ChatGPT?


ChatGPT was created by people...

Surely they used ChatGPT 3.5 to build ChatGPT 4, and so on.

Maybe that's why they can't get their auth working...

That's like saying Google Search created my application because I searched how to implement a specific design pattern. It's just another tool.

I can imagine AI still being so useless at creating real value in 100 years that its parent companies have to resort to circular deals to pump up their stock.

Because not many people prioritize syntax design like GvR did. Even now, if someone releases a new programming language, most people will ask what features it has, how fast it is, how fast its package manager is, etc., because these questions have simple answers, unlike questions about syntax design.

Even if they ask about syntax design, people just dismiss the question by saying "syntax is not important". Python did the opposite: it focused on syntax over everything else. That caught on with beginners, and now here we are.

Of course with AI Python got even more popular, but even before ChatGPT was released it was already dominant.


I remember my first encounter with Matlab. Some YouTuber was building a toy rocket and he was simulating it in Matlab (Simulink). He just put in the weight of the rocket and it gave him the trajectory, apogee, flight time etc. It was like magic to a beginner like me.

You can do the same thing in other languages but it won't be built in like that.


> It's possible for a Rust program to be technically compilable but still semantically wrong.

This was my biggest problem when I used to write Rust. The article has a small example but when you start working on large codebases these problems pop up more frequently.

Everyone says the Rust compiler will save you from bugs like this, but as the article shows, you can compile bugs into your codebase, and when you finally get an unrelated error you have to debug all the bugs in your code, even the ones that seemed to be working previously.

> Rust does not know more about the semantics of your program than you do

Also this. Some people absolutely refuse to believe it though.
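A minimal illustration of that failure mode (the struct and field names are invented for the example): a clone added just to satisfy the borrow checker compiles cleanly but silently changes the program's meaning.

```rust
// A program that compiles without complaint but is semantically wrong:
// cloning `config` to appease the borrow checker decouples the two
// copies, so a later update never reaches the snapshot.

#[derive(Clone)]
struct Config {
    retries: u32,
}

fn main() {
    let mut config = Config { retries: 3 };

    // A clone added "to make it compile". The compiler is satisfied,
    // but `snapshot` no longer tracks `config`.
    let snapshot = config.clone();
    config.retries = 5;

    // The author intended 5 here; the program observes 3.
    assert_eq!(snapshot.retries, 3);
    assert_eq!(config.retries, 5);
}
```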


I think the key idea is that Rust gives you a lot of tools to encode semantics into your program. So you've got a much greater ability for the compiler to understand your semantics than in a language like JavaScript (say) where the compiler has very little way of knowing any information about lifetimes.

However, you've still got to do that job of encoding the semantics. Moreover, the default semantics may not necessarily be the semantics you are interested in. So you need to understand the default semantics enough to know when you need something different. This is the big disadvantage of lifetime elision: in most cases it works well, but it creates defaults that may not be what you're after.

The other side is that sometimes the semantics you want to encode can't be expressed in the type system, either because the type system explicitly disallows them, or because it doesn't comprehend them. At this point you start running into issues like disjoint borrows, where you know two attributes in a struct can be borrowed independently, but it's very difficult to express this to the compiler.
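To make the disjoint-borrow point concrete, here is a small sketch (the types and names are invented): direct field-by-field borrows are accepted, but the same split expressed through a method is rejected.

```rust
struct State {
    data: Vec<i32>,
    total: i64,
}

impl State {
    fn data_mut(&mut self) -> &mut Vec<i32> {
        &mut self.data
    }
}

fn main() {
    let mut s = State { data: vec![1, 2, 3], total: 0 };

    // Direct field borrows are disjoint: the compiler accepts
    // simultaneous mutable borrows of `data` and `total`.
    let d = &mut s.data;
    let t = &mut s.total;
    *t += d.iter().map(|&x| x as i64).sum::<i64>();
    assert_eq!(s.total, 6);

    // Going through a method borrows the whole struct, so the same
    // logic phrased this way is rejected:
    //   let d = s.data_mut();    // mutably borrows all of `s`
    //   let t = &mut s.total;    // error[E0499]: cannot borrow `s`
    s.data_mut().push(4);
    assert_eq!(s.data.len(), 4);
}
```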

That said, I think Rust gives you more power to express semantics in the type system than a lot of other languages (particularly a lot of more mainstream languages) which I think is what gives rise to this idea that "if it compiles, it works". The more you express, the more likely that statement is to be true, although the more you need to check that what you've expressed does match the semantics you're aiming for.


Yes, that's a very common misconception.

Of course, if your program compiles, that doesn't mean the logic is correct. However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors, don't trust data coming from outside, don't assume allocations always succeed, etc.). In Rust's case, this means that the compiler is much more restrictive, exhaustive and pedantic than others like C's and C++'s.

In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.

Also, Rust's type system is strong enough to let you encode many invariants, which makes implementing the correct logic easier (although not simpler).
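As a small sketch of that "provided you handle errors" caveat: Rust surfaces the failure path in the type, so the caller has to acknowledge it rather than crash on bad input.

```rust
// The error case is part of the signature; the caller cannot
// quietly ignore a failed parse the way C's atoi() allows.
fn parse(input: &str) -> Result<i32, std::num::ParseIntError> {
    input.trim().parse::<i32>()
}

fn main() {
    // The compiler pushes the caller to handle both outcomes.
    match parse("41") {
        Ok(n) => assert_eq!(n + 1, 42),
        Err(e) => panic!("unexpected parse failure: {e}"),
    }

    // Untrusted input becomes an explicit Err, not a crash.
    assert!(parse("not a number").is_err());
}
```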


> However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors and such, you cannot trust data coming from outside, allocations to always work, etc).

That is one hell of a copium disclaimer. "If you hold it right..."


Rust certainly doesn't make it impossible to write bad code. What it does do is nudge you towards writing good code to an appreciable degree, which is laudable compared to the state of the industry at large.

Rust is just a tool. It’s as fallible as any other tool. I wish we took it off the pedestal and treated it as such.

Are all tools equal in all dimensions or can they be compared for fitness of purpose?

A hand saw, a table saw and a SawStop are all tools, but they have different characteristics even though they all are meant to cut the same wood.

Ada, C, and lisp are all tools, but they have different characteristics even though they are all meant to cut through the same problems.

...yes?

A tool is a tool. I didn’t realize I needed to spell it out.

Cloudflare used a tool, broke parts of the internet.


I feel like you're attacking a strawman here. Of course you can write unreliable software in Rust. I'm not aware of anyone who says you can't. The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not. That's all.

> The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not.

Citation needed.


>In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.

I don't believe that it's guaranteed in Rust either, despite much marketing to the contrary. It just doesn't sound as appealing to say "somewhat reduces many common problems" lol

>Also, Rust's type system being so strong, it allows you to encode so many invariants that it makes implementing the correct logic easier (although not simpler).

C++ has a strong type system too, probably fancier than Rust's or at least similar. Most people do not want to write complex type-system constraints. I'm guessing that at most 25% of C++ codebases use complex templates with recursive templates, traits, concepts, `requires`, etc.


Comparing type systems is difficult, but the general experience is that it is significantly easier to encode logic invariants in Rust than in C++.

Some of those things you can do, often with a wild amount of boilerplate (tagged unions, niches, etc.), and some are fundamentally impossible (movable non-null owning references).
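For example, Rust enums are tagged unions out of the box, and niche optimization makes wrapping a non-null pointer in `Option` cost nothing extra. A small sketch (the `Shape` type is invented for illustration):

```rust
use std::mem::size_of;

// A tagged union with no boilerplate; C++ needs std::variant or a
// hand-rolled tag + union for the equivalent.
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    assert_eq!(area(&Shape::Square { side: 3.0 }), 9.0);
    assert!((area(&Shape::Circle { radius: 1.0 }) - std::f64::consts::PI).abs() < 1e-12);

    // Niche optimization: Option<&T> reuses the forbidden null value
    // as the None tag, so the Option costs no extra space.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    // Box is a movable, non-null owning pointer; the same niche applies.
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());
}
```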

C++ templates are more powerful than Rust generics, but the available tools in Rust are more sophisticated.


Note that while C++ templates are more powerful than Rust generics at being able to express different patterns of code, Rust generics are better at producing useful error messages. To me, personally, good error messages are the most fundamental part of a compiler frontend.
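A quick sketch of the difference: a Rust generic function is checked against its declared bounds at the definition site, so a misuse is reported once, in terms of the bound, rather than deep inside an instantiation.

```rust
use std::fmt::Display;

// The `Display` bound is checked where the function is defined.
// Remove the bound and this body fails to compile even if `describe`
// is never called; a C++ template body is only checked when a
// concrete instantiation forces it.
fn describe<T: Display>(value: T) -> String {
    format!("value = {}", value)
}

fn main() {
    assert_eq!(describe(42), "value = 42");
    assert_eq!(describe("hi"), "value = hi");

    // describe(vec![1, 2]); // error at the call site, phrased in
    //                       // terms of the missing `Display` bound
}
```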

Concepts make it possible to generate very clear (even user-friendly) template errors.

True but you lose out on much of the functionality of templates, right? Also you only get errors when instantiating concretely, rather than getting errors within the template definition.

No, concepts interoperate with templates. I guess if you consider duck typing to be a feature, then using concepts can put constraints on that, but that is literally the purpose of them and nobody makes you use them.

If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later? This behavior is in fact used to decide between alternative template specializations for the same template. Concepts do it better in some ways.


> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?

Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.

A big concern here would be accidentally depending on something that isn't declared in the concept, which can result in a downstream consumer who otherwise satisfies the concept being unable to use the template. You also don't get nicer error messages in these cases since as far as concepts are concerned nothing is wrong.

It's a tradeoff, as usual. You get more flexibility but get fewer guarantees in return.


Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.

>Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.

What I meant is, if the thing is not instantiated then it is not used. Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that. Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to. But that's not a problem with the language.


> Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.

I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P

As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled. IIRC Swift takes advantage of this (polymorphic generics by default with optional monomorphization) and the Rust devs are also looking into it (albeit the other way around).

> Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that.

I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.

> Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to.

Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.


>I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P

The actual effects depend on a lot of things. I'm just saying, it seems contrived to me, and the most likely outcome of this type of broken template is failed compilation.

>As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled.

This is incompatible with how C++ templates work. There are methods to separately compile much of a template. If concepts could be made into concrete classes and used without direct inheritance, it might work, but this would require runtime concept checking, I think. I've never tried to dynamic_cast to a concept type, but that would essentially be required to do it well. In practice, you can still do this without concepts by making mixins and concrete classes. It kinda sucks to have to use more inheritance sometimes, but I think one can easily design a program to avoid these problems.

>I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.

This sounds wrong to me. Template parameters plus template code actually turns into real code. Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable". No language I can dream of that has generics could do any different.

>Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.

I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.


> I'm just saying, it seems contrived to me

Sure. Contrivance is in the eye of the beholder for this kind of thing, I think.

> and the most likely outcome of this type of broken template is failed compilation.

I don't think that was ever in question? It's "just" a matter of when/where said failure occurs.

> This is incompatible with how C++ templates work.

Right, hence "tangentially related". I didn't mean to imply that the aside is applicable to C++ templates, even if it could hypothetically be. Just thought it was a neat capability.

> This sounds wrong to me.

Wrong how? Definition checking was undeniably part of the original C++0x concepts proposal [0]. As for some reasons for its later removal, from Stroustrup [1]:

> [W]e very deliberately decided not to include [template definition checking using concepts] in the initial concept design:

> [Snip of other points weighing against adding definition checking]

> By checking definitions, we would complicate transition from older, unconstrained code to concept-based templates.

> [Snip of one more point]

> The last two points are crucial:

> A typical template calls other templates in its implementation. Unless a template using concepts can call a template from a library that does not, a library with the concepts cannot use an older library before that library has been modernized. That’s a serious problem, especially when the two libraries are developed, maintained, and used by more than one organization. Gradual adoption of concepts is essential in many code bases.

And Andrew Sutton [2]:

> The design for C++20 is the full design. Part of that design was to ensure that definition checking could be added later, which we did. There was never a guarantee that definition checking would be added later.

> To do that, you would need to bring a paper to EWG and convince that group that it's the right thing to do, despite all the ways it's going to break existing code, hurt migration to constrained templates, and make generic programming even more difficult.

I probably could have used a more precise term than "backwards compatibility", to be fair.

> Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable".

I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.

> No language I can dream of that has generics could do any different.

I've mentioned Swift and Rust already as languages with generics and definition-site checking. C# is another example, I believe. Do those not count?

> I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.

My apologies for the misdirected focus.

In any case, that type of error might be "new" in the context of the conversation so far, but it's not "new" in the PL sense since that's basically Rice's theorem in a nutshell. No real way around it beyond lifting semantics into syntax, which of course comes with its own tradeoffs.

[0]: https://isocpp.org/wiki/faq/cpp0x-concepts-history#cpp0x-con...

[1]: https://www.stroustrup.com/good_concepts.pdf

[2]: https://old.reddit.com/r/cpp/comments/cx141j/c20_concepts_an...


That is all very good information. I don't often get into the standards discussions about this stuff. Maybe ChatGPT or something can help me find interesting topics like this one, but it hasn't come up much for me yet.

>I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.

I see now. I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime. For example, if the template parameter type is a concept, your type could be automatically hooked up as if it were just a normal class you inherited from. If we had reflection, I think this could also be worked out at compile time somehow. But I'm not very up to speed with what has been tried in this space. I'm guessing that concept definitions can be very extensive and can also depend on complex expressions. That sounds hairy compared to what could be done without concepts, for example with an abstract class.


> I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime.

The bit of my comment you quoted was just talking about definition checking. Separate compilation of templates is a distinct concern and would be an entirely new can of worms. I'm not sure if separate compilation of templates as they currently are is possible at all; at least off the top of my head there would need to be some kind of tradeoff/restriction added (opting into runtime polymorphism, restricting the types that can be used for instantiation, etc.).

I think both definition checking and separate compilation would be interesting to explore, but I suspect backwards compat and/or migration difficulties would make it hard, if not impossible, to add either feature to standard C++.

> For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it.

Sounds a bit like `dyn Trait` from Rust or one of the myriad type erasure polymorphism libraries in C++ (Folly.Poly [0], Proxy [1], etc.). Not saying those are precisely on point, though; just thought some of the ideas were similar.

[0]: https://github.com/facebook/folly/blob/main/folly/docs/Poly....

[1]: https://github.com/microsoft/proxy


> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?

This seems like a very strange argument to me. For a pleasant experience you generally want to report errors as early as possible.


> but you lose out on much of the functionality of templates, right?

I don't think so? From my understanding what you can do with concepts isn't much different from what you can do with SFINAE. It (primarily?) just allows for friendlier diagnostics further up in the call chain.


You're right but concepts do more than SFINAE, and with much less code. Concept matching is also interesting. There is a notion of the most specific concept that matches a given instantiation. The most specific concept wins, of course.

Oh, interesting! I didn't know about that particular feature of concepts. I'll have to keep an eye out for potential places it can be used.

I don't agree that Rust's tools are more sophisticated, and they definitely are not more abundant. You just have a language that is more anal up front. C++ has many different compilers, analyzers, debuggers, linting tools, leak detectors, profilers, etc. It turns out that 40 years of use leads to significant development that is hard to rebuild from scratch.

I seem to have struck a nerve with my post, which got 4 downvotes so far. Just for saying Rust is not actually better than C++ in this one regard lol.


I think it’s because you misread my comment by skimming over the important parts.

This isn’t about tooling, it’s about language features and type systems.


I don't think anyone believes the "if it compiles, it works" phrase literally.

It's just that once it compiles, Rust code will work more often than code in most other languages. But that doesn't mean Rust code is automatically bug-free, and I don't think anyone believes that.


Yeah, even the official Rust book points this out and, if my memory serves me right (not punintended), also gives an example in the form of creating a memory leak (not to be confused with memory unsafety).
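The book's example is a reference cycle; roughly, in entirely safe code (the node layout here is invented):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Safe Rust can leak: an Rc cycle keeps both strong counts above
// zero, so neither allocation is ever freed. No `unsafe` involved.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });

    // Close the cycle: a -> b -> a.
    *a.next.borrow_mut() = Some(b.clone());

    // Each node is now kept alive by the other.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
} // `a` and `b` go out of scope, but the heap nodes are never dropped
```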

A memory leak can be unsafe though.

Then why is Box::leak not marked unsafe?

"unsafe" or `unsafe`? One is the general meaning of the word, the latter is "it invokes undefined-behavior".

As "unsafe". An example would be how AMD GPUs some time ago didn't free a program's last rendered buffers, and you could see the literal last frame in its entirety. Fun stuff.

Could've been clearer above.


That is not a memory leak though! That's using/exposing an uninitialized buffer, which can happen even if you allocate and free your allocations correctly. Leaking the buffer would prevent the memory region from being allocated by another application, and would in fact have prevented the frame from being exposed.

This is also something that Rust does protect against in safe code, by requiring initialization of all memory before use, or using MaybeUninit for buffers that aren't, where reading the buffer or asserting that it has been initialized is an unsafe operation.
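A minimal sketch of that split: `MaybeUninit` makes "not yet initialized" explicit in the type, and asserting that initialization has happened is the explicitly `unsafe` step.

```rust
use std::mem::MaybeUninit;

fn main() {
    // Safe Rust will not let you read uninitialized memory.
    // MaybeUninit represents the "maybe garbage" state in the type.
    let mut buf: MaybeUninit<u32> = MaybeUninit::uninit();

    // Writing does not require unsafe.
    buf.write(42);

    // Claiming the buffer is initialized is the unsafe operation;
    // the programmer, not the compiler, vouches for it here.
    let value = unsafe { buf.assume_init() };
    assert_eq!(value, 42);
}
```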


It's a security hole. Rust doesn't prevent you from writing unsafe code that reads it. The bug wasn't that it could be read by a well-conforming language; it was that it was handed off uninitialized to user space at all.

Fair, bad example.

There are definitely people in the ecosystem who peddle sentiments like "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the time it does!", and that's the confusing part for many people. Examples found in 30 seconds of searching:

- https://bsky.app/profile/codewright.bsky.social/post/3m4m5mv...

- https://bsky.app/profile/naps62.bsky.social/post/3lpopqwznfs...


I read the comments you linked and don't really think they literally believe Rust is magic. I dunno, though, I guess I could imagine a vibe coder tacitly believing that. Not saying you're wrong; I just think most people say that tongue in cheek. This saying has been around in the Haskell community for decades. Feels like a long-running joke at this point.

Rust isn't magic, but it has an incredibly low software defect rate. Google published a few studies on this.

If Rust code compiles, it probably has a lower defect rate than corresponding code written by the same team in another language, all else being equal.


Agreed. I've been working professionally with Rust for a year and it is my experience that it's rock solid.

There is no hint of irony in the linked posts.

When I’ve said that, I’ve meant that almost the only remaining bugs were bad logic on my part. It’s free from the usual dumb mistakes I would make in other languages.

I don't know the authors of those posts, so I don't want to put words in their mouths, but neither seems to be delusional about the "if it compiles, it works" phrase. The first one qualifies it with "most of the time", and the second one explicitly mentions using typestate as a tool to aid correctness...

But I don't doubt there are people who take that phrase too literally, though.


Both examples you linked are people talking casually about the nature of Rust, rather than about the specific rule. That goes very much with your parent commenter's assertion that nobody takes it literally. The first example even starts with 'Most of the time' (this is true, though not guaranteed. I will explain further down). Human languages are imperfect and exaggerations and superlatives are common in casual communication.

But I have not seen any resource or anyone making technical points ever assert that the Rust compiler can verify program logic. That doesn't even make sense - the compiler isn't an AI that knows your intentions. Everybody is always clear that it only verifies memory safety.

Now regarding the 'most of the time' part. The part below is based purely on my experience and your mileage may vary. It's certainly possible to compile Rust programs with logical/semantic errors. I have made plenty. But the nature of C/C++ or similar manually memory-managed languages is such that you can make memory safety bugs quite easily and miss them entirely. They also stay hidden longer.

And while logical errors are also possible, most people write and test code in chunks of sizes small enough where they feel confident enough to understand and analyze it entirely within their mind. Thus they tend to get caught and eliminated earlier than the memory safety bugs.

Now since Rust handles the memory safety bugs for you and you're reasonably good at dealing with logical bugs, the final integrated code tends to be bug-free, surprisingly more often than in other languages - but not every time.

There is another effect that makes Rust programs relatively more bug-free. This time, It's about the design of the code. Regular safe Rust, without any runtime features (like Rc, Arc, RefCell, Mutex, etc) is extremely restrictive in what designs it accepts. It accepts data structures that have a clear tree hierarchy, and thus a single-owner pattern. But once you get into stuff like cyclic references, mutual references, self references, etc, Rust will simply reject your code even if it can be proven to be correct at compile time. You have three options in that case: Use runtime safety checks (Rc, RefCell, Mutex, etc. This is slightly slower) OR use unsafe block and verify it manually, OR use a library that does the previous one for you.

Most of the code we write can be expressed in the restricted form that safe Rust allows without runtime checks. So whenever I face such issues, my immediate effort is to refactor the code in such way. I reach for the other three methods only if this is not possible - and that's actually rare. The big advantage of this method is that such designs are relatively free of the vast number of logical bugs you can make with a non-tree/cyclic ownership hierarchy. (Runtime checks convert memory safety bugs into logical bugs. If you make a mistake there, the program will panic at runtime.) Therefore, the refactored design ends up very elegant and bug-free much more often than in other languages.


> "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the times it does!"

I think this is a fairly bad example to pick, because the fact that the person says “I can basically think” and “most of the time it does” (emphasis mine) shows that they don't actually believe it will produce bug-free programs.

They are just saying that “most of the time” the compiler is very very helpful (I agree with them on that).


> Some people absolutely refuse to believe it though.

Who says this? I've never seen someone argue it makes it impossible to write incorrect code. If that were the case then there's no reason for it to have an integrated unit testing system. That would be an absurd statement to make, even if you can encode the entire program spec into the type system, there's always the possibly the description of a solution is not aligned with the problem being solved.


"If it compiles it works" isn't true. But "If it compiles it won't eat your homework" sort of is.

Neither is true, `std::fs::remove_dir_all("/home/user/homework")` will happily compile and run, no matter if that's what you wanted or not.

Rust programs can't know what you want to do, period.


The better phrase is that "if it compiles, then many possible Heisenbugs vanish"

https://qouteall.fun/qouteall-blog/2025/How%20to%20Avoid%20F...


They certainly know that you dont want to process garbage data from freed memory.

Soundness does not cover semantic correctness. Maybe you want to wipe $HOME.


>They certainly know that you dont want to process garbage data from freed memory.

It depends on what you mean by "freed". Can one write a custom allocator in Rust? How does one handle reading from special addresses that represent hardware? In both of these scenarios, one might read from or write to memory that is not obviously allocated.


Both of those things can be done in Rust, but not in safe Rust, you have to use unsafe APIs that don't check lifetimes at compile time. Safe Rust assumes a clear distinction between memory allocations that are still live and those that have been deallocated, and that you never want to access the latter, which of course is true for most applications.

You can indeed write custom allocators, and you can read to or write from special addresses. The former will usually, and the latter will always, require some use of `unsafe` in order to declare to the compiler: "I have verified that the rules of ownership and borrowing are respected in this block of code".

"If it compiles, then only logical bugs will make it eat your homework"

Everyone supporting this in the comments deserves to live under CCP-style internet censorship.


> Is it time to rewrite sudo in Zig?

Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.

> ITER achieves net positive energy for 20 consecutive minutes

Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.


I'm surprised that there are no Rust headlines.


There was one: "100% rust Linux kernel upstreamed"


I have noticed LLMs tend to generate very verbose code. What an average human might do in 10 LoC, an LLM will stretch to 50-60 lines, sometimes with comments on every line. That can make it hard to spot these bugs.


I wonder if it is another bug, like the unwrap, in their rewritten code.

Also, I don't think all of their services got affected. I am using their proxy and pages services and both are still up.


That is surprising. It is the opposite for me.

  $ time curl -L 'https://codeberg.org/'
  real    0m3.063s
  user    0m0.060s
  sys     0m0.044s

  $ time curl -L 'https://github.com/'
  real    0m1.357s
  user    0m0.077s
  sys     0m0.096s


A better benchmark is done through the web browser inspector (network tab or performance tab). In the network tab I got (cache disabled)

  Github
  158 requests
  15.56 MB (11.28 MB transferred)
  Finish in 8.39s
  Dom loaded in 2.46s
  Load 6.95s

  Codeberg
  9 requests
  1.94 MB (533.85 KB transferred)
  Finish in 3.58s
  Dom loaded in 3.21s
  Load 3.31s


I guess Github uses a lot of cache vs Codeberg.


I think you read that backwards. In skydhash's test, Codeberg's data was 72% cached, and GitHub's data was 28% cached. Maybe you meant that GitHub's cached 4.28MB was, in absolute terms, more than Codeberg's cached 1.41MB?


Some parts of Github are SPA islands, which is why the DOM loads fast, but then it has to wait for the JavaScript files and the requests made by those files. Codeberg can be used with JavaScript disabled and you don't have that many extra requests (almost everything is rendered server-side).

The transferred part is for the gzipped transfer. That makes sense if the bulk of the data is HTML (I have not checked).

I’ve disabled the cache for the network requests.


Oh, thank you for the correction. That was a dumb mistake on my part.


Yeah, that is what I meant. It looks like Github's strategy is to push all the initial data they need to cache, to optimize subsequent requests.


That depends on location. GitHub pages generally take a while to execute all the JavaScript for a usable page even after the HTML is fetched, while pages on Codeberg require much less JavaScript to be usable and are quite usable even without it.

Here are my results for what it's worth

  $ time curl -o /dev/null -s -L 'https://codeberg.org'

  real    0m0.907s
  user    0m0.027s
  sys     0m0.009s

  $ time curl -o /dev/null -s -L 'https://github.com/'

  real    0m0.514s
  user    0m0.028s
  sys     0m0.016s


Sure, it depends on your internet connection. But for Codeberg I see a blank page for 3-4 seconds until it shows something. On a big repo like Zig the delay is even worse.

On Github any page loads gradually and you don't see a blank page even initially.


Try changing tabs when reviewing a PR. 5-10 seconds on basic PRs often


GitHub frontpage is very quick indeed, but browsing repos can sometimes have load times over a full second for me. Especially when it's less popular repos less likely to be in a cache.

