Hacker News | cjfd's comments

The problem could also be with the senior...

Even with 'auto' it is very verbose. It can be shorter. Let us put "using TP = std::chrono::steady_clock::time_point;" in some header file to be used in many places. Now you can write

  TP start = TP::clock::now();
  do_some_work(size);
  TP end = TP::clock::now();

I prefer to put the `using` in the block where you need the std::chrono code, which keeps it local and tidy. Putting it in a header declares a global type and is asking for trouble; at the very least, scope it in a namespace or a class.

Some organizations don't like putting using declarations in headers since now you've got a global uniqueness requirement for the name "TP."

You can put the using declaration as a (private) class member or as a local in the function.

how is TP more descriptive than auto here?

I agree that auto should be used as little as possible. There are good uses, though. It is okay to use when the type is trivially inferred from the code. What is auto in "auto ptr = std::make_shared<MyNiceType>();"? Everybody who knows any C++ knows. Also, lambdas do not have a type that can be written down, so it is okay to use auto for them.

I also prefer not to use auto when getting iterators from STL containers. Often I use a typedef for most STL containers that I use. Then one can write MyNiceContainerType::iterator.


Pre-LLM agents, a trick that I used was to type in

  auto var = FunctionCall(...);

Then, in the IDE, hover over auto to show what the actual type is, and then replace auto with that type. Useful when the type is complicated, or is in some nested namespace.


That's what I still do. Replacing auto with deduced type is one of my favorite clangd code actions.

Documentation like Doxygen's is almost the complete opposite of literate programming. The comment you are responding to emphasizes the ability to determine yourself the order in which to present the documentation. Literate programming is writing a document in the first place, from which, as an afterthought, a program can be extracted. Source code with Doxygen is source code from which, as an afterthought, documentation can be extracted. In many cases Doxygen documentation is quite worthless. Very often it is very helpfully documented that the method get_height "gets the height". It is very fragmentary documentation where the big picture is completely missing.

There is also a case where Doxygen-like documentation is needed: when writing a library that is going to be used by many people. But then the Doxygen comments should only be put on the methods that you want those other people to use. And even then there is still the danger that there will be too little higher-level documentation, because the Doxygen is treated as if it were sufficient.

Literate programming is, in my opinion, used only very seldom because keeping an accurate big-picture view of a program up to date is a lot of work. It fits with a waterfall development process where everything the program is supposed to do is known beforehand. It fits well with education. I think it is no coincidence that it was brought to prominence by D. E. Knuth, who is also very famous as an educator.


OK. Fair enough, but remember that Doxygen also analyzes code structure, and can generate things like UML diagrams, and inheritance trees.

Maybe a tool like Rational Rose is more along those lines.

I’ve always been a proponent of writing code in a manner that affords analysis, later. That’s usually more than just adding headerdoc.


This is the kind of development that one needs for safety critical applications. E.g., nuclear power plants or airplane control software. I don't think it is economically feasible for less critical software. It presumes a great degree of stability in requirements which is necessary for such applications.


Indeed, a lot of it is pulled from NASA's "Power of Ten – Rules for Developing Safety-Critical Code." The original tigerbeetle doc cites them explicitly: https://github.com/tigerbeetle/tigerbeetle/blob/ac75926f8868...


I have been programming professionally for 17 years and I think this guideline is fine. I have difficulty imagining a function of 70 lines that would not be better off being split into multiple functions. It is true that a longer function can be allowed when it is just a linear list of steps than when it does multiple different things, but 70 lines is really pushing it.


You don't understand very much about entropy. This reasoning is very, very, very sloppy.


Now I remember why I stopped commenting here.


low-effort comment with ad hominem and zero rationale. fairly toxic.


Now there is your problem. It is only true in the context of grave incompetence, though. I have worked on tickets with 'remove' in the title.


(1) and (2) sound reasonable enough. What do they mean by "dynamic object generation"?


Runtime instantiation.

From the link above:

"Instead of seeing a program as a monolithic structure, the code of a SIMULA program was organized in a number of classes and blocks. Classes could be dynamically instantiated at run-time, and such an instance was called an "object". An object was an autonomous entity, with its own data and computational capabilities organized in procedures (methods), and objects could cooperate by asking another object to perform a procedure (i.e., by a remote call to a procedure of another object)."


One very common way to have decidable dependent types and avoid the paradox is a type hierarchy. I.e., there is not just one star but a countable series of them: *_1, *_2, *_3, .... The rule then becomes that *_i is of type *_(i+1), and that if, in "forall A, B", A is of type *_i and B is of type *_j, then "forall A, B" is of type *_(max(i, j) + 1).


I'm no expert myself, but is this the same as Russell's type hierarchy theory? This is from a quick Google AI search answer:

    Bertrand Russell developed type theory to avoid the paradoxes, like his own, that arose from naive set theory, which arose from the unrestricted use of predicates and collections. His solution, outlined in the 1908 article "Mathematical logic as based on the theory of types" and later expanded in Principia Mathematica (1910–1913), created a hierarchy of types to prevent self-referential paradoxes by ensuring that an entity could not be defined in terms of itself. He proposed a system where variables have specific types, and entities of a given type can only be built from entities of a lower type.


I don't know that much about PM, but from what I read I have the impression that for the purposes of paradox avoidance it is exactly the same mechanism, but that PM in the end is quite different and the lowest universe of PM is much smaller than that of practical type theories.


This is correct but just delays the problem. It is still impossible to type level-generic functions (i.e. functions that work for all type levels).

The basic, fundamental reality is that no type theory has offered an ability to type everything.


>if, in "forall A, B", A is of type *_i and B is of type *_j, then "forall A, B" is of type *_(max(i, j) + 1).

Minor correction: no +1 in forall


Ah is that what Lean does with its type universes?


Yes, it is.
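A minimal Lean 4 sketch of such a universe hierarchy, following the corrected rule (max of the levels, with no +1 for the forall):

```lean
-- Each universe sits inside the next one: Type 0 : Type 1 : Type 2 : ...
#check (Type 0 : Type 1)
#check (Type 1 : Type 2)

universe u v
-- A forall over Type u and Type v lands in Type (max u v); no +1.
#check (A : Type u) → (B : Type v) → (A → B)
```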

