
> This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.

> The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.

Hahaha. It is certainly not a fundamental misunderstanding.

All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.

> Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.

You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the enclosing function rather than the enclosing scope, you'd probably want a scope guard that you instantiate at the beginning of the function, holding a LIFO queue of std::functions that you can push to throughout the function. It seems workable to me, though not particularly elegant to use. But can you emulate `finally`? Again, no. FTA:

> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost. Update: Adam Rosenfield points out that Python 3.2 now saves the original exception as the context of the new exception, but it is still the new exception that is thrown.

> In C++, an exception thrown from a destructor triggers automatic program termination if the destructor is running due to an exception.

C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, and why I have spent a lot of my time on -fno-exceptions (among many other reasons).

> The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at ...

Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter. You can typically do that in `finally`.

When Java introduced `finally` (I do not know if Java was the first language to have it, though it certainly must have been early) it was intended for just resource cleanup, and indeed, I imagine most uses of finally ever were just for closing files, one of the types of resources that you would want to be scoped like that.

However, in my experience the utility of `finally` has actually increased over time. Nowadays there's all kinds of random things you might want to do regardless of whether an exception is thrown. It's usually in the weeds a bit, like adjusting internal state to maintain consistency, but other times it is just handy to throw a log statement or something like that somewhere. Rather than break out a scope guard for these things, most of the time when I see this need arise in a C++ program, instead the logic is just duplicated both at the end of the `try` and `catch` blocks. I bet if I search long enough, I could find it in the wild on GitHub search.



> All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.

You are still looking at it backwards. C++ chose to tie user-defined object lifetimes to lexical scopes (for objects with automatic storage defined in that scope) via stack-based creation/destruction because it was built on C's abstract machine model. This necessitated implicit function calls to the ctor/dtor, which turned out to be a far more general mechanism, usable for scope-based control via function calls.

But the lifetime of a user-defined object allocated on the heap is not limited to a lexical scope, so there the connection between lexical scope and object lifetime does not exist; the ctor/dtor now run synchronously with the calls to new/delete instead.

So you have two distinct things, viz. lexical scope and object lifetime, which may or may not be connected. This is why I insist on disambiguating the two in one's mental model.

Java chose the heap-based object lifetime model for all user-defined types, so there is no connection between lexical scope and object lifetimes. It is because of this that Java had to provide the finally block to give some sort of lexical-scope control even though it is GC-based. The Java object model is also the reason that finalize in Java is fundamentally different from the dtor in C++, which I had pointed out earlier.

> You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, not particularly elegant to use.

For lexical scopes you don't need anything new in C++; you can just use RAII at different levels with various techniques. However, to make the intent even clearer, the upcoming C2Y standard has a proposal for defer as syntactic sugar (https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3489.pdf), and there is a corresponding scope-guard proposal for C++ (https://github.com/bemanproject/scope/blob/main/papers/scope...).

We started this discussion with your claim that dtors and finalize are essentially the same, which I have refuted comprehensively.

Now you want to discuss finally and its behaviour w.r.t. exception handling. In the absence of exceptions, RAII gives you all of the finally-like behaviour.

In the presence of exceptions:

> C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, ... Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter.

This is again a misunderstanding. I had already pointed you to the Termination vs. Resumption exception-handling models, with particular emphasis on Meyer's contract-based approach to their usage. Now read Andrei Alexandrescu's classic article Change the Way You Write Exception-Safe Code — Forever - https://erdani.org/publications/cuj-12-2000.php.html

Both C++ and Java use the Termination model, but because the object models of C++ and Java are so very different (C++ has two kinds of object lifetime, viz. lexical scope for automatic objects and program scope for heap-based objects, with no GC, while Java has only program-scope, heap-based objects reclaimed by the GC), their implementations are necessarily different.

C++ does provide std::nested_exception and the related API (https://en.cppreference.com/w/cpp/error/nested_exception.htm...) to handle chaining of exceptions in any function. However, the ctor/dtor are special functions because of the object-model behaviour detailed above, so the decision was made not to allow a dtor to throw while an uncaught exception is in flight. Note that this does not mean a dtor can never throw (though dtors are implicitly noexcept since C++11), only that the programmer needs to take care about when to throw or not. An exception escaping a dtor during unwinding means there has been a violation of contract and the system is in an undefined state, and hence there is no point in proceeding further.

This is where std::uncaught_exceptions() comes in; the Stack Overflow answer I linked to earlier quotes Herb Sutter:

A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object’s construction.

Now the dtor can detect the in-flight exception and do proper logging/processing before exiting cleanly, instead of throwing.

Finally, note also that Java itself has introduced newer constructs like try-with-resources, which should be used instead of try-finally for resource management.



