Handle exceptions in hookwrapper post stage (#244) #257
Conversation
Maybe I have missed something related to the calling convention of hook wrappers, so feel free to decline.
Interesting. The "exception-in-finalizer" problem is a classic issue with exceptions. The two approaches I'm aware of are indeed either to suppress or to crash. Python itself chooses to suppress with a printed message if an exception is raised in a `__del__`. Generally, I think finalizers should never fail, so if they raise, it is a bug, and a crash is not a bad way to signal that. I wonder if in your case you did not have a way to fix the problem at the source?

Regarding the patch itself (choosing to suppress): you made it so that if a finalizer raises, the outcome changes to that exception for all subsequent finalizers. Why do you think this is the better behavior? (The alternative is to keep calling the other finalizers with the original result.)
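For illustration, a minimal sketch of that suppression behavior; the `Noisy` class is hypothetical and the output assumes CPython:

```python
# A minimal sketch (hypothetical class, assuming CPython) of the "suppress"
# strategy: an exception escaping a finalizer is printed and swallowed,
# and the program keeps running.
class Noisy:
    def __del__(self):
        raise RuntimeError("boom in finalizer")

n = Noisy()
del n  # stderr: "Exception ignored in: <function Noisy.__del__ ...>" + traceback
print("still running")  # execution continues normally
```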
No, in the case of "crash", code after `yield` in the outer wrapper is not executed:

```
AssertionError: assert ['m1 init', '... 'm1 cleanup'] == ['m1 init', '...p', 'finally']
E At index 3 diff: 'finally' != 'm1 teardown'
E Right contains one more item: 'finally'
E Full diff:
E - ['m1 init', 'm2 init', 'm2 raise', 'm1 teardown', 'm1 cleanup', 'finally']
E + ['m1 init', 'm2 init', 'm2 raise', 'finally', 'm1 cleanup']
```

If you prefer to skip code after `yield`: I do not consider the remaining part of the code to be finalizers. It is just code after the call of the inner hook, so I do not see any problem with exceptions there, just as in ordinary functions. My approach implements a proper stack unwind, and that is what I expect in response to an exception.
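To make the sequencing concrete, here is a small self-contained toy (the `wrapper` generator and driver loop are hypothetical, not this PR's test code) that reproduces the skipped `m1 teardown` under the "crash" behavior:

```python
out = []

def wrapper(name, fail):
    out.append(f"{name} init")
    yield
    if fail:
        out.append(f"{name} raise")
        raise RuntimeError("post-yield stage failed")
    out.append(f"{name} teardown")

gens = []
for name, fail in [("m1", False), ("m2", True)]:
    g = wrapper(name, fail)
    next(g)                               # run the setup stage up to the yield
    gens.append(g)

try:
    try:
        for g in reversed(gens):          # teardown stages, innermost first
            try:
                g.send(None)
            except StopIteration:
                pass                      # normal exit after the single yield
    finally:
        out.append("finally")
except RuntimeError:
    pass                                  # the inner wrapper's exception escaped

print(out)  # ['m1 init', 'm2 init', 'm2 raise', 'finally'] -- no 'm1 teardown'
```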
testing/test_multicall.py (outdated)

```diff
@@ -184,3 +184,79 @@ def m2():
     with pytest.raises(exc):
         MC([m2, m1], {})
     assert out == ["m1 init", "m1 finish"]
+
+
+def test_hookwrapper_teardown_exception():
```
I would adjust this name. Consider `_handle_wrapper_unwind_in_teardown_on_exc`, or something more explicit related to where in the chain the exception is raised.
@maxnikulin thanks for this work.

I believe this is due to ...

I believe this logic is correct. This might actually be the approach we want instead. I have to read through #244 with a clear mind to be more sure of this.
Disregard, I had a misunderstanding of how this works.
Actually I could not get your point. Why do you prefer crash-like behavior to stack unwinding? I do not think that the problem is similar to async generators. Pluggy effectively simulates the call stack of hook wrappers through sequential calls into generators; it is simply the missing error handler that causes the problem.

I believe that hook wrappers are similar to nested calls of regular functions. For convenience they are implemented using generators, but I expect behavior as close to regular functions as possible:

```python
def hook_with_result(arg):
    return arg + 42

def inner_wrapper(arg):
    result = hook_with_result(arg)
    if result > 50:
        # Code after yield in the case of generators.
        # What is the problem with an exception here?
        raise ValueError("Too much")
    return result * 2

def outer_wrapper(arg):
    try:
        return inner_wrapper(arg + 10)
    except ValueError:
        # Why should it not be possible
        # to handle the exception and return a result here,
        # or to convert it to another exception?
        return 13

print(outer_wrapper(100))
```

I even suppose that it should be more reliable to send the exception into the hook wrapper using `throw()`. So if the word "finalizer" is not used for the code after `yield`, ...
I agree with you, but I do not see ...
@maxnikulin sorry, maybe I wasn't being clear enough.

I think that in the non-error case we should be doing this regardless, such that the generator is always explicitly closed.

Yes, for the error case this is 100% the way I would expect exceptions to be propagated/handled. Sorry if I didn't restate this, but I thought it was implied in your comment when I commented with "..." which was in reference to "...".

I'm not sure how else you accomplish this without using ... Further, I would view the situation of nested generators that need to all be unwound as similar to an async generator needing to be unwound from within a coroutine, but it's really not that important. The main point was that PEP 533 (which is not async specific) goes over the issue of "generator lifetimes" as something that should be explicit, as opposed to handled arbitrarily in the future by the GC.

In summary, I think we should make 2 changes:

1. `generator.close()` should always be called
2. ...
@goodboy, I am a bit confused by your words, but the situation becomes clearer if you missed that a generator following the hook wrapper convention (namely, one that performs just a single `yield`) is already finished once an exception has been caught from it. Another source of confusion is (warning! context is severely narrowed down, see the full comment):

@bluetech asked why I have chosen sending the exception instead of the original result inside the outcome. That is why, @goodboy, I decided that you suggest the opposite thing, namely to just call `close()`.

Concerning `close()`: in Python 3 it is possible to avoid ...

To summarize: ...
A unit test is not a problem:

```diff
diff --git a/testing/test_multicall.py b/testing/test_multicall.py
index d4521cd..af986e9 100644
--- a/testing/test_multicall.py
+++ b/testing/test_multicall.py
@@ -156,15 +156,27 @@ def test_hookwrapper_not_yield():
 def test_hookwrapper_too_many_yield():
+    out = []
+
     @hookimpl(hookwrapper=True)
     def m1():
-        yield 1
-        yield 2
+        try:
+            yield 1
+            yield 2
+        finally:
+            out.append("cleanup")

     with pytest.raises(RuntimeError) as ex:
-        MC([m1], {})
+        try:
+            MC([m1], {})
+        finally:
+            out.append("finally")
     assert "m1" in str(ex.value)
     assert (__file__ + ":") in str(ex.value)
+    assert out == [
+        "cleanup",
+        "finally",
+    ]

 @pytest.mark.parametrize("exc", [ValueError, SystemExit])
```

However ...
Yeah, my bad, I'm probably not being clear, but I also think the details of how errors can be handled in generators are nuanced.

No, and again my apologies for not being clear. I was trying to say that we should always be calling `generator.close()`.

Yes, this is exactly one reason why. I also don't see a reason not to explicitly close the wrapper when it is finished being used, instead of waiting for it to be GCed. If you look at the introduction of `generator.close()`:

So there may be a memory consideration as well as just adhering to good resource de-allocation practices.

Yeah, again my apologies, that's not what I meant, even though I could see it being read that way.

Yes, this is my thinking, and I think we should be using modern Python features over our home-brewed ones.

The main problem with this is masking any error that was raised in the hook call that downstream wrappers might be expecting to handle. Being able to distinguish between errors from the hook call and errors from wrappers is likely desirable.
Huh, interesting. I would have not expected that to work?

```
In [1]: def gen():
   ...:     try:
   ...:         yield 10
   ...:         yield 20
   ...:     finally:
   ...:         print('finalizer')

In [2]: g = gen(); next(g)
Out[2]: 10

In [3]: try:
   ...:     g.send(None)
   ...:     print('uh oh >1 yield')
   ...:     raise ValueError
   ...: except StopIteration:
   ...:     print('err: {}'.format(str(err)))
uh oh >1 yield
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-5f6680eec8f0> in <module>
      2     g.send(None)
      3     print('uh oh >1 yield')
----> 4     raise ValueError
      5 except StopIteration:
      6     print('err: {}'.format(str(err)))

ValueError:
```

When I run this code the `finalizer` line is not printed.
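A possible follow-up sketch (assuming plain CPython generator semantics; not part of the original snippet): the suspended generator's `finally` only fires once the generator is closed or garbage-collected, not when the extra `send()` succeeds.

```python
def gen():
    try:
        yield 10
        yield 20
    finally:
        print('finalizer')

g = gen()
next(g)       # -> 10
g.send(None)  # -> 20; still suspended at the second yield, no 'finalizer' yet
g.close()     # GeneratorExit delivered at the second yield -> prints 'finalizer'
```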
I have added `close()`.
@maxnikulin of course, but this is not something we have control over since it's built into Python :) This is looking good, but I would like some feedback from other core devs.
It seems that master should be updated to fix new flake8 warnings.
Sorry for the late feedback -- this looks like a great solution to #244 to me.
Agreed that this has very little to do with the generalized generator finalization problem, async or otherwise. pluggy already has logic to tear down the stack of hookwrappers; it just needs a fix such as this one so it actually executes that code even if one of the inner wrappers raises an exception.
src/pluggy/callers.py (outdated)

```diff
@@ -90,13 +90,20 @@ def _wrapped_call(wrap_controller, func):
     try:
         next(wrap_controller)  # first yield
     except StopIteration:
+        wrap_controller.close()
```
It isn't necessary to `close()` after you've received StopIteration (or any other exception) from a call to `send()` or `next()`, since that means the generator is already exhausted. (It's harmless though.)
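A quick self-contained check of that claim (assuming standard CPython generator semantics):

```python
def one_shot():
    yield 1

g = one_shot()
next(g)            # -> 1
try:
    next(g)        # raises StopIteration: the generator is exhausted
except StopIteration:
    pass
g.close()          # harmless no-op on an already-exhausted generator
```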
Indeed. I'm now starting to wonder if there's any benefit other than triggering the `finally:` in the case where the wrapper was written with more than one `yield` (a mistake by the user).
as far as i understood it's to ensure the generator cleans up when there was NO stop, and it's not clear if the generator is actually exhausted
Normally this branch is executed if the generator finishes before yielding, due to some bug in the wrapper. Notice that it still should be a generator, and `yield` should be present somewhere in the function body; see `test_multicall_hookwrapper_yield_not_executed` below. So the generator is almost certainly already exhausted and all `finally` clauses have executed already.

I am unsure if `close()` could do something really useful here. I added it with the hope that the Python implementation might be more aggressive in trying to release resources of pathological generators:

```python
def bad_generator():
    try:
        raise StopIteration()
    finally:
        yield
```

However such code is not valid: https://docs.python.org/3/library/exceptions.html#StopIteration

> Changed in version 3.7: Enable PEP 479 for all code by default: a `StopIteration` error raised in a generator is transformed into a `RuntimeError`.

If you believe that `close()` is confusing here, I could remove it from this function and from the code handling "modern"-style hook wrappers.
At the point where I left this comment, the generator has raised StopIteration. That means it's exhausted. An exception can't escape from a generator if there's anything the generator still might yield. These uses (`close()` after catching an exception raised out of the generator, including StopIteration) are superfluous.

The PR also uses `close()` to deal with cases where the generator has yielded too many times. That use is valid and useful. Note that `close()` still might not succeed in closing the generator, if the generator yields in a `finally` block, but that's a bug in the generator. In such cases `close()` will raise RuntimeError.
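A sketch of that failure mode (the `stubborn` generator is hypothetical):

```python
def stubborn():
    try:
        yield 1
    finally:
        yield 2  # yielding while GeneratorExit is pending is illegal

s = stubborn()
next(s)            # -> 1; suspended inside the try block
try:
    s.close()      # throws GeneratorExit; the finally yields instead of exiting
except RuntimeError as e:
    print(e)       # "generator ignored GeneratorExit"
```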
> At the point where I left this comment, the generator has raised StopIteration. That means it's exhausted.

This is where I was initially ignorant of the fact that `finally:` blocks are indeed triggered on a single-`yield` generator that has received a `StopIteration`.

> That use is valid and useful. Note that close() still might not succeed in closing the generator, if the generator yields in a finally block, but that's a bug in the generator. In such cases close() will raise RuntimeError.

Yeah, I'm thinking catching these cases is a good approach?

@oremanj do you think there's any merit to consideration for the GC discussion in the introduction of `generator.close()`? It seems to me that calling `.close()` will avoid any weird errors triggered from `__del__()` in Python's future as well.
OK, my example of a pathological generator was wrong even for Python 2. It is possible to yield more values despite attempts to close a generator or to send another exception into it, but after some exception is caught from a generator, it is impossible to get more values and it is completely exhausted.
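A small demonstration of that exhaustion rule (assuming Python 3 semantics):

```python
def g():
    yield 1
    raise ValueError("escapes to the caller")
    yield 2  # unreachable: the generator is dead once ValueError escapes

it = g()
next(it)                      # -> 1
try:
    next(it)                  # ValueError propagates out of the generator
except ValueError:
    pass
print(next(it, "exhausted"))  # prints "exhausted"; no more values possible
```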
My opinion on the current PR is that it's problematic. In #257 (comment) @maxnikulin nicely explains their point of view: that the hookwrappers are analogous to nested function calls, and hence that an exception thrown in the "after yield" section should propagate to the upper wrappers and eventually to the hook caller. But as mentioned before, I somewhat disagree with this viewpoint. From my experience in pytest, I view the "before-yield" and "after-yield" sections as setup/teardown sections. And specifically, if setup or teardown code throws, I consider it a bug, and I do not expect it to propagate.

Considering the issue at hand, #244, I think that the proper fix is to introduce an explicit ... The idea in #260 would change the perspective to that of @maxnikulin, but in current pluggy I expect that an explicit action on the ... As I mentioned in my previous comment, my preference is to "crash" - in practice it would mean throwing some ...

Regarding the discussion on ...
I should note that this PR is definitely an improvement on the current pluggy, because current pluggy also propagates, but just doesn't do proper teardown, from what I understand. But I do not see this as a fix for #244, and would prefer that pluggy changed to a "crash" behavior as a separate matter.
@bluetech As the person who filed #244, I do see this PR as a fix for #244. :-) AFAICT, the reason why hooks use this `_Result` object, instead of following ... I think insisting on a new ...

Considering `return (yield) + 10` versus `result = yield; result.force_result(result.get_result() + 10)`, does the latter one really look preferable to you?
Thanks for the comments @oremanj!

Yes, I don't know the history, but that's probably why. Old Tornado code used a hack for this -- instead of ...

That's a good point. However, are you aware of any plugins which try to change the exception raised, or even suppress it? Looking at pytest itself, and some corpus of 70 plugins I have checked out, I only found one case in pytest-asyncio, but it looks like it was just removed as well. So because the exception is not modified, it is a question of interpretation whether the exception is propagated, or the exception of the result is used. The pluggy docs currently only say: "If any hookimpl errors with an exception no further callbacks are invoked and the exception is packaged up and delivered to any wrappers before being re-raised at the hook invocation point". I'd say this leaves the behavior of a hook raising unspecified, and the feature of altering the exception is currently unsupported.

I'd be interested to see some examples.

I think it is not strictly better, because a buggy hookwrapper, which has an accidental ...

Of course the first one looks much nicer. But it is just not analogous with current pluggy, which is

```python
result = yield
result.force_result(result.get_result() + 10)
```

not

```python
return (yield) + 10
```

I think one way to resolve this question would be: ...
I've written one -- it was for an internal project at my previous employer, so I can't show it here, but I don't think it's an exotic use case. I wound up needing to assign to ...

How is a buggy hookwrapper meaningfully different from a buggy hook that's not a wrapper? Yeah, if your hookwrapper has a bug that results in it throwing an exception, it might not be clear whether that exception was thrown intentionally or not. So you look at the traceback and figure it out. Same as you would do if your non-wrapper hook threw an exception. :-)

I'd support allowing a ... I think removing ...
Frankly speaking, I completely missed that hook wrappers could be used for setup and teardown. So it turns out that hook wrappers could have 2 different variants of usage, with distinct expected behavior.

On the other hand, I am tempted to say that teardown code must be in a `finally` clause.

As an alternative, other than ...

Concerning docs: I did not notice unspecified behavior and expected that the exception is delivered to all wrappers called before. The ability to alter the exception is in some contradiction with that statement. On the other hand, I have faced the issue playing with pytest. I was trying to achieve a convenient implementation of ...

To summarize: ...
btw @maxnikulin I fixed the flake8 warnings. I also still need to go through the most recent discussion.
See https://github.com/pytest-dev/pytest-subtests/pull/27/files#diff-d53084e24bba305b8e814d2a0a513397R248 for an example where an exception in post-yield code could be useful to mark a pytest test as failed.
So we've removed Python 2 support in master, and are gearing up to a 1.0.0 release (which pytest 6.0.0 will hopefully depend on). So I'm trying to hookwrap my mind around this again (sorry for the verbosity!).

For the sake of understanding, let's take a simple hookspec, not using firstresult, which has 3 non-wrapper impls and 3 wrapper impls. Code for the example:

```python
from pluggy import HookspecMarker, HookimplMarker, PluginManager

hookspec = HookspecMarker("example")
hookimpl = HookimplMarker("example")

pm = PluginManager("example")

class Hooks:
    @hookspec
    def myhook(self, arg):
        pass

class Plugin1:
    @hookimpl
    def myhook(self, arg):
        print("call1()")

class Plugin2:
    @hookimpl
    def myhook(self, arg):
        print("call2()")

class Plugin3:
    @hookimpl()
    def myhook(self, arg):
        print("call3()")

class Plugin4:
    @hookimpl(hookwrapper=True)
    def myhook(self, arg):
        print("setup4()")
        yield
        print("teardown4()")

class Plugin5:
    @hookimpl(hookwrapper=True)
    def myhook(self, arg):
        print("setup5()")
        yield
        print("teardown5()")

class Plugin6:
    @hookimpl(hookwrapper=True)
    def myhook(self, arg):
        print("setup6()")
        yield
        print("teardown6()")

pm.add_hookspecs(Hooks)
pm.register(Plugin1())
pm.register(Plugin2())
pm.register(Plugin3())
pm.register(Plugin4())
pm.register(Plugin5())
pm.register(Plugin6())
pm.hook.myhook(arg=10)
```

This prints:

```
setup6()
setup5()
setup4()
call3()
call2()
call1()
teardown4()
teardown5()
teardown6()
```
I tried to formulate the current semantics, and this is what I came up with:

```python
results, exc = [], None
try:
    setup6(arg)
except BaseException as e:
    exc = e
else:
    try:
        setup5(arg)
    except BaseException as e:
        exc = e
    else:
        try:
            setup4(arg)
        except BaseException as e:
            exc = e
        else:
            try:
                results.append(call3(arg))
                results.append(call2(arg))
                results.append(call1(arg))
            except BaseException as e:
                exc = e
teardown4(arg, results, exc)
teardown5(arg, results, exc)
teardown6(arg, results, exc)
return results, exc
```

This PR will change it to this:

```python
results, exc = [], None
try:
    setup6(arg)
    try:
        setup5(arg)
        try:
            setup4(arg)
            try:
                results.append(call3(arg))
                results.append(call2(arg))
                results.append(call1(arg))
            except BaseException as e:
                exc = e
            teardown4(arg, results, exc)
        except BaseException as e:
            exc = e
        teardown5(arg, results, exc)
    except BaseException as e:
        exc = e
    teardown6(arg, results, exc)
except BaseException as e:
    exc = e
return results, exc
```

I'd say the current behavior is the worst of all worlds. It both discards the other teardowns, and propagates the exception as-is. In particular, I'm convinced by @oremanj's point about the common usage of the `result = (yield).get_result()` pattern without any try/finally, which is very common in pytest, also in its docs, and is currently buggy. We can't break that, and checking that the exception is the same is really ugly. So my conclusion is that this PR is a good idea.

However, I feel that the discrepancy with how a hookwrapper would change the return value (...):

```python
results, exc = [], None
try:
    setup6(arg)
    try:
        setup5(arg)
        try:
            setup4(arg)
            try:
                results.append(call3(arg))
                results.append(call2(arg))
                results.append(call1(arg))
            except BaseException as e:
                exc = e
            results = teardown4(arg, results, exc)
        except BaseException as e:
            exc = e
        results = teardown5(arg, results, exc)
    except BaseException as e:
        exc = e
    results = teardown6(arg, results, exc)
except BaseException as e:
    exc = e
return results, exc
```

This can be done with the existing ... WDYT? We can leave the ...

In the meantime @maxnikulin I think you can safely rebase now.
The branch currently has no merge/rebase conflicts with master. I do not think that propagation of the exception will cause real additional problems, since currently the finalizing part of the outer wrappers is not invoked at all in the case of an error in an inner wrapper. Currently the teardown call is just missing; with these changes, I believe that replacing the result with the exception is the only straightforward way following the principle of least surprise. I do not see a clear way to distinguish exceptions generated by a normal hook and by an inner wrapper; that is why I expect more confusion from a result array and a teardown exception passed simultaneously, or from hiding teardown exceptions from outer wrappers.

Recently I have tried to improve the pytest-subtests plugin. I would not mind having the ability to make a list of exceptions the outcome of a test with failed subtests, but it should be handled inside pytest hooks, not in pluggy. The complication is that the list of exceptions could be generated by a single hook(wrapper). We could speculate that the generator interface in principle allows a list of outcomes from a single wrapper, but it should not be considered seriously. It is even possible to allow ordinary hooks to generate a list of mixed result/exception outcomes (something like ...).

So there are plenty of possibilities to mix results and exceptions in a hook call outcome. For me the most logical one is to replace the result with the exception in the teardown handler, as an unambiguous sign that the call as a whole has failed (either in a regular hook or in a wrapper).
A couple of questions/suggestions, but otherwise it LGTM!
src/pluggy/callers.py (outdated)

```diff
@@ -109,6 +109,7 @@ def _multicall(hook_impls, caller_kwargs, firstresult=False):
             next(gen)  # first yield
             teardowns.append(gen)
         except StopIteration:
+            gen.close()
```
Can you describe why this is needed? (I think it was discussed before but I'm still not sure.)

BTW, I think it will be nice to move the `gen = hook_impl.function(*args)` line to before the `try`, and the `teardowns.append(gen)` to after the `try`, because it's good to keep the scope of `try` as focused as possible.
This code is executed for pathological hooks only, so I do not think it is really important. A real step to improve diagnostics would be to add a check that a hookwrapper is a generator at the initialization step, before any attempt to run hooks. However, `gen.close()` is related to a bad generator, so it is another case.

I do not mind moving the `gen =` line above the `try`. A really defensive style would require `gen.close()` in response to any other exception, including an improbable failure of `teardowns.append()`.

`close()` was added here in response to the comments of @goodboy:

> In summary, I think we should make 2 changes: `generator.close()` should always be called

I missed that other related comments have been edited: #257 (comment), #257 (comment).

I do not have a strong opinion concerning `close` here. It makes no harm. In principle it might allow the interpreter implementation to release resources earlier, but I am not sure. Taking into account #257 (review), I have realized just now that there are more objections than arguments in support of the change. Actually I was waiting for more opinions from developers.
> A real step to improve diagnostics is to add a check that hookwrapper is a generator on the initialization step before attempts to run hooks.

I think that's a good idea; sent #282 which adds this validation.

> Really defensive style would require gen.close() in response to any other exception, including improbable failure of teardowns.append().

I think we can assume that `teardowns.append` does not fail.

> I do not have strong opinion concerning close here. It makes no harm. In principle it might allow interpreter implementation to release resources earlier but I am not sure

The docs say:

> close() does nothing if the generator has already exited due to an exception or normal exit.

This is the case here, so I think that the `close` is redundant, unless I'm missing something. Similarly to the defensive approach you suggested, IIUC, if the `next(gen)` raises it means the `close()` will do nothing.
Agreed with @bluetech -- the extra `gen.close()` isn't doing anything here.
src/pluggy/callers.py (outdated)

```diff
@@ -127,9 +128,17 @@ def _multicall(hook_impls, caller_kwargs, firstresult=False):
         # run all wrapper post-yield blocks
         for gen in reversed(teardowns):
             try:
```
WDYT about writing it like this instead?

```python
try:
    gen.send(outcome)
except StopIteration:
    # Exited normally after one yield.
    pass
except BaseException:
    # Raised an exception.
    outcome._excinfo = sys.exc_info()
else:
    # Has a second yield.
    try:
        gen.close()
    except BaseException:
        # Ignored GeneratorExit in second yield, then raised.
        # This is a lost cause; ignore and let the GC handle it.
        pass
    _raise_wrapfail(gen, "has second yield")
```

I added some comments which just repeat the code but help because the code is a bit tricky. I flattened the try/except because I think it makes the code a bit clearer. I modified the existing `outcome` instead of creating a new one, because the current code works by passing one mutable `_Result` everywhere and I think it'd be good to preserve this.
The only reason for the creation of a new `_Result` object is to avoid direct usage of the "private" `_excinfo` field. For me it does not matter which variant is used; just confirm that you believe it is worth changing to `_excinfo`. A better alternative is adding a method to `_Result`, something like `force_current_exception()`. I tried to make minimal changes to avoid arguing over such changes of the interface.

Are you assuming removal of the outer `try`? If so, I would object. The exception from the last `raise_wrapfail()` must be trapped into the `outcome` as well. The extra `try` just ensures that there are no holes in the exception-capture logic.

Maybe `KeyboardInterrupt` requires more aggressive treatment with `gen.throw()` to skip teardown code without an explicit `finally`. (I assume that a well-behaved hook does not use `sys.exit()`.)
> Just confirm that you believe that it is worth changing to _excinfo. Better alternative is adding a method to _Result something like force_current_exception(). I tried to make minimal changes to avoid arguing on such changes of the interface.

Right, it's not great to access a private attribute, but since it's internal I don't mind too much. We can add a `_Result` method for that, but I'd make it private, and then I feel it's not really worth the trouble over just setting `_excinfo` :)

> Are you assuming removing of the outer try? If so I would object. Exception from the last raise_wrapfail() must be trapped into the outcome as well.

Right, I missed that. I still somewhat dislike the nested try for the happy path, so I revise my suggestion to this:

```python
try:
    gen.send(outcome)
    # Has a second yield.
    try:
        gen.close()
    except BaseException:
        # Ignored GeneratorExit in second yield, then raised or yielded (RuntimeError).
        # This is a lost cause; continue and let the GC handle it.
        pass
    _raise_wrapfail(gen, "has second yield")
except StopIteration:
    # Exited normally after one yield.
    pass
except BaseException:
    # Raised an exception.
    outcome._excinfo = sys.exc_info()
```
No, I am against ignoring an exception raised in response to `gen.close()`.

If `gen.close()` in a finally block is really offensive for you, then, if I have not missed something, it should be:

```python
try:
    gen.send(outcome)
    # Second yield is an error; close() is not necessary after StopIteration.
    gen.close()
    _raise_wrapfail(gen, "has second yield")
except StopIteration:
    # Exited normally after one yield.
    pass
except BaseException:
    # Raised an exception.
    outcome._excinfo = sys.exc_info()
```

The second reason for the nested `try` blocks was likely to ensure `gen.close()` independently of the code path. I believed that was what @goodboy was asking for.
OK, that LGTM too. I would just change the comment to:

```python
# Has a second yield.
# Try to close it immediately instead of leaving it to the GC.
# Note that if the generator is misbehaved and ignores `GeneratorExit`, the `close()` may raise.
```

> Second reason for nested try blocks was likely ensuring gen.close() independently of code path

I think doing `close` when it's unnecessary obscures the only time it is necessary (second yield), so my preference is to do it only then. @goodboy might still think differently though.
A new `_Result` made from the exception, however, releases the result list, unlike direct modification of `_excinfo`.

Right, I missed that you updated it. Another rebase would still be good, because some things changed since you last pushed and it will be good to validate CI on top of current master.

I'm not familiar with pytest-subtests, but maybe you mean that in a similar way to how calling a hook (without `firstresult`) ...
It is rather a brainstorm-like idea than a real use case yet; that is why it does not deserve an issue. Anyway, it contradicts ...

There are several attempts to implement non-fatal (soft) asserts. It is useful to report several issues with a compound object in a single test. The idea crystallized in pytest-check, but pytest-subtests has a quite similar context manager:

```python
def test_some_object():
    obj = make_some_object()
    with check: assert obj.field == 3  # a failure here does not block the following asserts
    with check: assert check_invariant1(obj)
```
Force-pushed from 36961c8 to bdc131a.
I ran pytest's test suite (current master) against this PR -- all green.
@bluetech I think your summary in #257 (comment) is spot-on and I agree with everything you wrote there.
@oremanj wrote: ...

I am in doubt concerning the following in the mentioned comment and the latest code block there. Do you mean that the original results should be propagated to outer wrappers despite an exception in an inner one, or have I managed to convince you that the current changes do not make things worse by exposing the exception to outer wrappers?
My read was: current wrappers use ...
You managed to convince me! @oremanj explained it better, sorry for being unclear. I think this is ready to go. Thanks @maxnikulin for persisting, and also @oremanj for weighing in. I'll ping @RonnyPfannschmidt and @goodboy - hopefully after a long discussion we covered the bases and it's ready for a final review.
One thing I forgot, @maxnikulin: it would be good to add a changelog entry explaining how the semantics changed, though that can be done later as well. (See https://github.com/pytest-dev/pluggy/tree/master/changelog on how to do that.) It seems that pluggy doesn't use a ...
I could remove the commit touching the docs if you have an idea how to make the requirement for try-finally in setup/teardown hook wrappers more visible and clear.
Demonstrate that an explicit `Generator.close()` could suppress undefined behavior with respect to the moment when the generator finalizer is called. A generator could resist attempts to close it, so one test is marked as xfail.

Previously only the case of a non-generator hook wrapper was tested.
Force-pushed from f73a64d to 9532856.
It seems #244 is still in place. I had hoped that nobody was against this pull request in the end, but it was not merged before the 1.0 release. I have rebased my branch onto the current main. Is there any problem with the fix?
my current understanding is that we want to eventually merge this, but i have to admit that i'll likely need a full half day to get truly into what this means at the api level. your pr touches a topic that has some potentially tricky/painful implications, and it's hard to allocate enough time to give it due diligence for some of us. (structurally this is actually something i'd like to get in before starting to work on considerations for async support)
This change has been replaced by/integrated into #389. Thanks!
Make hook wrappers more reliable by calling the remaining teardown stages even if an exception is thrown in some wrapper during this step. Currently such code could be called too late, when the garbage collector deletes the generator objects.

Fixes #244
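A self-contained toy of the fixed behavior described above (the `Outcome` class, `wrapper` generator, and driver loop are hypothetical stand-ins, not pluggy's actual internals): an exception raised in one wrapper's post-yield stage is captured into the outcome while the remaining teardown stages still run.

```python
import sys

class Outcome:
    """Stand-in for pluggy's _Result: holds either a result or exception info."""
    def __init__(self):
        self.excinfo = None

def wrapper(name, log, fail=False):
    log.append(f"{name} setup")
    outcome = yield                       # post-yield stage starts on send()
    log.append(f"{name} teardown (exc={outcome.excinfo is not None})")
    if fail:
        raise RuntimeError(f"{name} failed")

log, teardowns = [], []
for name, fail in [("outer", False), ("inner", True)]:
    gen = wrapper(name, log, fail)
    next(gen)                             # run the setup stage
    teardowns.append(gen)

outcome = Outcome()
for gen in reversed(teardowns):           # innermost teardown first
    try:
        gen.send(outcome)
    except StopIteration:
        pass                              # wrapper finished after its single yield
    except BaseException:
        outcome.excinfo = sys.exc_info()  # capture and keep unwinding

print(log)
# ['outer setup', 'inner setup',
#  'inner teardown (exc=False)', 'outer teardown (exc=True)']
```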