
JsonRpcConnection: Don't drop client from cache prematurely #10210

Open
wants to merge 3 commits into base: master

Conversation

yhabteab
Member

@yhabteab yhabteab commented Oct 31, 2024

PR #7445 incorrectly assumed that a peer that had already disconnected and never reconnected was due to the endpoint client being dropped after a successful socket shutdown. However, the issue at that time was that there was no timeout guard that could cancel the async_shutdown call, which could potentially block indefinitely. Although removing the client from the cache early might have allowed the endpoint to reconnect, it did not resolve the underlying problem. Now that we have a proper cancellation timeout, we can wait until the currently used socket is fully closed before dropping the client from our cache. When our socket termination works reliably, the ApiListener reconnect timer should attempt to reconnect this endpoint after the next tick. Additionally, we now log both before and after socket termination, which may help identify whether it is hanging at any point in between.

  • 16d6c23 Moves the removal of the client from cache to the end of the disconnection coroutine, so that the client gets dropped only after the connection is completely closed.
  • 0479a59 Adds additional logging before and after disconnecting the RPC client connection.
  • ae11193 Fixes a hidden bug that came to light with the changes of 16d6c23 and is described in JsonRpcConnection: Don't drop client from cache prematurely #10210 (comment). What this commit basically does: after the disconnect coroutine is resumed, it first gives the writer coroutine a maximum of 5s to finish on its own. If the writer exceeds that grace period, it is terminated forcibly instead of being waited on endlessly.

Tests

[2024-11-28 10:37:03 +0000] information/ApiListener: New client connection for identity 'mbp-yhabteab' from [::ffff:192.168.202.46]:56152
[2024-11-28 10:37:03 +0000] information/ApiListener: Ignoring JSON-RPC connection from [::ffff:192.168.202.46]:56152. We're already connected to Endpoint 'mbp-yhabteab' (last message sent: 2024-11-28 10:36:35, last message received: 2024-11-28 10:36:19).
[2024-11-28 10:37:13 +0000] information/ApiListener: New client connection for identity 'mbp-yhabteab' from [::ffff:192.168.202.46]:56173
[2024-11-28 10:37:13 +0000] information/ApiListener: Ignoring JSON-RPC connection from [::ffff:192.168.202.46]:56173. We're already connected to Endpoint 'mbp-yhabteab' (last message sent: 2024-11-28 10:36:35, last message received: 2024-11-28 10:36:19).
[2024-11-28 10:37:23 +0000] information/JsonRpcConnection: No messages for identity 'mbp-yhabteab' have been received in the last 60 seconds.
[2024-11-28 10:37:23 +0000] information/ApiListener: New client connection for identity 'mbp-yhabteab' from [::ffff:192.168.202.46]:56204
[2024-11-28 10:37:23 +0000] information/ApiListener: Ignoring JSON-RPC connection from [::ffff:192.168.202.46]:56204. We're already connected to Endpoint 'mbp-yhabteab' (last message sent: 2024-11-28 10:36:35, last message received: 2024-11-28 10:36:19).
[2024-11-28 10:37:33 +0000] information/ApiListener: New client connection for identity 'mbp-yhabteab' from [::ffff:192.168.202.46]:56227
[2024-11-28 10:37:33 +0000] information/ApiListener: Ignoring JSON-RPC connection from [::ffff:192.168.202.46]:56227. We're already connected to Endpoint 'mbp-yhabteab' (last message sent: 2024-11-28 10:36:35, last message received: 2024-11-28 10:36:19).
[2024-11-28 10:37:38 +0000] warning/ApiListener: Removing API client for endpoint 'mbp-yhabteab'. 0 API clients left.
[2024-11-28 10:37:38 +0000] warning/JsonRpcConnection: API client disconnected for identity 'mbp-yhabteab'

@yhabteab yhabteab added enhancement New feature or request area/api REST API labels Oct 31, 2024
@yhabteab yhabteab added this to the 2.15.0 milestone Oct 31, 2024
@cla-bot cla-bot bot added the cla/signed label Oct 31, 2024
@icinga-probot icinga-probot bot added area/distributed Distributed monitoring (master, satellites, clients) bug Something isn't working core/build-fix Follow-up fix, not released yet labels Oct 31, 2024
@yhabteab yhabteab removed bug Something isn't working core/build-fix Follow-up fix, not released yet labels Oct 31, 2024
Member

@Al2Klimov Al2Klimov left a comment

Do we actually need two warnings?

@yhabteab
Member Author

Actually, no. However, I don't know which log level to use for which of these. So, if you have any better ideas, then please suggest!

@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from 99be923 to d51d6a7 on October 31, 2024 11:27
@yhabteab
Member Author

Do we actually need two warnings?

I now have degraded the first log to info.

@yhabteab yhabteab requested a review from Al2Klimov October 31, 2024 12:52
@yhabteab yhabteab added the consider backporting Should be considered for inclusion in a bugfix release label Oct 31, 2024
Member

@Al2Klimov Al2Klimov left a comment

Now that I'm thinking about it: in monitoring in general, you use WARNING as an early indicator (before CRITICAL) that something may not be right. But let's also let Julian weigh in...

@julianbrost
Contributor

I'm not really sure how well that comparison works. But yes, the question for choosing the log severity should also be "does it require attention?". So during a normal reload/config deployment, there ideally shouldn't be any warnings (I do know that we aren't there yet).

For a user, having both messages doesn't sound too helpful: it says pretty much the same thing twice within at most 10 seconds, so I'd go even further and log the first one at notice.

Additionally, we now logs both before and after socket termination, which may help identify if it is hanging at any point in between.

That makes it sound like something that should only be necessary to debug very specific issues, not something that would be useful logging for every user.

@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from d51d6a7 to b7dfdd3 on October 31, 2024 16:08
@yhabteab yhabteab requested a review from Al2Klimov October 31, 2024 16:10
ApiListener::GetInstance()->RemoveAnonymousClient(this);
}
Log(LogNotice, "JsonRpcConnection")
<< "Disconnecting API client for identity '" << m_Identity << "'";
Member

💡 What about logging this...

lib/remote/jsonrpcconnection.cpp
@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from b7dfdd3 to ecc4b43 on November 4, 2024 08:44
@yhabteab yhabteab requested a review from Al2Klimov November 4, 2024 08:44
Al2Klimov
Al2Klimov previously approved these changes Nov 4, 2024
Member

@Al2Klimov Al2Klimov left a comment

👍

@julianbrost
Contributor

Neither the PR description nor the commit message really tell the purpose of moving the RemoveClient() call.

it did not resolve the underlying problem.

Here, underlying problem refers to the possibly blocking TLS shutdown without a timeout, i.e. something that's fixed already and not supposed to be fixed by this PR?

Now that we have a proper cancellation timeout, we can wait until the currently used socket is fully closed before dropping the client from our cache.

Why do we want to wait? Like the PR claims that currently it's premature but not why that's the case and what's improved by changing this.

@yhabteab
Member Author

yhabteab commented Nov 6, 2024

Here, underlying problem refers to the possibly blocking TLS shutdown without a timeout, i.e. something that's fixed already and not supposed to be fixed by this PR?

Something that should have been fixed already!

Now that we have a proper cancellation timeout, we can wait until the currently used socket is fully closed before dropping the client from our cache.

Why do we want to wait? Like the PR claims that currently it's premature but not why that's the case and what's improved by changing this.

Firstly, you don't want to mark an endpoint as disconnected if it actually isn't. Before #7445, the shutdown flow was as it should be, i.e. first the socket is completely shut down, then the endpoint is marked as disconnected. However, this was changed with #7445 due to an incorrect assumption that disconnected clients never reconnecting again was caused by this flow, when in fact the shutdown might've been stuck somewhere in async_shutdown before reaching the end. So if for some reason the shutdown process gets stuck somewhere in between, we would never know, but assume that the node was completely disconnected, as the API listener would then try to reconnect to the client before fully ensuring that the previous socket was properly closed. This PR reverts this to the original form, so that if we see an API client disconnected ... log entry, we know exactly that the shutdown process was successful.

@julianbrost
Contributor

Firstly, you don't want to mark an endpoint as disconnected if it actually isn't.

In contrast, do you want to keep treating an endpoint as connected after the code decided that the connection is dead (for example in the "no messages received" case)? It's more in some in-between state of cleaning up the connection.

So if for some reason the shutdown process gets stuck somewhere in between, we would never know, but assume that the node was completely disconnected, as the API listener would then try to reconnect to the client before fully ensuring that the previous socket was properly closed.

In itself, that doesn't sound bad. Like once the connection is declared dead, it should be fine to establish a new one. It's just problematic if that would turn into a resource leak. Is that the main change here, that if the code had a problem in Disconnect(), this PR would turn an invisible resource leak into a more visible "it fails to reconnect because it somehow hangs in Disconnect()"?

This PR reverts this to the original form, so that if we see an API client disconnected ... log entry, we know exactly that the shutdown process was successful.

Though that's not related to moving the RemoveClient() call but simply due to the change to the logging.

Interestingly, there seems to be quite a connection to #10005, so this might be yet another reason to revive that PR, quoting from that PR's description:

Open questions:

  • Do we want to try to call ForceDisconnect() directly in case a connection is shut down due to a timeout like "no messages received" on JSON-RPC connections?

Now thinking about this in context, my intuition says yes. If we consider a connection failed, why should we bother attempting a clean TLS shutdown instead of just killing it with fire? That would then also change things here as a (forceful) disconnect should be pretty much instant.

@yhabteab
Member Author

yhabteab commented Nov 6, 2024

In contrast, do you want to keep treating an endpoint as connected after the code decided that the connection is dead (for example in the "no messages received" case)?

Yes, generally I would treat an endpoint as connected as long as its socket is not fully shut down, but the no message received case is something different that ideally should not happen that often.

Is that the main change here, that if the code had a problem in Disconnect(), this PR would turn an invisible resource leak into a more visible "it fails to reconnect because it somehow hangs in Disconnect()"?

Yes. If you just drop the client before anything else and log that it's disconnected when in fact it's not, we won't be able to tell for sure if the shutdown is complete afterwards. Unless there's something that we really messed up, the endpoint should be kept as connected for a maximum of 10s after someone requests a disconnect, so it shouldn't cause any other issues.

This PR reverts this to the original form, so that if we see an API client disconnected ... log entry, we know exactly that the shutdown process was successful.

Though that's not related to moving the RemoveClient() call but simply due to the change to the logging.

When looking just at the log entry, then yes, but why should the endpoint be allowed to initiate another connection at all if the current one is not completely closed? I'm just trying to recreate the (at least for me) logical flow as it should be, i.e. first either gracefully or forcibly close the current connection before marking the endpoint as disconnected.

Now thinking about this in context, my intuition says yes. If we consider a connection failed, why should we bother attempting a clean TLS shutdown instead of just killing it with fire?

As I have already talked to you lately about that PR, I'm perfectly happy with it regardless of this one, and forcibly closing such a dead connection doesn't sound like a bad idea too.

That would then also change things here as a (forceful) disconnect should be pretty much instant.

What exactly would change here otherwise? For me, the referenced PR is just something I would include on top of this, but I don't see how those exclude one another.

@julianbrost
Contributor

This PR reverts this to the original form, so that if we see an API client disconnected ... log entry, we know exactly that the shutdown process was successful.

Though that's not related to moving the RemoveClient() call but simply due to the change to the logging.

When looking just at the log entry, then yes, but why should the endpoint be allowed to initiate another connection at all if the current one is not completely closed? I'm just trying to recreate the (at least for me) logical flow as it should be, i.e. first either gracefully or forcibly close the current connection before marking the endpoint as disconnected.

That's what I meant with saying it's in some in-between state. The rest of the code also makes a difference between connected and connecting; likewise there's a difference between disconnecting and disconnected (though I don't think that distinction is made explicitly in the code).

Now the question is whether a new connection should already be attempted while in that disconnecting state. Currently it is; with this PR you suggest waiting until fully disconnected. I think neither is wrong and would probably tend towards the suggested change. You just use quite a strong "should" here, and whether the change is a good idea boils down to how sure we are that it doesn't delay the RemoveClient() call by much more than 10 seconds.

That would then also change things here as a (forceful) disconnect should be pretty much instant.

What exactly would change here otherwise? For me, the referenced PR is just something I would include on top of this, but I don't see how those exclude one another.

Ideally, that forceful disconnect would only be a close syscall [1], so then it wouldn't make a noticeable difference whether you call RemoveClient() before or after it (in contrast to waiting 10 seconds for something to happen).

Footnotes

  1. Though that can't be done manually, Asio needs to be aware of this, so it has to be done using Asio.

@julianbrost
Contributor

And how long is the too long you are referring to?

That's the thing, there's no explicit timeout set. So however long it takes until the kernel decides itself that the socket is dead (which can take hours).

Why do you expect an endpoint to be instantly flagged as disconnected as soon as someone has requested a disconnect? If the current connection is still alive, we obviously need to flush its buffers first before closing it, and to me that's just the natural flow. But what advantages do you hope to gain from immediately cancelling the connection? To initiate a new connection while the current one is still sending data? And for what purpose? I don't get that.

I don't want it, I'm just saying there's a high chance that this PR actually reintroduces the problem described in #7444 (hence the "request changes", so that it doesn't get merged before that is investigated, as it was already approved).

I've already said in my previous comments that the no messages received disconnect is a special case that doesn't actually need a graceful shutdown, but should simply be terminated, as you've already suggested.

Not every disconnect is for this reason, but if there is one for this reason, reconnecting afterwards should work reliably.

@julianbrost
Contributor

I'm just saying there's a high chance that this PR actually reintroduces the problem described in #7444

Indeed it does. I managed to trigger it with this PR by breaking a TCP connection by dropping all of its packets in a firewall (iptables -A INPUT -p tcp -s 172.18.0.33 --sport 49584 -j DROP).

Both nodes then logged that no messages were received in the last 60 seconds, but didn't attempt to reconnect until 15 minutes later (not sure which timeout caused it to unblock; I'm not aware of any related 15-minute timeouts in Icinga 2, so it was probably some kernel timeout).

[2024-11-07 13:20:46 +0100] information/JsonRpcConnection: No messages for identity 'satellite-b-2' have been received in the last 60 seconds.
[2024-11-07 13:35:24 +0100] warning/ApiListener: Removing API client for endpoint 'satellite-b-2'. 0 API clients left.
[...just work queue statistics and dumping program state...]
[2024-11-07 13:35:24 +0100] warning/JsonRpcConnection: API client disconnected for identity 'satellite-b-2'
[2024-11-07 13:35:26 +0100] information/ApiListener: Reconnecting to endpoint 'satellite-b-2' via host 'satellite-b-2' and port '5665'

Is that the main change here, that if the code had a problem in Disconnect(), this PR would turn an invisible resource leak into a more visible "it fails to reconnect because it somehow hangs in Disconnect()"?

So yes, it does exactly that, see also #10005 (comment) (describes the resource leak that happens when doing the same on the current master) 😅

Al2Klimov

This comment was marked as duplicate.

@Al2Klimov Al2Klimov dismissed their stale review November 7, 2024 16:46

Indeed. Too many connections are bad. But none at all, "just" because of a possibly small resource leak, is worse IMAO. Let's stall this.

@julianbrost julianbrost added the stalled Blocked or not relevant yet label Nov 12, 2024
@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from ecc4b43 to 032db7d on November 25, 2024 10:34
PR #7445 incorrectly assumed that a peer that had already disconnected
and never reconnected was due to the endpoint client being dropped after
a successful socket shutdown. However, the issue at that time was that
there was no timeout guard that could cancel the `async_shutdown`
call, potentially blocking indefinitely. Although removing the client from
cache early might have allowed the endpoint to reconnect, it did not
resolve the underlying problem. Now that we have a proper cancellation
timeout, we can wait until the currently used socket is fully closed
before dropping the client from our cache. When our socket termination
works reliably, the `ApiListener` reconnect timer should attempt to
reconnect this endpoint after the next tick. Additionally, we now log
both before and after socket termination, which may help identify if it
is hanging somewhere in between.
@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from 032db7d to 0479a59 on November 27, 2024 09:02
@yhabteab yhabteab added bug Something isn't working and removed enhancement New feature or request stalled Blocked or not relevant yet labels Nov 28, 2024
@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from 64d55df to 44954ce on November 28, 2024 10:15
@yhabteab
Member Author

See the updated PR description!

@@ -238,6 +241,8 @@ void JsonRpcConnection::Disconnect()
m_CheckLivenessTimer.cancel();
m_HeartbeatTimer.cancel();

// In case the writer coroutine is not done yet, e.g. because it got stuck somewhere in async_write
// or async_flush, cancel all operations on the underlying socket to unblock it.
m_Stream->lowest_layer().cancel(ec);
Member

Ok, so in addition to the logging improvements (0479a59) and awaiting an actual, possibly long disconnect before unregistration (16d6c23), you also cancel I/O (44954ce) ...

// The writer coroutine could not finish soon enough to unblock the waiter down below,
// so we have to do this on our own, and the coroutine will be terminated forcibly when
// the ops on the underlying socket are cancelled.
m_WriterDone.Set();
Member

... to shorten the disconnection process once it has taken 5s? But then, shouldn't you call m_Stream->lowest_layer().cancel(ec); instead, so that the writer terminates, and let m_WriterDone.Wait(yc); just do its thing?

Member Author

I can do that, but it shouldn't do anything different!

@yhabteab yhabteab force-pushed the endpoint-client-dropped-early branch from 44954ce to ae11193 on November 28, 2024 12:10
@yhabteab yhabteab requested a review from Al2Klimov November 28, 2024 12:11
Al2Klimov
Al2Klimov previously approved these changes Nov 28, 2024
Member

@Al2Klimov Al2Klimov left a comment

LPTM

{
asio::deadline_timer writerTimeout(m_IoStrand.context(), boost::posix_time::seconds(5));
writerTimeout.async_wait(asio::bind_executor(m_IoStrand, [this](boost::system::error_code ec) {
if (!ec) {
Member

This is so similar to #10254 that you could in fact re-use that class instead of implementing it yourself. Just an idea.

@Al2Klimov Al2Klimov dismissed their stale review December 2, 2024 09:05

I'm afraid the self-made timeout here is indeed similar enough to a previous version of #10254 to share its weakness, as dug up in #10254 (comment) and fixed in 8dba2bb.

@yhabteab yhabteab requested a review from Al2Klimov December 10, 2024 09:57
{
asio::deadline_timer writerTimeout(m_IoStrand.context(), boost::posix_time::seconds(5));
writerTimeout.async_wait(asio::bind_executor(m_IoStrand, [this](boost::system::error_code ec) {
if (!ec) {
Member

I'm afraid I was right with #10210 (comment). One of the recent changes in #10254 was the re-introduced shared atomic bool and its check in addition to !ec. This was done because, once the timeout is due, in a multi-threaded I/O engine the callback is already scheduled to run at the strand's next free moment...

@@ -255,6 +258,15 @@ void JsonRpcConnection::Disconnect()
shutdownTimeout->Cancel();

m_Stream->lowest_layer().shutdown(m_Stream->lowest_layer().shutdown_both, ec);

Member

... i.e. it could be called while async_shutdown() here yields. This is obviously not what we want, so I'd like the same check here. That would make it a 99% copy of #10254, just without unit tests, so... 🤷‍♂️

Member

Alternatively to #10254 you can also use the current Timeout class.

Labels
area/api REST API area/distributed Distributed monitoring (master, satellites, clients) bug Something isn't working cla/signed consider backporting Should be considered for inclusion in a bugfix release