Commit e513d4ce authored by Nikolay Shirokovskiy, committed by Vladimir Davydov

core: fiber pool shutdown

Fiber pool shutdown finishes all idle fibers. Any message processing
is finished earlier, on client fiber shutdown.
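
To illustrate the idea, below is a minimal C sketch of a pool whose
shutdown wakes every idle worker so it can finish. Plain pthreads and a
condition variable stand in for fibers and the pool's wakeup machinery;
all names are hypothetical, not the real Tarantool fiber pool API.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define POOL_SIZE 4

static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t pool_cond = PTHREAD_COND_INITIALIZER;
static bool pool_is_shutting_down = false;

/*
 * An idle worker sleeps until it is woken up. Only shutdown is modelled
 * here: messages are assumed to be already drained, just as the patch
 * finishes message processing earlier, on client fiber shutdown.
 */
static void *
worker_f(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pool_mutex);
	while (!pool_is_shutting_down)
		pthread_cond_wait(&pool_cond, &pool_mutex);
	pthread_mutex_unlock(&pool_mutex);
	puts("idle worker finished");
	return NULL;
}

/* Pool shutdown: wake all idle workers and wait for each to finish. */
static void
pool_shutdown(pthread_t *workers, int count)
{
	pthread_mutex_lock(&pool_mutex);
	pool_is_shutting_down = true;
	pthread_cond_broadcast(&pool_cond);
	pthread_mutex_unlock(&pool_mutex);
	for (int i = 0; i < count; i++)
		pthread_join(workers[i], NULL);
}

int
main(void)
{
	pthread_t workers[POOL_SIZE];
	for (int i = 0; i < POOL_SIZE; i++)
		pthread_create(&workers[i], NULL, worker_f, NULL);
	pool_shutdown(workers, POOL_SIZE);
	return 0;
}
```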

We need some changes in the shutdown order to make fiber pool shutdown
possible. First we need to move stopping of iproto threads from the
free step to the shutdown step. The issue is that we want to destroy
the "tx" endpoint, which requires all pipes connected to it to be
destroyed first. There are pipes in iproto threads that are connected
to "tx". Currently we destroy the pipes at the free step, and at this
point, as there is no event loop in the tx thread,
`cbus_endpoint_destroy` can't receive notifications that the pipes are
destroyed.
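
A compilable sketch of this part of the reordering; every function
below is a hypothetical stub standing in for the real shutdown and free
routines, not the actual implementation.

```c
/*
 * Hypothetical stubs illustrating the first reordering: iproto threads
 * are stopped at the shutdown step, while the tx event loop is still
 * running, so the "tx" endpoint can learn that its pipes are gone and
 * be destroyed; only then, at the free step, is the loop torn down.
 */
#include <stdio.h>

static void
iproto_threads_shutdown(void)
{
	puts("iproto threads stopped, their pipes to \"tx\" destroyed");
}

static void
tx_endpoint_destroy(void)
{
	/* Safe now: no pipe is connected to the endpoint anymore. */
	puts("\"tx\" endpoint destroyed");
}

static void
tx_event_loop_stop(void)
{
	puts("tx event loop stopped");
}

int
main(void)
{
	/* Shutdown step: the tx event loop is still alive here. */
	iproto_threads_shutdown();
	tx_endpoint_destroy();
	/* Free step: tear the loop down only after the endpoint. */
	tx_event_loop_stop();
	return 0;
}
```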

Originally we put stopping of iproto threads at the free step because
we didn't have client fiber shutdown. So it was convenient to have a
working `net_pipe` so that client fibers could use the iproto API
without adding extra logic to them. Now I guess it makes sense to stop
client fibers before iproto shutdown. This is the second change in the
shutdown order.

There is another reason why we had iproto shutdown before client fiber
shutdown. In the process of iproto shutdown we close connections first
and then cancel all requests in progress. This way clients do not
receive unexpected `FiberIsCancelled` errors in the process of server
shutdown. After the patch this is no longer so. Well, we could close
connections as an extra step before client fiber shutdown, but let's
leave it this way. Good clients can subscribe to server shutdown and
prepare for it. Otherwise they may receive `FiberIsCancelled` for their
requests, which looks sensible.
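
For illustration, a hypothetical client-side sketch (not any real
connector API): a good client reacts to the server's shutdown
announcement and drains its in-flight requests, while a client that
ignores it should treat a cancellation error on an in-flight request as
an expected outcome.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical helper; a real client would use its connector's
 * watcher and request APIs instead.
 */
static bool
server_announced_shutdown(void)
{
	return true;
}

int
main(void)
{
	if (server_announced_shutdown()) {
		/*
		 * Good client: stop sending new requests and drain the
		 * in-flight ones before the server goes away.
		 */
		puts("draining in-flight requests");
	} else {
		/*
		 * A client that ignores the announcement should expect
		 * `FiberIsCancelled` for requests that are still in
		 * progress when the server shuts down.
		 */
		puts("in-flight request cancelled: expected on shutdown");
	}
	return 0;
}
```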

It also makes sense now to move watcher and client fiber shutdown to
`box_shutdown`, as we can both use watchers and create client fibers
without starting a box.

While at it, also drop a note in the code on why we shut down watchers
even before client fiber shutdown.
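
Putting it together, a hypothetical sketch of the resulting order
inside `box_shutdown`; the names are placeholders rather than the real
box and iproto functions.

```c
/*
 * Hypothetical placeholders sketching the resulting shutdown order;
 * these are not the real box/iproto routines.
 */
#include <stdio.h>

static void
watcher_shutdown(void)
{
	puts("watchers shut down (before client fibers, see the note)");
}

static void
client_fiber_shutdown(void)
{
	puts("client fibers finished, message processing done");
}

static void
iproto_shutdown(void)
{
	puts("iproto threads stopped, pipes to \"tx\" destroyed");
}

static void
fiber_pool_shutdown(void)
{
	puts("fiber pool: idle fibers finished");
}

int
main(void)
{
	/* box_shutdown() now also hosts watcher and client fiber shutdown. */
	watcher_shutdown();
	client_fiber_shutdown();
	iproto_shutdown();
	fiber_pool_shutdown();
	return 0;
}
```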

Part of #9722

NO_CHANGELOG=internal
NO_DOC=internal
parent f944953e