Fix restart_both test
Follow-up for #90 (closed) (!130 (merged)). Restarting both instances of a two-instance cluster doesn't work: after the restart, i1 recovers its raft state but stays in PreCandidate because it can't win an election alone, while i2 is stuck in discovery because i1 answers every .proc_discover call with "leader_id is unknown for node 1".
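The failing scenario boils down to the following sequence (a condensed sketch of the test from the log below; the cluster2 fixture, the Instance helpers, and the retry parameters all come from test/int/conftest.py):

    import funcy

    def test_restart_both(cluster2):
        i1, i2 = cluster2.instances
        i1.terminate()
        i2.terminate()

        @funcy.retry(tries=20, timeout=0.1)
        def wait_alive(instance):
            # The process is up and answers RPC, but raft isn't ready yet.
            assert instance._raft_status().is_ready is False

        i1.start()
        wait_alive(i1)   # i1 ends up in PreCandidate: it can't win an election alone
        i2.start()       # i2's discovery fails: "leader_id is unknown for node 1"
        i1.wait_ready()  # never succeeds: each instance waits for the other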
Test log
$ PICODATA_LOG_LEVEL=verbose pytest -k restart_both
============================= test session starts ==============================
platform linux -- Python 3.10.0, pytest-7.1.2, pluggy-1.0.0 -- /home/rosik/.local/share/virtualenvs/picodata-6sv6l6y-/bin/python
cachedir: .pytest_cache
rootdir: /home/rosik/w/picodata, configfile: pytest.ini, testpaths: test/int
plugins: xdist-2.5.0, clarity-1.0.1, forked-1.4.0
collecting ... collected 17 items / 16 deselected / 1 selected
test/int/test_couple.py::test_restart_both FAILED [100%]
=================================== FAILURES ===================================
______________________________ test_restart_both _______________________________
cluster2 = Cluster("127.0.0.1:3300", n=2)

    def test_restart_both(cluster2: Cluster):
        # Given a cluster of 2 instances - i1, i2
        # When both instances are stopped and then started again
        # Then both can become ready and handle proposals.
        i1, i2 = cluster2.instances
        i1.terminate()
        i2.terminate()

        @funcy.retry(tries=20, timeout=0.1)
        def wait_alive(instance):
            assert instance._raft_status().is_ready is False

        i1.start()
        # This synchronization makes sure the test fails.
        # i1 has already initialized raft node but can't win election yet
        # i2 starts discovery and fails: leader_id is unknown for node 1
        wait_alive(i1)
        i2.start()
>       i1.wait_ready()
test/int/test_couple.py:99:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/decorators.py:45: in wrapper
    return deco(call, *dargs, **dkwargs)
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/flow.py:127: in retry
    return call()
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/decorators.py:66: in __call__
    return self._func(*self._args, **self._kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Instance(i1, listen=127.0.0.1:3301)
    @funcy.retry(tries=30, timeout=0.2)
    def wait_ready(self):
        status = self._raft_status()
>       assert status.is_ready
E       AssertionError: assert False
E        +  where False = RaftStatus(id=1, raft_state='PreCandidate', is_ready=False, leader_id=None).is_ready
test/int/conftest.py:306: AssertionError
---------------------------- Captured stdout setup -----------------------------
i2 | [supervisor:2975090] running StartDiscover
i1 | [supervisor:2975087] running StartDiscover
i1 | [supervisor:2975087] subprocess finished: Ok(Exited(Pid(2975095), 0))
i1 | [supervisor:2975087] subprocess requested rebootstrap
i1 | [supervisor] removing file: /tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.snap
i1 | [supervisor] removing file: /tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.xlog
i1 | [supervisor:2975087] running StartBoot
i2 | [supervisor:2975090] subprocess finished: Ok(Exited(Pid(2975094), 0))
i2 | [supervisor:2975090] subprocess requested rebootstrap
i2 | [supervisor] removing file: /tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000000.snap
i2 | [supervisor] removing file: /tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000000.xlog
i2 | [supervisor:2975090] running StartJoin { leader_address: "127.0.0.1:3301" }
---------------------------- Captured stderr setup -----------------------------
warning: 'tarantool-sys/patches-applied' exists, so patching step is skipped
Finished dev [unoptimized + debuginfo] target(s) in 0.06s
Instance(i1, listen=127.0.0.1:3301) starting...
Instance(i2, listen=127.0.0.1:3302) starting...
i2 | >>>>> start_discover()
i1 | >>>>> start_discover()
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive C> log level 6
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive I> wal/engine cleanup is paused
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive C> log level 6
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive I> wal/engine cleanup is paused
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i2 | 2022-06-01 20:17:36.834 [2975094] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i1 | 2022-06-01 20:17:36.834 [2975095] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i1 | 2022-06-01 20:17:36.839 [2975095] main/103/interactive I> instance uuid 993ad696-699d-4376-9da9-a1844799c1bb
i2 | 2022-06-01 20:17:36.839 [2975094] main/103/interactive I> instance uuid 5a3be4ec-967f-473a-a46c-395024032a71
i2 | 2022-06-01 20:17:36.839 [2975094] main/103/interactive I> tx_binary: stopped
i2 | 2022-06-01 20:17:36.839 [2975094] main/103/interactive I> initializing an empty data directory
i1 | 2022-06-01 20:17:36.839 [2975095] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:36.839 [2975095] main/103/interactive I> initializing an empty data directory
i1 | 2022-06-01 20:17:36.863 [2975095] main/103/interactive V> RAFT: recover {term: 1}
i1 | 2022-06-01 20:17:36.864 [2975095] main/103/interactive I> assigned id 1 to replica 993ad696-699d-4376-9da9-a1844799c1bb
i2 | 2022-06-01 20:17:36.863 [2975094] main/103/interactive V> RAFT: recover {term: 1}
i1 | 2022-06-01 20:17:36.864 [2975095] main/103/interactive I> cluster uuid 9ff5f0e4-d4b6-4472-8030-f405517c9cc8
i2 | 2022-06-01 20:17:36.864 [2975094] main/103/interactive I> assigned id 1 to replica 5a3be4ec-967f-473a-a46c-395024032a71
i2 | 2022-06-01 20:17:36.864 [2975094] main/103/interactive I> cluster uuid 0e40df14-ff04-44d2-92a1-59402730dd13
i1 | 2022-06-01 20:17:36.865 [2975095] snapshot/101/main I> saving snapshot `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.snap.inprogress'
i2 | 2022-06-01 20:17:36.865 [2975094] snapshot/101/main I> saving snapshot `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000000.snap.inprogress'
i2 | 2022-06-01 20:17:36.870 [2975094] snapshot/101/main I> done
i1 | 2022-06-01 20:17:36.870 [2975095] snapshot/101/main I> done
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive I> ready to accept requests
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive I> ready to accept requests
i2 | 2022-06-01 20:17:36.870 [2975094] main/105/gc I> wal/engine cleanup is resumed
i1 | 2022-06-01 20:17:36.870 [2975095] main/105/gc I> wal/engine cleanup is resumed
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive I> set 'log_level' configuration option to 6
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive I> set 'log_level' configuration option to 6
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive I> set 'feedback_enabled' configuration option to false
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive I> set 'replication' configuration option to []
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive I> set 'feedback_enabled' configuration option to false
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive I> set 'replication' configuration option to []
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:36.870 [2975094] main/103/interactive I> set 'log_format' configuration option to "plain"
i2 | 2022-06-01 20:17:36.870 [2975094] main/106/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 22:03:50 2022
i1 | 2022-06-01 20:17:36.870 [2975095] main/103/interactive I> set 'log_format' configuration option to "plain"
i1 | 2022-06-01 20:17:36.870 [2975095] main/106/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 22:05:20 2022
i1 | 2022-06-01 20:17:36.873 [2975095] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:36.873 [2975095] main/103/interactive I> tx_binary: bound to 127.0.0.1:3301
i1 | 2022-06-01 20:17:36.873 [2975095] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3301"
i2 | 2022-06-01 20:17:36.873 [2975094] main/103/interactive I> tx_binary: stopped
i2 | 2022-06-01 20:17:36.873 [2975094] main/103/interactive I> tx_binary: bound to 127.0.0.1:3302
i2 | 2022-06-01 20:17:36.873 [2975094] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3302"
i2 | 2022-06-01 20:17:37.086 [2975094] main/103/interactive tarantool.rs:224 W> net_box_call failed: IO error: failed to fill whole buffer, fn: .proc_discover, peer: 127.0.0.1:3301
i1 | >>>>> start_boot()
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive C> log level 6
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive I> wal/engine cleanup is paused
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i1 | 2022-06-01 20:17:37.161 [2975105] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i1 | 2022-06-01 20:17:37.166 [2975105] main/103/interactive I> instance uuid 68d4a766-4144-3248-aeb4-e212356716e4
i1 | 2022-06-01 20:17:37.166 [2975105] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:37.166 [2975105] main/103/interactive I> initializing an empty data directory
i1 | 2022-06-01 20:17:37.191 [2975105] main/103/interactive V> RAFT: recover {term: 1}
i1 | 2022-06-01 20:17:37.191 [2975105] main/103/interactive I> assigned id 1 to replica 68d4a766-4144-3248-aeb4-e212356716e4
i1 | 2022-06-01 20:17:37.191 [2975105] main/103/interactive I> cluster uuid e0df68c5-e7f9-395f-86b3-30ad9e1b7b07
i1 | 2022-06-01 20:17:37.192 [2975105] snapshot/101/main I> saving snapshot `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.snap.inprogress'
i1 | 2022-06-01 20:17:37.196 [2975105] snapshot/101/main I> done
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> ready to accept requests
i1 | 2022-06-01 20:17:37.196 [2975105] main/105/gc I> wal/engine cleanup is resumed
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'log_level' configuration option to 6
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'feedback_enabled' configuration option to false
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'replication' configuration option to []
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'instance_uuid' configuration option to "68d4a766-4144-3248-aeb4-e212356716e4"
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'log_format' configuration option to "plain"
i1 | 2022-06-01 20:17:37.196 [2975105] main/106/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 22:17:24 2022
i1 | 2022-06-01 20:17:37.196 [2975105] main/103/interactive I> set 'replicaset_uuid' configuration option to "e0df68c5-e7f9-395f-86b3-30ad9e1b7b07"
i1 | 2022-06-01 20:17:37.199 [2975105] main/103/interactive I> >>>>> postjoin()
i1 | 2022-06-01 20:17:37.199 [2975105] main/103/interactive I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/103/interactive V> reset election timeout 0 -> 37 at 0, election_elapsed: 0, timeout: 37, prev_timeout: 0, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/103/interactive I> became follower at term 1, term: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/103/interactive I> newRaft, peers: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, last term: 1, last index: 1, applied: 0, commit: 1, term: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/103/interactive I> RawNode created with id 1., id: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/103/interactive I> this is the only vorer in cluster, triggering election immediately
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop I> starting a new election, term: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop I> became pre-candidate at term 1, term: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop V> reset election timeout 37 -> 31 at 1, election_elapsed: 1, timeout: 31, prev_timeout: 37, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop I> became candidate at term 2, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:37.200 [2975105] main/111/raft_main_loop V> reset election timeout 31 -> 34 at 0, election_elapsed: 0, timeout: 34, prev_timeout: 31, raft_id: 1
i1 | 2022-06-01 20:17:37.201 [2975105] main/111/raft_main_loop I> became leader at term 2, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:37.202 [2975105] main/111/raft_main_loop V> persisted index 2, raft_id: 1
i1 | 2022-06-01 20:17:37.202 [2975105] main/111/raft_main_loop V> committing index 2, index: 2, raft_id: 1
i1 | 2022-06-01 20:17:37.202 [2975105] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:37.202 [2975105] main/103/interactive I> tx_binary: bound to 127.0.0.1:3301
i1 | 2022-06-01 20:17:37.202 [2975105] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3301"
i1 | 2022-06-01 20:17:37.202 [2975105] main C> entering the event loop
Instance(i1, listen=127.0.0.1:3301) is ready
i2 | >>>>> start_join(127.0.0.1:3301)
i1 | 2022-06-01 20:17:37.557 [2975105] main/112/raft_join_loop I> processing batch: ["i2"]
i1 | 2022-06-01 20:17:37.558 [2975105] main/111/raft_main_loop V> persisted index 3, raft_id: 1
i1 | 2022-06-01 20:17:37.558 [2975105] main/111/raft_main_loop V> committing index 3, index: 3, raft_id: 1
i1 | 2022-06-01 20:17:37.558 [2975105] main/111/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {1} } }, learners: {2}, learners_next: {}, auto_leave: true }, raft_id: 1
i1 | 2022-06-01 20:17:37.558 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 2 entries {entry_type: EntryConfChangeV2 term: 2 index: 3 data: "\010\001\022\004\010\002\020\002" context: "\201\245peers\221\210\247raft_id\002\254peer_address\256127.0.0.1:3302\245voter\302\253instance_id\242i2\255replicaset_id\242r1\255instance_uuid\331$24c4ac5f-4981-3441-879c-aee1edb608a6\257replicaset_uuid\331$e0df68c5-e7f9-395f-86b3-30ad9e1b7b07\254commit_index\000"} commit: 3, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.558 [2975105] main/111/raft_main_loop I> initiating automatic transition out of joint configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {1} } }, learners: {2}, learners_next: {}, auto_leave: true }, raft_id: 1
i1 | 2022-06-01 20:17:37.659 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgHeartbeat to: 2, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.660 [2975105] main/114/lua V> Interact with 127.0.0.1:3302 -> IO error: Connection refused (os error 111)
i1 | 2022-06-01 20:17:37.661 [2975105] main/111/raft_main_loop V> persisted index 4, raft_id: 1
i1 | 2022-06-01 20:17:37.661 [2975105] main/111/raft_main_loop V> committing index 4, index: 4, raft_id: 1
i1 | 2022-06-01 20:17:37.661 [2975105] main/111/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, learners: {2}, learners_next: {}, auto_leave: false }, raft_id: 1
i1 | 2022-06-01 20:17:37.662 [2975105] main/112/raft_join_loop I> batch processed: ["i2"], Ok(4)
i2 | 2022-06-01 20:17:37.666 [2975113] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i2 | 2022-06-01 20:17:37.666 [2975113] main/103/interactive C> log level 6
i2 | 2022-06-01 20:17:37.666 [2975113] main/103/interactive I> wal/engine cleanup is paused
i2 | 2022-06-01 20:17:37.666 [2975113] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i2 | 2022-06-01 20:17:37.667 [2975113] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i2 | 2022-06-01 20:17:37.667 [2975113] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i2 | 2022-06-01 20:17:37.685 [2975113] main/103/interactive I> instance uuid 24c4ac5f-4981-3441-879c-aee1edb608a6
i2 | 2022-06-01 20:17:37.685 [2975113] main/103/interactive I> tx_binary: stopped
i2 | 2022-06-01 20:17:37.685 [2975113] main/103/interactive I> tx_binary: bound to 127.0.0.1:3302
i2 | 2022-06-01 20:17:37.685 [2975113] main/103/interactive I> connecting to 2 replicas
i2 | 2022-06-01 20:17:37.691 [2975113] main/113/applier/127.0.0.1:3301 I> remote master 68d4a766-4144-3248-aeb4-e212356716e4 at 127.0.0.1:3301 running Tarantool 2.8.3
i2 | 2022-06-01 20:17:37.691 [2975113] main/114/applier/127.0.0.1:3302 I> remote master 24c4ac5f-4981-3441-879c-aee1edb608a6 at 127.0.0.1:3302 running Tarantool 2.8.3
i2 | 2022-06-01 20:17:37.691 [2975113] main/103/interactive I> connected to 2 replicas
i2 | 2022-06-01 20:17:37.692 [2975113] main/103/interactive I> bootstrapping replica from 68d4a766-4144-3248-aeb4-e212356716e4 at 127.0.0.1:3301
i1 | 2022-06-01 20:17:37.692 [2975105] main/110/main I> joining replica 24c4ac5f-4981-3441-879c-aee1edb608a6 at fd 4, aka 127.0.0.1:3301, peer of 127.0.0.1:39078
i1 | 2022-06-01 20:17:37.700 [2975105] main/110/main I> initial data sent.
i1 | 2022-06-01 20:17:37.700 [2975105] main/110/main I> assigned id 2 to replica 24c4ac5f-4981-3441-879c-aee1edb608a6
i1 | 2022-06-01 20:17:37.702 [2975105] relay/127.0.0.1:39078/101/main I> recover from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.xlog'
i1 | 2022-06-01 20:17:37.703 [2975105] main/110/main I> final data sent.
i2 | 2022-06-01 20:17:37.726 [2975113] main/113/applier/127.0.0.1:3301 I> cluster uuid e0df68c5-e7f9-395f-86b3-30ad9e1b7b07
i2 | 2022-06-01 20:17:37.741 [2975113] main/113/applier/127.0.0.1:3301 I> assigned id 1 to replica 68d4a766-4144-3248-aeb4-e212356716e4
i2 | 2022-06-01 20:17:37.742 [2975113] main/113/applier/127.0.0.1:3301 I> initial data received
i2 | 2022-06-01 20:17:37.742 [2975113] main/113/applier/127.0.0.1:3301 I> assigned id 2 to replica 24c4ac5f-4981-3441-879c-aee1edb608a6
i2 | 2022-06-01 20:17:37.742 [2975113] main/113/applier/127.0.0.1:3301 I> final data received
i2 | 2022-06-01 20:17:37.743 [2975113] snapshot/101/main I> saving snapshot `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000016.snap.inprogress'
i2 | 2022-06-01 20:17:37.747 [2975113] snapshot/101/main I> done
i2 | 2022-06-01 20:17:37.747 [2975113] main/103/interactive V> rotating vylog 0 => 16
i2 | 2022-06-01 20:17:37.747 [2975113] coio V> loading vylog 0
i2 | 2022-06-01 20:17:37.747 [2975113] coio V> done loading vylog
i2 | 2022-06-01 20:17:37.747 [2975113] coio V> saving vylog 16
i2 | 2022-06-01 20:17:37.747 [2975113] coio V> done saving vylog
i2 | 2022-06-01 20:17:37.748 [2975113] main/103/interactive V> done rotating vylog
i2 | 2022-06-01 20:17:37.748 [2975113] main/103/interactive I> ready to accept requests
i2 | 2022-06-01 20:17:37.748 [2975113] main/103/interactive I> synchronizing with 2 replicas
i1 | 2022-06-01 20:17:37.748 [2975105] main/110/main I> subscribed replica 24c4ac5f-4981-3441-879c-aee1edb608a6 at fd 4, aka 127.0.0.1:3301, peer of 127.0.0.1:39078
i1 | 2022-06-01 20:17:37.748 [2975105] main/110/main I> remote vclock {1: 16} local vclock {0: 29, 1: 16}
i2 | 2022-06-01 20:17:37.748 [2975113] main/113/applier/127.0.0.1:3301 I> subscribed
i2 | 2022-06-01 20:17:37.748 [2975113] main/113/applier/127.0.0.1:3301 I> remote vclock {1: 16} local vclock {1: 16}
i2 | 2022-06-01 20:17:37.748 [2975113] main/113/applier/127.0.0.1:3301 I> RAFT: message {term: 1, state: follower} from 1
i1 | 2022-06-01 20:17:37.749 [2975105] relay/127.0.0.1:39078/101/main I> recover from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.xlog'
i2 | 2022-06-01 20:17:37.749 [2975113] main/115/applierw/127.0.0.1:3301 C> leaving orphan mode
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> replica set sync complete
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive C> leaving orphan mode
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> set 'log_level' configuration option to 6
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> set 'feedback_enabled' configuration option to false
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3302"
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> set 'replication' configuration option to ["127.0.0.1:3301","127.0.0.1:3302"]
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive I> set 'instance_uuid' configuration option to "24c4ac5f-4981-3441-879c-aee1edb608a6"
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:37.749 [2975113] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:37.750 [2975113] main/103/interactive I> set 'log_format' configuration option to "plain"
i2 | 2022-06-01 20:17:37.750 [2975113] main/108/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 21:39:37 2022
i2 | 2022-06-01 20:17:37.750 [2975113] main/103/interactive I> set 'replicaset_uuid' configuration option to "e0df68c5-e7f9-395f-86b3-30ad9e1b7b07"
i2 | 2022-06-01 20:17:37.750 [2975113] main/103/interactive I> >>>>> postjoin()
i2 | 2022-06-01 20:17:37.751 [2975113] main/103/interactive I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 2
i2 | 2022-06-01 20:17:37.751 [2975113] main/103/interactive V> reset election timeout 0 -> 27 at 0, election_elapsed: 0, timeout: 27, prev_timeout: 0, raft_id: 2
i2 | 2022-06-01 20:17:37.751 [2975113] main/103/interactive I> became follower at term 0, term: 0, raft_id: 2
i2 | 2022-06-01 20:17:37.751 [2975113] main/103/interactive I> newRaft, peers: Configuration { incoming: Configuration { voters: {} }, outgoing: Configuration { voters: {} } }, last term: 0, last index: 0, applied: 0, commit: 0, term: 0, raft_id: 2
i2 | 2022-06-01 20:17:37.751 [2975113] main/103/interactive I> RawNode created with id 2., id: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.864 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgHeartbeat to: 2, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.871 [2975113] main/116/raft_main_loop I> received a message with higher term from 1, msg type: MsgHeartbeat, message_term: 2, term: 0, from: 1, raft_id: 2
i2 | 2022-06-01 20:17:37.871 [2975113] main/116/raft_main_loop V> reset election timeout 27 -> 30 at 1, election_elapsed: 1, timeout: 30, prev_timeout: 27, raft_id: 2
i2 | 2022-06-01 20:17:37.871 [2975113] main/116/raft_main_loop I> became follower at term 2, term: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.875 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgHeartbeatResponse to: 1, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.877 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgReadIndex to: 1 entries {data: "\202\242lc\203\242id\002\243gen\001\245count\001\242op\201\244kind\243nop"}, to: 1, from: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.878 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 2 entries {entry_type: EntryConfChangeV2 term: 2 index: 3 data: "\010\001\022\004\010\002\020\002" context: "\201\245peers\221\210\247raft_id\002\254peer_address\256127.0.0.1:3302\245voter\302\253instance_id\242i2\255replicaset_id\242r1\255instance_uuid\331$24c4ac5f-4981-3441-879c-aee1edb608a6\257replicaset_uuid\331$e0df68c5-e7f9-395f-86b3-30ad9e1b7b07\254commit_index\000"} commit: 4, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.879 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgReadIndexResp to: 2 index: 4 entries {data: "\202\242lc\203\242id\002\243gen\001\245count\001\242op\201\244kind\243nop"}, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.879 [2975113] main/116/raft_main_loop V> rejected msgApp [logterm: 2, index: 2] from 1, logterm: Ok(0), index: 2, from: 1, msg_index: 2, msg_log_term: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.879 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 2 reject: true, to: 1, from: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.880 [2975105] main/111/raft_main_loop V> received msgAppend rejection, index: 2, from: 2, reject_hint_term: 0, reject_hint_index: 0, raft_id: 1
i1 | 2022-06-01 20:17:37.880 [2975105] main/111/raft_main_loop V> decreased progress of 2, progress: Progress { matched: 0, next_idx: 1, state: Probe, paused: false, pending_snapshot: 0, pending_request_snapshot: 0, recent_active: true, ins: Inflights { start: 0, count: 0, buffer: [] }, commit_group_id: 0, committed_index: 0 }, raft_id: 1
i2 | 2022-06-01 20:17:37.880 [2975113] main/103/interactive main.rs:594 W> initiating self-promotion of Peer { raft_id: 2, peer_address: "127.0.0.1:3302", voter: false, instance_id: "i2", replicaset_id: "r1", instance_uuid: "24c4ac5f-4981-3441-879c-aee1edb608a6", replicaset_uuid: "e0df68c5-e7f9-395f-86b3-30ad9e1b7b07", commit_index: 3 }
i1 | 2022-06-01 20:17:37.881 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 entries {entry_type: EntryConfChange term: 1 index: 1 data: "\030\001" context: "\201\245peers\221\210\247raft_id\001\254peer_address\256127.0.0.1:3301\245voter\303\253instance_id\242i1\255replicaset_id\242r1\255instance_uuid\331$68d4a766-4144-3248-aeb4-e212356716e4\257replicaset_uuid\331$e0df68c5-e7f9-395f-86b3-30ad9e1b7b07\254commit_index\000"} commit: 4, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.882 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 1 commit: 1, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.883 [2975113] main/116/raft_main_loop V> persisted index 1, raft_id: 2
i1 | 2022-06-01 20:17:37.884 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 1 index: 1 entries {term: 2 index: 2} commit: 4, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.884 [2975113] main/116/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 2
i1 | 2022-06-01 20:17:37.884 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 2 entries {entry_type: EntryConfChangeV2 term: 2 index: 3 data: "\010\001\022\004\010\002\020\002" context: "\201\245peers\221\210\247raft_id\002\254peer_address\256127.0.0.1:3302\245voter\302\253instance_id\242i2\255replicaset_id\242r1\255instance_uuid\331$24c4ac5f-4981-3441-879c-aee1edb608a6\257replicaset_uuid\331$e0df68c5-e7f9-395f-86b3-30ad9e1b7b07\254commit_index\000"} commit: 4, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.885 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 3 entries {entry_type: EntryConfChangeV2 term: 2 index: 4} commit: 4, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.885 [2975105] main/112/raft_join_loop I> processing batch: ["i2"]
i2 | 2022-06-01 20:17:37.886 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 2 commit: 2, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.886 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 3 commit: 3, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.886 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 4 commit: 4, to: 1, from: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.886 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 4 entries {entry_type: EntryConfChangeV2 term: 2 index: 5 data: "\010\001\022\002\020\002" context: "\201\245peers\221\210\247raft_id\002\254peer_address\256127.0.0.1:3302\245voter\303\253instance_id\242i2\255replicaset_id\242r1\255instance_uuid\331$24c4ac5f-4981-3441-879c-aee1edb608a6\257replicaset_uuid\331$e0df68c5-e7f9-395f-86b3-30ad9e1b7b07\254commit_index\003"} commit: 4, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.887 [2975113] main/116/raft_main_loop V> persisted index 4, raft_id: 2
i1 | 2022-06-01 20:17:37.887 [2975105] main/111/raft_main_loop V> persisted index 5, raft_id: 1
i1 | 2022-06-01 20:17:37.888 [2975105] main/111/raft_main_loop V> committing index 5, index: 5, raft_id: 1
i1 | 2022-06-01 20:17:37.888 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 5 commit: 5, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.888 [2975113] main/116/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {1} } }, learners: {2}, learners_next: {}, auto_leave: true }, raft_id: 2
i2 | 2022-06-01 20:17:37.888 [2975113] main/116/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1} }, outgoing: Configuration { voters: {} } }, learners: {2}, learners_next: {}, auto_leave: false }, raft_id: 2
i1 | 2022-06-01 20:17:37.889 [2975105] main/111/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {1} } }, learners: {}, learners_next: {}, auto_leave: true }, raft_id: 1
i1 | 2022-06-01 20:17:37.889 [2975105] main/111/raft_main_loop I> initiating automatic transition out of joint configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {1} } }, learners: {}, learners_next: {}, auto_leave: true }, raft_id: 1
i2 | 2022-06-01 20:17:37.889 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 5 commit: 4, to: 1, from: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.889 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 5 entries {entry_type: EntryConfChangeV2 term: 2 index: 6} commit: 5, to: 2, from: 1, raft_id: 1
i2 | 2022-06-01 20:17:37.890 [2975113] main/116/raft_main_loop V> persisted index 5, raft_id: 2
i2 | 2022-06-01 20:17:37.890 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 5 commit: 5, to: 1, from: 2, raft_id: 2
i1 | 2022-06-01 20:17:37.890 [2975105] main/111/raft_main_loop V> persisted index 6, raft_id: 1
i2 | 2022-06-01 20:17:37.891 [2975113] main/116/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {1} } }, learners: {}, learners_next: {}, auto_leave: true }, raft_id: 2
i2 | 2022-06-01 20:17:37.892 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 6 commit: 5, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.892 [2975113] main/116/raft_main_loop V> persisted index 6, raft_id: 2
i1 | 2022-06-01 20:17:37.893 [2975105] main/111/raft_main_loop V> committing index 6, index: 6, raft_id: 1
i1 | 2022-06-01 20:17:37.893 [2975105] main/111/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgAppend to: 2 log_term: 2 index: 6 commit: 6, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:37.893 [2975105] main/111/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 1
i1 | 2022-06-01 20:17:37.894 [2975105] main/112/raft_join_loop I> batch processed: ["i2"], Ok(6)
i2 | 2022-06-01 20:17:37.895 [2975113] main/116/raft_main_loop V> Sending from 2 to 1, msg: msg_type: MsgAppendResponse to: 1 index: 6 commit: 6, to: 1, from: 2, raft_id: 2
i2 | 2022-06-01 20:17:37.896 [2975113] main/116/raft_main_loop I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 2
i2 | 2022-06-01 20:17:37.896 [2975113] main C> entering the event loop
Instance(i2, listen=127.0.0.1:3302) is ready
=================== Cluster("127.0.0.1:3300", n=2) deployed ====================
----------------------------- Captured stdout call -----------------------------
i1 | [supervisor:2975087] got signal 15
i1 | [supervisor:2975087] subprocess finished: Ok(Exited(Pid(2975105), 0))
i2 | [supervisor:2975090] got signal 15
i2 | [supervisor:2975090] subprocess finished: Ok(Exited(Pid(2975113), 0))
i1 | [supervisor:2975124] running StartDiscover
i2 | [supervisor:2975135] running StartDiscover
----------------------------- Captured stderr call -----------------------------
i2 | 2022-06-01 20:17:38.096 [2975113] main/113/applier/127.0.0.1:3301 I> can't read row
i2 | 2022-06-01 20:17:38.096 [2975113] main/113/applier/127.0.0.1:3301 coio.cc:378 !> SystemError unexpected EOF when reading from socket, called on fd 18, aka 127.0.0.1:39078, peer of 127.0.0.1:3301: Broken pipe
i2 | 2022-06-01 20:17:38.096 [2975113] main/113/applier/127.0.0.1:3301 I> will retry every 1.00 second
Instance(i1, listen=127.0.0.1:3301) terminated
Instance(i2, listen=127.0.0.1:3302) terminated
Instance(i1, listen=127.0.0.1:3301) starting...
i1 | >>>>> start_discover()
i1 | 2022-06-01 20:17:38.198 [2975128] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i1 | 2022-06-01 20:17:38.199 [2975128] main/103/interactive C> log level 6
i1 | 2022-06-01 20:17:38.199 [2975128] main/103/interactive I> wal/engine cleanup is paused
i1 | 2022-06-01 20:17:38.200 [2975128] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i1 | 2022-06-01 20:17:38.200 [2975128] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i1 | 2022-06-01 20:17:38.200 [2975128] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i1 | 2022-06-01 20:17:38.201 [2975128] main/103/interactive I> Recovering snapshot with schema version 2.7.1
i1 | 2022-06-01 20:17:38.204 [2975128] main/103/interactive I> instance uuid 68d4a766-4144-3248-aeb4-e212356716e4
i1 | 2022-06-01 20:17:38.205 [2975128] main/103/interactive I> instance vclock {0: 42, 1: 16}
i1 | 2022-06-01 20:17:38.205 [2975128] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:38.205 [2975128] coio V> loading vylog 0
i1 | 2022-06-01 20:17:38.205 [2975128] coio V> done loading vylog
i1 | 2022-06-01 20:17:38.205 [2975128] main/103/interactive I> recovery start
i1 | 2022-06-01 20:17:38.205 [2975128] main/103/interactive I> recovering from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.snap'
i1 | 2022-06-01 20:17:38.215 [2975128] main/103/interactive I> cluster uuid e0df68c5-e7f9-395f-86b3-30ad9e1b7b07
i1 | 2022-06-01 20:17:38.227 [2975128] main/103/interactive I> assigned id 1 to replica 68d4a766-4144-3248-aeb4-e212356716e4
i1 | 2022-06-01 20:17:38.227 [2975128] main/103/interactive V> RAFT: recover {term: 1}
i1 | 2022-06-01 20:17:38.227 [2975128] main/103/interactive I> recover from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.xlog'
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> assigned id 2 to replica 24c4ac5f-4981-3441-879c-aee1edb608a6
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive recovery.cc:160 W> file `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i1/00000000000000000000.xlog` wasn't correctly closed
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> Building secondary indexes in space 'raft_group'...
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> Adding 2 keys to TREE index 'instance_id' ...
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> Adding 2 keys to TREE index 'replicaset_id' ...
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> Space 'raft_group': done
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive I> ready to accept requests
i1 | 2022-06-01 20:17:38.228 [2975128] main/103/interactive C> leaving orphan mode
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive I> set 'log_level' configuration option to 6
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive I> set 'feedback_enabled' configuration option to false
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive I> set 'replication' configuration option to []
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon stopped
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive V> feedback_daemon started
i1 | 2022-06-01 20:17:38.229 [2975128] main/103/interactive I> set 'log_format' configuration option to "plain"
i1 | 2022-06-01 20:17:38.229 [2975128] main/106/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 22:06:31 2022
i1 | 2022-06-01 20:17:38.230 [2975128] main/103/interactive I> tx_binary: stopped
i1 | 2022-06-01 20:17:38.230 [2975128] main/103/interactive I> tx_binary: bound to 127.0.0.1:3301
i1 | 2022-06-01 20:17:38.230 [2975128] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3301"
i1 | 2022-06-01 20:17:38.431 [2975128] main/103/interactive I> >>>>> postjoin()
i1 | 2022-06-01 20:17:38.431 [2975128] main/103/interactive I> switched to configuration, config: Configuration { voters: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {} } }, learners: {}, learners_next: {}, auto_leave: false }, raft_id: 1
i1 | 2022-06-01 20:17:38.432 [2975128] main/103/interactive V> reset election timeout 0 -> 26 at 0, election_elapsed: 0, timeout: 26, prev_timeout: 0, raft_id: 1
i1 | 2022-06-01 20:17:38.432 [2975128] main/103/interactive I> became follower at term 2, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:38.432 [2975128] main/103/interactive I> newRaft, peers: Configuration { incoming: Configuration { voters: {1, 2} }, outgoing: Configuration { voters: {} } }, last term: 2, last index: 6, applied: 6, commit: 6, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:38.432 [2975128] main/103/interactive I> RawNode created with id 1., id: 1, raft_id: 1
Instance(i2, listen=127.0.0.1:3302) starting...
i2 | >>>>> start_discover()
i2 | 2022-06-01 20:17:38.521 [2975139] main/103/interactive C> Tarantool 2.8.3-0-g01023dbc2
i2 | 2022-06-01 20:17:38.521 [2975139] main/103/interactive C> log level 6
i2 | 2022-06-01 20:17:38.521 [2975139] main/103/interactive I> wal/engine cleanup is paused
i2 | 2022-06-01 20:17:38.522 [2975139] main/103/interactive I> mapping 268435456 bytes for memtx tuple arena...
i2 | 2022-06-01 20:17:38.522 [2975139] main/103/interactive I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
i2 | 2022-06-01 20:17:38.522 [2975139] main/103/interactive I> mapping 134217728 bytes for vinyl tuple arena...
i2 | 2022-06-01 20:17:38.523 [2975139] main/103/interactive I> Recovering snapshot with schema version 2.7.1
i2 | 2022-06-01 20:17:38.526 [2975139] main/103/interactive I> instance uuid 24c4ac5f-4981-3441-879c-aee1edb608a6
i2 | 2022-06-01 20:17:38.527 [2975139] main/103/interactive I> instance vclock {0: 45, 1: 16}
i2 | 2022-06-01 20:17:38.527 [2975139] main/103/interactive I> tx_binary: stopped
i2 | 2022-06-01 20:17:38.527 [2975139] coio V> loading vylog 16
i2 | 2022-06-01 20:17:38.527 [2975139] coio V> done loading vylog
i2 | 2022-06-01 20:17:38.527 [2975139] main/103/interactive I> recovery start
i2 | 2022-06-01 20:17:38.527 [2975139] main/103/interactive I> recovering from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000016.snap'
i2 | 2022-06-01 20:17:38.537 [2975139] main/103/interactive I> cluster uuid e0df68c5-e7f9-395f-86b3-30ad9e1b7b07
i2 | 2022-06-01 20:17:38.549 [2975139] main/103/interactive I> assigned id 1 to replica 68d4a766-4144-3248-aeb4-e212356716e4
i2 | 2022-06-01 20:17:38.549 [2975139] main/103/interactive I> assigned id 2 to replica 24c4ac5f-4981-3441-879c-aee1edb608a6
i2 | 2022-06-01 20:17:38.549 [2975139] main/103/interactive V> RAFT: recover {term: 1}
i2 | 2022-06-01 20:17:38.549 [2975139] main/103/interactive I> recover from `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000016.xlog'
i2 | 2022-06-01 20:17:38.549 [2975139] main/103/interactive recovery.cc:160 W> file `/tmp/pytest-of-rosik/pytest-1322/test_restart_both0/i2/00000000000000000016.xlog` wasn't correctly closed
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> Building secondary indexes in space 'raft_group'...
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> Adding 2 keys to TREE index 'instance_id' ...
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> Adding 2 keys to TREE index 'replicaset_id' ...
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> Space 'raft_group': done
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> ready to accept requests
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive C> leaving orphan mode
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> set 'log_level' configuration option to 6
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> set 'feedback_enabled' configuration option to false
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> set 'replication' configuration option to []
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon stopped
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive V> feedback_daemon started
i2 | 2022-06-01 20:17:38.550 [2975139] main/103/interactive I> set 'log_format' configuration option to "plain"
i2 | 2022-06-01 20:17:38.550 [2975139] main/106/checkpoint_daemon I> scheduled next checkpoint for Wed Jun 1 22:14:57 2022
i2 | 2022-06-01 20:17:38.551 [2975139] main/103/interactive I> tx_binary: stopped
i2 | 2022-06-01 20:17:38.551 [2975139] main/103/interactive I> tx_binary: bound to 127.0.0.1:3302
i2 | 2022-06-01 20:17:38.551 [2975139] main/103/interactive I> set 'listen' configuration option to "127.0.0.1:3302"
i2 | 2022-06-01 20:17:38.552 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:38.753 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:38.954 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:39.154 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:39.354 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:39.553 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:39.755 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:39.954 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:40.155 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:40.356 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:40.555 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:40.756 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:40.956 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i1 | 2022-06-01 20:17:41.053 [2975128] main/113/raft_main_loop I> starting a new election, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:41.053 [2975128] main/113/raft_main_loop I> became pre-candidate at term 2, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:41.054 [2975128] main/113/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgRequestPreVote to: 2 term: 3 log_term: 2 index: 6 commit: 6 commit_term: 2, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:41.055 [2975128] main/113/raft_main_loop I> broadcasting vote request, to: [2], log_index: 6, log_term: 2, term: 2, type: MsgRequestPreVote, raft_id: 1
i1 | 2022-06-01 20:17:41.056 [2975128] main/115/lua V> Interact with 127.0.0.1:3302 -> Sever respond with error: uninitialized yet
i2 | 2022-06-01 20:17:41.157 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:41.357 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:41.557 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:41.759 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:41.959 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:42.160 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:42.360 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:42.559 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:42.761 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:42.962 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:43.162 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:43.363 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:43.563 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i1 | 2022-06-01 20:17:43.667 [2975128] main/113/raft_main_loop I> starting a new election, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:43.668 [2975128] main/113/raft_main_loop I> became pre-candidate at term 2, term: 2, raft_id: 1
i1 | 2022-06-01 20:17:43.668 [2975128] main/113/raft_main_loop V> Sending from 1 to 2, msg: msg_type: MsgRequestPreVote to: 2 term: 3 log_term: 2 index: 6 commit: 6 commit_term: 2, to: 2, from: 1, raft_id: 1
i1 | 2022-06-01 20:17:43.668 [2975128] main/113/raft_main_loop I> broadcasting vote request, to: [2], log_index: 6, log_term: 2, term: 2, type: MsgRequestPreVote, raft_id: 1
i1 | 2022-06-01 20:17:43.670 [2975128] main/115/lua V> Interact with 127.0.0.1:3302 -> Sever respond with error: uninitialized yet
i2 | 2022-06-01 20:17:43.763 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:43.964 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:44.164 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:44.364 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
i2 | 2022-06-01 20:17:44.564 [2975139] main/103/interactive tarantool.rs:224 W> net_box_call failed: Sever respond with error: leader_id is unknown for node 1, fn: .proc_discover, peer: 127.0.0.1:3301
--------------------------- Captured stderr teardown ---------------------------
Instance(i1, listen=127.0.0.1:3301) killed
Instance(i2, listen=127.0.0.1:3302) killed
=========================== short test summary info ============================
FAILED test/int/test_couple.py::test_restart_both - AssertionError: assert False
======================= 1 failed, 16 deselected in 8.16s =======================
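While the test hangs, polling both instances shows the mutual wait directly (a hypothetical diagnostic snippet, assuming the same conftest helpers; the statuses are copied from the captured output above):

    # i1 recovered a two-voter configuration from its snapshot and keeps
    # starting pre-vote rounds, but i2 rejects them with "uninitialized yet":
    i1._raft_status()
    # -> RaftStatus(id=1, raft_state='PreCandidate', is_ready=False, leader_id=None)

    # i2 never finishes start_discover(), because i1 answers every
    # .proc_discover call with "leader_id is unknown for node 1":
    i2._raft_status()
    # -> fails: the raft node on i2 is not initialized yet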