- Jun 27, 2022
Yaroslav Dynnikov authored
Valentin Syrovatskiy authored
- Jun 23, 2022
Deactivation means that the instance:
- is demoted to learner (if it wasn't one already);
- is marked "inactive", so that it's ignored when determining the number of voters required for the cluster (see the sketch below).
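A minimal Rust sketch of the second point, assuming a hypothetical `Peer` struct with an `is_active` flag (picodata's actual types and sizing rules may differ): only active peers are counted when choosing how many raft voters the cluster should have.

```rust
struct Peer {
    raft_id: u64,
    is_active: bool, // deactivated instances have this set to false
}

/// Pick an odd number of voters (capped at 5) from the peers
/// that are still active; inactive peers are simply ignored.
fn desired_voter_count(peers: &[Peer]) -> usize {
    let active = peers.iter().filter(|p| p.is_active).count();
    match active {
        0 => 0,
        1 | 2 => 1,
        3 | 4 => 3,
        _ => 5,
    }
}

fn main() {
    let peers = vec![
        Peer { raft_id: 1, is_active: true },
        Peer { raft_id: 2, is_active: true },
        Peer { raft_id: 3, is_active: false }, // deactivated: not counted
    ];
    // Two active peers: keep a single voter to preserve an odd quorum.
    assert_eq!(desired_voter_count(&peers), 1);
}
```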
`Topology::diff`, `Topology::to_replace`, etc. store only ids, so that peer info stays consistent (a sketch of the idea follows below).
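A hedged sketch of that idea (field names and the `InstanceId` alias are illustrative, not picodata's actual definitions): `diff` and `to_replace` hold only ids, and full peer info is always resolved through a single authoritative map, so it can't get out of sync.

```rust
use std::collections::HashMap;

type InstanceId = String;

struct Peer {
    raft_id: u64,
    address: String,
    // ... the rest of the peer info lives here, and only here
}

struct Topology {
    /// Single source of truth for peer info.
    peers: HashMap<InstanceId, Peer>,
    /// Ids only: no copies of peer data that could go stale.
    diff: Vec<InstanceId>,
    to_replace: Vec<(InstanceId, InstanceId)>,
}

impl Topology {
    /// Resolve the stored ids against the authoritative map on demand.
    fn diff_peers(&self) -> impl Iterator<Item = &Peer> + '_ {
        self.diff.iter().filter_map(|id| self.peers.get(id))
    }
}
```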
Valentin Syrovatskiy authored
- Jun 20, 2022
Georgy Moshkin authored
Georgy Moshkin authored
Georgy Moshkin authored
Yaroslav Dynnikov authored
Valentin Syrovatskiy authored
- Jun 17, 2022
Yaroslav Dynnikov authored
Yaroslav Dynnikov authored
Alexander Tolstoy authored
Alexander Tolstoy authored
Valentin Syrovatskiy authored
Bootstrapping the replication was implemented in 2dac77c5, but it was configured on the new instance only: an old instance (that joined earlier) couldn't update `box.cfg({replication})` until now (see the sketch below). Close https://git.picodata.io/picodata/picodata/picodata/-/issues/52
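A hedged Rust sketch of the fix's shape (all names here are illustrative; the real call ultimately goes through tarantool's `box.cfg{replication = ...}`): on every topology change, every member of the replicaset recomputes and reapplies its replication list, not just the instance that joined.

```rust
struct PeerInfo {
    replicaset_id: String,
    listen: String, // URI the other replicas connect to
}

/// Rebuild the full replication list for one replicaset
/// from the current set of peers.
fn replication_uris(peers: &[PeerInfo], my_replicaset: &str) -> Vec<String> {
    peers
        .iter()
        .filter(|p| p.replicaset_id == my_replicaset)
        .map(|p| p.listen.clone())
        .collect()
}

/// Stand-in for the call that ends up in `box.cfg{replication = uris}`.
fn apply_replication(uris: &[String]) {
    println!("box.cfg{{replication = {uris:?}}}");
}

fn main() {
    let peers = vec![
        PeerInfo { replicaset_id: "r1".into(), listen: "127.0.0.1:3301".into() },
        PeerInfo { replicaset_id: "r1".into(), listen: "127.0.0.1:3302".into() }, // newcomer
    ];
    // Crucially, the *old* instance runs this too when the newcomer joins.
    apply_replication(&replication_uris(&peers, "r1"));
}
```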
- Jun 16, 2022
Alexander Tolstoy authored
- Jun 15, 2022
Yaroslav Dynnikov authored
Just supply it with a default value of "demo" (a sketch of the change follows below). No new tests are necessary, since we already have `test/int/test_joining.py::test_cluster_id_mismatch`. Close https://git.picodata.io/picodata/picodata/picodata/-/issues/96
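A minimal sketch of the change, assuming a clap 3 derive-style CLI definition (picodata's actual argument parsing may be structured differently):

```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Cluster id this instance belongs to; joining a cluster
    /// with a different id is rejected (test_cluster_id_mismatch).
    #[clap(long, default_value = "demo")]
    cluster_id: String,
}

fn main() {
    let args = Args::parse();
    println!("cluster_id = {}", args.cluster_id);
}
```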
Alexander Tolstoy authored
- Jun 06, 2022
Georgy Moshkin authored
If proc_discover was invoked after the raft node had been initialized but before a raft leader was elected, it would return an error before this commit. Because of that it was impossible to restart the whole cluster at once. This commit changes proc_discover so that when leader_id is not yet available, the normal discovery algorithm takes place instead (see the sketch below). Closes #93
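A hedged sketch of the new control flow (the names and response shape are illustrative, not the actual picodata signatures): instead of erroring when no leader is known yet, proc_discover answers as if discovery were still in progress.

```rust
enum DiscoverResponse {
    /// A raft leader is known: the joining instance should talk to it.
    Done { leader_address: String },
    /// No leader elected yet: fall back to the normal discovery round.
    KeepGoing { known_peers: Vec<String> },
}

fn proc_discover(
    leader_id: Option<u64>,
    peer_address: impl Fn(u64) -> Option<String>,
    known_peers: Vec<String>,
) -> DiscoverResponse {
    match leader_id.and_then(|id| peer_address(id)) {
        Some(leader_address) => DiscoverResponse::Done { leader_address },
        // Before this commit, this arm returned an error, which made
        // restarting the whole cluster at once impossible.
        None => DiscoverResponse::KeepGoing { known_peers },
    }
}
```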
- Jun 03, 2022
- Jun 02, 2022
Georgy Moshkin authored
Georgy Moshkin authored
Was broken because `tarantool_free` checks whether the current process is the main one and not a forked child. The check is implemented by saving the original process id in the static variable `master_pid`, which is set when the code is first loaded into memory. We broke this when picodata started forking the process: `master_pid` ended up set to the pid of picodata's "supervisor" process, which doesn't even enter the tarantool runtime (see the sketch below). Closes #37
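A Rust-flavored sketch of the pattern that misfired (tarantool's actual check lives in C; everything here is illustrative): a pid captured when the code first initializes is later compared against the current pid, so any fork between the two points changes the outcome.

```rust
use std::sync::OnceLock;

// Captured the first time the runtime code initializes.
static MASTER_PID: OnceLock<u32> = OnceLock::new();

fn runtime_init() {
    MASTER_PID.get_or_init(std::process::id);
}

fn runtime_free() {
    // Cleanup is deliberately skipped in forked children. If the pid
    // was captured in a supervisor process that never enters the
    // runtime (picodata's case), this check misfires in the child.
    if MASTER_PID.get() == Some(&std::process::id()) {
        // ... actual teardown would go here ...
    }
}

fn main() {
    runtime_init();
    runtime_free();
}
```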
- Jun 01, 2022
Yaroslav Dynnikov authored
Restarting both instances doesn't work yet; to be fixed later. Close https://git.picodata.io/picodata/picodata/picodata/-/issues/90
Yaroslav Dynnikov authored
Since commit d87dd4ca `leader_id` became an `Option`, so the `None` value isn't rendered in the `picolib.raft_status` response:

```python
status={'is_ready': False, 'raft_state': 'Follower', 'id': 1}
```

It makes pytest complain about a missing argument:

```
cluster2 = Cluster("127.0.0.1:3300", n=2)

    def test_restart_leader(cluster2: Cluster):
        i1, i2 = cluster2.instances
        i1.assert_raft_status('Leader')
        i2.assert_raft_status('Follower')
        i1.restart()
>       i1.wait_ready()

test/int/test_joining.py:209:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/decorators.py:45: in wrapper
    return deco(call, *dargs, **dkwargs)
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/flow.py:127: in retry
    return call()
../../.local/share/virtualenvs/picodata-6sv6l6y-/lib/python3.10/site-packages/funcy/decorators.py:66: in __call__
    return self._func(*self._args, **self._kwargs)
test/int/conftest.py:305: in wait_ready
    status = self._raft_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Instance(i1, listen=127.0.0.1:3301)

    def _raft_status(self) -> RaftStatus:
        status = self.call("picolib.raft_status")
        assert isinstance(status, dict)
        eprint(f"{status=}")
>       return RaftStatus(**status)
E       TypeError: RaftStatus.__init__() missing 1 required positional argument: 'leader_id'

test/int/conftest.py:280: TypeError
```

This patch fixes the failure message:

```
self = Instance(i1, listen=127.0.0.1:3301)

    @funcy.retry(tries=20, timeout=0.1)
    def wait_ready(self):
        status = self._raft_status()
>       assert status.is_ready
E       AssertionError: assert False
E        +  where False = RaftStatus(id=1, raft_state='Follower', is_ready=False, leader_id=None).is_ready

test/int/conftest.py:306: AssertionError
```
Sergey V authored
* Make the `--cluster-id` CLI option mandatory.
* Handle cluster_id mismatch in raft_join: when an instance attempts to join the cluster and its `--cluster-id` parameter mismatches the cluster's cluster_id, an error is raised inside the raft_join handler (see the sketch below).
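A minimal sketch of the described check, with illustrative names (the real raft_join handler takes more parameters):

```rust
struct JoinRequest {
    cluster_id: String,
    instance_id: String,
}

fn raft_join(req: &JoinRequest, our_cluster_id: &str) -> Result<(), String> {
    // Refuse instances that were started with a different --cluster-id.
    if req.cluster_id != our_cluster_id {
        return Err(format!(
            "instance '{}' has cluster_id '{}', expected '{}'",
            req.instance_id, req.cluster_id, our_cluster_id
        ));
    }
    // ... proceed with the actual join ...
    Ok(())
}
```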
Sergey V authored
Sergey V authored
Sergey V authored
- May 31, 2022
Georgy Moshkin authored
Previously the discovery algorithm would try to reach each known peer sequentially, requiring every request to succeed before the next one could be attempted. This didn't work in some cases (see the test in the previous commit). The new algorithm instead makes a single attempt to reach each peer within a round, and peers that failed are retried in the next round of requests (see the sketch below). This allows overall discovery to succeed even when some of the initial peers never respond. Closes #54
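A hedged sketch of the round-based algorithm (the `request` callback stands in for the actual network call; names are illustrative): each round makes one attempt per known peer, and a failure only postpones that peer to the next round instead of blocking the ones after it.

```rust
use std::collections::HashSet;

fn discover(
    initial_peers: &[&str],
    mut request: impl FnMut(&str) -> Result<Vec<String>, ()>,
) -> HashSet<String> {
    let mut known: HashSet<String> =
        initial_peers.iter().map(|p| p.to_string()).collect();
    let mut reached: HashSet<String> = HashSet::new();

    // The real loop runs until discovery converges; a fixed number of
    // rounds keeps the sketch simple.
    for _round in 0..10 {
        let pending: Vec<String> = known
            .iter()
            .filter(|p| !reached.contains(*p))
            .cloned()
            .collect();
        if pending.is_empty() {
            break;
        }
        for peer in pending {
            // A failed request no longer aborts the sequence: the peer
            // simply stays pending and is retried next round.
            if let Ok(their_peers) = request(&peer) {
                known.extend(their_peers);
                reached.insert(peer);
            }
        }
    }
    reached
}
```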