- Jan 27, 2025
-
-
Georgy Moshkin authored
Remove the corresponding calls, add module-level doc-comments with explanations on what the hell is going on. This also fixes some flaky tests including #1294
-
Georgy Moshkin authored
Turns out blocking inside proc_replication_demote is not safe, as it may lead to a deadlock if a replicaset master switchover happens during a replicaset catch-up via raft snapshot. The symptom of the problem was the flakiness of tests (for example issue #1294). Closes #1294. This reverts commit ec2b760a.
-
Вартан Бабаян authored
-
- Jan 16, 2025
-
-
Georgy Moshkin authored
-
Georgy Moshkin authored
Closes #996. This was the intended behavior all along, but I erroneously removed it, thinking it was a bad thing that we're going to have references from _pico_plugin_migration to plugins which don't exist. If you drop the plugin leaving its data, the only way to drop that data would be to re-create the same plugin and then drop it using the `DROP PLUGIN WITH DATA` syntax. Also note that there's currently no way to drop the plugin's data without dropping the plugin itself (other than from Lua via pico.migration_down, which has no analogue in SQL).
-
Georgy Moshkin authored
test_plugin_remove is split in two: test_drop_plugin_basics and test_drop_plugin_with_or_without_data. The use of the Lua plugin API is replaced with the SQL plugin API (except for down_migration, because there's no alternative in the SQL API).
-
Georgy Moshkin authored
There was also a bug in storage::Plugins::get_all_versions. Closes #1283
-
Prints all current members of the cluster and their status. The same result can be achieved by executing an SQL query against the system tables.
-
- Jan 15, 2025
-
-
- Jan 14, 2025
-
-
Here is how a DROP USER IF EXISTS query can result in such an error:
- [client 1]: check user exists
- [client 1]: send CAS request 1
- [client 2]: check user exists
- [client 2]: send CAS request 2
- [leader]: recv CAS request 1 -> access_check: ok -> check predicate: ok -> apply (async)
- [leader]: recv CAS request 2 -> access_check: no such user (request 1 has already been applied)

To address such errors, access_check was made to not return errors when trying to drop non-existent users or roles. Now the above example is handled as follows: when a user is dropped while the request is being handled, this causes a schema change that leads to the rejection of the operation on the predicate check. Upon retry, the initiator will detect that the user has been dropped and handle it accordingly.
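A rough sketch of the initiator-side retry loop implied by this change; the names (user_exists, send_cas_drop_user, drop_user_if_exists) are hypothetical stand-ins, not the actual picodata CAS API:

```rust
#[derive(Debug)]
enum CasError {
    /// The CAS predicate check failed because of a concurrent schema change.
    PredicateRejected,
    Other(String),
}

// Stubs standing in for real storage reads and CAS requests.
fn user_exists(_name: &str) -> bool { true }
fn send_cas_drop_user(_name: &str) -> Result<(), CasError> { Ok(()) }

fn drop_user_if_exists(name: &str) -> Result<(), CasError> {
    loop {
        // Re-check the precondition on every attempt.
        if !user_exists(name) {
            // Somebody else dropped the user concurrently: for IF EXISTS
            // this is a success, not an error.
            return Ok(());
        }
        match send_cas_drop_user(name) {
            Ok(()) => return Ok(()),
            // A concurrent DROP USER bumped the schema version, so our
            // predicate check failed on the leader: just retry.
            Err(CasError::PredicateRejected) => continue,
            Err(other) => return Err(other),
        }
    }
}

fn main() {
    drop_user_if_exists("alice").unwrap();
}
```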
-
- Jan 13, 2025
-
-
Вартан Бабаян authored
Replicaset name: {tier_name}_{replicaset_number_in_this_tier}
Instance name: {tier_name}_{replicaset_number}_{instance_number_in_replicaset}
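As an illustration of the scheme, a minimal sketch (the helper functions are hypothetical, not the actual name generator):

```rust
fn replicaset_name(tier: &str, replicaset_number: u64) -> String {
    format!("{tier}_{replicaset_number}")
}

fn instance_name(tier: &str, replicaset_number: u64, instance_number: u64) -> String {
    format!("{tier}_{replicaset_number}_{instance_number}")
}

fn main() {
    // First instance of the first replicaset in the "default" tier.
    assert_eq!(replicaset_name("default", 1), "default_1");
    assert_eq!(instance_name("default", 1, 1), "default_1_1");
}
```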
-
- Jan 10, 2025
-
-
-
Diana Tikhonova authored
Added a `wait_index` parameter to `proc_replication_demote` to improve synchronization during demotion operations.
-
Diana Tikhonova authored
-
Diana Tikhonova authored
-
Key updates include:
- Adding the `term` field to the relevant RPC requests (ConfigureReplicationRequest, ReplicationSyncRequest, and DemoteRequest).
- Utilizing `node.status().check_term(req.term)` in critical sections of the replication logic.
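A minimal sketch of the pattern; the types below are simplified stand-ins, not the real picodata definitions. The sender's raft term travels with the request, and the handler bails out if the local term has moved on:

```rust
struct DemoteRequest {
    /// Raft term observed by the sender (e.g. the governor) when it
    /// decided to send this request.
    term: u64,
    // ...other fields elided
}

struct NodeStatus {
    term: u64,
}

impl NodeStatus {
    /// Reject requests sent under a different raft term.
    fn check_term(&self, requested: u64) -> Result<(), String> {
        if requested != self.term {
            return Err(format!(
                "operation request from term {requested}, current term is {}",
                self.term
            ));
        }
        Ok(())
    }
}

fn proc_replication_demote(req: DemoteRequest, status: &NodeStatus) -> Result<(), String> {
    // Bail out early if the topology decision is stale.
    status.check_term(req.term)?;
    // ...perform the actual demotion
    Ok(())
}

fn main() {
    let status = NodeStatus { term: 3 };
    assert!(proc_replication_demote(DemoteRequest { term: 3 }, &status).is_ok());
    assert!(proc_replication_demote(DemoteRequest { term: 2 }, &status).is_err());
}
```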
-
- Jan 09, 2025
-
-
Maksim Kaitmazian authored
Part of #998
-
- Dec 28, 2024
-
-
Dmitry Rodionov authored
The patch adds support for the LDAP auth method to pgproto. For LDAP we request the user password in clear text using the corresponding protocol message. Afterwards the password is passed to the tarantool `authenticate` method, which handles LDAP-based auth. Important: since the password is transferred in clear text, it is advised to set up SSL.
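A hedged sketch of the flow; the trait and helper names are hypothetical and stand in for the actual pgproto internals. For LDAP the server asks the client for a clear-text password and hands it to tarantool's `authenticate`:

```rust
trait ClientStream {
    /// Send AuthenticationCleartextPassword, asking for the password as-is.
    fn request_cleartext_password(&mut self);
    /// Read the client's PasswordMessage.
    fn read_password(&mut self) -> String;
}

/// Stand-in for tarantool's `authenticate`, which performs the LDAP bind.
fn tarantool_authenticate(user: &str, password: &str) -> Result<(), String> {
    let _ = (user, password);
    Ok(())
}

fn ldap_auth(stream: &mut impl ClientStream, user: &str) -> Result<(), String> {
    // The password crosses the wire in clear text, hence the advice
    // to set up SSL for pgproto connections.
    stream.request_cleartext_password();
    let password = stream.read_password();
    tarantool_authenticate(user, &password)
}

struct MockStream;
impl ClientStream for MockStream {
    fn request_cleartext_password(&mut self) {}
    fn read_password(&mut self) -> String { "secret".into() }
}

fn main() {
    assert!(ldap_auth(&mut MockStream, "alice").is_ok());
}
```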
-
Вартан Бабаян authored
-
Вартан Бабаян authored
-
- Dec 27, 2024
-
-
Previously, using a default port of another protocol would result in an error like the following:
```
error: Invalid value "localhost:8080" for '--pg-listen <[HOST][:PORT]>': PGPROTO cannot listen on port 8080 because it is default port for HTTP
```
This commit allows default ports to be used by other protocols, as long as the port is not already occupied. Close #1254
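A sketch of the relaxed validation idea (a hypothetical helper, not the actual picodata check): rather than rejecting a port because it is another protocol's default, only reject it if something is actually listening on it.

```rust
use std::net::TcpListener;

/// Returns true if nothing is currently bound to host:port.
fn port_is_free(host: &str, port: u16) -> bool {
    TcpListener::bind((host, port)).is_ok()
}

fn main() {
    // E.g. letting pgproto listen on 8080 (HTTP's default) is now fine,
    // as long as the port is not already occupied.
    println!("127.0.0.1:8080 free: {}", port_is_free("127.0.0.1", 8080));
}
```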
-
Georgy Moshkin authored
The bug was introduced when I changed the behaviour of conf_change with regard to instances with target_state Expelled. As a result we would sometimes arbitrarily demote healthy voters in the presence of degenerate ones. For example we could have this situation:
instance i1: raft_id=1, target_state=Online, raft_configuration=voter
instance i2: raft_id=2, target_state=Expelled, raft_configuration=voter (!)
instance i3: raft_id=3, target_state=Offline, raft_configuration=voter (!)
instance i4: raft_id=4, target_state=Online, raft_configuration=learner (!)
instance i5: raft_id=5, target_state=Online, raft_configuration=learner (!)
-
Georgy Moshkin authored
-
Кирилл Безуглый authored
BREAKING CHANGE: now that the PostgreSQL protocol is enabled by default, we must be careful with the `--pg-listen` CLI flag when creating more than a single instance, because otherwise we will suddenly get a port conflict error (busy port).
-
- Dec 25, 2024
-
-
Dmitry Rodionov authored
I found this by compiling tarantool with CMAKE_BUILD_TYPE=Debug
-
Georgy Moshkin authored
We used to automatically truncate the index in compact_log if the caller requested to compact too many entries. This made the requirement of not compacting any un-applied entries implicit in our code base, which is not good, as it allows some bugs to creep in (like the one we fixed a couple of commits ago). Now this is changed: instead of silently adjusting the index of the last compacted entry, we just assert that it's no greater than the applied index. As a consequence there's a minor improvement in the do_raft_log_auto_compaction function.
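A minimal sketch of the new behaviour, with hypothetical names (not the actual compact_log signature): the silent clamping is gone and the invariant is asserted instead.

```rust
fn compact_log(last_compacted_index: u64, applied_index: u64) {
    // Previously the index would be silently clamped to `applied_index`;
    // now the caller is responsible for the invariant and we assert it.
    assert!(
        last_compacted_index <= applied_index,
        "must not compact un-applied raft log entries"
    );
    // ...drop raft log entries up to and including `last_compacted_index`
}

fn main() {
    compact_log(10, 15); // ok: only applied entries are compacted
}
```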
-
Georgy Moshkin authored
There was a hard-to-reproduce bug in our snapshot application code. We always compact the raft log before applying the snapshot, because the snapshot replaces the entries and some of the logic in raft-rs seems to rely on this. The problem was that our compact_log function would not remove any unapplied entries, which makes sense for compaction triggered automatically by raft log size, but doesn't make sense for a raft snapshot, as the snapshot contains the state corresponding to the newer entries. The fix is simple: don't guard against unapplied entry compaction in case the compaction is for a raft snapshot. We don't add any regression tests for this, because the implementation would be too difficult and would require us to pollute the code with error injection logic, which is not a worthy trade-off in this case. But the logic will still be tested, because this bug was responsible for a large number of flaky tests, so we should see a significant reduction in flakiness from now on in tests concerning raft snapshots.
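A sketch of the fix under an assumed signature (the `for_snapshot` flag and names are illustrative, not the actual code): the guard only applies to regular, size-triggered compaction.

```rust
fn compact_log(mut last_compacted_index: u64, applied_index: u64, for_snapshot: bool) -> u64 {
    if !for_snapshot {
        // Size-triggered compaction must never discard entries that
        // haven't been applied yet.
        last_compacted_index = last_compacted_index.min(applied_index);
    }
    // For a raft snapshot the snapshot state supersedes the un-applied
    // entries, so compacting past `applied_index` is exactly what we want.
    // ...drop raft log entries up to and including `last_compacted_index`
    last_compacted_index
}

fn main() {
    assert_eq!(compact_log(20, 15, false), 15); // regular compaction is clamped
    assert_eq!(compact_log(20, 15, true), 20);  // snapshot compaction is not
}
```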
-
- Dec 20, 2024
-
-
Вартан Бабаян authored
-
Вартан Бабаян authored
-
Вартан Бабаян authored
-
Erik Khamitov authored
-
- Dec 19, 2024
-
-
Вартан Бабаян authored
-
-
-
Georgy Moshkin authored
TopologyCache is a collection of deserialized structures with information about cluster topology. This currently includes data from _pico_instance, _pico_replicaset, _pico_tier & _pico_service_route_table. The info is automatically kept up to date with the corresponding system tables. The TopologyCache also caches the immutable info related to the current instance, like the instance name, replicaset uuid, etc. From now on we should be reading this data from TopologyCache whenever possible instead of going directly to the system tables as we were doing previously. At the moment only the plugin RPC module has transitioned to using TopologyCache, but other modules should be refactored as well, especially the governor.
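A rough sketch of the idea; the field names and types below are invented for illustration and do not match the real picodata definition:

```rust
use std::collections::HashMap;

struct InstanceInfo { name: String, replicaset_uuid: String, tier: String }
struct ReplicasetInfo { uuid: String, current_master: String }
struct TierInfo { name: String, replication_factor: u8 }
struct ServiceRoute { plugin: String, service: String, instance: String }

struct TopologyCache {
    /// Immutable info about the current instance, cached once.
    this_instance: InstanceInfo,
    /// Deserialized copies of the topology system tables, refreshed
    /// whenever _pico_instance, _pico_replicaset, _pico_tier or
    /// _pico_service_route_table change.
    instances: HashMap<String, InstanceInfo>,
    replicasets: HashMap<String, ReplicasetInfo>,
    tiers: HashMap<String, TierInfo>,
    service_routes: Vec<ServiceRoute>,
}

impl TopologyCache {
    /// Readers get already-deserialized data instead of selecting from
    /// the system tables on every call.
    fn replicaset(&self, name: &str) -> Option<&ReplicasetInfo> {
        self.replicasets.get(name)
    }
}
```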
-
Georgy Moshkin authored
-
Georgy Moshkin authored
-
Georgy Moshkin authored
Before this fix, when an RPC was requested with an invalid bucket_id, we would send an RPC to every replicaset to check whether it has such a bucket_id. This is not needed, because we know the allowed range.
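A minimal sketch of the check this enables (hypothetical helper; bucket ids are assumed to live in 1..=bucket_count):

```rust
fn validate_bucket_id(bucket_id: u64, bucket_count: u64) -> Result<(), String> {
    if bucket_id < 1 || bucket_id > bucket_count {
        // Reject locally instead of asking every replicaset whether it
        // happens to own such a bucket.
        return Err(format!(
            "invalid bucket id {bucket_id}, expected a value in 1..={bucket_count}"
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_bucket_id(1500, 3000).is_ok());
    assert!(validate_bucket_id(0, 3000).is_err());
    assert!(validate_bucket_id(3001, 3000).is_err());
}
```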
-
Georgy Moshkin authored
-
- Dec 17, 2024
-
-
Кирилл Безуглый authored
-