- Oct 10, 2023
-
-
Vladimir Davydov authored
Currently, vy_run_remove_files calls coio several times under the hood - once per each run file and data directory. Apart from being inefficient, this also prevents us from adding some extra logic for thorough file deletion. So let's perform all the operations in a single coio call. Needed for tarantool/tarantool-ee#540 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
- Oct 09, 2023
-
-
Mergen Imeev authored
The structure is no longer used, so it is dropped. Follow-up #9112 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Mergen Imeev authored
This patch introduces variations of DROP CONSTRAINT with a declared constraint type. Closes #9112 @TarantoolBot document Title: upgrade of DROP CONSTRAINT Now, instead of just `ALTER TABLE table DROP CONSTRAINT constraint;` we have 8 statement variants: 1) Statement to drop PRIMARY KEY, UNIQUE, tuple FOREIGN KEY or tuple CHECK constraints: ``` ALTER TABLE tab_name DROP CONSTRAINT constr_name; ``` This statement cannot drop a constraint if `constr_name` matches more than one constraint. 2) Statement to drop field FOREIGN KEY or field CHECK constraints: ``` ALTER TABLE tab_name DROP CONSTRAINT field_name.constr_name; ``` This statement cannot drop a constraint if `constr_name` matches more than one constraint for the `field_name` field. 3) Statement to drop PRIMARY KEY constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT constr_name PRIMARY KEY; ``` 4) Statement to drop UNIQUE constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT constr_name UNIQUE; ``` 5) Statement to drop tuple FOREIGN KEY constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT constr_name FOREIGN KEY; ``` 6) Statement to drop tuple CHECK constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT constr_name CHECK; ``` 7) Statement to drop field FOREIGN KEY constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT field_name.constr_name FOREIGN KEY; ``` 8) Statement to drop field CHECK constraint: ``` ALTER TABLE tab_name DROP CONSTRAINT field_name.constr_name CHECK; ```
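As a sketch of how the typed variants help (the table and constraint names are hypothetical, and whether two constraints of different kinds may share a name depends on the schema — this only illustrates the ambiguity the typed syntax resolves):

```lua
-- Hypothetical: two constraints in one table happen to share the name 'c1'.
box.execute([[CREATE TABLE t (i INT PRIMARY KEY,
                              a INT CONSTRAINT c1 UNIQUE,
                              b INT, CONSTRAINT c1 CHECK (b > 0))]])
-- The bare form is ambiguous here and is rejected:
box.execute([[ALTER TABLE t DROP CONSTRAINT c1]])
-- Declaring the constraint type resolves the ambiguity:
box.execute([[ALTER TABLE t DROP CONSTRAINT c1 UNIQUE]])
box.execute([[ALTER TABLE t DROP CONSTRAINT c1 CHECK]])
```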
-
Mergen Imeev authored
This patch prohibits DROP CONSTRAINT if more than one constraint matches a given name. Part of #9112 NO_DOC=will be added later NO_CHANGELOG=will be added later
-
Mergen Imeev authored
This patch introduces "ALTER TABLE table_name DROP CONSTRAINT field_name.constraint_name" which can be used to drop field constraints. Also, after this patch, field constraints cannot be dropped using "ALTER TABLE table_name DROP CONSTRAINT constraint_name;". Part of #9112 NO_DOC=will be added later NO_CHANGELOG=will be added later
-
Mergen Imeev authored
This patch replaces region_alloc() by xregion_alloc() in mp_vformat_on_region(). NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Nikolay Shirokovskiy authored
Similar to release_asan_clang, but to test the debug build. It is also run only under `asan-ci` and `full-ci` labels. The fiber stack size is 2 times bigger than in the release workflow for the luajit tests to pass. Note that this factor is a wild guess. Part of #7327 NO_TEST=ci NO_CHANGELOG=ci NO_DOC=ci
-
Nikolay Shirokovskiy authored
This test is quite flaky in the debug ASAN build. Let's fix it before turning debug ASAN on in CI. The issue is that under heavy load popen.read may return nil with a 'TimedOut: timed out' error. Just read again, as in the other cases of this test. Part of #7327 NO_CHANGELOG=internal NO_DOC=internal
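The retry pattern described above can be sketched like this (the command and the error-field access are illustrative; the exact error object layout is an assumption):

```lua
-- Sketch: under heavy load popen:read() may return nil with a
-- 'TimedOut' error; the fix is simply to read again.
local popen = require('popen')
local ph = popen.shell('echo hello', 'r')
local output
repeat
    local res, err = ph:read({timeout = 1})
    if res ~= nil then
        output = res
    elseif err.type ~= 'TimedOut' then
        -- Any other error is a real failure.
        error(err)
    end
until output ~= nil
ph:close()
```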
-
Nikolay Shirokovskiy authored
This currently blocks us from turning on debug ASAN CI. The ticket for the leakage is #9213. Part of #7327 NO_TEST=internal NO_CHANGELOG=internal NO_DOC=internal
-
Nikolay Shirokovskiy authored
Introducing ASAN-friendly small allocators slows down execution notably. As a result, several tests start to fail due to hitting the max slice limit. I guess we aren't interested in whether fibers in the ASAN build grab control for too long, as we have a release build run in CI anyway. Some tests set the max slice limit explicitly to some large value, thus overwriting the default infinite value for ASAN. Unfortunately, this large value is not large enough for ASAN. Let's set some really large value. Part of #7327 NO_CHANGELOG=internal NO_DOC=internal
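A minimal sketch of what such a test-side override looks like (the concrete slice value is arbitrary, chosen only to be "really large"):

```lua
-- Sketch: lift the max execution slice so that slow (e.g. ASAN)
-- builds don't hit the "fiber slice is exceeded" error.
local fiber = require('fiber')
-- Warn/err slice limits, in seconds; a deliberately huge value.
fiber.set_max_slice({warn = 6000, err = 6000})
```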
-
Oleg Jukovec authored
The patch returns the integration test run with `crud`. The test run was removed earlier [1] because `crud` did not support tests with Tarantool 3.0. But now it does [2]. 1. https://github.com/tarantool/tarantool/commit/7316d8165e80b3678b45fd1b42823a8f92b734f6 2. https://github.com/tarantool/crud/pull/381 NO_DOC=ci NO_TEST=ci NO_CHANGELOG=ci
-
Georgiy Lebedev authored
This first version is quite basic and only benchmarks random `get`s of existing keys and `select`s of all keys for a tree index (these benchmarks are needed for #6964) — its main goal is to provide a foundation (i.e., all the necessary initialization logic) for benchmarking memtx. Extending this benchmark using the provided memtx singleton and fixture should be fairly simple. The results of running this benchmark compiled with clang-16 on my Intel MacBook Pro (13-inch, 2020) laptop [1]:
NO_WRAP
georgiy.lebedev@georgiy-lebedev perf % ./memtx.perftest --benchmark_min_warmup_time=10 --benchmark_repetitions=10 --benchmark_report_aggregates_only=true --benchmark_display_aggregates_only=true
2023-10-02T12:59:36+03:00
Running ./memtx.perftest
Run on (8 X 2000 MHz CPU s)
CPU Caches:
  L1 Data 48 KiB
  L1 Instruction 32 KiB
  L2 Unified 512 KiB (x4)
  L3 Unified 6144 KiB
Load Average: 5.67, 10.05, 7.89
mapping 4398046511104 bytes for memtx tuple arena...
Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.090508
fiber has not yielded for more than 0.500 seconds
--------------------------------------------------------------------------------------------------------
Benchmark                                       Time        CPU   Iterations UserCounters...
--------------------------------------------------------------------------------------------------------
MemtxFixture/TreeGetRandomExistingKeys_mean     682 ns      667 ns        10 items_per_second=1.51504M/s
MemtxFixture/TreeGetRandomExistingKeys_median   704 ns      693 ns        10 items_per_second=1.44387M/s
MemtxFixture/TreeGetRandomExistingKeys_stddev   81.7 ns     72.7 ns       10 items_per_second=169.696k/s
MemtxFixture/TreeGetRandomExistingKeys_cv       11.97 %     10.90 %       10 items_per_second=11.20%
MemtxFixture/TreeGet1RandomExistingKey_mean     253 ns      241 ns        10 items_per_second=4.20104M/s
MemtxFixture/TreeGet1RandomExistingKey_median   233 ns      229 ns        10 items_per_second=4.36911M/s
MemtxFixture/TreeGet1RandomExistingKey_stddev   46.7 ns     29.7 ns       10 items_per_second=464.187k/s
MemtxFixture/TreeGet1RandomExistingKey_cv       18.43 %     12.34 %       10 items_per_second=11.05%
MemtxFixture/TreeSelectAll_mean                 4766728 ns  4705622 ns    10 items_per_second=27.978M/s
MemtxFixture/TreeSelectAll_median               4605936 ns  4580478 ns    10 items_per_second=28.6184M/s
MemtxFixture/TreeSelectAll_stddev               447495 ns   349499 ns     10 items_per_second=1.85573M/s
MemtxFixture/TreeSelectAll_cv                   9.39 %      7.43 %        10 items_per_second=6.63%
NO_WRAP
[1]: https://support.apple.com/kb/SP819?locale=en_US Needed for #6964 NO_CHANGELOG=benchmark NO_DOC=benchmark NO_TEST=benchmark
-
Georgiy Lebedev authored
The tuple format and access subsystems have static variables holding their states which don't get reset during cleanup: initialize them explicitly in `*_init` functions — that way we can re-initialize these subsystems multiple times (e.g., when setting up and tearing down benchmarks). Opted for initializing them in `*_init` functions rather than resetting them in `*_free` functions for logical consistency. Needed for #6964 NO_CHANGELOG=cleanup fix NO_DOC=cleanup fix NO_TEST=cleanup fix
-
Serge Petrenko authored
Force recovery first tries to collect all rows of a transaction into a single list, and only then applies those rows. The problem was that it collected rows based on the row replica_id. For local rows replica_id is set to 0, but actually such rows can be part of a transaction coming from any instance. Fix recovery of such rows. Follow-up #8746 Follow-up #7932 NO_DOC=bugfix NO_CHANGELOG=the broken behaviour couldn't be seen due to bug #8746
-
Serge Petrenko authored
In order to preserve transaction boundaries over replication, Tarantool writes a global NOP row after the last transaction row, if this row happens to be local. This is done to make sure that the is_commit flag, which is set only in the last transaction row, reaches the replica. This wouldn't happen if the last row was local. This workaround works fine for transactions completely authored by one instance: when both global and local rows come from operations of a single master. However, it's possible to append local rows to a remote master's transaction on a replica. For example, one can use on_replace triggers to write to replica's local space on each new transaction coming from master. In this case essentially a global NOP entry is added at the end of a remote master's transaction. This leads to several problems. First of all, this bumps replica's LSN, which is counter-intuitive, given that the replica might even be read-only. Besides, in a star topology this leads to master being unable to connect to the replica later on due to their vclocks becoming incompatible. Secondly, even if replication channel between master and replica is bidirectional, it creates a new row which should be replicated from replica to master, but at the same time is the last row of the master's transaction. Once master receives this row, it breaks its connection to replica due to transaction boundary violation (the last row of the transaction is received without its beginning). Adding a NOP row became extraneous since the previous commit, which made relay find transaction boundaries by itself. Closes #8958 NO_DOC=bugfix
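The replica-side scenario described above can be sketched in Lua (the space names and the trigger body are hypothetical; this only shows how a local write gets appended to a remote master's transaction):

```lua
-- Sketch: a replica appends a local-space write to every transaction
-- replicated from the master.
local audit = box.schema.create_space('audit', {is_local = true})
audit:create_index('pk')

box.space.data:on_replace(function(old, new)
    -- This runs inside the master's transaction on the replica.
    -- Before the fix, a local row at the end of a remote transaction
    -- forced a global NOP row and bumped the replica's LSN.
    audit:auto_increment{new}
end)
```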
-
Serge Petrenko authored
Some time ago we started writing transaction boundaries to WAL and respecting them in the replication stream: replicas wait for a full transaction receipt before applying it. However, during all these changes relay remained transaction-agnostic: it simply read single rows from WAL and sent them over to the receiver. This led to a handful of ugly crutches: for example, tsn is not always equal to the lsn of the first global row of the transaction: if the first row is local, tsn is deduced from the first global row of the transaction. Also a dummy NOP was appended to the end of a transaction ending with a local row, so that the is_commit flag wasn't lost by the replication. Let's make relay read a full transaction, filter out all the unnecessary rows, set the transaction boundaries accordingly and then send the transaction at once. Since in relay a single fiber sends data to the remote peer, there is no chance for a heartbeat to get in between rows of a single transaction: they're all sent at once. Hence the deletion of the corresponding guard `relay->is_sending_tx`. Prerequisite #8958 NO_DOC=internal change NO_CHANGELOG=internal change NO_TEST=covered by existing tests
-
Serge Petrenko authored
Transaction boundaries were not updated correctly for transactions in which local space writes were made from a replication trigger. Existing transaction boundaries and row flags from the master were written as is on the replica. Actually, the replica should recalculate transaction boundaries and even WAIT_SYNC/WAIT_ACK flags. Transaction boundaries should be recalculated when a replica appends a local write at the end of the master's transaction, and WAIT_SYNC/WAIT_ACK should be overwritten when nopifying synchronous transactions coming from an old term. The latter fix has uncovered a bug in skipping outdated synchronous transactions: if one replica replaces a transaction from an old term with NOPs and then passes that transaction to the other replica, the other replica raises a split brain error. It believes the NOPs are an async transaction from an old term. This worked before the fix, because the rows were written with the original WAIT_ACK = true bit. Now this is fixed properly: we allow fully NOP async transactions from the old term. Closes #8746 NO_DOC=bugfix NO_CHANGELOG=covered by the next commit
-
- Oct 06, 2023
-
-
Andrey Saranchin authored
The commit adds missing changelogs for tarantool.trigger.on_change and triggers that were moved to the trigger registry. The second changelog is especially important because it describes a breaking change of space triggers behavior. Follow-up #8664 Part of #8657 NO_TEST=changelog NO_DOC=later
-
- Oct 05, 2023
-
-
Nikolay Shirokovskiy authored
If a non-terminal symbol is referenced in C code, then the destructor for the expression is not called, so we don't need to duplicate it. Otherwise we get a memory leak. See https://www.sqlite.org/cgi/src/doc/trunk/doc/lemon.html#destructor Close #9159 NO_DOC=bugfix NO_TEST=tested by debug ASAN CI (to be turned on)
-
- Oct 03, 2023
-
-
Alexander Turenko authored
Part of https://github.com/tarantool/tarantool-ee/issues/564 NO_DOC=The documentation request is to be added as part of Tarantool Enterprise Edition patchset. NO_CHANGELOG=see NO_DOC NO_TEST=To be tested in Tarantool Enterprise Edition.
-
Alexander Turenko authored
The new 'supervised' failover mode uses an external failover agent to make decisions regarding leadership in a replicaset. This is a feature of Tarantool Enterprise Edition. This commit adds a new `replication.failover` value `supervised`, adds corresponding instance startup code and necessary configuration validation. The most interesting part is how to start all the instances in RO, but if the replicaset is not bootstrapped yet, start one instance in RW to perform the replicaset bootstrap. See comments in applier/box_cfg.lua for details. Part of https://github.com/tarantool/tarantool-ee/issues/564 NO_DOC=The documentation request is to be added as part of Tarantool Enterprise Edition patchset. NO_CHANGELOG=see NO_DOC NO_TEST=The overall logic of this mode is to be tested in Tarantool Enterprise Edition.
-
Nikolay Shirokovskiy authored
It is convenient to have a label to run ASAN CI without running full CI. NO_DOC=ci NO_TEST=ci NO_CHANGELOG=ci
-
Nikolay Shirokovskiy authored
It is inconvenient that test_downgrade_from_more_recent_version breaks if we create a tag for a new version and do not add the next version to the downgrade versions list. If the version is released we should add it to the list anyway, but that is not a matter for this test. Follow-up #9182 NO_DOC=internal NO_CHANGELOG=internal
-
Sergey Bronnikov authored
Performance tests added to the perf directory are not automated, and currently we run these tests manually from time to time. On the other hand, rarely used source code can suffer from software rot [1]. The patch adds a CMake target "test-perf" and a GitHub workflow that runs these tests in CI. The workflow is based on release.yml; it builds performance tests and runs them. 1. https://en.wikipedia.org/wiki/Software_rot NO_CHANGELOG=testing NO_DOC=testing NO_TEST=testing
-
Sergey Bronnikov authored
Note that targets for running performance tests are generated only when CMAKE_BUILD_TYPE is equal to Release or RelWithDebug. Additionally, C++ performance tests require the Google Benchmark library. Using a non-debug build with the Google Benchmark library installed is a rare case, so I suppose we don't need to introduce a CMake option for performance testing. NO_CHANGELOG=testing NO_DOC=testing NO_TEST=testing infrastructure
-
Sergey Bronnikov authored
The patch adds a target for each C performance test in the perf/ directory and a separate target "test-c-perf" that runs all C performance tests at once. NO_CHANGELOG=testing NO_DOC=testing NO_TEST=test infrastructure
-
Sergey Bronnikov authored
The patch adds a target for each Lua performance test in the perf/lua/ directory (1mops_write_perftest, box_select_perftest, uri_escape_unescape_perftest) and a separate target "test-lua-perf" that runs all Lua performance tests at once. NO_CHANGELOG=testing NO_DOC=testing NO_TEST=test infrastructure
-
- Oct 02, 2023
-
-
Nikolay Shirokovskiy authored
In this case we don't know how to downgrade correctly. Close #9182 NO_DOC=bugfix
-
- Sep 29, 2023
-
-
Serge Petrenko authored
mp_compare_decimal_any_number erroneously assumed that any float or double from which a decimal can't be created is either infinite or NaN. This is not true. Any float greater than 1e38 can't fit into our decimal representation. When such a float got compared to a decimal, an assertion fired, which was wrong. Luckily, on a release build the comparison was correct; only the assertion was wrong. Fix it. Closes #8472 NO_DOC=bugfix
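The edge case can be sketched from Lua (this is an illustration of the value range involved, not the exact failing assertion, which lived in the C comparator):

```lua
-- Sketch: a double above ~1e38 is outside Tarantool's decimal range,
-- yet it is a perfectly finite number - neither infinity nor NaN.
local decimal = require('decimal')
local d = decimal.new('1')
local huge = 1e40  -- finite, but cannot be converted to a decimal
-- Comparing such values (e.g. inside a 'number'-typed index holding
-- both decimals and doubles) used to hit the wrong assertion in debug
-- builds; the comparison result itself was already correct.
assert(d < huge)
```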
-
Magomed Kostoev authored
Since the sort order check is not required for keys without descending parts, it was decided to move the check to a template parameter, so that keys that have no descending parts don't pay for the sort order check on each comparison. Created proxy functions with an increasing number of template parameters to make the comparator selection code less branchy. `key_def_set_compare_func_for_func_index` has been renamed to `key_def_set_compare_func_of_func_index` in order to make the proxy call fit in 80 columns. NO_DOC=see previous commits NO_TEST=see previous commits NO_CHANGELOG=see previous commits
-
Magomed Kostoev authored
NO_CHANGELOG=see the previous commit @TarantoolBot document Title: C API update - added support for key part sort order. Looks like the `box_key_part_def_t` type isn't documented, but `BOX_KEY_PART_DEF_SORT_ORDER` bit field was added to its `flags` member. The possible values are: `BOX_KEY_PART_DEF_SORT_ORDER_ASC`: default sort order. `BOX_KEY_PART_DEF_SORT_ORDER_DESC`: reversed sort order.
-
Magomed Kostoev authored
* Add ability to specify sort_order when creating a key def via lua key_def API. * Update key_def.totable to also emit the sort order. NO_CHANGELOG=see the previous commit @TarantoolBot document Title: key_def API update - added support for key part sort order. The part sort order is specified the same way as in index creation: ``` key_def.new({{1, 'unsigned', sort_order = 'desc'}}) ``` The sort order is now taken into account in comparison functions. Affected documentation: [link](https://www.tarantool.io/en/doc/latest/reference/reference_lua/key_def/)
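A small usage sketch built on the snippet above (the compared tuples are illustrative):

```lua
-- Sketch: a descending part inverts comparison results.
local key_def = require('key_def')
local kd = key_def.new({{1, 'unsigned', sort_order = 'desc'}})
-- With a descending first part, {2} now orders before {1},
-- so the comparison result is positive rather than negative:
assert(kd:compare({1}, {2}) > 0)
-- totable() also reports the sort order of each part:
assert(kd:totable()[1].sort_order == 'desc')
```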
-
Magomed Kostoev authored
The `sort_order` parameter was introduced earlier but had no effect until now. Now it allows specifying a sort (iteration) order for each key part. The parameter is only applicable to ordered indexes, so any value except 'undef' for the `sort_order` is disallowed for all indexes except TREE. The 'undef' value of the `sort_order` field of the `key_part_def` is translated to 'asc' on `key_part` creation. In order to make the key def aware of whether its index is unordered, the signature of `key_def_new` has been changed: the `for_func_index` parameter has been moved to the new `flags` parameter and an `is_unordered` flag has been introduced. Alternative iterator names have been introduced (which are aliases to regular iterators): box.index.FORWARD_[INCLUSIVE/EXCLUSIVE], box.index.REVERSE_[INCLUSIVE/EXCLUSIVE]. By the way, fixed the `key_hint_stub` overload name, which was supposed to be called `tuple_hint_stub`. The `tuple_hint` and `key_hint` template declarations have been changed because of checkpatch diagnostics. Closes #5529 @TarantoolBot document Title: Now it's possible to specify the sort order of each index part. Sort order specifies the way indexes iterate over tuples with different fields in the same part. It can be either ascending (which is the case by default) or descending. Tuples with different ascending parts are sorted in indexes from lesser to greater, whereas tuples with different descending parts are sorted in the opposite order: from greater to lesser. 
Given example: ```lua box.cfg{} s = box.schema.create_space('tester') pk = s:create_index('pk', {parts = { {1, 'unsigned', sort_order = 'desc'}, {2, 'unsigned', sort_order = 'asc'}, {3, 'unsigned', sort_order = 'desc'}, }}) s:insert({1, 1, 1}) s:insert({1, 1, 2}) s:insert({1, 2, 1}) s:insert({1, 2, 2}) s:insert({2, 1, 1}) s:insert({2, 1, 2}) s:insert({2, 2, 1}) s:insert({2, 2, 2}) s:insert({3, 1, 1}) s:insert({3, 1, 2}) s:insert({3, 2, 1}) s:insert({3, 2, 2}) ``` In this case field 1 and 3 are descending, whereas field 2 is ascending. So `s:select()` will return this result: ```yaml --- - [3, 1, 2] - [3, 1, 1] - [3, 2, 2] - [3, 2, 1] - [2, 1, 2] - [2, 1, 1] - [2, 2, 2] - [2, 2, 1] - [1, 1, 2] - [1, 1, 1] - [1, 2, 2] - [1, 2, 1] ... ``` Beware, that when using other sort order than 'asc' for any field 'GE', 'GT', 'LE' and 'LT' iterator lose their meaning and specify 'forward inclusive', 'forward exclusive', 'reverse inclusive' and 'reverse exclusive' iteration direction respectively. Given example above, `s:select({2}, {iterator = 'GT'})` will return this: ```yaml --- - [1, 1, 2] - [1, 1, 1] - [1, 2, 2] - [1, 2, 1] ... ``` And `s:select({1}, {iterator = 'LT'})` will give us: ```yaml --- - [2, 2, 1] - [2, 2, 2] - [2, 1, 1] - [2, 1, 2] - [3, 2, 1] - [3, 2, 2] - [3, 1, 1] - [3, 1, 2] ... ``` In order to be more clear alternative iterator aliases can be used: 'FORWARD_INCLUSIVE', 'FORWARD_EXCLUSIVE', 'REVERSE_INCLUSIVE', 'REVERSE_EXCLUSIVE': ``` > s:select({1}, {iterator = 'REVERSE_EXCLUSIVE'}) --- - [2, 2, 1] - [2, 2, 2] - [2, 1, 1] - [2, 1, 2] - [3, 2, 1] - [3, 2, 2] - [3, 1, 1] - [3, 1, 2] ... ```
-
Magomed Kostoev authored
Currently there's a huge amount of code duplication in the comparators. In order to simplify further development without affecting performance, field comparison was moved to a separate templated function. NO_TEST=refactoring NO_DOC=refactoring NO_CHANGELOG=refactoring
-
Magomed Kostoev authored
The tests are required to perform safe refactoring of comparators. Covered comparison functions are: - `tuple_compare_sequential`: all valid `is_nullable` and `has_optional_parts` combinations. - `tuple_compare_with_key_sequential`: all valid `is_nullable` and `has_optional_parts` combinations. - `tuple_compare_slowpath`: all valid `is_nullable` and `has_optional_parts` combinations. `has_json_paths` and `is_multikey` options are not covered since they don't affect the comparison logic. - `tuple_compare_with_key_slowpath`: all valid `is_nullable` and `has_optional_parts` combinations. See the point above about other options. - `key_compare`: both `is_nullable` option variants. NO_DOC=new tests NO_CHANGELOG=new tests
-
Magomed Kostoev authored
Currently the test is written in C. To simplify the following test update it's required to switch the test to the C++ language. NO_DOC=refactoring NO_CHANGELOG=refactoring
-
Mergen Imeev authored
After commit 9b2b3e58 ("box: apply dynamic cfg even if option value is unchanged"), running box.cfg{} with any parameters logs the values of those options, even if the option value has not changed. This is quite awkward for config as it gives a lot of options to box.cfg{}, although many of them may have the old value. This patch causes box.cfg{} to log only those options whose values have changed. Closes #9195 NO_DOC=No need. NO_CHANGELOG=No need since the mentioned commit was not released yet.
-
Andrey Saranchin authored
A new event named 'tarantool.trigger.on_change' is called when any event is modified (trigger.set or trigger.del). All the handlers are called with one argument - the name of the modified event. The returned value of each handler is ignored. Handlers are fired after the event is changed (the event contains the inserted trigger, if any, and does not contain the deleted one, if any). All thrown errors are logged with the error level and do not stop the execution of triggers. Closes #8664 NO_CHANGELOG=later NO_DOC=later
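A usage sketch of the meta-event (the handler name 'log_changes' and its body are hypothetical):

```lua
-- Sketch: react to any event modification via the new meta-event.
local trigger = require('trigger')
local log = require('log')

trigger.set('tarantool.trigger.on_change', 'log_changes',
    function(event_name)
        -- Fired after the change; the return value is ignored, and a
        -- thrown error is logged without stopping the other handlers.
        log.info('event %s was modified', event_name)
    end)
```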
-
Andrey Saranchin authored
The trigger module should be cleaned up after each test. It didn't cause any trouble because each test used a unique event name for its purposes. Let's remove all registered triggers after each test for the sake of maintainability. Also, move the tests to a separate server instance - currently the tests use the tarantool instance which is the test runner, so they may affect other tests during a parallel launch. NO_CHANGELOG=test NO_DOC=test
-
Andrey Saranchin authored
The patch moves space triggers to trigger registry. Triggers for each space are stored in two events: event, associated with space by name, and event, associated with space by id. For example, if we have a space named 'test' with id = 512, its on_replace trigger will be stored in 'box.space[512].on_replace' and 'box.space.test.on_replace' events. When space triggers are fired, triggers, associated by id, are called first. Since the triggers are moved to trigger registry, space trigger API is slightly changed - it is populated with optional third argument (trigger name) and its key-value variant. One of the main advantages of using trigger registry is setting triggers before box.cfg{}, but it would not be used for before_replace triggers since they can change tuples on recovery, and most of the time the user does not need it. To solve this problem, we decided to disable space triggers on recovery. However, ability to change tuples during recovery is necessary (for example, to be able to override engine on replica), that is why the patch introduces recovery triggers for spaces - they are fired only on recovery. Similar to regular triggers, recovery triggers for each space are stored in two events. Recovery version of before_replace triggers for space 'test' with id 512 are stored in 'box.space[512].before_recovery_replace' and 'box.space.test.before_recovery_replace'. Triggers on_replace have their recovery version as well. Recovery triggers receive 2 arguments in addition to its regular versions - xrow header and xrow body of type MsgPack object. Both MsgPacks are maps with integer keys. Since regular and recovery triggers cannot be used at the same time, we store them at the same pointer in space - it refers to recovery triggers on recovery and to regular ones when recovery is over. To update the triggers, helper space_on_final_recovery_complete is used. 
However, it is not fired after bootstrap of a new master, and triggers of all spaces created during bootstrap would not be updated - a new helper space_on_bootstrap_complete is introduced. There is another, more serious, breakage of backward compatibility. All triggers were stored in the space - it means that after the space is renamed it still has all its triggers, and when the space is dropped, all the triggers are removed. This patch moves triggers of a space to the trigger registry, which is just a key-value storage of triggers. That is why the space does not own its triggers anymore - it just takes triggers from the trigger registry. So, when the space is renamed, all the triggers associated by id are still associated with the space, but not the triggers associated by name - the space will fire triggers from an event associated with the space by its new name. In order to relieve the pain of broken compatibility, all the triggers which are set with the old API are set to an event associated by id. Also, when one sets a space trigger with the old API, it is set to both the regular version of the trigger and the recovery one if recovery has not been finished yet. For example, s:before_replace(foo) will set a trigger to both 'box.space[512].before_replace' and 'box.space[512].before_recovery_replace' events before recovery is finished, and to the first event only after recovery. Along the way, fixed an assertion that failed when before_replace changed the new tuple on recovery from a snapshot. Also, the message of error ER_BEFORE_REPLACE_RET is changed. The type of the value returned by before_replace triggers was in the message before, but new triggers are called with a func_adapter, which does not allow getting the type of the returned value, so the type was removed from the message. Part of #8657 Closes #8859 Closes #9127 NO_CHANGELOG=later NO_DOC=later
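A sketch of the extended space trigger API described above (the handler bodies and the trigger name are hypothetical, and the exact key-value field names are an assumption):

```lua
-- Sketch: setting a named space trigger with the extended API.
local s = box.schema.create_space('test')
s:create_index('pk')

-- Positional form: new handler, old handler, optional trigger name.
s:on_replace(function(old, new) end, nil, 'my_trigger')

-- Key-value form of the same call.
s:on_replace({func = function(old, new) end, name = 'my_trigger'})
```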
-