- May 23, 2024
-
-
Nikolay Shirokovskiy authored
```
/home/shiny/dev/tarantool/src/lib/core/coio_task.c:114:58: error: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
  114 | struct cord *cord = (struct cord *)calloc(sizeof(struct cord), 1);
```

NO_TEST=build fix
NO_CHANGELOG=build fix
NO_DOC=build fix
-
- May 22, 2024
-
-
Andrey Saranchin authored
When upgrading a space, the `has_optional_parts` attribute of its indexes can change. So, in order to correctly index both old and new tuples, we should set the new min_field_count value to the minimum of the old and new formats' min_field_count. The actual value will be set when the space upgrade completes.

Part of tarantool/tarantool-ee#698
Part of tarantool/tarantool-ee#750

NO_TEST=in ee
NO_CHANGELOG=in ee
NO_DOC=bugfix
-
Aleksandr Lyapunov authored
Implement 'np' (next prefix) and 'pp' (previous prefix) iterators. They work only in the memtx tree index and, in a nutshell, search for tuples whose string prefix of the same length as the given key is greater ('np') or less ('pp') than that key.

Closes #9994

@TarantoolBot document
Title: 'np' and 'pp' (next/previous prefix) iterators

Now there are two more iterators available: 'np' (next prefix) and 'pp' (previous prefix). They work only in the memtx tree index. Also, if the last part of the key is not a string, they degrade to the 'gt' and 'lt' iterators.

These iterators introduce a special comparison of the last part of the key (if it is a string). In terms of Lua, if s is the search part and t is the corresponding tuple part, the 'np' iterator searches for the first tuple with string.sub(t, 1, #s) > s, while 'pp' searches for the last tuple with string.sub(t, 1, #s) < s. Comparison of all other parts of the key remains normal.

As usual, these iterators are available both in select and pairs, in index and space methods. Similar to all other tree iterators, they change only the initial search of the selection. Once the first tuple is found, the rest are selected sequentially in direct (for 'np') or reverse (for 'pp') order of the index.

For example:
```
tarantool> s:select{}
---
- - ['a']
  - ['aa']
  - ['ab']
  - ['b']
  - ['ba']
  - ['bb']
  - ['c']
  - ['ca']
  - ['cb']
...

tarantool> s:select({'b'}, {iterator = 'np'})
---
- - ['c']
  - ['ca']
  - ['cb']
...

tarantool> s:select({'b'}, {iterator = 'pp'})
---
- - ['ab']
  - ['aa']
  - ['a']
...
```
-
- May 21, 2024
-
-
Serge Petrenko authored
`wal_queue_max_size` took effect only after the initial box.cfg call, meaning that users with a non-zero `replication_sync_timeout` still synced using the default 16 Mb queue size. In some cases the default was too big and the same issues described in #5536 arose. Fix this.

Closes #10013

NO_DOC=bugfix
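For illustration, a minimal replica configuration sketch (the URI, credentials, and size value are assumptions, not from the commit): with the fix, the queue limit applies already during the initial `box.cfg` call, i.e. while the replica syncs, not only afterwards.

```lua
-- Hypothetical replica config: the queue size now limits WAL writes
-- during the initial sync as well.
box.cfg{
    listen = 3302,                                      -- illustrative
    replication = {'replicator:secret@127.0.0.1:3301'}, -- illustrative upstream
    replication_sync_timeout = 300,                     -- non-zero: wait for sync
    wal_queue_max_size = 1 * 1024 * 1024,               -- 1 MB instead of the 16 MB default
}
```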
-
- May 20, 2024
-
-
Vladimir Davydov authored
The code setting ER_TUPLE_FOUND uses index_name_by_id() to find the index name, but it passes an index in the dense index map to it while the function expects an index in the sparse index map. Apparently, this doesn't work as expected after an index is removed from the middle of the index map. This bug was introduced by commit fc3834c0 ("vinyl: check key uniqueness before modifying tx write set").

Instead of just fixing the index passed to index_name_by_id(), we do a bit of refactoring. We stop passing index_name and space_name to vy_check_is_unique_*() functions and instead get them right before raising ER_TUPLE_FOUND. Note, to get the space name, we need to call space_by_id() but it should be fine because (a) the space is very likely to be cached as the last accessed one and (b) this is an error path so it isn't performance critical. We also drop index_name_by_id() and extract the index name from the LSM tree object.

Closes #5975

NO_DOC=bug fix
-
Vladimir Davydov authored
Like UPDATE, UPSERT must not modify primary key parts. Unlike UPDATE, such an invalid UPSERT statement doesn't fail (raise an error) - we just log the error and ignore the statement. The problem is, we don't clear txn_stmt. As a result, if we're currently building a new index, the on_replace trigger installed by the build procedure will try to process this statement, triggering the assertion in the transaction manager that doesn't expect any statements in a secondary index without the corresponding statement in the primary index:

  ./src/box/vy_tx.c:728: vy_tx_prepare: Assertion `lsm->space_id == current_space_id' failed.

Let's fix this by clearing the txn_stmt corresponding to a skipped UPSERT. Note, this also means that on_replace triggers installed by the user won't run on an invalid UPSERT (hence the test/vinyl/on_replace.result update), but this is consistent with the memtx engine, which doesn't run them in this case, either.

Closes #10026

NO_DOC=bug fix
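As an illustration of the user-visible behavior described above, a minimal vinyl sketch (the space name and data are assumptions): an UPSERT whose operation targets a primary-key part is logged and skipped instead of raising an error.

```lua
-- Hypothetical space; illustrative data.
local s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk', {parts = {1, 'unsigned'}})
s:replace{1, 'a'}

-- Invalid UPSERT: the '=' operation targets field 1, a primary-key part.
-- No error is raised; the statement is logged and ignored.
s:upsert({1, 'b'}, {{'=', 1, 2}})

s:select{} -- still [[1, 'a']]
```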
-
- May 17, 2024
-
-
Vladislav Shpilevoy authored
Not only for own txns, but also for the txns authored by other instances.

Note that the lag isn't updated when the replica gets new txns from another master. The lag still only reflects the replication between this relay and its specific applier.

The motivation is that otherwise the lag sometimes shows irrelevant things, like that the replica is very outdated, while it keeps replicating just fine - just not the txns of this specific master, which might even have turned into a replica itself already.

Closes #9748

NO_DOC=bugfix
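The lag in question is the downstream lag reported on the master in `box.info.replication` (a minimal sketch; the replica id below is illustrative):

```lua
-- On the master: downstream lag toward the replica with id 2.
box.info.replication[2].downstream.lag
```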
-
Vladislav Shpilevoy authored
From the code it isn't obvious, but relay->status_msg.vclock and relay->last_recv_ack.vclock are both coming from the applier. Status_msg is the previous ack, last_recv_ack is the latest ack. They can never go down. And they are not affected in any way by the master committing its own transactions. I.e. the master can commit something, relay->r->vclock (the recovery cursor) will go up, and the recovery vclock might become incomparable with the last ACK vclock. But the prev and last ACK vclocks are always comparable and always go up.

This invariant was broken though, because the relay on restart didn't nullify the current applier status (status_msg). It could break if the replica would lose its xlog files or its ID would be taken by another instance - then its vclock would go down, making last_recv_ack.vclock < status_msg.vclock. But that is not right and is fixed in this patch.

In scope of #9748

NO_DOC=bugfix
NO_TEST=test 5158 already covers it
NO_CHANGELOG=bugfix
-
Vladislav Shpilevoy authored
To reduce the insane indentation level. And to better isolate the further changes in the next commits.

Part of #9748

NO_DOC=refactoring
NO_TEST=refactoring
NO_CHANGELOG=refactoring
-
Vladislav Shpilevoy authored
Before the patch, if the applier was reconnected, the master would see a downstream lag equal to the time since it replicated the last txn to this applier. This happened because the applier kept the txn timestamp used for acks between reconnects. On the master's side the relay was recreated, received the ack, thought the applier had just applied this txn, and displayed this as a lag.

The test restarts the master because this is the easiest way to reproduce the issue. Most importantly, the applier shouldn't be re-created, while the relay should restart.

Part of #9748

NO_DOC=bugfix
NO_CHANGELOG=later
-
Vladislav Shpilevoy authored
It was stored in struct replica, now it is in struct applier. The motivation is that applier-specific data must be inside the applier. It also makes the next commits look more logical: they are going to change this timestamp when the applier progresses through its state machine. It looks strange when the applier changes the replica object. Replica is on an upper level in the hierarchy. It owns the applier, and the applier ideally mustn't know about struct replica (hardly possible to achieve), or at least not change it (this is feasible).

In scope of #9748

NO_DOC=internal
NO_TEST=refactoring
NO_CHANGELOG=refactoring
-
Vladimir Davydov authored
A unique nullable key definition extended with primary key parts (cmp_def) assumes that two tuples are equal *without* comparing primary key fields if all secondary key fields are equal and not nulls, see tuple_compare_slowpath(). This is a hack required to ignore the uniqueness constraint for nulls in memtx. The memtx engine can't use the secondary key definition as is (key_def) for comparing tuples in the index tree, as it does for a non-nullable unique index, because this wouldn't allow insertion of any duplicates, including nulls. It couldn't use cmp_def without this hack, either, because then conflicting tuples with the same secondary key fields would always compare as not equal due to different primary key parts.

For Vinyl, this hack isn't required because it explicitly skips the uniqueness check if any of the indexed fields are nulls, see vy_check_is_unique_secondary(). Furthermore, this hack is harmful because Vinyl relies on the fact that two tuples compare as equal by cmp_def if and only if *all* key fields (both secondary and primary) are equal. For example, this is used in the transaction manager, which overwrites statements equal by cmp_def, see vy_tx_set_entry().

Let's disable this hack by resetting unique_part_count in cmp_def.

Closes #9769

NO_DOC=bug fix
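For context, a minimal sketch of the uniqueness-for-nulls behavior discussed above (the space and field layout are assumptions): a unique nullable secondary index admits any number of tuples whose indexed field is null.

```lua
-- Hypothetical vinyl space with a unique nullable secondary index.
local s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk', {parts = {1, 'unsigned'}})
s:create_index('sk', {unique = true,
                      parts = {{2, 'unsigned', is_nullable = true}}})

s:replace{1, box.NULL} -- ok
s:replace{2, box.NULL} -- also ok: nulls don't violate uniqueness
s:replace{3, 10}
s:replace{4, 10}       -- error: duplicate key in unique index 'sk'
```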
-
Alexander Turenko authored
Fixes #9986

@TarantoolBot document
Title: Interactive console now autorequires a couple of built-in modules

There are built-in modules that are frequently used for administration or debugging purposes. It is convenient to have them accessible in the interactive console without extra actions.

They're accessible now without a manual `require` call if the `console_session_scope_vars` compat option is set to `new` (see also tarantool/doc#4191).

The list of the autorequired modules is below.

* clock
* compat
* config
* datetime
* decimal
* ffi
* fiber
* fio
* fun
* json
* log
* msgpack
* popen
* uuid
* varbinary
* yaml

See tarantool/tarantool#9986 for motivation behind this feature.

This list forms the so-called initial environment for an interactive console session. The default initial environment may be adjusted by an application, for example, to include application-specific administrator functions. Two public functions are added for this purpose: `console.initial_env()` and `console.set_initial_env(env)`.

Example 1 (keep autorequired modules, but add one more variable):

```lua
local console = require('console')

-- Add myapp_info function.
local initial_env = console.initial_env()
initial_env.myapp_info = function() <...> end
```

Example 2 (replace the whole initial environment):

```lua
local console = require('console')

-- Add myapp_info function, discard the autorequired modules.
console.set_initial_env({
    myapp_info = function() <...> end,
})
```

The `console.set_initial_env()` call without an argument or with a `nil` argument drops the initial environment to its default.

A modification of the initial environment doesn't affect existing console sessions. It affects console sessions that are created after the modification.

Please, adjust the `console_session_scope_vars` compat option description and extend the built-in `console` module reference with the new functions.
-
Alexander Turenko authored
I need to capture the module table inside a module function in a next commit.

NO_DOC=refactoring, no behavior changes
NO_CHANGELOG=see NO_DOC
NO_TEST=see NO_DOC
-
Ilya Verbin authored
It was intended to do 1000 inserts per transaction, but by mistake box.commit() was called after each insertion.

NO_DOC=perf test
NO_TEST=perf test
NO_CHANGELOG=perf test
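The intended pattern, as a minimal sketch (the space name and counts are illustrative): one commit per batch of 1000 inserts rather than one commit per insert.

```lua
-- Illustrative batching sketch.
local batch = 1000
for i = 1, 100000, batch do
    box.begin()
    for j = i, i + batch - 1 do
        box.space.test:replace{j} -- 'test' is a hypothetical space
    end
    box.commit()
end
```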
-
- May 16, 2024
-
-
Vladimir Davydov authored
Between picking an LSM tree from a heap and taking a reference to it in vy_task_new() there are a few places where the scheduler may yield:
 - in vy_worker_pool_get() to start a worker pool;
 - in vy_task_dump_new() to wait for a memory tree to be unpinned;
 - in vy_task_compaction_new() to commit an entry to the metadata log after splitting or coalescing a range.

If a concurrent fiber drops and deletes the LSM tree in the meanwhile, the scheduler will crash. To avoid that, let's take a reference to the LSM tree.

It's quite difficult to write a functional test for it without a bunch of ugly error injections so we rely on fuzzing tests.

Closes #9995

NO_DOC=bug fix
NO_TEST=fuzzing
-
- May 14, 2024
-
-
Alexander Turenko authored
Fixes #9985

@TarantoolBot document
Title: Interactive console now has its own per-session variables scope

It is counter-intuitive that all the non-local assignments in the console affect globals and may interfere with the application's logic. It is also counter-intuitive that the non-local assignments are shared between different console sessions.

Now, each console session has its own variables scope and the non-local assignments use it instead of globals.

Let's consider examples of the new behavior.

Example 1. A console session has a variable scope that is separate from globals.

```lua
console_1> _G.x = 1
console_1> x = 2
console_1> _G.x
---
- 1
...
console_1> x
---
- 2
...
```

Note: A global variable is still accessible using `_G` even if a session scope variable with the same name exists.

Example 2. A global variable is read if there is no session local variable.

```lua
console_1> _G.x = 1
console_1> x
---
- 1
...
```

Example 3. Different console sessions have separate variable scopes.

```lua
console_1> x = 1
console_2> x = 2
console_1> x
---
- 1
...
console_2> x
---
- 2
...
```

The new behavior is enabled using the `console_session_scope_vars` compat option. The option is `old` by default in Tarantool 3.X and `new` by default in 4.X. The `old` behavior is to be removed in 5.X.

Please, create the following page: https://tarantool.io/compat/console_session_scope_vars

Please, add the new compat option into the configuration reference.
-
Alexander Turenko authored
It encapsulates all the needed actions to connect to a remote console using a Unix socket.

Part of #9985

NO_DOC=testing helper change
NO_CHANGELOG=see NO_DOC
-
Alexander Turenko authored
See #7169 for details about the hide/show prompt feature. In short, it hides readline's prompt before `print()` or `log.<level>()` calls and restores the prompt afterwards.

This feature sometimes badly interferes with `test.interactive_tarantool` heuristics about readline's command echoing. This commit disables the feature in `test.interactive_tarantool` by default and enables it explicitly where needed.

Part of #9985

NO_DOC=testing helper change
NO_CHANGELOG=see NO_DOC
-
Alexander Turenko authored
Before this patch the `:roundtrip()` method in the `test.interactive_tarantool` instance considered the following calls as equivalent:

```lua
g.it = it.new()

-- Doesn't check the response.
g.it:roundtrip('x')

-- Before the patch it was the same as above.
--
-- Now it checks that the response is nil.
local expected = nil
g.it:roundtrip('x', expected)

-- It is the same as previous.
g.it:roundtrip('x', nil)
```

Now the response is checked against the provided expected value if the value is passed in the arguments, even if it is `nil`.

Also, a command's response is now returned from the method. It may be useful if the response returns some dynamic information (such as a TCP port number or a file descriptor) that is used later in the test or if the response should be verified in some non-trivial way, not just a deep compare.

The `:roundtrip()` method is just `:execute_command()` plus `:read_response()` plus `luatest.assert_equals()`. However, I like using `:roundtrip()` even when the assertion is not needed, because it is shorter and because using the same method brings less context to a reader. For example,

```lua
g.it = it.new()
g.it:roundtrip('x = 2')
g.it:roundtrip('y = 3')
g.it:roundtrip('x + y', 6)
```

Part of #9985

NO_DOC=testing helper change
NO_CHANGELOG=see NO_DOC
NO_TEST=see NO_DOC
-
- May 07, 2024
-
-
Georgiy Lebedev authored
Let's pass the source tuple received as the argument to DML to the `default_func` field option of `space:format` to give users more versatility and the opportunity to compute the field value using other fields from the source tuple.

For the tuple argument, we create a tuple rather than pass a MsgPack object for consistency with our other box APIs, even though it is suboptimal in terms of performance. We create the tuple argument with the empty default format; however, in the future it is possible to create it with a `names_only=true` format so that the source tuple can have the space format's data dictionary.

We create the tuple argument from the source tuple data, which implies the following: (i) fields may not adhere to the space format; (ii) nil fields are always nil (i.e., the `default` value and the `default_func` are not used). (i) is because we can only validate the tuple after we finish building it. (ii) is because trying to use the `default` value and the `default_func` would have field build ordering ambiguity and would hurt performance of field building.

Closes #9825

@TarantoolBot document
Title: Tuple argument of `default_func` field option of `space:format`
Product: Tarantool
Since: 3.2
Root documents: https://www.tarantool.io/en/doc/latest/reference/reference_lua/box_space/format/

The source tuple (i.e., the argument of DML) is now passed as a second argument to the `default_func` field option of `space:format`. See also tarantool/tarantool#9825 and [PRD](https://www.notion.so/tarantool/Pass-tuple-as-argument-to-field-s-default_func-8785637fb79f43e4b8ca729e75fc4582).

Please note that the tuple argument is created from the source tuple data, which implies the following: (i) fields may not adhere to the space format; (ii) nil fields are always nil (i.e., the `default` value and the `default_func` are not used).
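A minimal usage sketch (the function body, function and space names, and field layout are assumptions; the second argument - the source tuple - is what this commit adds, while the first argument follows the pre-existing `default_func` contract):

```lua
-- Hypothetical persistent function computing a default from other
-- fields of the source tuple passed to the DML request.
box.schema.func.create('full_name_default', {
    language = 'LUA',
    body = [[function(arg, tuple)
        -- 'tuple' is built from the raw source data: fields may not
        -- adhere to the space format, and nil fields stay nil.
        return (tuple[2] or '') .. ' ' .. (tuple[3] or '')
    end]],
})

box.space.users:format({
    {name = 'id', type = 'unsigned'},
    {name = 'first', type = 'string'},
    {name = 'last', type = 'string'},
    {name = 'full_name', type = 'string', default_func = 'full_name_default'},
})

-- Field 4 is nil, so the default function is called with the source tuple.
box.space.users:insert{1, 'Ada', 'Lovelace'}
```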
-
- May 02, 2024
-
-
Magomed Kostoev authored
Fix the clang warning in the BPS tree with logarithmic offset support.

Closes #9987

NO_DOC=no functional changes
NO_TEST=no functional changes
NO_CHANGELOG=no functional changes
-
- Apr 27, 2024
-
-
Vladislav Shpilevoy authored
When a replica subscribes, it might in the beginning try to position its reader cursor to the end of a large xlog file. Positioning inside this file can take significant time, during which the WAL reader yields and tries to send heartbeats, but can't, because the relay thread isn't communicating with the TX thread. When there are no messages from TX for too long, the heartbeats to the replica are not sent (commit 56571d83 ("raft: make followers notice leader hang")).

The relay must communicate with the TX thread even when the subscribe is just being started and opens a large xlog file.

This isn't the first time the missing heartbeats result in timeouts. See more here:
- commit 30ad4a55 ("relay: yield explicitly every N sent rows");
- commit 17289440 ("recovery: make it yield when positioning in a WAL");
- commit ee6de025 ("relay: send heartbeats while reading a WAL").

Given that this is fixed for the fourth time, it might suggest that the relay's architecture is not the best and has some drawbacks. See more in #9968.

Closes #9094

NO_DOC=bugfix
-
Magomed Kostoev authored
Since the performance benchmarks for three additional flavors of the BPS tree were introduced, the number of tests in this suite has increased to 228. Given that some tests work with datasets of 10M entries, the amount of time required to run them has increased significantly. Mitigate this by reducing the test datasets.

NO_DOC=perf test
NO_TEST=perf test
NO_CHANGELOG=perf test
-
Magomed Kostoev authored
This adds three new configs to be tested in the benchmarks: a tree with child cardinalities enabled, one with inner cardinality enabled, and one with both of these.

Along the way, the performance analysis is simplified by reducing the memory allocation overhead (the memory is not required to be zero-initialized) and by moving the test tree build into a separate function.

NO_DOC=perf test
NO_TEST=perf test
NO_CHANGELOG=perf test
-
Magomed Kostoev authored
The current tree does not allow finding the offset of an element or creating an iterator to an element based on its offset. This patch fixes that by expanding the data structure with additional information and introducing methods using it: subtree cardinalities.

A subtree cardinality is the amount of elements in it. For example, the cardinality of a leaf block is the count of elements in it (effectively it equals leaf.header.size), and the cardinality of an inner block is the sum of the cardinalities of its children.

The patch includes two choosable ways to store this information: `BPS_INNER_CARD` and `BPS_INNER_CHILD_CARDS`.

The first implementation stores the block cardinality in each inner block. This implementation has minimal memory overhead (it just introduces a new 64-bit field in `struct bps_inner`), but calculation of offsets is not that fast, since in order to find the offset of a particular child of an inner node we have to look into each of its children prior to the looked-up one.

The second one stores an array of children cardinalities in inner blocks. The memory overhead of this implementation is visible since it significantly decreases the children capacity of inner blocks: the max count in an inner block is decreased from 42 to 25 for a tree of 8-byte elements with 512-byte blocks and from 25 to 18 for a tree of 16-byte elements with 512-byte blocks. Offset calculations are faster though.

It's possible (though impractical) to enable both solutions; the tree will use the best way to perform offset-based tasks, but will have to maintain both the children cardinality array and the inner blocks' own cardinalities.

Along with the theoretical support, this patch introduces a bunch of functions using it:
- `iterator_at(t, offset)`: gives an iterator to an element of a tree or tree view by its offset;
- `find_get_offset(t, key, offset_ptr)`: the same as `find`, but also provides the offset of the found element in the output parameter;
- `[lower|upper]_bound[_elem]_get_offset(t, key, exact, offset_ptr)`: the same as the upper/lower bound functions, but provide the offset of the found position (end of the tree included);
- `insert_get_offset(t, new_elem, replaced, offset_ptr)`: the same as `insert`, but also provides the offset of the inserted element;
- `delete_get_offset(t, elem, offset_ptr)`: the same as `delete`, but also returns the offset of the deleted element prior to the deletion in the output parameter.

Another new function introduced is bps_tree_view_debug_check(t). This function is similar to bps_tree_debug_check(t), but is applicable to tree views. It's used to adapt the default tree view tests to the new tree variations.

Each new implementation is tested by the old tree tests (these now support several tree variations selected with a C definition; the definitions are specified in test/unit/CMakeLists.txt). A new offset API-related test is introduced (it tests both tree variations - BPS_INNER_CARD and BPS_INNER_CHILD_CARDS).

Part of #8204

NO_DOC=internal
NO_CHANGELOG=internal
-
Magomed Kostoev authored
New BPS tree flavors are to be introduced and tested with the existing test suite. There are a bunch of problems though:

1. The white box test uses magic constants to perform its checks; it is better to use the constants defined by bps_tree.h instead.
2. The bps_tree.cc test itself is not TAP-compatible; fix this by introducing more assertions.
3. The bps_tree_iterator.c test is not TAP-compatible either; it uses the result file to check some cases. Let's remove the manual printing tests and modify the automated ones to cover the removed cases.

Along the way, performed minor bps_tree.cc test refactoring.

NO_DOC=test update
NO_CHANGELOG=test update
-
Magomed Kostoev authored
Checkpatch does not permit modifying several parts of the inner debug check functions, complaining about too deep indentation. The modification will be required further on to implement the LogN offset in the BPS tree, so this patch refactors the functions and introduces a helper function for this: bps_tree_debug_insert_and_move_next.

The refactored functions are:
- bps_tree_debug_check_insert_and_move_to_right_inner
- bps_tree_debug_check_insert_and_move_to_left_inner
- bps_tree_debug_check_insert_and_move_to_right_leaf
- bps_tree_debug_check_insert_and_move_to_left_leaf

NO_DOC=refactoring
NO_TEST=refactoring
NO_CHANGELOG=refactoring
-
- Apr 24, 2024
-
-
Alexander Turenko authored
This commit solves several problems:

* Eliminates polling with fiber sleeps for a process status in `:wait()`. Now the method waits for libev's SIGCHLD watcher (via a fiber cond).
* Fixes use-after-free and crash/infinite hang in `:wait()` when the handle is closed from another fiber.
* Adds a `timeout` parameter to `:wait()`.

Popen handles are not reference counted, so the code that waits for a process completion needs to be a bit tricky so as not to access possibly freed memory. I guess things would be simpler if we implemented refcounting on the handles, but the same set of problems is generally solved on the lua/popen side (it tracks `:close()` calls), and I don't see enough motivation to rearrange it. At least until we create the handles not only from Lua.

Fixes #4915
Fixes #7653
Fixes #4916

@TarantoolBot document
Title: popen: :wait() now has the timeout parameter

Usage example:

```lua
local ph = popen.new(<...>)
local res, err = ph:wait({timeout = 1})
if res == nil then
    -- Timeout is reached.
    assert(err.type == 'TimedOut')
    <...>
end
```

Also `:wait()` now has defined behavior when the popen handle is closed from another fiber: the method returns the `ChannelIsClosed` error.

Both updates should have 'Since X.Y.Z' marks in the documentation to allow users to decide whether to use the new features based on what tarantool releases should be supported by the calling code. IOW, a user may lean on the defined close-during-wait behavior or decide not to. The same is true for the new timeout option.

See the `lbox_popen_wait()` comment for the updated formal description of the `<popen handle>:wait(<...>)` method.
-
Andrey Saranchin authored
The replication test of persistent triggers was waiting only for the persistent triggers to arrive on the replica, so the replica tried to write to the space which was not created there yet. Let's wait for all changes to arrive to make the test stable.

Closes #9967

NO_CHANGELOG=test
NO_DOC=test
-
- Apr 23, 2024
-
-
Georgiy Lebedev authored
Currently, we close the transport from `luaT_netbox_transport_stop`, and we do not wait for the worker fiber to stop. This causes several problems.

Firstly, the worker can switch context by yielding (`coio_wait`) or entering the Lua VM (`netbox_on_state_change`). During a context switch, the connection can get closed. When the connection is closed, its receive buffer is reset. If there was some pending response that was partially retrieved (e.g., a large select), then after resetting the buffer we will read some inconsistent data. We must not allow this to happen, so let's check for this case after returning from places where the worker can switch context. In between closing the connection and cancelling the connection's worker, an `on_disconnect` trigger can be called, which, in turn, can also yield, returning control to the worker before it gets cancelled.

Secondly, when the worker enters the Lua VM, garbage collection can be triggered and the connection owning the worker could get closed unexpectedly to the worker.

The fundamental source of these problems is that we close the transport before the worker's loop stops. Instead, we should close it after the worker's loop stops. In `luaT_netbox_transport_stop`, we should only cancel the worker, and either wait for the worker to stop, if we are not executing on it, or otherwise throw an exception (`luaL_testcancel`) to stop the worker's loop. The user will still have the opportunity to catch this exception and prevent stoppage of the worker at their own risk. To safeguard from this scenario, we will now keep the `is_closing` flag enabled once `luaT_netbox_transport_stop` is called and never disable it.

There also still remains a special case of the connection getting garbage collected, when it is impossible to stop the worker's loop, since we cannot join the worker (yielding is forbidden from finalizers), and an exception will not go past the finalizer. However, this case is safe, since the connection is not going to be used by this point, so the worker can simply stop on its own at some point. The only thing we need to account for is that we cannot wait for the worker to stop: we can reuse the `wait` option of `luaT_netbox_transport_stop` for this.

Closes #9621
Closes #9826

NO_DOC=<bugfix>

Co-authored-by: Vladimir Davydov <vdavydov@tarantool.org>
-
Nikolay Shirokovskiy authored
Add the following UPDATE error payload fields:
- space name
- space id
- index name
- index id
- tuple (the tuple value at the moment of the update)
- ops (update operations)

Add the following UPSERT error payload fields for invalid operations syntax:
- space name
- space id
- ops (upsert operations)

Closes #7223

NO_DOC=minor
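A hedged sketch of how such a payload could be inspected from Lua (the space, the failing operation, and the exact payload key names are assumptions; `err:unpack()` simply shows whatever payload the error carries):

```lua
-- Hypothetical space with a string in field 2.
local s = box.space.test
local ok, err = pcall(function()
    -- '+' on a string field fails with an UPDATE error.
    s:update({1}, {{'+', 2, 1}})
end)
if not ok then
    local info = err:unpack() -- message, code, and the payload fields
    -- e.g. info.space, info.space_id, info.index, info.index_id, info.ops
    -- (these key names are assumptions, not confirmed by the commit message)
end
```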
-
Nikolay Shirokovskiy authored
Add the following error payload fields:
- space name
- space id
- old tuple
- new tuple

Part of #7223

NO_CHANGELOG=unfinished
NO_DOC=minor
-
Nikolay Shirokovskiy authored
In this case the payload field will be omitted. We are going to use it with the CANT_UPDATE_PRIMARY_KEY error.

Follows up #7223

NO_CHANGELOG=internal
NO_DOC=internal
-
Nikolay Shirokovskiy authored
Add the following error payload fields:
- space name
- space id
- index name (when an index is involved)
- index id (when an index is involved)
- tuple

Part of #7223

NO_CHANGELOG=unfinished
NO_DOC=minor
-
Nikolay Shirokovskiy authored
Part of #7223

NO_TEST=refactoring
NO_CHANGELOG=refactoring
NO_DOC=refactoring
-
Nikolay Shirokovskiy authored
Part of #7223

NO_TEST=refactoring
NO_CHANGELOG=refactoring
NO_DOC=refactoring
-
Nikolay Shirokovskiy authored
Add the following error payload fields:
- space name
- space id
- index name
- index id
- key

Part of #7223

NO_CHANGELOG=unfinished
NO_DOC=minor
-
Nikolay Shirokovskiy authored
Add an index uniqueness check to `exact_key_validate`. Also, while at it, let's drop the dead `index_find_.*xc` helpers and the excess `exact_key_validate_nullable`.

Part of #7223

NO_TEST=refactoring
NO_CHANGELOG=refactoring
NO_DOC=refactoring
-
- Apr 16, 2024
-
-
Sergey Ostanevich authored
Remove all changelogs reported in release notes for 3.1.0.

NO_CHANGELOG=changelog
NO_DOC=changelog
NO_TEST=changelog
-