- Jul 20, 2021
-
-
Mergen Imeev authored
After this patch, uuid values can be bound like any other value supported by SQL. Part of #6164
-
Mergen Imeev authored
Prior to this patch, the built-in SQL function quote() could not handle uuid values. It now returns a string representation of the received uuid. Part of #6164
-
Aleksandr Lyapunov authored
At some point the signature of memtx_tx_tuple_clarify_slow was changed - the index ID was replaced with a pointer to the index. Unfortunately, one of the calls of this function was not fixed - zero was passed as the index ID, which still compiles successfully with the new API. This patch fixes that place as well and adds a bunch of tests. Closes #6229
-
- Jul 19, 2021
-
-
VitaliyaIoffe authored
The build for Fedora 34 breaks due to uninitialized variables in a few places. For example:
[100%] Built target merger.test
/source/build/usr/src/debug/tarantool-2.9.0.116/src/box/sql.c: In function 'tarantoolSqlNextSeqId':
/source/build/usr/src/debug/tarantool-2.9.0.116/src/box/sql.c:1186:13: error: 'key' may be used uninitialized [-Werror=maybe-uninitialized]
1186 | if (box_index_max(BOX_SEQUENCE_ID, 0 /* PK */, key,
Needed for: #6074
-
- Jul 16, 2021
-
-
Serge Petrenko authored
Every error that happens while the master processes a join or subscribe request is sent to the replica for better diagnostics. This could lead to the following situation with the TimedOut error: it could be written on top of a half-written row and make the replica stop replication with an ER_INVALID_MSGPACK error. The error is unrecoverable and the only way to resume replication after it happens is to reset box.cfg.replication. Here's what happened:
1) The replica is under heavy load, meaning its event loop is occupied by some fiber not yielding control to others.
2) The applier and other fibers aren't scheduled while the event loop is blocked. This means the applier doesn't send heartbeat messages to the master and doesn't read any data coming from the master.
3) The unread master data piles up: first in the replica's receive buffer, then in the master's send buffer.
4) Once the master's send buffer is full, the corresponding socket stops being writeable and the relay yields waiting for the socket to become writeable again. The send buffer might contain a partially written row by now.
5) A replication timeout happens on the master, because it hasn't heard from the replica for a while. An exception is raised, and the exception is pushed to the replica's socket. Now two situations are possible:
a) The socket becomes writeable by the time the exception is raised. In this case the exception is logged to the buffer right after a partially written row. Once the replica receives the half-written row with an exception logged on top, it errors with ER_INVALID_MSGPACK. Replication is broken.
b) The socket still isn't writeable (the most probable scenario). The exception isn't logged to the socket and the connection is closed. The replica eventually receives a partially written row and retries the connection to the master normally.
In order to prevent case a) from happening, let's not push TimedOut errors to the socket at all. They're the only errors that can be raised while a row is being written, i.e. the only errors that could lead to the situation described in 5a. Closes #4040
-
- Jul 14, 2021
-
-
Mergen Imeev authored
Prior to this patch, in some cases the type mismatch error description showed the value, and in some cases the type of the value. After this patch, both the type and the value will be shown. The inconsistent types error description has also become more informative: previously it contained only the type of the value, now it contains both the value and its type. Closes #6176
-
Mergen Imeev authored
Prior to this patch, the type mismatch error description and the inconsistent types error description in some cases displayed type names that were different from the default ones. After this patch, all types in these descriptions are described using the default names. Part of #6176
-
Mergen Imeev authored
Currently, some values are displayed improperly in the type mismatch error description. For VARBINARY, the word "varbinary" is printed instead of the value. STRING values are printed without quotes, which can be confusing in some cases, for example when the value consists of spaces. This patch introduces the following changes:
1) A VARBINARY value will be printed as x'<value in hexadecimal format>'.
2) A STRING value will be printed in single quotes.
3) A UUID value will be printed in single quotes. A UUID value does not have to be enclosed in single quotes, since there are no literals for UUIDs, but it looks more convenient this way.
Part of #6176
-
Mergen Imeev authored
STRING, MAP, and ARRAY values that are too long can make the type mismatch error description less descriptive than necessary. This patch truncates values that are too long and adds "..." to indicate that the value has been truncated. Part of #6176
-
- Jul 12, 2021
-
-
Vladislav Shpilevoy authored
When called manually, box_promote() used to wait for the existing transactions from a foreign limbo to end within a timeout, giving them a chance to finish on their own terms. The waiting was done via polling, along the lines of `while (!done) sleep(small_timeout);`. Polling is almost always bad both for execution time and for CPU usage. The patch replaces it with proper waiting based on events happening in the limbo. Closes #5190
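A minimal sketch of the polling-vs-event contrast described above. In Tarantool the event waiting would typically go through a fiber_cond on the limbo, but here it is abstracted as limbo_wait_event() to keep the sketch self-contained; all names and the deadline handling are illustrative assumptions, not the actual box_promote() code.
```
#include <stdbool.h>

/* Stand-ins for the real limbo/fiber primitives (see the note above). */
struct limbo_sketch { bool is_empty; };
double clock_now(void);
void   fiber_sleep(double timeout);
int    limbo_wait_event(struct limbo_sketch *l, double timeout);

/* Before: polling - wake up over and over just to re-check the state. */
static void
wait_polling(struct limbo_sketch *l, double deadline)
{
	while (!l->is_empty && clock_now() < deadline)
		fiber_sleep(0.001);
}

/* After: sleep until the limbo signals a change or the deadline passes. */
static void
wait_on_events(struct limbo_sketch *l, double deadline)
{
	while (!l->is_empty) {
		double timeout = deadline - clock_now();
		if (timeout <= 0 || limbo_wait_event(l, timeout) != 0)
			break; /* timed out */
	}
}
```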
-
- Jul 09, 2021
-
-
Andrey Kulikov authored
Fix build errors on arm64 with backtraces enabled. Fixes #6142
See also:
- https://github.com/libunwind/libunwind/pull/221
- #5471
- #6142
-
Aleksandr Lyapunov authored
There was a serious problem in txm: index_id from struct index was used as an index into some arrays (for example the array of links in stories). As a result, if a user had created an index specifying an ID that is not sequential, the array access would have been out of range, which could lead to a segfault. This patch makes use of indexes directly, and when it comes to array access, a dense_id is used, which fits perfectly for that. As a part of #5515, this patch at least makes the cases described there stable. Part of #5515
-
Aleksandr Lyapunov authored
Historically, an index in a space may be accessed by iid (index ID), that is, the ID set in the index definition, or by sequential ID, that is, a number in [0, space->index_count). In other words, a space holds two arrays of indexes: 1) a sparse one, indexed by iid, and 2) a dense one, indexed by sequential ID. Since an instance of an index belongs to one and only one space, any index implicitly has this sequential ID. We can simply save this ID in the index and distinguish indexes by it too. We could call this member 'sequential_id', but that name has too general a meaning, while dense_id directly refers to the dense array of a space. Part of #5515
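A hedged sketch of the two views described above; the field names loosely follow Tarantool's struct space, but the exact layout is an assumption made for illustration.
```
#include <stdint.h>

struct index; /* opaque here */

/* Illustrative stand-in for struct space (layout is an assumption). */
struct space_sketch {
	/* Sparse view: slot i holds the index whose definition iid == i,
	 * so the array may contain holes. */
	struct index **index_map;
	/* Dense view: positions 0 .. index_count - 1, no holes. */
	struct index **index;
	uint32_t index_count;
};
```
Storing the dense position (the dense_id from the commit above) inside each index lets per-index bookkeeping arrays, such as the story links mentioned in the previous commit, be sized by index_count and addressed without gaps.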
-
Aleksandr Lyapunov authored
Before this patch the garbage collector was executed right before the allocation of a new story. That means that, for example, in memtx_tx_history_add_stmt GC could be called a couple of times. The garbage collector is free to delete stories if they are no longer used. Removing a story can cause an index modification with a further tuple delete. For example, imagine a space with one index, where one tuple {1, 1, 1} is placed. Then a transaction comes, deletes that tuple and commits. At this moment the tuple {1, 1, 1} can still be in the index, marked as 'dirty' and having a corresponding story, which states that the tuple is deleted. This is a valid situation, even a necessary one, for the case when another transaction is in a read view and must see {1, 1, 1} as not yet deleted. But when possible, GC would try to delete the story and remove the tuple from the index. Now imagine that this GC happens when a new transaction inserts, for example, {1, 1, 1, 4}. In memtx_tx_history_add_stmt the new tuple replaces the old one in the index, but the story of the new tuple is not created yet. Then the new story is created, which triggers GC, which tries to remove {1, 1, 1} from the index and delete it from memory. At this moment memtx_tx_history_add_stmt relies on the existence of {1, 1, 1}, which no longer exists. That is an example of a general problem: cleanup should not be done in the middle of a complex function that can have a half-made, invalid intermediate state. The cleanup, including GC, should be done at the end of such functions. This patch moves story GC to the end of the functions that use it. Part of #5515
-
- Jul 08, 2021
-
-
Cyrill Gorcunov authored
When a new raft message comes in from the network, we need to be sure that the payload is suitable for processing; in particular, `raft_msg::state` must be valid because our code logic depends on it. For this sake, make `raft_msg::state` a uint64_t, which makes verification of the state field easier. At the same time, use panic() instead of the unreachable() macro, because the check for a valid state must be enabled all the time. Closes #6067 Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
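A minimal self-contained sketch of the rationale in the commit above: the check must survive release builds, so it panics instead of relying on an assert-like unreachable() that may be compiled out. Only `raft_msg::state` comes from the commit; the enum bound, the helper name and the panic behavior are assumptions made for illustration.
```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { RAFT_STATE_MAX_SKETCH = 3 }; /* follower, candidate, leader (assumed) */

struct raft_msg_sketch {
	uint64_t state; /* fixed-width field: trivial to range-check */
};

static void
raft_check_msg_state(const struct raft_msg_sketch *msg)
{
	/* Always-on validation of network input, never compiled out. */
	if (msg->state == 0 || msg->state > RAFT_STATE_MAX_SKETCH) {
		fprintf(stderr, "panic: malformed raft state %llu\n",
			(unsigned long long)msg->state);
		abort();
	}
}
```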
-
- Jul 07, 2021
-
-
Alexander Turenko authored
And added a general comment about the compilation unit. Everything is to simplify reading. Part of #3228
-
Alexander Turenko authored
It is a non-static function, so it looks logical to have the API comment in the header. Made a few code style fixes while I'm here. Part of #3228
-
Alexander Turenko authored
It is easier to glance over tightly coupled structures and functions when they're not mixed with others. Just a move without actual changes. Part of #3228
-
- Jul 05, 2021
-
-
Aleksandr Lyapunov authored
With MVCC the following case may happen: TX1 does something with some space and yields. TX2 deletes the space and commits. TX1 rolls back. The problem was that TX1 kept working with an already deleted space. This commit fixes that. Part of #6140
-
Aleksandr Lyapunov authored
That's a good practice in general. In particular, it makes the test case from #6140 stable. Part of #6140
-
Aleksandr Lyapunov authored
The problem occurred when the mvcc engine was enabled and a transaction that had been sent to a read view due to a conflict tried to read the key that caused the conflict. Closes #6131
-
- Jul 02, 2021
-
-
mechanik20051988 authored
A static buffer is used to save the snapshot filename and is reused later in the `xlog_cursor_open` function. So when we log this name afterwards, we get a corrupted name that has nothing to do with the real one. We should use `cursor.name` instead.
-
Nikita Pettik authored
In tree_iterator_start() it was assumed that the iterator always contains a valid space id. However, ephemeral spaces are known to have a zero space id, so in case we are starting an iterator which belongs to an ephemeral space, we can't simply find that space in the space cache. Moreover, we don't need to track ephemeral spaces in MVCC at all, since they can be accessed only by pointer and their lifespan is restricted to SQL query execution. So let's skip any MVCC-related routine while starting such an iterator. Closes #6095
-
- Jul 01, 2021
-
-
Vladimir Davydov authored
An LSM tree (a space index, that is) can be dropped while compaction is in progress for it. In this case compaction will still commit the new run to vylog upon completion. This usually works fine, but not if gc has already purged all the information about the dropped LSM tree from vylog by that time, in which case an attempt to commit the new run will result in a permanently broken vylog (because compaction will write vylog records for a non-existing object):
ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 13 deleted but not registered
To prevent this from happening, let's make compaction silently drop the new run without committing it to vylog if the LSM tree has been dropped. This should work just fine - since the LSM tree isn't used anymore, we don't need to have it compacted, neither do we need to delete the run, since gc will eventually clean up all artefacts left from the dropped LSM tree. One thing to note is that we also must exclude dropped LSM trees from further compaction - if we don't do that, we might end up picking the dropped LSM tree for compaction over and over again (because it is never actually compacted). This patch also drops the gh-5141-invalid-vylog-file test, because it merely ensured that the issue fixed by this patch was there. Closes #5436
-
Egor Elchinov authored
Now idle fibers are present in fiber.info() but without their stacks. Added a test ensuring that fiber.info() doesn't get cluttered with idle fibers' stacks after dispatching multiple requests in a short time. Closes #4235
-
Egor Elchinov authored
In some cases it's useful to be able to detect whether a fiber is idle in a fiber_pool. Now this can be checked as `fiber->flags & FIBER_IS_IDLE`. Needed for: #4235
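A minimal self-contained sketch of the check named above; the flag name and the `flags` field come from the commit message, while the concrete bit value and the surrounding declarations are stand-ins.
```
#include <stdbool.h>
#include <stdint.h>

enum { FIBER_IS_IDLE = 1 << 0 }; /* actual bit value is an assumption */

struct fiber_sketch {
	uint32_t flags;
};

/* True when the fiber is parked in a fiber pool waiting for work. */
static inline bool
fiber_is_idle(const struct fiber_sketch *f)
{
	return (f->flags & FIBER_IS_IDLE) != 0;
}
```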
-
- Jun 24, 2021
-
-
VitaliyaIoffe authored
Since the build has become out-of-source after patch 781fd38, which removed the source dir path, the __FILE__ macro leads to a compilation failure on ubuntu_21_04. Change __FILE__ to the file path. Needed for: #5825
-
- Jun 23, 2021
-
-
Cyrill Gorcunov authored
We already have the `box.replication.upstream.lag` entry for monitoring purposes. At the same time, in synchronous replication timeouts are key properties of the quorum gathering procedure. Thus we would like to know how long it takes a transaction to traverse the `initiator WAL -> network -> remote applier -> initiator ACK reception` path. Typical output is
| tarantool> box.info.replication[2].downstream
| ---
| - status: follow
|   idle: 0.61753897101153
|   vclock: {1: 147}
|   lag: 0
| ...
| tarantool> box.space.sync:insert{69}
| ---
| - [69]
| ...
|
| tarantool> box.info.replication[2].downstream
| ---
| - status: follow
|   idle: 0.75324084801832
|   vclock: {1: 151}
|   lag: 0.0011014938354492
| ...
Closes #5447 Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
@TarantoolBot document
Title: Add `box.info.replication[n].downstream.lag` entry
`replication[n].downstream.lag` represents the lag between the moment the main node writes a certain transaction to its own WAL and the moment it receives an ack for this transaction from a replica.
-
Cyrill Gorcunov authored
The applier fiber sends the current vclock of the node to the remote relay reader, pointing at the current state of the fetched WAL data, so the relay will know which new data should be sent. The packet the applier sends carries the xrow_header::tm field as zero, but we can reuse it to provide information about the first timestamp in a transaction we wrote to our WAL. Since old instances of Tarantool simply ignore this field, such an extension won't cause any problems. The timestamp will be needed to account for the lag of downstream replicas, which is useful for informational purposes and cluster health monitoring. We update applier statistics in WAL callbacks, but since both apply_synchro_row and apply_plain_tx are used not only for real data application but in the final join stage as well (in this stage we're not writing the data yet), apply_synchro_row is extended with a replica_id argument, which is non-zero when the applier is subscribed. The calculation of the downstream lag itself will be addressed in the next patch, because sending the timestamp and observing it are independent actions. Part-of #5447 Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
- Jun 21, 2021
-
-
Cyrill Gorcunov authored
Currently we filter synchro packets based on their contents, in particular by their xrow->replica_id value. Still, there was a question whether we could optimize this and instead filter out all packets coming from a non-leader replica. The Raft specification requires that only data from the current leader be applied to the local WAL, but it makes no concrete claim about the data transport, i.e. how exactly rows reach replicas. This implies that data propagation may reach replicas indirectly, via transit hops. Thus we drop the applier->instance_id filtering and rely on xrow->replica_id matching instead. In the test (inspired by Serge Petrenko's test) we recreate the situation where replica3 obtains the data of the master node (which is the raft leader) indirectly, via the replica2 node. Closes #6035 Co-developed-by:
Serge Petrenko <sergepetrenko@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
- Jun 18, 2021
-
-
Oleg Babin authored
After this patch the digest module uses the bundled xxhash. This fixes a build problem when the user doesn't use the bundled zstd. Closes #6135 Follow-up #2003
-
Oleg Babin authored
This patch is the first step in fixing the regression introduced in f998ea39 (digest: introduce FFI bindings for xxHash32/64). We used the xxhash library that is shipped with zstd. However, it's possible that the user doesn't use the bundled zstd. In such cases we couldn't export the xxhash symbols and the build failed with the following error:
```
[ 59%] Linking CXX executable tarantool
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xd80): undefined reference to `XXH32'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xd88): undefined reference to `XXH32_copyState'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xd90): undefined reference to `XXH32_digest'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xd98): undefined reference to `XXH32_reset'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xda0): undefined reference to `XXH32_update'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xda8): undefined reference to `XXH64'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xdb0): undefined reference to `XXH64_copyState'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xdb8): undefined reference to `XXH64_digest'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xdc0): undefined reference to `XXH64_reset'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: CMakeFiles/tarantool.dir/exports.c.o:(.data.rel+0xdc8): undefined reference to `XXH64_update'
collect2: error: ld returned 1 exit status
```
To avoid this problem, this patch introduces a standalone xxhash library that will be bundled anyway. It's worth mentioning that our approach is still related to zstd: we use Cyan4973/xxHash, which is the same library zstd uses, and pass the same compile flags to it. The single difference is the use of XXH_NAMESPACE to avoid symbol clashes with zstd. Needed for #6135
-
- Jun 16, 2021
-
-
Vladislav Shpilevoy authored
When txn_commit/try_async() failed before going to the WAL thread, they installed the TXN_SIGNATURE_ABORT signature, meaning that the caller and the rollback triggers must look at the global diag. But they called txn_rollback() before returning and calling the triggers, which overrode the signature with TXN_SIGNATURE_ROLLBACK, leading to the loss of the original error. The patch makes TXN_SIGNATURE_ROLLBACK installed only when a real rollback happens (via box_txn_rollback()). This keeps original commit errors, such as a conflict in the transaction manager or OOM, from being lost. Besides, ERRINJ_TXN_COMMIT_ASYNC does not need its own diag_log() anymore, because since this commit the applier logs the correct error instead of ER_WAL_IO/ER_TXN_ROLLBACK. Closes #6027
-
Vladislav Shpilevoy authored
Sometimes a transaction can fail before it goes to WAL. Then the signature carried no sign of it, and neither did the journal_entry result (which might not even be created yet). Still, if txn_commit/try_async() are called, they invoke the on_rollback triggers. The triggers can only see TXN_SIGNATURE_ROLLBACK and can't distinguish it from a real rollback like box.rollback(). Due to that, some important errors like a transaction manager conflict or OOM are lost. The patch introduces a new error signature, TXN_SIGNATURE_ABORT, which says that the transaction didn't even manage to try going to WAL, and the error should be looked up in the global diag. The next patch is going to stop overriding it with TXN_SIGNATURE_ROLLBACK. Part of #6027
-
Vladislav Shpilevoy authored
A transaction in the WAL thread could be rolled back not only due to an IO error, but also if there was a cascading rollback in progress. The patch makes such a case use a special error code that is turned into its own diag when it reaches the TX thread. Using ER_WAL_IO wasn't correct here. Part of #6027
-
Vladislav Shpilevoy authored
Previously all journal and txn errors were turned into the ER_WAL_IO error code. This led to the loss of the real error, which sometimes was not related to IO at all - for example, a timeout in the limbo for a synchronous transaction. The patch makes journal/txn errors turn into proper diags. Part of #6027
-
Vladislav Shpilevoy authored
In the journal write trigger the transaction assumed it might already be rolled back and completed, and hence needn't do anything except free itself. But that can't happen. The only imaginable reason why a transaction might be rolled back before it completed its WAL write is a ROLLBACK entry issued after the transaction. But ROLLBACK applies its effects only after it is written, hence only after all the other pending txns are written too. Therefore it is not possible for a transaction to get a ROLLBACK before it finishes its own WAL write. Probably it was possible back when the applier used to execute ROLLBACK before writing it to WAL, but that was fixed in b259e930 ("applier: process synchro rows after WAL write"). It can't happen now. This became easier to see once the not-yet-finished transaction signature got its own value, TXN_SIGNATURE_UNKNOWN.
-
Vladislav Shpilevoy authored
The journal used to have only one error code in journal_entry.res: -1. This had at least two problems:
- there was an assumption that TXN_SIGNATURE_ROLLBACK is the same as a journal_entry error of -1;
- it wasn't possible to tell whether the entry had tried to be written and failed, or hadn't tried yet - both looked like -1.
The patch introduces a new error code, JOURNAL_ENTRY_ERR_UNKNOWN. The IO error now has its own value: JOURNAL_ENTRY_ERR_IO. This helps to ensure that a not-yet-finished journal entry or transaction won't try to obtain a diag error for its result. Part of #6027
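A hedged sketch of the distinction introduced above; the constant names come from the commit message, while the concrete numeric values and comments are assumptions.
```
/* Illustrative only - the real definitions live in the journal header. */
enum {
	/* The entry has not been through a write attempt yet. */
	JOURNAL_ENTRY_ERR_UNKNOWN = -1,
	/* The entry was submitted and the write failed with an IO error. */
	JOURNAL_ENTRY_ERR_IO = -2,
};
```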
-
Vladislav Shpilevoy authored
A transaction, on rollback, used to check whether it had already been rolled back inside the limbo by looking at its signature: `signature != TXN_SIGNATURE_ROLLBACK` meant the transaction was already completed. TXN_SIGNATURE_ROLLBACK was used as the default value of the signature, therefore if it was not the default, the transaction was completed. This is going to break if normal (not synchronous) transactions get more rollback codes besides just TXN_SIGNATURE_ROLLBACK. Also, treating TXN_SIGNATURE_ROLLBACK as a default value looks confusing. The next patches are going to rework the codes and render the assumptions above incorrect. This patch makes the transaction use a correct way to check whether it is still in the limbo - look at the TXN_WAIT_SYNC flag. It is set for all txns in the limbo and not set for any others. Part of #6027
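A small sketch contrasting the old and the new check from the commit above; the flag and signature names come from the message, while the values and struct layout are stand-ins for the real struct txn code.
```
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins: names follow the commit message, values/layout are assumed. */
enum { TXN_SIGNATURE_ROLLBACK = -1 };
enum { TXN_WAIT_SYNC = 1 << 1 };

struct txn_sketch {
	int64_t signature;
	uint32_t flags;
};

/* Old check (fragile): "not the default signature" read as "completed". */
static inline bool
txn_completed_old_way(const struct txn_sketch *txn)
{
	return txn->signature != TXN_SIGNATURE_ROLLBACK;
}

/* New check: ask limbo ownership directly via the flag. */
static inline bool
txn_still_in_limbo(const struct txn_sketch *txn)
{
	return (txn->flags & TXN_WAIT_SYNC) != 0;
}
```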
-
Vladislav Shpilevoy authored
ER_WAL_IO is set on any WAL error that happens after journal_write() succeeds. This is not correct, because there can be plenty of reasons for a failure. In WAL it could be an actual IO error or a cascading rollback in progress. When used for transactions, it could be an error related to synchronous transactions, such as a timeout or a persistent ROLLBACK. All these errors were overridden by ER_WAL_IO. The patch encapsulates the diag installation for a bad journal write and for a transaction rollback. The next patches are going to introduce more error codes and use the proper ones to install a diag. Part of #6027
-