- Aug 20, 2018
-
-
Vladimir Davydov authored
If `tarantoolctl eval` fails, apart from returning 3 and printing the eval error to stderr, tarantoolctl will also emit the following message: "Error while reloading config:". This message is quite confusing and useless, as we already have the return code for that. Let's zap it. Closes #3560
-
- Aug 17, 2018
-
-
Vladimir Davydov authored
Currently, there's only vy_stmt_new_surrogate_delete(), which takes a tuple. Let's add vy_stmt_new_surrogate_delete_raw(), which takes raw msgpack data. Needed for #2129
-
Vladimir Davydov authored
For some reason, vy_stmt_new_surrogate_delete() checks that the source tuple has all fields mandated by the space format (min_field_count). This is pointless, because to generate a surrogate DELETE statement, we don't need all tuple fields - if a field is absent it will be replaced with NULL. We haven't stepped on this assertion, because we always create surrogate DELETEs from full tuples. However, to implement #2129 we need to be able to create surrogate DELETEs from tuples that only have indexed fields. So let's remove this assertion. Needed for #2129
-
Vladimir Davydov authored
Currently, the last statement returned by the write iterator is referenced indirectly, via a read view. This works, because the write iterator can only return a statement if it corresponds to a certain read view. However, in the scope of #2129, the write iterator will also have to keep statements for which a deferred DELETE hasn't been generated yet, even if no read view needs it. So let's make the write iterator reference the last returned statement explicitly, i.e. via a dedicated member of the write_iterator struct. Needed for #2129
-
Vladimir Davydov authored
In the scope of #2129 we need to mark REPLACE statements for which we generated DELETE in secondary indexes so that we don't generate DELETE again on compaction. We also need to mark DELETE statements that were generated on compaction so that we can skip them on SELECT. Let's add flags field to struct vy_stmt. Flags are stored both in memory and on disk - they are encoded in tuple meta in the latter case. Needed for #2129
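A minimal sketch of how such per-statement flags might be stored; the flag names and struct layout here are illustrative assumptions, not Tarantool's actual identifiers:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch: one flag marks a REPLACE for which a deferred
 * DELETE was already generated in secondary indexes, another marks a
 * DELETE generated on compaction so that SELECT can skip it.
 */
enum vy_stmt_flag {
	VY_STMT_DEFERRED_DELETE_DONE = 1 << 0,
	VY_STMT_SKIP_READ            = 1 << 1,
};

struct vy_stmt {
	int64_t lsn;
	/* Stored in memory; encoded in tuple meta when written to disk. */
	uint8_t flags;
};

static inline void
vy_stmt_set_flag(struct vy_stmt *stmt, uint8_t flag)
{
	stmt->flags |= flag;
}

static inline int
vy_stmt_has_flag(const struct vy_stmt *stmt, uint8_t flag)
{
	return (stmt->flags & flag) != 0;
}
```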
-
Vladimir Davydov authored
This patch set allows storing a msgpack map with arbitrary keys inside a request. In particular, this is needed to store vinyl statement flags in run files. Needed for #2129
-
Vladislav Shpilevoy authored
On commit/rollback triggers are already implemented within Tarantool internals. The patch just exposes them to Lua. The API, which deserves attention, is described below.

Closes #857

@TarantoolBot document
Title: Document box.on_commit/on_rollback triggers

On commit/rollback triggers can be set similarly to space:on_replace triggers:

```
box.on_commit/rollback(new_trigger, old_trigger)
```

A trigger can be set only inside an active transaction. When a trigger is called, it takes one parameter: an iterator over the transaction statements.

```
box.on_commit/on_rollback(function(iterator)
    for i, old_tuple, new_tuple, space_id in iterator() do
        -- Do something with tuples and space ...
    end
end)
```

On each step the iterator returns 4 values: the statement number (grows from 1 to the statement count), the old tuple or nil, the new tuple or nil, and the space id. The old tuple is not nil when the statement updated or deleted an existing tuple. The new tuple is not nil when the statement updated or inserted a tuple.

Limitations:
* the iterator can not be used outside of the trigger, otherwise it throws an error;
* a trigger can not make any database requests (DML, DDL, DQL) - the behaviour is undefined;
* on_commit/on_rollback triggers must not fail, otherwise Tarantool exits with a panic.
-
Kirill Shcherbatov authored
This problem triggered ASAN checks when starting tarantool with an existing xlog. We must not touch even static non-initialized memory.
-
Serge Petrenko authored
If the relay thread is already exiting (but hasn't executed relay_stop() yet) and relay_cancel() is called, we may encounter an error trying to call pthread_cancel() after the thread has exited. Handle this case. Follow-up #3485
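A sketch of the idea behind the fix (illustrative, not the actual relay code): pthread_cancel() can fail with ESRCH if the target thread has already finished, so that case is treated as "nothing to do" rather than as an error. The `cancel_if_running` and `sleepy` names are assumptions for this example.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>

/*
 * Cancel a thread, tolerating the race where it has already exited:
 * ESRCH means there is nothing left to cancel, which is fine.
 */
static int
cancel_if_running(pthread_t tid)
{
	int rc = pthread_cancel(tid);
	if (rc != 0 && rc != ESRCH)
		return rc;	/* a real error */
	return 0;		/* cancelled, or already gone */
}

/* A worker that blocks in a cancellation point (sleep). */
static void *
sleepy(void *arg)
{
	(void)arg;
	sleep(10);
	return NULL;
}
```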
-
Vladimir Davydov authored
Apart from vy_check_is_unique, the other callers of vy_check_is_unique_primary and vy_check_is_unique_secondary are only invoked when the vinyl engine is online. So let's move the optimization that skips the uniqueness check on recovery to vy_check_is_unique and remove the env argument.
-
Serge Petrenko authored
One possible case, when two applier errors happen one after another, wasn't handled in replica_on_applier_disconnect(), which led to occasional test failures and crashes. Handle this case and add a regression test. Part of #3510
-
Serge Petrenko authored
Fix a bug where the crash_expected option led to a test hang.
-
Serge Petrenko authored
When `tarantoolctl status` is called immediately after `tarantoolctl stop` there is a chance that tarantool hasn't exited yet, so the pid file still exists, which is reported by `tarantoolctl status`. This leads to occasional test failures. Fix this by waiting till tarantool exits before calling `status`. Closes #3557
-
- Aug 16, 2018
-
-
N.Tatunov authored
Add the string.fromhex() method and a test for it. Closes #2562
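An illustrative C analogue of what string.fromhex() has to do: decode a hexadecimal string into raw bytes, rejecting malformed input. The function names are assumptions for this sketch, not Tarantool's actual implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Map one hex digit to its value, or -1 if it is not a hex digit. */
static int
hex_digit(char c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	return -1;
}

/*
 * Decode len hex characters into out (which must hold len / 2 bytes).
 * Returns the number of bytes written, or -1 on odd length or a
 * non-hex character.
 */
int
fromhex(const char *hex, size_t len, char *out)
{
	if (len % 2 != 0)
		return -1;
	for (size_t i = 0; i < len; i += 2) {
		int hi = hex_digit(hex[i]);
		int lo = hex_digit(hex[i + 1]);
		if (hi < 0 || lo < 0)
			return -1;
		out[i / 2] = (char)(hi << 4 | lo);
	}
	return (int)(len / 2);
}
```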
-
Serge Petrenko authored
The field is_nullable option must currently be the same in index parts and in the space format, which makes altering this option problematic: the only way to do it is to drop the space format, alter nullability in the index, and then reset the space format. Not too convenient. Fix this by allowing different nullability in the space format and indices, which makes it possible to change nullability in the space format and in an index separately. If at least one of the options is set to false, the resulting nullability is also set to false. Closes #3430
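The merge rule described above can be sketched in one line; this is an illustration of the stated semantics, not Tarantool's actual function:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * The effective nullability of a field is the logical AND of the
 * space-format setting and the index-part setting: if at least one
 * side says "not nullable", the field is not nullable.
 */
static bool
effective_is_nullable(bool format_is_nullable, bool index_is_nullable)
{
	return format_is_nullable && index_is_nullable;
}
```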
-
Serge Petrenko authored
Relay threads keep using tx upon shutdown, which leads to occasional segmentation faults and assertion failures (e.g. in the replication test suite). Fix this by forcefully cancelling (with pthread_cancel) and joining relay threads before proceeding to tx destruction. Closes #3485
-
- Aug 15, 2018
-
-
Eugine Blikh authored
We can throw any Lua object as a Lua error, but the current behaviour won't convert it to a string, so the diag error object will contain NULL instead of a string. luaT_tolstring honors the __tostring metamethod and thus can convert a table to its string representation.

For example, the old behaviour is:

```
tarantool> fiber.create(error, 'help')
LuajitError: help
tarantool> fiber.create(error, { message = 'help' })
LuajitError:
tarantool> fiber.create(error, setmetatable({ message = 'help' }, { __tostring = function(self) return self.message end }))
LuajitError:
```

The new behaviour is:

```
tarantool> fiber.create(error, 'help')
LuajitError: help
tarantool> fiber.create(error, { 'help' })
LuajitError: table: 0x0108fa2790
tarantool> fiber.create(error, setmetatable({ message = 'help' }, { __tostring = function(self) return self.message end }))
LuajitError: help
```

This doesn't break anything, but adds new behaviour.
-
Eugine Blikh authored
`lua_tostring`/`lua_tolstring` ignore the __tostring metamethod and return NULL for booleans and nil, but sometimes behaviour similar to the Lua tostring() function is needed. Lua 5.1 and LuaJIT ignore __tostring by default; Lua 5.2 introduced the auxiliary function luaL_tolstring() with __tostring support. This function is a backport of Lua 5.2's luaL_tolstring() from lauxlib.h into the luaT namespace.
-
Vladimir Davydov authored
If a tuple inserted into a secondary index under construction had the same key parts, both primary and secondary, as a tuple already stored in the index (i.e. it's not indexed fields that got updated), the uniqueness check failed before commit fc3834c0 ("vinyl: check key uniqueness before modifying tx write set"). The commit added a piece of code to vy_check_is_unique_secondary() that compares the new and the found tuple by primary key parts and doesn't raise an error if they match (they point to the same tuple and hence the inserted tuple doesn't actually modify the index, let alone violate the unique constraint). This patch adds a test for that fix. Closes #3578
-
Vladimir Davydov authored
Currently, we handle INSERT/REPLACE/UPDATE requests by iterating over all space indexes starting from the primary one and inserting the corresponding statements into the tx write set, checking key uniqueness if necessary. This means that by the time we write a REPLACE to the write set of a secondary index, it has already been written to the primary index write set. This is OK, and vy_tx_prepare() relies on that to implement the common memory level. However, this also means that when we check uniqueness of a secondary index, the new REPLACE can be found via the primary index. This is OK now, because all indexes are fully independent, but it isn't going to fly after #2129 is implemented. The problem is that in order to check if a tuple is present in a secondary index, we will have to look up the corresponding full tuple in the primary index.

To illustrate the problem, consider the following situation. The primary index covers field 1, the secondary index covers field 2. Committed statements:

  REPLACE{1, 10, lsn=1} - present in both indexes
  DELETE{1, lsn=2}      - present only in the primary index

Transaction:

  REPLACE{1, 10}

When we check uniqueness of the secondary index, we find the committed statement REPLACE{1, 10, lsn=1}, then look up the corresponding full tuple in the primary index and find REPLACE{1, 10}. Since the two tuples match, we mistakenly assume that there's a conflict.

To avoid a situation like that, let's check uniqueness before modifying the write set of any index. Needed for #2129
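The ordering change can be illustrated with a toy model (entirely hypothetical names; two flat arrays stand in for the indexes): uniqueness is checked in *all* indexes before any write set is modified, so a later check can never be confused by an entry the same request just inserted.

```c
#include <assert.h>
#include <stdbool.h>

#define CAP 16

/* A toy "index": just a flat set of integer keys. */
struct toy_index {
	int keys[CAP];
	int count;
};

static bool
toy_index_contains(const struct toy_index *idx, int key)
{
	for (int i = 0; i < idx->count; i++)
		if (idx->keys[i] == key)
			return true;
	return false;
}

/*
 * Insert a row with a primary key and a secondary key.
 * Returns 0 on success, -1 on a uniqueness violation; on failure
 * no index is modified, mirroring the check-first ordering.
 */
static int
toy_insert(struct toy_index *pk, int pk_key,
	   struct toy_index *sk, int sk_key)
{
	/* Phase 1: check uniqueness everywhere first. */
	if (toy_index_contains(pk, pk_key) || toy_index_contains(sk, sk_key))
		return -1;
	/* Phase 2: only now modify the write sets. */
	pk->keys[pk->count++] = pk_key;
	sk->keys[sk->count++] = sk_key;
	return 0;
}
```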
-
- Aug 14, 2018
-
-
Olga Arkhangelskaia authored
We should test the log_nonblock mode. In some cases the loss of this flag led to tarantool hanging forever. This test checks for that possibility. Follow-up #3615
-
Vladimir Davydov authored
-
Serge Petrenko authored
On bootstrap and after initial configuration, replication_connect_quorum was ignored: the instance tried to connect to every replica listed in the replication parameter, and failed if that wasn't possible. The patch alters this behaviour. An instance still tries to connect to every node listed in box.cfg.replication, but does not raise an error if it was able to connect to at least replication_connect_quorum instances.

Closes #3428

@TarantoolBot document
Title: replication_connect_quorum is not ignored

Now on replica set bootstrap and in case of replication reconfiguration (e.g. calling box.cfg{replication=...} for the second time) tarantool doesn't fail if it couldn't connect to every replica but could connect to replication_connect_quorum replicas. If after replication_connect_timeout seconds the instance is not connected to at least replication_connect_quorum other instances, we throw an error.
-
Serge Petrenko authored
Add start arguments to replication test instances to control replication_timeout and replication_connect_timeout settings between restarts. Needed for #3428
-
Serge Petrenko authored
Allow passing arguments to servers started with create_cluster().
-
- Aug 13, 2018
-
-
Olga Arkhangelskaia authored
During syslog reconnect we lose the nonblock flag. This leads to misbehaviour while logging: Tarantool hangs forever. Closes #3615
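The idea behind the fix can be sketched with the standard fcntl() API: after (re)opening the log descriptor, explicitly restore O_NONBLOCK so that a stuck syslog daemon cannot block the logger forever. The `log_set_nonblock` name is an assumption for this example, not the actual Tarantool function.

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Re-apply O_NONBLOCK on a descriptor, preserving its other flags.
 * Returns 0 on success, -1 on failure.
 */
static int
log_set_nonblock(int fd)
{
	int flags = fcntl(fd, F_GETFL, 0);
	if (flags < 0)
		return -1;
	if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
		return -1;
	return 0;
}
```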
-
Vladimir Davydov authored
-
- Aug 11, 2018
-
-
Vladimir Davydov authored
Reproduce file:
- [box/access.test.lua, null]
- [box/iterator.test.lua, null]
- [box/bitset.test.lua, null]

The issue happens because box/bitset.lua:dump() uses iterate(), which gets cleared by the box/iterator test. Fix this by using utils.iterate() instead.
-
- Aug 10, 2018
-
-
Vladimir Davydov authored
index.update() looks up the old tuple in the primary index, applies update operations to it, then writes a DELETE statement to secondary indexes to delete the old tuple and a REPLACE statement to all indexes to insert the new tuple. It also sets a column mask for both DELETE and REPLACE statements. The column mask is a bit mask which has a bit set if the corresponding field is updated by the update operations. It is used by the write iterator for two purposes. First, the write iterator skips REPLACE statements that don't update key fields. Second, the write iterator turns a REPLACE whose column mask intersects with key fields into an INSERT (so that it can get annihilated with a DELETE when the time comes). The latter is correct, because if an update() does update secondary key fields, then it must have deleted the old tuple, and hence the new tuple is unique in terms of the extended key (merged primary and secondary key parts, i.e. cmp_def).

The problem is that a bit may be set in a column mask even if the corresponding field does not actually get updated. Consider the following example:

```
s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk')
s:create_index('sk', {parts = {2, 'unsigned'}})
s:insert{1, 10}
box.snapshot()
s:update(1, {{'=', 2, 10}})
```

The update() doesn't modify the secondary key field, so it only writes REPLACE{1, 10} to the secondary index (actually it writes DELETE{1, 10} too, but it gets overwritten by the REPLACE). However, the REPLACE has a column mask that says the update() does modify the key field, because a column mask is generated solely from the update operations, before applying them. As a result, the write iterator will not skip this REPLACE on dump. This won't have any serious consequences, because this is a mere optimization. What is worse, the write iterator will also turn the REPLACE into an INSERT, which is absolutely wrong, as the REPLACE is preceded by INSERT{1, 10}. If the tuple gets deleted, the DELETE statement and the INSERT created by the write iterator from the REPLACE will get annihilated, leaving the old INSERT{1, 10} visible. The issue may result in invalid select() output, as demonstrated in the issue description. It may also result in crashes, because the tuple cache is very sensitive to invalid select() output.

To fix this issue, let's clear key bits in the column mask if we detect that an update() doesn't actually update secondary key fields although the column mask says it does. Closes #3607
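The fix can be sketched as a bit-mask adjustment (the function name and flat-integer tuple model are illustrative assumptions): if the mask claims a key field changed but the old and new values of that field are actually equal, the corresponding bit is cleared.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Clear bits in column_mask that both (a) fall inside key_mask and
 * (b) correspond to fields whose value did not actually change, so
 * the write iterator neither keeps a redundant REPLACE nor wrongly
 * turns it into an INSERT. Tuples are modeled as int64_t arrays.
 */
static uint64_t
adjust_column_mask(uint64_t column_mask, uint64_t key_mask,
		   const int64_t *old_tuple, const int64_t *new_tuple,
		   int field_count)
{
	for (int i = 0; i < field_count; i++) {
		uint64_t bit = UINT64_C(1) << i;
		if ((column_mask & bit) && (key_mask & bit) &&
		    old_tuple[i] == new_tuple[i])
			column_mask &= ~bit;
	}
	return column_mask;
}
```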
-
- Aug 08, 2018
-
-
Mergen Imeev authored
In some cases the box.snapshot() operation takes longer than expected. This leads to situations when the previous error is reported instead of the new one. Now these errors are completely separated. Closes #3599
-
Olga Arkhangelskaia authored
Added the server option to the syslog configuration; it is responsible for the log destination. At the moment there are two ways to use it: server=unix:/path/to/socket or server=ipv4:port. If the port is not set, the default UDP port 514 is used. If logging to syslog is enabled but there is no server option, the default location is used: /dev/log on Linux and /var/run/syslog on Mac. Closes #3487
-
- Aug 07, 2018
-
-
Kirill Yukhin authored
-
Kirill Yukhin authored
Print reproduce file.
-
Kirill Yukhin authored
-
Sergei Voronezhskii authored
The -j -1 value used to enable the legacy consistent mode. Reducing the number of jobs to one by switching to -j 1 uses the same part of the code as the parallel mode, and the code in parallel mode kills hung tests. Part of https://github.com/tarantool/test-run/issues/106
-
Kirill Yukhin authored
-
Vladimir Davydov authored
It is dangerous to call box.cfg() concurrently from different fibers. For example, replication configuration uses static variables and yields so calling it concurrently can result in a crash. To make sure it never happens, let's protect box.cfg() with a lock. Closes #3606
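The guard's intent can be shown with a toy mutual-exclusion sketch. Note the real fix is a fiber-level lock in Lua (fibers are cooperative, not OS threads); the pthread mutex and the `reconfigure` name here are assumptions made only to illustrate the "one reconfiguration at a time" pattern.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;
static int cfg_generation;

/*
 * Toy reconfiguration entry point: reject a caller if another
 * reconfiguration is already in progress (the real box.cfg() lock
 * makes the second caller wait instead).
 */
static int
reconfigure(void)
{
	if (pthread_mutex_trylock(&cfg_lock) != 0)
		return -1;	/* another reconfiguration is running */
	cfg_generation++;	/* the critical, non-reentrant work */
	pthread_mutex_unlock(&cfg_lock);
	return 0;
}
```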
-
- Aug 03, 2018
-
-
Kirill Yukhin authored
-
Alexander Turenko authored
Fixes #3489.
-
- Aug 02, 2018
-
-
Alexander Turenko authored
-