- Jan 15, 2019
-
-
Vladimir Davydov authored
During local recovery we may encounter an LSM tree marked as dropped. This means that the LSM tree was dropped before restart and hence will be deleted before recovery completion. There's no need to add such trees to the vinyl scheduler - it looks confusing and can potentially result in mistakes when the code gets modified.
-
Vladimir Davydov authored
Currently, we bump range->version in vy_scheduler.c. This looks like an encapsulation violation and may easily result in an error (we have to be careful to increment range->version whenever we modify a range). So let's bump the range version right in vy_range.c.
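The encapsulation idea described above can be sketched as follows. The struct layout and the helper are illustrative stand-ins, not the real vinyl code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * A minimal sketch with made-up, simplified structs (the real
 * vy_range/vy_slice have many more members).
 */
struct vy_slice { int id; };

struct vy_range {
    /* Incremented on every modification of the range. */
    uint32_t version;
    int slice_count;
};

/*
 * Every modification goes through a vy_range.c-style helper which
 * bumps the version itself, so callers (e.g. the scheduler) cannot
 * forget to do it.
 */
void
vy_range_add_slice(struct vy_range *range, struct vy_slice *slice)
{
    (void)slice;
    range->slice_count++;
    range->version++; /* bumped here, not in vy_scheduler.c */
}
```

With this pattern, vy_scheduler.c never touches range->version directly.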
-
Vladimir Davydov authored
compact_input sounds confusing, because 'compact' works as an adjective here. Saving 3 characters per variable/stat name related to compaction doesn't justify this. Let's rename 'compact' to 'compaction' both in stats and in the code.
-
Vladimir Davydov authored
'in' is a reserved keyword in Lua, so using 'in' as a map key was a bad decision - one has to access it with [] rather than simply with a dot. Let's rename 'in' to 'input' and 'out' to 'output', both in the output and in the code.
-
Vladimir Davydov authored
This test is huge and takes a long time to complete. Let's move ddl, tx, and stat related stuff to separate files.
-
Vladimir Davydov authored
The test was called 'info' in the first place because, back when it was introduced, vinyl statistics were reported by the 'info' method. Today, stats are reported by 'stat', so let's rename the test as well to conform.
-
- Jan 14, 2019
-
-
Konstantin Osipov authored
Add box.info.gc.checkpoint_is_in_progress, which is true when there is an ongoing checkpoint/snapshot and false otherwise.

Closes gh-3935

@TarantoolBot document
Title: box.info.gc().checkpoint_is_in_progress
Extend the box.info.gc() documentation with a new member, checkpoint_is_in_progress, which is true if there is an ongoing checkpoint, false otherwise.
-
- Jan 10, 2019
-
-
Mergen Imeev authored
Not really critical as obuf_alloc() fails only on OOM, i.e. never in practice.
-
Kirill Shcherbatov authored
Introduced a new OP_Update opcode executing a Tarantool native update operation. In case of UPDATE OR REPLACE we can't use the new OP_Update, as it has complex SQL-specific semantics:

CREATE TABLE tj (s1 INT PRIMARY KEY, s2 INT);
INSERT INTO tj VALUES (1, 3), (2, 4), (3, 5);
CREATE UNIQUE INDEX i ON tj (s2);
SELECT * FROM tj;
-- [1, 3], [2, 4], [3, 5]
UPDATE OR REPLACE tj SET s2 = s2 + 1;
SELECT * FROM tj;
-- [1, 4], [3, 6]

I.e. the [1, 3] tuple is updated to [1, 4] and has replaced the tuple [2, 4]. In SQL this logic is implemented as preventive deletion of tuples from all corresponding indexes. The other significant change is that primary key update is forbidden. It was possible to deal with it the same way as with the OR REPLACE specifier, but we need an atomic UPDATE step for ticket #3691 to support the OR IGNORE/OR ABORT/OR FAIL specifiers. Reworked tests to avoid primary key UPDATE where possible.

Closes #3850
-
Kirill Shcherbatov authored
Introduced new sql_vdbe_mem_encode_tuple and mpstream_encode_vdbe_mem routines to encode Vdbe memory to msgpack on region without a preliminary size estimation call. Got rid of the sqlite3VdbeMsgpackRecordLen and sqlite3VdbeMsgpackRecordPut functions, which became useless. This approach also resolves the invalid size estimation problem (#3035), because estimation is not required anymore.

Needed for #3850
Closes #3035
-
Kirill Shcherbatov authored
An UPDATE operation didn't fail when an fkey self-reference condition was unsatisfied and the table had other records. To avoid raising an error where it is not necessary, Vdbe inspects the parent table with OP_Found. This branch is not valid for a self-referenced table, since it looks for a tuple affected by the UPDATE operation, and since the foreign key has already detected a conflict, the conflict must be raised. Example:

CREATE TABLE t6(a INTEGER PRIMARY KEY, b TEXT, c INT, d TEXT,
                UNIQUE(a, b), FOREIGN KEY(c, d) REFERENCES t6(a, b));
INSERT INTO t6 VALUES(1, 'a', 1, 'a');
INSERT INTO t6 VALUES(100, 'one', 100, 'one');
UPDATE t6 SET c = 1, d = 'a' WHERE a = 100; -- fk conflict must be raised here

Needed for #3850
Closes #3918
-
Kirill Shcherbatov authored
The sql_vdbe_mem_alloc_region() function, which constructs the value of a Vdbe Mem object, used to change only type-related flags. However, it must also erase the other flags (for instance, flags related to allocation policy: static, dynamic, etc.), since their combination may be invalid. In a typical Vdbe scenario, OP_MakeRecord and OP_RowData release memory with sqlite3VdbeMemRelease() and allocate on region with sql_vdbe_mem_alloc_region(). An integrity assert based on sqlite3VdbeCheckMemInvariants() would fire here due to an incompatible combination of flags: MEM_Static | (MEM_Blob | MEM_Ephem). Needed for #3850
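The flag-reset fix can be sketched as follows. The flag values and the helper name are hypothetical stand-ins mirroring the flags named in the message, not the real Vdbe API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag values mirroring the ones named above. */
enum {
    MEM_Static = 1 << 0,
    MEM_Dyn    = 1 << 1,
    MEM_Ephem  = 1 << 2,
    MEM_Blob   = 1 << 3,
};

struct Mem { uint32_t flags; };

/*
 * Sketch of the fix: instead of OR-ing in the new type flags, assign
 * them outright, erasing stale allocation-policy flags so invalid
 * combinations like MEM_Static | MEM_Blob | MEM_Ephem cannot survive.
 */
void
mem_set_region_blob(struct Mem *mem)
{
    mem->flags = MEM_Blob | MEM_Ephem; /* erase all previous flags */
}
```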
-
Kirill Shcherbatov authored
Removed VDBE code generation performing type checks from vdbe_emit_constraint_checks, as it is useless now that strict types have been introduced.
-
- Jan 09, 2019
-
-
Georgy Kirichenko authored
Reclaim the memory used for recovery of the previous page when it is not the last one. There is no specific test case. Fixes #3920
-
- Jan 05, 2019
-
-
Alexander Turenko authored
It is caught by ASAN at build time (lemon is executed to generate parse.[ch]), so tarantool couldn't be built with -DENABLE_ASAN=ON.
-
- Dec 29, 2018
-
-
Kirill Shcherbatov authored
Reworked tuple_init_field_map to fill a local bitmap and compare it with the template's required_fields bitmap, which holds information about required fields. Each field is mapped into the bitmap by its field id, a unique field identifier. This approach to checking the required fields will work even after the introduction of JSON paths, when the field tree becomes multilevel. @locker: massive code refactoring, comments. Needed for #1012
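The bitmap comparison can be sketched like this. It is a simplified model assuming at most 64 field ids, so a plain uint64_t serves as the bitmap (the real implementation uses a variable-size bitmap and different names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct field_check {
    uint64_t required_fields; /* bit i set => field with id i is required */
    uint64_t seen_fields;     /* filled while decoding the tuple */
};

/* Called once per field actually present in the decoded tuple. */
void
field_check_mark_seen(struct field_check *c, uint32_t field_id)
{
    c->seen_fields |= (uint64_t)1 << field_id;
}

/*
 * After decoding, every required bit must also be set in the local
 * bitmap; one mask operation checks all required fields at once,
 * regardless of how deep the field tree is.
 */
bool
field_check_all_required_present(const struct field_check *c)
{
    return (c->required_fields & ~c->seen_fields) == 0;
}
```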
-
Kirill Shcherbatov authored
@locker: comments. Needed for #1012
-
Nikita Pettik authored
Closes #3906
-
Kirill Shcherbatov authored
Allowed SELECT requests that have a HAVING clause without GROUP BY. It is possible when both the left and right parts of the request have an aggregate function or a constant value. Closes #2364

@TarantoolBot document
Title: HAVING without GROUP BY clause
A query with a HAVING clause should also have a GROUP BY clause. If you omit GROUP BY, all the rows not excluded by the WHERE clause are returned as a single group. Because no grouping is performed between the WHERE and HAVING clauses, they cannot act independently of each other. HAVING acts like WHERE because it affects the rows in a single group rather than groups, except that the HAVING clause can still use aggregates. HAVING without GROUP BY is not supported for SELECT from multiple tables. See the 2011 SQL standard, "Part 2: Foundation", 7.10 <having clause>, p. 381. Example:

SELECT MIN(s1) FROM te40 HAVING SUM(s1) > 0; -- is valid
SELECT 1 FROM te40 HAVING SUM(s1) > 0;       -- is valid
SELECT NULL FROM te40 HAVING SUM(s1) > 0;    -- is valid
SELECT date() FROM te40 HAVING SUM(s1) > 0;  -- is valid
-
Vladimir Davydov authored
xlog and xlog_cursor must be opened and closed in the same thread, because they use cord's slab allocator. Follow-up #3910
-
Vladimir Davydov authored
An xlog_cursor created and used by a relay via recovery context is destroyed by the main thread once the relay thread has exited. This is incorrect, because xlog_cursor uses cord's slab allocator and therefore must be destroyed in the same thread it was created by, otherwise we risk getting a use-after-free bug. So this patch moves recovery_delete() invocation to the end of the relay thread routine. No test is added, because our existing tests already cover this case - crashes don't usually happen, because we are lucky. The next patch will add some assertions to make the bug 100% reproducible. Closes #3910
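The invariant behind this fix can be sketched as follows. The names are illustrative, not the real xlog_cursor API: an object backed by a per-thread (cord-local) allocator remembers its owner thread and must be destroyed on that thread.

```c
#include <assert.h>
#include <pthread.h>

struct cord_local_obj {
    pthread_t owner; /* thread whose slab cache backs this object */
};

void
cord_local_obj_create(struct cord_local_obj *obj)
{
    obj->owner = pthread_self();
}

int
cord_local_obj_destroy(struct cord_local_obj *obj)
{
    /*
     * Destroying from another thread would return memory into the
     * wrong thread's slab cache: a use-after-free in disguise.
     */
    if (!pthread_equal(obj->owner, pthread_self()))
        return -1;
    return 0;
}
```

Moving recovery_delete() to the end of the relay thread routine is exactly what keeps create and destroy on the same thread.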
-
Vladimir Davydov authored
A few changes intended to make error messages clearer, remove duplicates, etc:
- Don't log an error when xstream_write() fails in recover_xlog() - it's the caller's responsibility. Logging it there results in the same error occurring twice in the log.
- If recover_xlog() fails to apply a row and continues due to the force_recovery flag, log the row's LSN - it might be useful for problem analysis.
- Don't override a relay error in relay_process_wal_event(), otherwise we can get a 'fiber is cancelled' error in the status, which is meaningless.
- Break replication if we fail to send an ack, as it's pointless to continue then.
- Log a relay error only once - when the relay thread is exiting. Don't log subsequent errors - they don't make much sense.
- Set the relay cord name before setting the WAL watcher: the WAL watcher sends an event as soon as it's installed, which starts xlog recovery, which is logged by the relay, so we want the relay name to be valid. Note, there's a catch here: we used the original cord name as the cbus endpoint name, so now we have to pass the endpoint name explicitly - this looks better anyway.
While we are at it, let's also add some comments to relay_subscribe_f() and remove the diag_is_empty() check, as the diag is always set when the relay exits. Part of #3910
-
Vladimir Davydov authored
relay_process_wal_event() may be called if the relay fiber is already exiting, e.g. by wal_clear_watcher(). We must not try to scan xlogs in this case, because we could have written an incomplete packet fragment to the replication socket, as described in the previous commit message, so that writing another row would lead to corrupted replication stream and, as a result, permanent replication breakdown. Actually, there was a check for this case in relay_process_wal_event(), but it was broken by commit adc28591 ("replication: do not delete relay on applier disconnect"), which replaced it with a relay->status check, which is completely wrong, because relay->status is reset only after the relay thread exits. Part of #3910
-
Vladimir Davydov authored
In case force_recovery flag is set, recover_xlog() ignores any errors returned by xstream_write(), even SocketError or FiberIsCancelled. This may result in permanent replication breakdown as described in the next paragraph. Suppose there's a master and a replica and the master has force_recovery flag set. The replica gets stalled on WAL while applying a row fetched from the master. As a result, it stops sending ACKs. In the meantime, the master writes a lot of new rows to its WAL so that the relay thread sending changes to the replica fills up all the space available in the network buffer and blocks on the replication socket. Note, at this moment it may occur that a packet fragment has been written to the socket. The WAL delay on the replica takes long enough for replication to break on timeout: the relay reader fiber on the master doesn't receive an ACK from the replica in time and cancels the relay writer fiber. The relay writer fiber wakes up and returns to recover_xlog(), which happily continues to scan the xlog attempting to send more rows (force_recovery is set), failing, and complaining to the log. While the relay thread is still scanning the log, the replica finishes the long WAL write and reads more data from the socket, freeing up some space in the network buffer for the relay to write more rows. The relay thread, which happens to be still in recover_xlog(), writes a new row to the socket after the packet fragment it had written when it was cancelled, effectively corrupting the stream and breaking a replication with an unrecoverable error, e.g. xrow.c:99 E> ER_INVALID_MSGPACK: Invalid MsgPack - packet header Actually, taking into account force_recovery in relay threads looks dubious - after all this option was implemented to allow start of a tarantool instance when local data are corrupted, not to force replication from a corrupted data set. The latter is dangerous anyway - it's better to rebootstrap replicas in case of master data corruption. 
That being said, let's ignore force_recovery option in relay threads. It's difficult to write a test for this case, since too many conditions have to be satisfied simultaneously for the issue to occur. Injecting errors doesn't really help here and would look artificial, because it'd rely too much on the implementation. So I'm committing this one without a test case. Part of #3910
-
Vladimir Davydov authored
These distributions are past EOL.
-
- Dec 27, 2018
-
-
Kirill Shcherbatov authored
A new json_token_is_leaf routine tests if the passed JSON token is a JSON tree leaf (i.e. has no child record). @locker: test refactoring. Needed for #1012
-
Kirill Shcherbatov authored
A snprint-style function that prints the path to a token in a JSON tree. It will be used for error reporting related to JSON path indexes. @locker: massive code refactoring. Needed for #1012
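The snprint idiom the new routine follows can be sketched like this. The token structure and function name are made-up stand-ins: each call appends into the remaining buffer space, output is truncated safely, and the total untruncated length is returned so the caller can size a buffer precisely.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct token { int num; }; /* toy token: a numeric array index */

int
token_path_snprint(char *buf, int size, const struct token *toks, int count)
{
    int len = 0;
    for (int i = 0; i < count; i++) {
        /* Write into what's left of the buffer; size 0 once full. */
        int rc = snprintf(buf + (len < size ? len : size),
                          len < size ? (size_t)(size - len) : 0,
                          "[%d]", toks[i].num);
        len += rc; /* accumulate the would-be length */
    }
    return len;
}
```

A caller can invoke it with a NUL-size probe first, then allocate len + 1 bytes and print for real.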
-
Roman Khabibov authored
The reason for the bug was that X'00' is a terminal symbol. If the char set contains X'00', all characters after it (including X'00' itself) are ignored. Closes #3543
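This bug class can be sketched as follows. The function names are illustrative, not the actual patch: testing set membership with strchr() treats the set as NUL-terminated, so an X'00' byte silently truncates it, while tracking the set length explicitly and using memchr() sees the whole set.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Buggy: strchr() stops at the first X'00' inside the set. */
bool
set_contains_buggy(const char *set, char c)
{
    return strchr(set, c) != NULL;
}

/* Fixed: an explicit length lets memchr() scan past X'00'. */
bool
set_contains_fixed(const char *set, size_t set_len, char c)
{
    return memchr(set, c, set_len) != NULL;
}
```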
-
- Dec 25, 2018
-
-
Nikita Pettik authored
Closes #2647
-
Nikita Pettik authored
Part of #2647
-
Nikita Pettik authored
We don't rely on this string anymore, and it can be removed for ordinary tables. However, it is still used to hold the SELECT body for views. Part of #2647
-
Nikita Pettik authored
Since we are going to remove the string of the SQL "CREATE TABLE ..." statement from a space's opts, let's rework the methods in sqltester to drop all tables and views so as to avoid relying on this parameter. Part of #2647
-
Nikita Pettik authored
Since the SQL string containing the "CREATE TABLE ..." statement is not used anymore for ordinary tables/spaces, it makes no sense to modify it during renaming. Hence, the rename routine now needs only to update the name in _space, which can be done using a simple update operation. Moreover, now we are able to rename spaces created from Lua land. Part of #2647
-
Nikita Pettik authored
At the last stage of trigger creation, the trigger's create statement ("CREATE TRIGGER ...") is encoded to msgpack. Since this string is the only member of the map to be encoded, it makes no sense to call the whole sql_encode_table_opts() function, which in turn processes the table's checks, opts for VIEWs, etc. Needed for #2647
-
Nikita Pettik authored
Currently, all routines connected with expression AST processing rely on recursive approaches. On the other hand, SQL is executed in a standard fiber, which features only 64 KB of stack memory. Hence, deep recursion can result in stack overflow. To avoid obvious overflows, let's significantly restrict the allowed depth of the expression AST. Note that this is not a radical solution to the problem but rather a temporary fix. Workaround for #3861
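The depth cap can be sketched as follows. The walker, the struct, and the limit value here are hypothetical, illustrating the pattern of bounding recursion rather than the actual parser code:

```c
#include <assert.h>
#include <stddef.h>

enum { EXPR_MAX_DEPTH = 200 }; /* illustrative cap */

struct expr {
    struct expr *left;
    struct expr *right;
};

/*
 * Recursive AST walk that fails once the expression depth exceeds
 * the cap, instead of risking an overflow of the fiber's small stack.
 * Returns 0 on success, -1 if the tree is too deep.
 */
int
expr_check_depth(const struct expr *e, int depth)
{
    if (e == NULL)
        return 0;
    if (depth >= EXPR_MAX_DEPTH)
        return -1;
    if (expr_check_depth(e->left, depth + 1) != 0)
        return -1;
    return expr_check_depth(e->right, depth + 1);
}
```

Rejecting an over-deep tree up front turns a would-be crash into an ordinary error the user can see.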
-
Nikita Pettik authored
Before this patch it was allowed to rename a space which is referenced by a view. In turn, the view contains its SELECT statement in raw form (i.e. as a string), and it is not modified during the renaming routine. Hence, after renaming, the space still has a reference counter > 0, but no usage of the view is possible (since execution of the SELECT results in "Space does not exist"). To avoid such situations, let's ban renaming a space if its view reference counter is > 0. Note that RENAME is an ANSI extension, so different DBs behave differently in this case - some of them allow renaming tables referenced by a view (PostgreSQL), others don't (Oracle). Closes #3746
-
Kirill Shcherbatov authored
The space_def_destroy_fields routine used in make_scoped_guard in alter's space_def_new_from_tuple always passed extern_alloc=true to the sql_expr_delete routine. It shouldn't, as the first-time allocated AST object (field_def_decode) is not externally allocated. Introduced a new 'extern_alloc' flag for the space_def_destroy_fields routine. Also fixed the def_guard declaration in space_def_new_from_tuple: it should come right after the space_def_new_xc call, otherwise a subsequent tnt_raise/diag_raise call wouldn't fire it. Closes #3908
-
Kirill Shcherbatov authored
Reworked the ER_FIELD_TYPE and ER_ACTION_MISMATCH errors to pass a path string to the field instead of a field number. This patch is required for further JSON patches, to give detailed information about the field on error. Needed for #1012
-
- Dec 24, 2018
-
-
Vladimir Davydov authored
This function and the associated table don't depend on key definition, only on field type. So let's move it from key_def.h to field_def.h.
-
Kirill Shcherbatov authored
- Reworked field type and nullability checks to set the error message in tuple_init_field_map manually. We will specify the full JSON path to the field in further patches.
- Introduced a field_mp_type_is_compatible routine that tests field_type and mp_type compatibility, taking field nullability into account.
- Refactored key_part_validate to pass a const char *key argument and to reuse the field_mp_type_is_compatible code.
Needed for #1012
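The compatibility test can be sketched like this. It is a toy model with a reduced type set and a simplified signature; the real routine covers all Tarantool field and msgpack types:

```c
#include <assert.h>
#include <stdbool.h>

enum mp_type { MP_NIL, MP_UINT, MP_STR };
enum field_type { FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING };

/*
 * A value of msgpack type mt is acceptable for a field of type ft;
 * nullability is folded into the same check, so MP_NIL is compatible
 * only with nullable fields.
 */
bool
field_mp_type_is_compatible(enum field_type ft, enum mp_type mt,
                            bool is_nullable)
{
    if (mt == MP_NIL)
        return is_nullable;
    switch (ft) {
    case FIELD_TYPE_UNSIGNED:
        return mt == MP_UINT;
    case FIELD_TYPE_STRING:
        return mt == MP_STR;
    }
    return false;
}
```

Folding nullability into one predicate lets both tuple_init_field_map and key_part_validate share the same code path.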
-