- Jun 06, 2019
-
-
Kirill Shcherbatov authored
This patch introduces a new system space to persist check constraints. The format of the new system space is _ck_constraint (space id = 364): [<space id> UINT, <constraint name> STR, <is_deferred> BOOL, <language> STR, <code> STR]. A CK constraint is local to a space, so every pair <space id, CK name> is unique (it is also the PK in the _ck_constraint space). After insertion into this space, a new instance describing the check constraint is created. A check constraint holds an expression AST. While a space has check constraints, it is not allowed to be dropped. The :drop() space method first deletes all check constraints and then removes the entry from _space. Because the space alter, the index alter and the space truncate operations cause the space recreation process, a new RebuildCkConstraints object is introduced. This alter object compiles a new ck constraint object and replaces and removes the existing instances atomically (if the assembly of some ck constraint object fails, nothing is changed). In fact, in the scope of this patch we don't really need to recreate a ck_constraint object in such situations (it would be enough to patch the space_def pointer in the AST like we did before, but we are going to recompile the VDBE that represents a ck constraint in further patches, and that operation is not safe). The main motivation for these changes is the ability to support the ADD CHECK CONSTRAINT operation in the future. CK constraints are easier to manage as self-sustained objects: such a change is managed with an atomic insertion (unlike the current architecture). Finally, the xfer optimization is now disabled if a space has ck constraints. In the following patches this xfer optimization becomes impossible, so there is no reason to rewrite this code now. Needed for #3691
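A hedged sketch of how this could look from the console (identifiers are illustrative, and on older builds the SQL entry point may be box.sql.execute() rather than box.execute()):

```lua
-- Illustrative only: create a table with a CHECK constraint and inspect
-- the new system space. Each _ck_constraint tuple holds
-- [space_id, constraint_name, is_deferred, language, code].
box.execute([[CREATE TABLE t (id INT PRIMARY KEY, a INT, CHECK (a > 0));]])
box.space._ck_constraint:select()
-- :drop() first deletes the space's check constraints, then removes the
-- _space entry, so the drop succeeds even though constraints exist:
box.space.T:drop()
```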
-
Kirill Shcherbatov authored
Refactored the OP_Column instruction with a new vdbe_field_ref class. The vdbe_field_ref is a reusable object that speeds up field access for a given tuple or tuple data. Introduced the OP_Fetch opcode, which uses a vdbe_field_ref given as its first argument. This opcode makes it possible to bind a new tuple to an existing VDBE without decoding its fields. Needed for #3691
-
Kirill Shcherbatov authored
This preparatory refactoring is necessary to simplify introducing the new OP_Fetch statement in the next patch. - got rid of the useless sMem local variable - got rid of the useless payloadSize in the VdbeCursor structure Needed for #3691
-
Kirill Shcherbatov authored
A new sql_bind_ptr routine allows binding a generic pointer to a VDBE variable. This change is required to pass a tuple_fetcher representing a new tuple to the check constraint VDBE. Needed for #3961
-
Kirill Shcherbatov authored
The sql_flags is a parser parameter that describes how to parse an SQL request and determines general behaviour: e.g. whether foreign keys are handled as deferred or not. But now this information is taken from the global user session object. When we need to run the parser with some other parameters, it is necessary to change the global session object, which may lead to unpredictable consequences in the general case. Introduced a new parser and VDBE field sql_flags which is responsible for SQL parsing results. Needed for #3691
-
Kirill Shcherbatov authored
The SQL_NullCallback flag is never set now, so it is redundant. Let's get rid of it, because we are going to pass user_session->sql_flags to the parser and use it instead of the session instance. Needed for #3691
-
- Jun 04, 2019
-
-
Serge Petrenko authored
After making memtx space format check non-blocking, move the appropriate vinyl test case to engine suite. Introduce a new errinj, ERRINJ_CHECK_FORMAT_DELAY, to unify the test case for both engines. Follow-up #3976
-
Serge Petrenko authored
Just like index build, space format check in memtx stalls the event loop for the whole check time. Add occasional yields, and an on_replace trigger which checks the format of tuples inserted while the space format check is in progress. Follow-up #3976 @TarantoolBot document Title: memtx now checks space format in background There is no event loop stall anymore when the memtx engine checks space format. You may insert tuples into a space while its new format is being checked. If the tuples don't match the new format, the format change will be aborted.
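A minimal sketch of the documented behaviour, assuming a running Tarantool instance (space and field names are illustrative):

```lua
fiber = require('fiber')
s = box.schema.space.create('test')
s:create_index('pk')
for i = 1, 100000 do s:insert{i, tostring(i)} end
-- Start a format change in a separate fiber; the check now yields
-- occasionally instead of stalling the event loop.
f = fiber.create(function()
    s:format({{name = 'id', type = 'unsigned'}, {name = 's', type = 'string'}})
end)
-- Concurrent inserts are validated by the on_replace trigger; a tuple
-- violating the new format would abort the format change.
s:insert{100001, 'ok'}
```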
-
Ilya Konyukhov authored
Currently, to determine the current source path, a user has to get the current source info from the stack and parse it manually. This commit adds a couple of little helpers for this task, called `debug.sourcefile()` and `debug.sourcedir()`, which return a path to the file being executed. The path is relative to the current working directory. If such a path cannot be determined, '.' is returned (i.e. interactive mode). There is an exception if you are running a script with an absolute path. In that case, the absolute path will be returned too. ```bash tarantool /absolute/path/to/the/script.lua ``` There are also handy shortcuts for these functions: `debug.__file__` and `debug.__dir__` @TarantoolBot document Title: Document new debug helper methods There are two more helpers in the debug module: `sourcefile()` and `sourcedir()`. These two helpers return relative paths to the source file and directory respectively. There are also `debug.__file__` and `debug.__dir__`, which are just handy ways to call `sourcefile()` and `sourcedir()`. It should be mentioned that it is only possible to determine the real path to the file if the function was defined in some Lua file. `loadstring` may be an exception here, since Lua will store the whole string in the debug info. There is also a possible pitfall because of tail call optimization: if the function `fn` is defined inside `myapp/init.lua` like this: ```lua function fn() return debug.sourcefile() end ``` it would not be possible to determine the file path correctly, because that function would not appear in the stack trace. Even worse, it may return a wrong result, i.e. the actual path where the "parent" function was defined. To force the interpreter to avoid this kind of optimization, you may use parentheses like this: ```lua function fn() return (debug.sourcefile()) end ``` Also, both `sourcefile` and `sourcedir` functions have an optional level argument, which allows setting the level of the call stack the function should examine for the source path. By default, 2 is used.
-
Serge Petrenko authored
Example: ``` [001] box/on_replace.test.lua [ pass ] [001] box/on_replace.test.lua [ fail ] [001] [001] Test failed! Result content mismatch: [001] --- box/on_replace.result Tue Jun 4 10:44:39 2019 [001] +++ box/on_replace.reject Tue Jun 4 10:44:50 2019 [001] @@ -4,7 +4,7 @@ [001] -- test c and lua triggers: must return only lua triggers [001] #box.space._space:on_replace() [001] --- [001] -- 0 [001] +- 2 [001] ... ``` This happened because we forgot to remove triggers set on _space. Clear them below the test case.
-
- Jun 03, 2019
-
-
Serge Petrenko authored
This commit fixes a regression introduced by commit 5ab0763b (pass operation type to lua triggers). When passing the operation type to the trigger, we relied on a corresponding xrow_header, which would later be written to the log. When before_replace triggers are fired, there is no row connected to the txn statement yet, so we faked one to pass operations. We, however, didn't account for temporary spaces, which never have a row associated with a corresponding txn stmt. This led to a segfault on an attempt to use on_replace triggers with temporary spaces. Add a fake row for temporary spaces to pass the operation type in on_replace triggers, and add some tests. Closes #4266
-
- May 31, 2019
-
-
Alexander V. Tikhonov authored
This reverts commit 6600dc1b. Pushed by mistake before all changes.
-
Vladimir Davydov authored
Since DDL is triggered by the admin, it can be deliberately initiated when the workload is known to be low. Throttling it along with DML requests would only cause exasperation in this case. So we don't apply the disk-based rate limit to DDL. This should be fine, because the disk-based limit is set rather strictly to leave the workload some space to grow, see vy_regulator_update_rate_limit(), and in contrast to the memory-based limit, exceeding the disk-based limit doesn't result in abrupt stalls - it may only lead to a gradual accumulation of disk space usage and read latency. Closes #4238
-
Alexander V. Tikhonov authored
Fixed swim headers in addition to commit Closes #4050
-
- May 30, 2019
-
-
Serge Petrenko authored
Since we have implemented memtx background index build, the corresponding vinyl test cases are now also suitable for memtx, so move them to engine suite so that both engines are tested. Also add some tests to check that an ongoing index build is aborted in case a tuple violating unique constraint or format of the new index is inserted. Add some error injections to unify appropriate memtx/vinyl tests. Closes #3976
-
Serge Petrenko authored
Memtx index build used to stall the event loop for the whole build period. Add occasional yields so that the loop is not blocked for too long. Also make the index build set on_replace triggers so that concurrent replaces are correctly handled during the build. Part of #3976 @TarantoolBot document Title: memtx indices are now built in background The memtx engine no longer stalls the event loop during an index build. You may insert tuples into a space while an index build is in progress: the tuples will be correctly added to the new index. If such a tuple violates the new index's unique constraint or doesn't match the new index format, the index build will be aborted.
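A hedged sketch of the new behaviour, assuming a running Tarantool instance (names and sizes are illustrative):

```lua
s = box.schema.space.create('test')
s:create_index('pk')
for i = 1, 100000 do s:insert{i, i} end
fiber = require('fiber')
-- Build a secondary index in a separate fiber; the build now yields
-- occasionally, so the event loop stays responsive.
f = fiber.create(function()
    s:create_index('sk', {parts = {2, 'unsigned'}})
end)
-- This insert is picked up by the on_replace trigger and added to the
-- index under construction; inserting {100001, 1} instead would violate
-- the new unique constraint and abort the build.
s:insert{100001, 100001}
```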
-
Vladimir Davydov authored
If a key isn't found in the tuple cache, we fetch it from a run file. In this case, the disk read and page decompression are done by a reader thread; however, the key lookup in the fetched page is still performed by the tx thread. Since pages are immutable, this could as well be done by the reader thread, which would allow us to save some precious CPU cycles for tx. Close #4257
-
Vladimir Davydov authored
To handle fiber cancellation during page read we need to pin all objects referenced by vy_page_read_task. Currently, there's only one such object, vy_run. It has reference counting, so pinning it is trivial. However, to move page lookup to a reader thread, we need to also reference the key def, tuple format, and key. Format and key have reference counting, but key def doesn't - we typically copy it. Copying it in this case is too heavy. Actually, cancelling a fiber manually or on timeout while it's reading disk doesn't make much sense with PCIe-attached flash drives. It used to be reasonable with rotating disks, since a rotating disk controller could retry reading a block indefinitely on read failure. It is still relevant to Network Attached Storage. On the other hand, NAS has never been tested, and what isn't tested can and should be removed. For complex SQL queries we'll be forced to rethink timeout handling anyway. That being said, let's simply drop this functionality.
-
Vladimir Davydov authored
Page reading code is intermixed with the reader thread selection in the same function, which makes it difficult to extend the former. So let's introduce a helper function encapsulating a call on behalf of a reader thread.
-
Vladimir Davydov authored
Since a page read task references the source run file, we don't need to pass page info by value.
-
Vladimir Davydov authored
This function is a part of the run iterator API so we can't use it in a reader thread. Let's make it an independent helper. As a good side effect, we can now reuse it in the slice stream implementation.
-
- May 29, 2019
-
-
Cyrill Gorcunov authored
Since commit f42596a2 we've started to test whether a fiber is cancelled in coio_wait. This may hang the interactive console with a simple fiber.kill(fiber.self()) call. Sane users don't do such tricks, and this affects the interactive console mode only, but still, to be on the safe side, let's exit if someone accidentally killed the console fiber. Notes: - such an exit may ruin terminal settings and one needs to reset them afterwards; - the issue happens on the interactive console only, so I didn't find a way to test it automatically and tested manually.
-
Alexander Turenko authored
This warning breaks -Werror -O2 build on GCC 9.1.
-
Kirill Yukhin authored
-
Alexander Turenko authored
We run testing in Travis-CI using docker, which by default enters into a shell session as root. This is follow up for e9c96a4c ('fio: fix mktree error reporting').
-
Vladimir Davydov authored
Tune secondary index content to make sure minor compaction expected by the test does occur. Fixes commit e2f5e1bc ("vinyl: don't produce deferred DELETE on commit if key isn't updated"). Closes #4255
-
Vladimir Davydov authored
Rather than passing 'sequence_part' along with 'sequence' on index create/alter, pass a table with the following fields: - id: sequence id or name - field: auto increment field id, name, or path in the case of a JSON index. If id is omitted, the sequence will be auto-generated (equivalent to 'sequence = true'). If field is omitted, the first indexed field is used. The old format, i.e. passing false/true or a sequence name/id instead of a table, is still supported. Follow-up #4009
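The new option format could be used roughly like this (a sketch; the sequence and space names are illustrative):

```lua
box.schema.sequence.create('my_seq')
s = box.schema.space.create('test')
-- New-style table form: explicit sequence and auto-increment field.
s:create_index('pk', {parts = {{1, 'unsigned'}},
                      sequence = {id = 'my_seq', field = 1}})
-- Omitting id auto-generates a sequence, like 'sequence = true';
-- omitting field applies the sequence to the first indexed field.
```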
-
Mike Siomkin authored
In the case of, say, permission denied, the function tried to concatenate a string with a cdata<struct error> and failed. Also unified the error messages for a single-part path and a multi-part one.
-
- May 28, 2019
-
-
Cyrill Gorcunov authored
Backport of openresty/luajit2-test-suite commit 907c536c210ebe6a147861bb4433d28c0ebfc8cd to test unsinking of 64-bit pointers. Part-of #4171
-
Kirill Yukhin authored
-
Nikita Pettik authored
It also fixes misbehaviour during insertion of boolean values into an integer field: the explicit CAST operator converts boolean to integer, while the implicit cast doesn't.
-
Nikita Pettik authored
As the previous commit says, we've decided to allow all meaningful explicit casts. One of these is conversion from a string consisting of a quoted float literal to an integer. Before this patch, the mentioned operation was not allowed: SELECT CAST('1.123' AS INTEGER); --- - error: 'Type mismatch: can not convert 1.123 to integer' ... But it could anyway be done in two steps: SELECT CAST(CAST('1.123' AS REAL) AS INTEGER); So now this cast can be done in one CAST operation. Closes #4229
-
Nikita Pettik authored
It was decided that all explicit casts for which we can come up with meaningful semantics should work. If a user requests an explicit cast, he/she most probably knows what they are doing. CAST from REAL to BOOLEAN is disallowed by ANSI rules. However, we allow CAST from INT to BOOLEAN, which is also prohibited by ANSI. So, basically, it was already possible to convert REAL to BOOLEAN in two steps: SELECT CAST(CAST(1.123 AS INT) AS BOOLEAN); For the reason mentioned above, now we allow a straight CAST from REAL to BOOLEAN. Anything different from 0.0 is evaluated to TRUE. Part of #4229
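The resulting behaviour can be sketched via the SQL entry point (box.execute() in recent versions; older builds use box.sql.execute()):

```lua
-- Direct REAL-to-BOOLEAN cast, now allowed in one step:
box.execute([[SELECT CAST(0.0 AS BOOLEAN);]])   -- FALSE
box.execute([[SELECT CAST(1.123 AS BOOLEAN);]]) -- TRUE: anything != 0.0
```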
-
Nikita Pettik authored
OP_AddImm adds a constant defined by the P2 argument to memory cell P1. Before the addition, the content of the memory cell is converted to MEM_Int. However, according to the usages of this opcode in the source code, the memory cell always initially contains an integer value. Hence, the conversion to integer can be replaced with a simple assertion.
-
- May 27, 2019
-
-
Georgy Kirichenko authored
Encode all statements to be written out to the WAL onto the transaction memory region. This relaxes the relation between transaction and fiber state and is required for the autonomous transaction feature. Prerequisites: #1254
-
Konstantin Osipov authored
Issue pending, gh-4254.
-
Vladimir Davydov authored
Even if a statement isn't marked as VY_STMT_DEFERRED_DELETE, e.g. it's a REPLACE produced by an UPDATE request, it may overwrite a statement in the transaction write set that is marked so, for instance: s = box.schema.space.create('test', {engine = 'vinyl'}) pk = s:create_index('pk') sk = s:create_index('sk', {parts = {2, 'unsigned'}}) s:insert{1, 1} box.begin() s:replace{1, 2} s:update(1, {{'=', 2, 3}}) box.commit() If we don't mark REPLACE{3,1} produced by the update operation with the VY_STMT_DEFERRED_DELETE flag, we will never generate a DELETE statement for INSERT{1,1}. That is, we must inherit the flag from the overwritten statement when we insert a new one into a write set. Closes #4248
-
Vladimir Davydov authored
Consider the following example: s = box.schema.space.create('test', {engine = 'vinyl'}) s:create_index('primary') s:create_index('secondary', {parts = {2, 'unsigned'}}) s:insert{1, 1, 1} s:replace{1, 1, 2} When REPLACE{1,1} is committed to the secondary index, the overwritten tuple, i.e. INSERT{1,1}, is found in the primary index memory, and so deferred DELETE{1,1} is generated right away and committed along with REPLACE{1,1}. However, there's no need to commit anything to the secondary index in this case, because its key isn't updated. Apart from eating memory and loading disk, this also breaks index stats, as the vy_tx implementation doesn't expect two statements committed for the same key in a single transaction. Fix this by checking if there's a statement in the log for the deleted key and, if there is, skipping them both as we do in the regular case; see the comment in vy_tx_set. Closes #3693
-
Vladimir Davydov authored
If an UPDATE request doesn't touch key parts of a secondary index, we don't need to re-index it in the in-memory secondary index, as this would only increase IO load. Historically, we use column mask set by the UPDATE operation to skip secondary indexes that are not affected by the operation on commit. However, there's a problem here: the column mask isn't precise - it may have a bit set even if the corresponding column value isn't changed by the update operation, e.g. consider {'+', 2, 0}. Not taking this into account may result in appearance of phantom tuples on disk as the write iterator assumes that statements that have no effect aren't written to secondary indexes (this is needed to apply INSERT+DELETE "annihilation" optimization). We fixed that by clearing column mask bits in vy_tx_set in case we detect that the key isn't changed, for more details see #3607 and commit e72867cb ("vinyl: fix appearance of phantom tuple in secondary index after update"). It was rather an ugly hack, but it worked. However, it turned out that apart from looking hackish this code has a nasty bug that may lead to tuples missing from secondary indexes. Consider the following example: s = box.schema.space.create('test', {engine = 'vinyl'}) s:create_index('pk') s:create_index('sk', {parts = {2, 'unsigned'}}) s:insert{1, 1, 1} box.begin() s:update(1, {{'=', 2, 2}}) s:update(1, {{'=', 3, 2}}) box.commit() The first update operation writes DELETE{1,1} and REPLACE{2,1} to the secondary index write set. The second update replaces REPLACE{2,1} with DELETE{2,1} and then with REPLACE{2,1}. When replacing DELETE{2,1} with REPLACE{2,1} in the write set, we assume that the update doesn't modify secondary index key parts and clear the column mask so as not to commit a pointless request, see vy_tx_set. As a result, we skip the first update too and get key {2,1} missing from the secondary index. 
Actually, it was a dumb idea to use the column mask to skip statements in the first place, as there's a much easier way to filter out statements that have no effect on secondary indexes. The thing is, every DELETE statement inserted into a secondary index write set acts as a "single DELETE", i.e. there's exactly one older statement it is supposed to purge. This is because, in contrast to the primary index, we don't write DELETE statements blindly - we always look up the tuple overwritten in the primary index first. This means that REPLACE+DELETE for the same key is basically a no-op and can be safely skipped. Moreover, DELETE+REPLACE can be treated as a no-op too, because secondary indexes don't store full tuples, hence all REPLACE statements for the same key are equivalent. By marking both statements as no-ops in vy_tx_set, we guarantee that no-op statements don't make it to secondary index memory or disk levels. Closes #4242
-
- May 23, 2019
-
-
Cyrill Gorcunov authored
Backport of openresty/luajit2-test-suite commit ce2c916d5582914edeb9499f487d9fa812632c5c To test hash chain bug. Part-of #4171
-