- Jun 06, 2019
-
-
Roman Khabibov authored
Add "_vcollation" sysview to read it from net.box. This sysview is always readable, except when a user doesn't have "public" role. Needed for #3941 @TarantoolBot document Title: box.space._vcollation _vcollation is a system space that represents a virtual view. The structure of its tuples is identical to that of _collation. Tuples of this sysview is always readable, except when the user doesn't have "public" role.
-
Mergen Imeev authored
Follow-up #4196
-
Nikita Pettik authored
It is a legacy flag and isn't used anymore.
-
Mergen Imeev authored
Before this patch, the existence of the space was checked for CREATE TABLE and CREATE VIEW statements during parsing. If the space already existed, an error was set and cleanup was performed. But if the statement contained 'IF NOT EXISTS', the cleanup was performed while the error was not set. Meanwhile, create_foreign_key() assumes that if create_table_def->new_space is NULL, then we are dealing with an ALTER TABLE statement. This in turn is false, since ctd->new_space is also nullified when the table already exists. This caused an assertion or a segmentation fault when creating a foreign key during space creation. This patch moves the check to VDBE. Parsing now always proceeds to the end, as in the case when the space doesn't exist. Closes #4196
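A sketch of the kind of statement sequence that could hit the problem (the exact reproducer is not part of the commit message; table and column names are illustrative):
```lua
box.execute("CREATE TABLE t1 (id INT PRIMARY KEY);")
-- t1 already exists, so this statement should be a silent no-op.
-- Before the fix, the parser cleaned up new_space without setting an
-- error, so create_foreign_key() mistook this for ALTER TABLE and
-- could hit an assertion or crash while creating the foreign key.
box.execute("CREATE TABLE IF NOT EXISTS t1 (id INT PRIMARY KEY, a INT REFERENCES t1);")
```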
-
Vladimir Davydov authored
After compacting runs, we first mark them as dropped (VY_LOG_DROP_RUN), then try to delete their files unless they are needed for recovery from the checkpoint, and finally mark them as not needed in the vylog (VY_LOG_FORGET_RUN). There's a potential race sitting here: the garbage collector might kick in after files are dropped, but before they are marked as not needed. If this happens, there will be runs that have two VY_LOG_FORGET_RUN records, which will break recovery with an error like "Run XX is forgotten, but not registered". The following patches make the race more likely to happen, so let's eliminate it by making the garbage collector the only one who can mark runs as not needed (i.e. write the VY_LOG_FORGET_RUN record). There will be no warnings, because the garbage collector silently ignores ENOENT errors, see vy_gc(). Another good thing about this patch is that now we never yield inside a vylog transaction, which makes it easier to remove the vylog latch that blocks the implementation of transactional DDL.
-
Kirill Shcherbatov authored
Closes #3691

@TarantoolBot document
Title: check constraint for Lua space
A check constraint is a type of integrity constraint which specifies a requirement that must be met by a tuple before it is inserted into a space. The constraint result must be predictable. The expression in a check constraint must be a <boolean value expression>, i.e. return a boolean result. Currently it is possible to create check constraints only for an empty space that has a format. A constraint expression is a string that defines relations between top-level tuple fields. Take into account that all names are converted to uppercase before resolution (as SQL does); use the \" sign for names of fields that were not created with SQL.
The check constraints are fired on insertion into the Lua space together with Lua space triggers. The execution order of check constraints and space triggers follows their creation sequence.
Note: this patch changes the CK constraints execution order for SQL. Previously, CK constraint integrity checks were fired before the tuple was formed; now they are implemented as NoSQL before_replace triggers, which are fired right before tuple insertion. In turn, type casts are performed earlier than msgpack serialization. You should be careful with functions that use field types in your check constraints (like typeof()). Consider the following situation:
```
box.execute("CREATE TABLE t2(id INT primary key, x INTEGER CHECK (x > 1));")
box.execute("INSERT INTO t2 VALUES(3, 1.1)")
```
The last operation would fail because 1.1 is silently cast to the integer 1, which is not greater than 1.
To create a new CK constraint for a space, use the space:create_check_constraint() method. All space check constraints are shown in the space.ck_constraint table. To drop a check constraint, use its :drop() method. Example:
```
s1 = box.schema.create_space('test1')
pk = s1:create_index('pk')
ck = s1:create_check_constraint('physics', 'X < Y')
s1:insert({2, 1}) -- fail
ck:drop()
```
-
Kirill Shcherbatov authored
To perform check constraint checks before an insert or update space operation, we use a precompiled VDBE machine associated with each ck constraint, which is executed in an on_replace trigger. Each ck constraint's VDBE code consists of 1) prologue code that maps the new (or updated) tuple via binding, 2) check constraint code generated from the CK constraint AST. In case of a ck constraint violation, the tuple insert/replace operation is aborted and the ck constraint error is reported as a diag message. Needed for #3691
-
Kirill Shcherbatov authored
This patch introduces a new system space to persist check constraints. The format of the new system space _ck_constraint (space id = 364) is:
[<space id> UINT, <constraint name> STR, <is_deferred> BOOL, <language> STR, <code> STR]
A CK constraint is local to a space, so every pair <space id, CK name> is unique (it is also the PK in the _ck_constraint space). After insertion into this space, a new instance describing the check constraint is created. A check constraint holds an expression AST. While a space has check constraints, it cannot be dropped. The :drop() space method first deletes all check constraints and then removes the entry from _space.
Because space alter, index alter and space truncate operations cause the space recreation process, a new RebuildCkConstraints object is introduced. This alter object compiles a new ck constraint object and replaces and removes the existing instances atomically (if the assembly of some ck constraint object fails, nothing is changed). In fact, in the scope of this patch we don't really need to recreate a ck_constraint object in such situations (it is enough to patch the space_def pointer in the AST tree as we did before), but we are going to recompile the VDBE that represents a ck constraint in further patches, and that operation is not safe. The main motivation for these changes is the ability to support the ADD CHECK CONSTRAINT operation in the future. CK constraints are easier to manage as self-sustained objects: such a change is managed with an atomic insertion (unlike the current architecture).
Finally, the xfer optimization is now disabled if a space has ck constraints. In the following patches this xfer optimization becomes impossible, so there is no reason to rewrite this code now. Needed for #3691
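A sketch of what an entry of the new space might look like when read from Lua (the values are illustrative; only the field order follows the format above):
```lua
-- Format: [<space id>, <constraint name>, <is_deferred>, <language>, <code>]
box.space._ck_constraint:select()
-- ---
-- - [512, 'physics', false, 'SQL', 'X < Y']
-- ...
```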
-
Kirill Shcherbatov authored
Refactored the OP_Column instruction with a new vdbe_field_ref class. The vdbe_field_ref is a reusable object that speeds up field access for a given tuple or tuple data. Introduced the OP_Fetch opcode that uses the vdbe_field_ref given as its first argument. This opcode makes it possible to bind a new tuple to an existing VDBE without decoding its fields. Needed for #3691
-
Kirill Shcherbatov authored
This preparatory refactoring is necessary to simplify introducing the new OP_Fetch opcode in the next patch.
- got rid of the useless sMem local variable
- got rid of the useless payloadSize in the VdbeCursor structure
Needed for #3691
-
Kirill Shcherbatov authored
A new sql_bind_ptr routine allows binding a generic pointer to a VDBE variable. This change is required to pass a tuple_fetcher representing a new tuple to the check constraint VDBE. Needed for #3961
-
Kirill Shcherbatov authored
sql_flags is a parser parameter that describes how to parse an SQL request and determines general behaviour, e.g. whether foreign keys are handled as deferred or not. Currently this information is taken from the global user session object. When we need to run the parser with some other parameters, it is necessary to change the global session object, which may lead to unpredictable consequences in the general case. Introduced a new parser and VDBE field, sql_flags, which is responsible for the SQL parsing results. Needed for #3691
-
Kirill Shcherbatov authored
The SQL_NullCallback flag is never set now, so it is redundant. Let's get rid of it, because we are going to pass user_session->sql_flags to the parser and use it instead of the session instance. Needed for #3691
-
- Jun 04, 2019
-
-
Serge Petrenko authored
After making the memtx space format check non-blocking, move the appropriate vinyl test case to the engine suite. Introduce a new errinj, ERRINJ_CHECK_FORMAT_DELAY, to unify the test case for both engines. Follow-up #3976
-
Serge Petrenko authored
Just like index build, space format check in memtx stalls the event loop for the whole check time. Add occasional yields and an on_replace trigger, which checks the format of tuples inserted while the space format check is in progress. Follow-up #3976

@TarantoolBot document
Title: memtx now checks space format in background
There is no event loop stall when the memtx engine checks space format anymore. You may insert tuples into a space while its new format is being checked. If the tuples don't match the new format, the format change will be aborted.
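A minimal sketch of the documented behaviour (space and field names are illustrative):
```lua
s = box.schema.create_space('test')
s:create_index('pk')
s:insert{1, 100}
-- The format check below no longer blocks the event loop; tuples
-- inserted concurrently are validated by an on_replace trigger, and
-- a tuple that doesn't match the new format aborts the change.
s:format({{name = 'id', type = 'unsigned'}, {name = 'value', type = 'number'}})
```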
-
Ilya Konyukhov authored
Currently, to determine the current source path, a user has to get the current source info from the stack and parse it manually.
This commit adds a couple of little helpers for this task, called `debug.sourcefile()` and `debug.sourcedir()`, which return the path to the file being executed. The path is relative to the current working directory. If such a path cannot be determined, '.' is returned (i.e. interactive mode). There is an exception if you are running a script by an absolute path; in that case, the absolute path will be returned too.
```bash
tarantool /absolute/path/to/the/script.lua
```
There are also handy shortcuts for these functions: `debug.__file__` and `debug.__dir__`

@TarantoolBot document
Title: Document new debug helper methods
There are two more helpers in the debug module: `sourcefile()` and `sourcedir()`. These two helpers return relative paths to the source file and directory, respectively. There are also `debug.__file__` and `debug.__dir__`, which are just handy ways to call `sourcefile()` and `sourcedir()`.
It should be mentioned that it is only possible to determine the real path to the file if the function was defined in some Lua file. `loadstring` may be an exception here, since Lua will store the whole string in the debug info.
There is also a possible pitfall because of tail call optimization: if the function `fn` is defined inside `myapp/init.lua` like this:
```lua
function fn()
    return debug.sourcefile()
end
```
it would not be possible to determine the file path correctly, because that function would not appear in the stack trace. Even worse, it may return a wrong result, i.e. the actual path where the "parent" function was defined. To force the interpreter to avoid this kind of optimization, you may use parentheses like this:
```lua
function fn()
    return (debug.sourcefile())
end
```
Also, both the `sourcefile` and `sourcedir` functions have an optional level argument, which sets the level of the call stack the function should examine for the source path. By default, 2 is used.
-
Serge Petrenko authored
Example:
```
[001] box/on_replace.test.lua [ pass ]
[001] box/on_replace.test.lua [ fail ]
[001]
[001] Test failed! Result content mismatch:
[001] --- box/on_replace.result Tue Jun 4 10:44:39 2019
[001] +++ box/on_replace.reject Tue Jun 4 10:44:50 2019
[001] @@ -4,7 +4,7 @@
[001] -- test c and lua triggers: must return only lua triggers
[001] #box.space._space:on_replace()
[001] ---
[001] -- 0
[001] +- 2
[001] ...
```
This happened because we forgot to remove triggers set on _space. Clear them below the test case.
-
- Jun 03, 2019
-
-
Serge Petrenko authored
This commit fixes a regression introduced by commit 5ab0763b (pass operation type to lua triggers). When passing the operation type to a trigger, we relied on a corresponding xrow_header, which would later be written to the log. When before_replace triggers are fired, there is no row connected to the txn statement yet, so we faked one to pass the operation type. We, however, didn't account for temporary spaces, which never have a row associated with the corresponding txn stmt. This led to a segfault on an attempt to use on_replace triggers with temporary spaces. Add a fake row for temporary spaces to pass the operation type in on_replace triggers, and add some tests. Closes #4266
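A minimal sketch of the scenario that used to crash, assuming the trigger receives the operation type as its fourth argument (as introduced by the referenced commit); names are illustrative:
```lua
s = box.schema.create_space('tmp', {temporary = true})
s:create_index('pk')
-- The operation type is taken from the statement's row; temporary
-- spaces had no such row, which used to cause a segfault here.
s:on_replace(function(old, new, space_name, op) assert(op == 'REPLACE') end)
s:replace{1}
```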
-
- May 31, 2019
-
-
Alexander V. Tikhonov authored
This reverts commit 6600dc1b. It was pushed by mistake before all the changes were ready.
-
Vladimir Davydov authored
Since DDL is triggered by the admin, it can be deliberately initiated when the workload is known to be low. Throttling it along with DML requests would only cause exasperation in this case. So we don't apply the disk-based rate limit to DDL. This should be fine, because the disk-based limit is set rather strictly to leave the workload some room to grow, see vy_regulator_update_rate_limit(), and in contrast to the memory-based limit, exceeding the disk-based limit doesn't result in abrupt stalls - it may only lead to a gradual accumulation of disk space usage and read latency. Closes #4238
-
Alexander V. Tikhonov authored
Fixed swim headers in addition to commit Closes #4050
-
- May 30, 2019
-
-
Serge Petrenko authored
Since we have implemented memtx background index build, the corresponding vinyl test cases are now also suitable for memtx, so move them to the engine suite so that both engines are tested. Also add some tests to check that an ongoing index build is aborted in case a tuple violating the unique constraint or the format of the new index is inserted. Add some error injections to unify the appropriate memtx/vinyl tests. Closes #3976
-
Serge Petrenko authored
Memtx index build used to stall the event loop for the whole build period. Add occasional yields so that the loop is not blocked for too long. Also make the index build set on_replace triggers so that concurrent replaces are also correctly handled during the build. Part of #3976

@TarantoolBot document
Title: memtx indices are now built in background
The memtx engine no longer stalls the event loop during an index build. You may insert tuples into a space while an index build is in progress: the tuples will be correctly added to the new index. If such a tuple violates the new index's unique constraint or doesn't match the new index format, the index build will be aborted.
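A minimal sketch of the documented behaviour (space, index and field names are illustrative):
```lua
s = box.schema.create_space('test')
s:create_index('pk')
for i = 1, 100000 do s:insert{i, i} end
-- The secondary index below is built in the background: the event
-- loop keeps running, and tuples inserted meanwhile are picked up
-- via on_replace triggers. A concurrent insert that violates the new
-- index's unique constraint aborts the build.
s:create_index('sk', {unique = true, parts = {{2, 'unsigned'}}})
```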
-
Vladimir Davydov authored
If a key isn't found in the tuple cache, we fetch it from a run file. In this case disk read and page decompression are done by a reader thread; however, key lookup in the fetched page is still performed by the tx thread. Since pages are immutable, this could as well be done by the reader thread, which would allow us to save some precious CPU cycles for tx. Close #4257
-
Vladimir Davydov authored
To handle fiber cancellation during page read we need to pin all objects referenced by vy_page_read_task. Currently, there is only one such object, vy_run. It has reference counting, so pinning it is trivial. However, to move page lookup to a reader thread, we need to also reference the key def, tuple format, and key. Format and key have reference counting, but key def doesn't - we typically copy it. Copying it in this case is too heavy. Actually, cancelling a fiber manually or on timeout while it's reading disk doesn't make much sense with PCIe-attached flash drives. It used to be reasonable with rotating disks, since a rotating disk controller could retry reading a block indefinitely on read failure. It is still relevant to Network Attached Storage. On the other hand, NAS has never been tested, and what isn't tested can and should be removed. For complex SQL queries we'll be forced to rethink timeout handling anyway. That being said, let's simply drop this functionality.
-
Vladimir Davydov authored
Page reading code is intermixed with the reader thread selection in the same function, which makes it difficult to extend the former. So let's introduce a helper function encapsulating a call on behalf of a reader thread.
-
Vladimir Davydov authored
Since a page read task references the source run file, we don't need to pass page info by value.
-
Vladimir Davydov authored
This function is a part of the run iterator API so we can't use it in a reader thread. Let's make it an independent helper. As a good side effect, we can now reuse it in the slice stream implementation.
-
- May 29, 2019
-
-
Cyrill Gorcunov authored
Since commit f42596a2 we've started to test if a fiber is cancelled in coio_wait. This may hang the interactive console with a simple fiber.kill(fiber.self()) call. Sane users don't do such tricks, and this affects interactive console mode only, but still, to be on the safe side, let's exit if someone accidentally kills the console fiber.
Notes:
- such an exit may ruin terminal settings and one needs to reset them afterwards;
- the issue happens in the interactive console only, so I didn't find a way to test it automatically and tested it manually.
-
Alexander Turenko authored
This warning breaks -Werror -O2 build on GCC 9.1.
-
Kirill Yukhin authored
-
Alexander Turenko authored
We run testing in Travis-CI using docker, which by default enters a shell session as root. This is a follow-up to e9c96a4c ('fio: fix mktree error reporting').
-
Vladimir Davydov authored
Tune the secondary index content to make sure the minor compaction expected by the test does occur. Fixes commit e2f5e1bc ("vinyl: don't produce deferred DELETE on commit if key isn't updated"). Closes #4255
-
Vladimir Davydov authored
Rather than passing 'sequence_part' along with 'sequence' on index create/alter, pass a table with the following fields (see the sketch below):
- id: sequence id or name
- field: auto increment field id or name, or path in case of a JSON index
If id is omitted, the sequence will be auto-generated (equivalent to 'sequence = true'). If field is omitted, the first indexed field is used. The old format, i.e. passing false/true or a sequence name/id instead of a table, is still supported. Follow-up #4009
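A minimal sketch of the new option format (space, index and field names are illustrative):
```lua
s = box.schema.create_space('test', {
    format = {{name = 'id', type = 'unsigned'}, {name = 'data', type = 'string'}},
})
-- New form: a table with 'id' and/or 'field'; omitting 'id' makes the
-- sequence auto-generated, 'field' picks the auto-increment field.
s:create_index('pk', {parts = {'id'}, sequence = {field = 'id'}})
-- The old forms are still supported, e.g.:
-- s:create_index('pk', {parts = {'id'}, sequence = true})
```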
-
Mike Siomkin authored
In case of, say, a permission denied error, the function tried to concatenate a string with a cdata<struct error> and failed. Also unified the error messages for the single-part and multi-part path cases.
-
- May 28, 2019
-
-
Cyrill Gorcunov authored
Backport of the openresty/luajit2-test-suite commit 907c536c210ebe6a147861bb4433d28c0ebfc8cd to test unsinking of 64-bit pointers. Part of #4171
-
Kirill Yukhin authored
-
Nikita Pettik authored
It also fixes a misbehaviour during insertion of boolean values into an integer field: the explicit CAST operator converts a boolean to an integer, while an implicit cast doesn't.
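A sketch of the difference described above, assuming an integer column (the statements are illustrative):
```lua
box.execute("CREATE TABLE t (i INT PRIMARY KEY);")
-- Explicit cast: the boolean is converted to an integer.
box.execute("INSERT INTO t VALUES (CAST(true AS INTEGER));")
-- Implicit cast: rejected with a type mismatch error.
box.execute("INSERT INTO t VALUES (true);")
```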
-
Nikita Pettik authored
As the previous commit says, we've decided to allow all meaningful explicit casts. One of these is the conversion from a string consisting of a quoted float literal to an integer. Before this patch, the mentioned operation was not allowed:
SELECT CAST('1.123' AS INTEGER);
---
- error: 'Type mismatch: can not convert 1.123 to integer'
...
But anyway it could be done in two steps:
SELECT CAST(CAST('1.123' AS REAL) AS INTEGER);
So now this cast can be done in one CAST operation. Closes #4229
-
Nikita Pettik authored
It was decided that all explicit casts for which we can come up with meaningful semantics should work. If a user requests an explicit cast, he/she most probably knows what they are doing. CAST from REAL to BOOLEAN is disallowed by ANSI rules. However, we allow CAST from INT to BOOLEAN, which is also prohibited by ANSI. So, basically, it is possible to convert REAL to BOOLEAN in two steps:
SELECT CAST(CAST(1.123 AS INT) AS BOOLEAN);
For the reason mentioned above, we now allow a straight CAST from REAL to BOOLEAN. Anything different from 0.0 evaluates to TRUE. Part of #4229
-