- Jul 18, 2019
-
-
Mergen Imeev authored
This patch makes VDBE perform a clean-up if the creation of a constraint fails because two or more constraints of the same type with the same name are created in the same CREATE TABLE statement. For example:

CREATE TABLE t1(
    id INT PRIMARY KEY,
    CONSTRAINT ck1 CHECK(id > 1),
    CONSTRAINT ck1 CHECK(id < 10)
);

Part of #4183
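A sketch of the behavior after the fix (assuming `box.execute()` as the SQL entry point; the upper-cased space name follows SQL's default identifier casing and is an assumption):

```lua
-- Duplicate constraint names in one CREATE TABLE now fail cleanly:
local ok, err = box.execute([[CREATE TABLE t1 (
    id INT PRIMARY KEY,
    CONSTRAINT ck1 CHECK (id > 1),
    CONSTRAINT ck1 CHECK (id < 10)
);]])
-- The statement raises an error, and thanks to the VDBE clean-up
-- no partially created table is left behind in the schema:
-- box.space.T1 is nil after the failed statement.
```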
-
Mergen Imeev authored
To separate error setting from execution halting, a new opcode OP_SetDiag was created. The only functionality of the opcode is the execution of diag_set(). It is important to note that OP_SetDiag does not set is_aborted to true, so we can continue working with other opcodes, if necessary. This opcode allows us to perform cleanup in some special cases, for example, when the creation of a constraint failed because two or more constraints with the same name were created in the same CREATE TABLE statement. Since diag_set() is now executed in OP_SetDiag, this functionality has been removed from OP_Halt. Needed for #4183
-
- Jul 17, 2019
-
-
Vladimir Davydov authored
We are supposed to authenticate the guest user without a password. This used to work before commit 076a8420 ("Permit empty passwords in net.box"), when guest didn't have any password. Now it has an empty password, and the check in authenticate turns out to be broken, which breaks assumptions made by certain connectors. This patch fixes the check. Closes #4327
-
- Jul 16, 2019
-
-
Roman Khabibov authored
Before this patch, a user could use COLLATE with non-string-like literals, columns, or subquery results. Disallow such usage. Closes #3804
-
- Jul 15, 2019
-
-
Alexander V. Tikhonov authored
On a highly loaded host the vinyl/recover test failed in the loop waiting for the dump counter check. It expected the value to be exactly "2", while on a highly loaded host the dump could already have run more times, so the counter could be "3" or greater. To fix it, the counter check was changed from an exact match to "greater than or equal to" the expected value. The error message before the fix was:

[003] --- vinyl/recover.result	Mon Jul 15 10:46:00 2019
[003] +++ vinyl/recover.reject	Mon Jul 15 10:58:10 2019
[003] @@ -517,7 +517,7 @@
[003]  ...
[003]  test_run:wait_cond(function() return pk:stat().disk.dump.count == 2 end)
[003]  ---
[003] -- true
[003] +- false
[003]  ...
[003]  sk:stat().disk.dump.count -- 1
[003]  ---

Closes #4345
-
Vladimir Davydov authored
The patch is pretty straightforward - all it does is move checks for single statement transactions from alter.cc to txn_enable_yield_for_ddl, so that now any DDL request may be executed in a transaction unless it builds an index or checks the format of a non-empty space (those are the only two operations that may yield). There are two things that must be noted explicitly. The first is the removal of an assertion from priv_grant. The assertion ensured that a revoked privilege was in the cache. The problem is that the cache is built from the contents of the space, see user_reload_privs. On rollback, we first revert the content of the space to the original state, and only then start invoking rollback triggers, which call priv_grant. As a result, we will revert the cache to the original state right after the first trigger is invoked, and the following triggers will have no effect on it. Thus we have to remove this assertion. The second subtlety lies in vinyl_index_commit_modify. Before this commit we assumed that if a statement's lsn is <= vy_lsm::commit_lsn, then it must be local recovery from WAL. Now that's not true, because there may be several operations for the same index in a transaction, and they will all receive the same signature in the on_commit trigger. We could, of course, try to assign different signatures to them, but that would look cumbersome - better to simply allow lsn <= vy_lsm::commit_lsn after local recovery; there's actually nothing wrong with that. Closes #4083

@TarantoolBot document
Title: Transactional DDL

Now it's possible to group non-yielding DDL statements into transactions, e.g.

```Lua
box.begin()
box.schema.space.create('my_space')
box.space.my_space:create_index('primary')
box.commit() -- or box.rollback()
```

Most DDL statements don't yield and hence can be run from transactions. There are just two exceptions: creation of a new index and changing the format of a non-empty space. Those are long operations that may yield so as not to block the event loop for too long. Those statements can't be executed from transactions (to be more exact, such a statement must go first in any transaction). Also, just like in the case of DML transactions in memtx, it's forbidden to explicitly yield in a DDL transaction by calling fiber.sleep or any other yielding function. If this happens, the transaction will be aborted and an attempt to commit it will fail.
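The yield restriction described above can be sketched as follows (assuming a running Tarantool instance; the space name is illustrative):

```lua
local fiber = require('fiber')

box.begin()
box.schema.space.create('my_space')  -- non-yielding DDL: allowed in a txn
fiber.sleep(0)                       -- explicit yield: aborts the transaction
local ok = pcall(box.commit)         -- ok == false: committing an aborted txn fails
box.rollback()
```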
-
Vladimir Davydov authored
If there are multiple DDL operations in the same transaction, which is impossible now but will be implemented soon, the AlterSpaceOp::commit and rollback methods must not access the space index map. To understand why, consider the following example:

- on_replace: AlterSpaceOp1 creates index I1 for space S1
- on_replace: AlterSpaceOp2 moves index I1 from space S1 to space S2
- on_commit: AlterSpaceOp1 commits creation of index I1

AlterSpaceOp1 can't look up I1 in S1 by id, because the index was moved from S1 to S2 by AlterSpaceOp2. If AlterSpaceOp1 attempts to look it up, it will access a wrong index. Fix that by caching pointers to old and new indexes in AlterSpaceOp on construct/prepare instead of using space_index() on commit/rollback to access them.
-
Vladimir Davydov authored
Memtx engine doesn't allow yielding inside a transaction. To achieve that, it installs a fiber->on_yield trigger that aborts the current transaction (rolls it back, but leaves it be so that commit fails). There's an exception though - DDL statements are allowed to yield. This is required so as not to block the event loop while a new index is built or a space format is checked. Currently, we handle this exception by checking space id and omitting installation of the trigger for system spaces. This isn't entirely correct, because we may yield after a DDL statement is complete, in which case the transaction won't be aborted though it should:

box.begin()
box.space.my_space:create_index('my_index')
fiber.sleep(0) -- doesn't abort the transaction!

This patch fixes the problem by making the memtx engine install the on_yield trigger unconditionally, for all kinds of transactions, and instead explicitly disabling the trigger for yielding DDL operations. In order not to spread the yield-in-transaction logic between memtx and txn code, let's move all fiber_on_yield related stuff to txn, export a method to disable yields, and use the method in memtx.
-
- Jul 13, 2019
-
-
Nikita Pettik authored
This patch extends the parser's grammar to allow creating CHECK constraints on already existing tables via SQL facilities. Closes #3097

@TarantoolBot document
Title: Document ADD CONSTRAINT CHECK statement

Now it is possible to add CHECK constraints to an already existing table via SQL means. To achieve this, use the following syntax:

ALTER TABLE <table> ADD CONSTRAINT <name> CHECK (<expr>);
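A minimal usage sketch of the new statement (assuming `box.execute()` as the SQL entry point; the table and constraint names are illustrative):

```lua
box.execute([[CREATE TABLE t1 (id INT PRIMARY KEY, a INT);]])
-- Add a CHECK constraint to the already existing table:
box.execute([[ALTER TABLE t1 ADD CONSTRAINT ck1 CHECK (a > 0);]])
-- Inserts violating the constraint now fail with a CHECK error:
local ok, err = box.execute([[INSERT INTO t1 VALUES (1, -1);]])
```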
-
Kirill Yukhin authored
The argument in func_c_new() is used in Debug mode only. Mark it w/ MAYBE_UNUSED.
-
- Jul 12, 2019
-
-
Konstantin Osipov authored
A follow-up on #4182
-
Kirill Shcherbatov authored
Closes #4182 Closes #4219 Needed for #1260

@TarantoolBot document
Title: Persistent Lua functions

Now Tarantool supports 'persistent' Lua functions. Such functions are stored in the snapshot and are available after restart. To create a persistent Lua function, specify a function body in the box.schema.func.create call, e.g. body = "function(a, b) return a + b end".

A persistent Lua function may be 'sandboxed'. A 'sandboxed' function is executed in an isolated environment:
a. only a limited set of Lua functions and modules is available: assert, error, pairs, ipairs, next, pcall, xpcall, type, print, select, string, tonumber, tostring, unpack, math, utf8;
b. global variables are forbidden.

Finally, the new 'is_deterministic' flag allows marking a registered function as deterministic, i.e. a function that can produce only one result for a given list of parameters.

The new box.schema.func.create interface is:

box.schema.func.create('funcname', {
    setuid = true|FALSE,
    if_not_exists = true|FALSE,
    language = LUA|c,
    body = string (''),
    is_deterministic = true|FALSE,
    is_sandboxed = true|FALSE,
    comment = string ('')
})

This schema change also reserves names for SQL built-in functions: TRIM, TYPEOF, PRINTF, UNICODE, CHAR, HEX, VERSION, QUOTE, REPLACE, SUBSTR, GROUP_CONCAT, JULIANDAY, DATE, TIME, DATETIME, STRFTIME, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_DATE, LENGTH, POSITION, ROUND, UPPER, LOWER, IFNULL, RANDOM, CEIL, CEILING, CHARACTER_LENGTH, CHAR_LENGTH, FLOOR, MOD, OCTET_LENGTH, ROW_COUNT, COUNT, LIKE, ABS, EXP, LN, POWER, SQRT, SUM, TOTAL, AVG, RANDOMBLOB, NULLIF, ZEROBLOB, MIN, MAX, COALESCE, EVERY, EXISTS, EXTRACT, SOME, GREATER, LESSER, SOUNDEX, LIKELIHOOD, LIKELY, UNLIKELY, _sql_stat_get, _sql_stat_push, _sql_stat_init, LUA. A new persistent Lua function LUA is introduced to evaluate Lua strings from SQL in the future. These names cannot be used for user-defined functions.

Example:

lua_code = [[function(a, b) return a + b end]]
box.schema.func.create('summarize', {body = lua_code, is_deterministic = true, is_sandboxed = true})

box.func.summarize
---
- aggregate: none
  returns: any
  exports:
    lua: true
    sql: false
  id: 60
  is_sandboxed: true
  setuid: false
  is_deterministic: true
  body: function(a, b) return a + b end
  name: summarize
  language: LUA
...

box.func.summarize:call({1, 3})
---
- 4
...

@kostja: fix style, remove unnecessary module dependencies, add comments
-
Mergen Imeev authored
The box/net.box.test.lua test checks the state of the connection in case of an error. It should be 'error_reconnect'. However, when the test ran on a slow computer or under a very heavy load, the connection could move from the 'error_reconnect' state to another state, which made the test fail. Since this check is not the main purpose of the test, it is better to simply delete it. Closes #4335
-
Kirill Shcherbatov authored
Moved the UConverter object to the collation library. This is required to get rid of the sqlRegisterBuiltinFunctions function in further patches. Needed for #4113, #2200, #2233
-
Kirill Shcherbatov authored
Introduce a new flag SQL_FUNC_DERIVEDCOLL for functions that may require a collation to be applied to their result, instead of a separate boolean variable. This is required to get rid of FuncDef in further patches. Needed for #4113, #2200, #2233
-
Kirill Shcherbatov authored
Previously, the analyze functions referred to a statically defined service FuncDef context. We need to change this approach because we are going to rework the built-in functions machinery in the following patches. Needed for #4113, #2200, #2233
-
- Jul 11, 2019
-
-
Kirill Shcherbatov authored
Needed for #4182 @TarantoolBot document Title: Introduce SOUNDEX sql function The SOUNDEX function returns a 4-character code that represents the sound of the words in the argument. The result can be compared to the results of the SOUNDEX function of other strings. The current SOUNDEX function supports only Latin strings. @kostja: fix test tap count; remove optional invocation; remove trailing spaces; fix alignment.
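A usage sketch of the new function (assuming `box.execute()` as the SQL entry point and that the implementation follows the classic Soundex algorithm, under which 'Smith' and 'Smyth' map to the same code):

```lua
-- Phonetically similar words yield equal Soundex codes, so they
-- can be compared for "sounds-like" matching:
box.execute([[SELECT SOUNDEX('Smith'), SOUNDEX('Smyth');]])
-- under the classic algorithm, both are 'S530'
```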
-
Cyrill Gorcunov authored
Quoting the feature request:

| Tarantool is Database and Application Server in one box.
|
| Appserver development process contains a lot of
| lua/luajit-ffi/lua-c-extension code.
|
| Coredump is very useful in case when some part of appserver crashed.
| If the reason is input - data from database is not necessary. If the reason
| is output - data from database is already in snap/xlog files.
|
| Therefore consider core dumps without data enabled by default.

For info: the strip_core feature has been introduced in 549140b3. Closes #4337

@TarantoolBot document
Title: Document box.cfg.strip_core

When Tarantool runs under a heavy load, the memory allocated for tuples may be huge. To keep this memory out of the `coredump` file, set the `box.cfg.strip_core` parameter to `true`. The default value is `true`.
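A minimal configuration sketch of the option (shown explicitly even though it is the default):

```lua
-- Keep tuple memory out of core dumps; true is the default,
-- set it to false to include tuple data in the coredump.
box.cfg{ strip_core = true }
```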
-
Kirill Shcherbatov authored
In relation to the FuncDef cache rework, we need to clean up the builtins list. The MATCH function is a stub that raises an error, so it can be dropped. Needed for #4182 @kostja: make the patch actually pass the tests, remove the tap count change in e_expr.test.lua, since it's disabled and was not run
-
Michael Filonenko authored
-
avtikhon authored
The box/net.box test failed flakily when grepping the log file for the 'ER_NO_SUCH_PROC' pattern on highly loaded hosts. The issue can be resolved by changing grep_log to the wait_log function, which waits for the needed message for some time.

[008] Test failed! Result content mismatch:
[008] --- box/net.box.result	Tue Jul 9 17:00:24 2019
[008] +++ box/net.box.reject	Tue Jul 9 17:03:34 2019
[008] @@ -1376,7 +1376,7 @@
[008]  ...
[008]  test_run:grep_log("default", "ER_NO_SUCH_PROC")
[008]  ---
[008] -- ER_NO_SUCH_PROC
[008] +- null
[008]  ...
[008]  box.schema.user.revoke('guest', 'execute', 'universe')
[008]  ---

Closes #4329
-
Denis Ignatenko authored
Add python-dev and pip to the container to install the test-run dependencies. Install the test-run dependencies from requirements.txt of the test-run subrepo.
-
Denis Ignatenko authored
There is a compile-time option PACKAGE in cmake that defines the current build distribution info. For the community edition it is Tarantool by default; for enterprise it is Tarantool Enterprise. There was no way to check the distribution name at runtime. This change adds box.info.package output for CE and TE.
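A quick runtime check of the new field (the exact string depends on the build; "Tarantool" is the community-edition value stated in the commit message):

```lua
-- Inspect the build distribution name at runtime:
print(box.info.package)  -- e.g. "Tarantool" on a community build
```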
-
avtikhon authored
The travis-ci APT repository update failed on Debian 10 (Buster) with the command 'apt-get update', like:

Get:1 http://deb.debian.org/debian buster InRelease [118 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [39.1 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [46.8 kB]
Reading package lists... Done
N: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Version' value from '' to '10'
E: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
N: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '' to '10.0'
E: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
E: Repository 'http://deb.debian.org/debian buster-updates InRelease' changed its 'Suite' value from 'testing-updates' to 'stable-updates'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

The cause of the issue: according to the Debian Project News published on the 1st of July, Debian 10 "Buster" was scheduled to transition from testing to stable on 2019-07-06. It looks like the transition was in fact performed as scheduled, so the testing distribution should now be catching up with unstable, to eventually become Debian 11 "Bullseye". You might be experiencing some temporary side effects because of this transition of the Debian mirrors. If you want to stay with Debian 10 "Buster", now would be a good time to switch your /etc/apt/sources.list to use the release name buster instead of testing. Otherwise, you'll soon be getting the raw bleeding-edge stuff from unstable, and you might accidentally get a partial upgrade to proto-"Bullseye". Also, this is a reminder for anyone using the word stable in their /etc/apt/sources.list to consider whether to change it to stretch and stay with the old version, or read the Release Notes and perform the recommended upgrade steps.

To fix the issue, the changes in the repositories had to be accepted. NOTE: apt, unlike apt-get, can accept the changes interactively:

apt update -y

Alternatively, accept only the needed changes for 'version' and 'suite':

apt-get update --allow-releaseinfo-change-version --allow-releaseinfo-change-suite

It seems that accepting only 'version' and 'suite' is better than blindly accepting all changes. Closes #4331
-
Yaroslav Dynnikov authored
The notify socket used to be initialized during `box.cfg()`. There is no apparent reason for that, because we can write Tarantool apps that don't use the box API at all but still leverage the event loop and async operations. This patch makes the initialization of the notify socket independent: an instance can notify about entering the event loop even if box.cfg() wasn't called. Closes #4305
-
Kirill Shcherbatov authored
Part of #4182
-
Nikita Pettik authored
Before this patch it was impossible to compare an indexed field of integer type with a floating point value. For instance:

CREATE TABLE t1(id INT PRIMARY KEY, a INT UNIQUE);
INSERT INTO t1 VALUES (1, 1);
SELECT * FROM t1 WHERE a = 1.5;
---
- error: 'Failed to execute SQL statement: Supplied key type of part 0 does not match index part type: expected integer'
...

That happened because the type casting mechanism (OP_ApplyType) doesn't affect an FP value when it is converted to integer. Hence, the FP value was passed to the iterator over the integer field, which resulted in an error. Meanwhile, comparison of integer and FP values is legal in SQL. To cope with this problem, for each equality comparison involving an integer field we emit OP_MustBeInt, which checks whether the value to be compared is an integer. If not, we assume that the result of the comparison is always false and continue processing the query. For inequality constraints we pass an auxiliary flag to the OP_Seek** opcodes to notify them that one of the key fields must be truncated to an integer (in case of an FP value) alongside changing the iterator's type: a > 1.5 -> a >= 2. Closes #4187
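A sketch of the behavior after the fix (assuming `box.execute()` as the SQL entry point; result shapes are illustrative):

```lua
box.execute([[CREATE TABLE t1 (id INT PRIMARY KEY, a INT UNIQUE);]])
box.execute([[INSERT INTO t1 VALUES (1, 1), (2, 2);]])

-- Equality with a fractional value is now simply false, not an error:
box.execute([[SELECT * FROM t1 WHERE a = 1.5;]])  -- empty result set

-- An inequality is rewritten: a > 1.5 behaves like a >= 2,
-- so only the row (2, 2) matches:
box.execute([[SELECT * FROM t1 WHERE a > 1.5;]])
```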
-
Nikita Pettik authored
Before a value to be scanned in an index search is passed to the iterator, it is subjected to implicit type casting (implemented by OP_ApplyType). If the value can't be converted to the required type, a user-friendly message is raised. Without this cast, the type of the iterator may not match the type of the key, which in turn results in an unexpected error. However, the array of types used to provide type conversions differs from the types of the indexed fields: it is modified depending on the types of the comparison's operands. For instance, when a boolean field is compared with a blob value, the resulting type is assumed to be scalar. In turn, conversion to scalar is a no-op. As a result, a value in MP_BIN format is passed to the iterator over a boolean field. To fix that, let's remove this transformation of types. Moreover, it seems to be redundant. Part of #4187
-
Nikita Pettik authored
In SQL we can execute queries only against spaces that have a format; otherwise, an error is raised at the very beginning of query compilation. So any later checks for format existence are redundant.
-
Nikita Pettik authored
There are a few situations when booleans can be compared with values of other types. To process them, we assume that booleans are always less than numbers, which in turn are less than strings. On the other hand, the function that implements internal comparison of values, sqlMemCompare(), always returns a 'less' result if one of the values is boolean and the other one is not, ignoring the order of the values. For instance:

max(false, 'abc') -> 'abc'
max('abc', false) -> false

This patch fixes this misbehaviour by making boolean values always less than values of other types.
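A sketch of the fixed ordering (assuming `box.execute()` and the SQLite-style two-argument scalar MAX(); after the fix both argument orders agree):

```lua
-- Booleans now sort below all other types, so MAX() is symmetric
-- regardless of argument order:
box.execute([[SELECT MAX(false, 'abc'), MAX('abc', false);]])
-- both columns should be 'abc'
```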
-
Nikita Pettik authored
When there were no booleans in SQL, the numeric values 0 and 1 were used to represent them. However, booleans have recently been introduced in SQL, so values in a result set can be of boolean type. Hence, it makes no sense to keep converting booleans to numeric values; we can use booleans directly.
-
Mergen Imeev authored
This patch creates the aliases CHARACTER_LENGTH() and CHAR_LENGTH() for LENGTH(). These functions are added because they are described in ANSI. Closes #3929

@TarantoolBot document
Title: SQL functions CHAR_LENGTH() and CHARACTER_LENGTH()

The SQL functions CHAR_LENGTH() and CHARACTER_LENGTH() work the same as the LENGTH() function. They take exactly one argument. If the argument is of type TEXT or can be cast to a TEXT value using internal casting rules, these functions return the length of the TEXT value that represents the argument. They throw an error if the argument cannot be cast to a TEXT value.
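A usage sketch of the new aliases (assuming `box.execute()` as the SQL entry point):

```lua
-- CHAR_LENGTH() and CHARACTER_LENGTH() are ANSI aliases of LENGTH():
box.execute([[SELECT CHAR_LENGTH('hello'), CHARACTER_LENGTH('hello');]])
-- both columns: 5
```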
-
- Jul 09, 2019
-
-
Vladislav Shpilevoy authored
Before the patch it was split into two parts by a 1.5KB packet, and in the constructor it was nullifying the whole volume. Obviously, these were mistakes. The first problem breaks cache locality; the second one flushes the cache.
-
Vladislav Shpilevoy authored
Before the patch each SWIM member had two preallocated task objects, 3KB in total. It was a waste of memory, because network load per member in SWIM is ~2 messages per round step regardless of cluster size. This patch moves the tasks to a pool, where they can be reused. Even by different SWIM instances running on the same node.
-
Serge Petrenko authored
The test regarding logging corrupted rows failed occasionally with

```
[016] test_run:grep_log('default', 'Got a corrupted row.*')
[016] ---
[016] -- 'Got a corrupted row:'
[016] +- null
[016] ...
```

The logs then had

```
[010] 2019-07-06 19:36:16.857 [13046] iproto sio.c:261 !> SystemError writev(1), called on fd 23, aka unix/:(socket), peer of unix/:(socket): Broken pipe
```

instead of the expected message. This happened because we closed the socket before Tarantool could write a greeting to the client; the connection was then closed, and execution never got to processing the malformed request and thus to printing the desired message to the log. To fix this, actually read the greeting prior to writing new data and closing the socket. Follow-up #4273
-
Oleg Babin authored
Closes #4323

@TarantoolBot document
Title: fio.utime

fio.utime(filepath [, atime [, mtime]])

Set the access and modification times of a file. The first argument is the filename, the second argument (atime) is the access time, and the third argument (mtime) is the modification time. Both times are provided in seconds since the epoch. If the modification time is omitted, the access time provided is used; if both times are omitted, the current time is used.
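A usage sketch of the new function (the path is made up for the example):

```lua
local fio = require('fio')

-- Set both the access and modification time to one hour ago:
local hour_ago = os.time() - 3600
fio.utime('/tmp/example.txt', hour_ago, hour_ago)

-- With no time arguments, the current time is used for both:
fio.utime('/tmp/example.txt')
```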
-
Vladimir Davydov authored
When a memtx transaction is aborted on yield, it isn't enough to roll back individual statements - we must also run the on_rollback triggers, otherwise changes done to the schema by an aborted DDL transaction will be visible to other fibers until an attempt to commit it is made.
-
Alexander V. Tikhonov authored
The test case has two problems that appear from time to time and lead to flaky fails. Those fails look as shown below in a test-run output.

| Test failed! Result content mismatch:
| --- box/net.box.result	Mon Jun 24 17:23:49 2019
| +++ box/net.box.reject	Mon Jun 24 17:51:52 2019
| @@ -1404,7 +1404,7 @@
|  ...
|  test_run:grep_log('default', 'ER_INVALID_MSGPACK.*')
|  ---
| -- 'ER_INVALID_MSGPACK: Invalid MsgPack - packet body'
| +- 'ER_INVALID_MSGPACK: Invalid MsgPack - packet length'
|  ...
|  -- gh-983 selecting a lot of data crashes the server or hangs the
|  -- connection

The 'ER_INVALID_MSGPACK.*' regexp should match the 'ER_INVALID_MSGPACK: Invalid MsgPack - packet body' log message, but if it is not in the log file at the time of the grep_log() call (just not flushed to the file yet), a message produced by another test case can be matched ('ER_INVALID_MSGPACK: Invalid MsgPack - packet length'). The fix here is to match the entire message and check for the message periodically during several seconds (use wait_log() instead of grep_log()). Another problem is a race between writing a response to an iproto socket on the server side and closing the socket on the client end. If Tarantool is unable to write a response, it does not produce the warning about invalid msgpack but shows a 'broken pipe' message instead. We need to first grep for the message in the logs and only then close the socket on the client. A similar problem (with another test case) is described in [1]. [1]: https://github.com/tarantool/tarantool/issues/4273#issuecomment-508939695 Closes: #4311
-
- Jul 08, 2019
-
-
Vladimir Davydov authored
Both commit and rollback triggers are currently added to the list head. As a result, both are run in reverse order. This is correct for rollback triggers, because it matches the order in which the statements that added the triggers are rolled back, but it is wrong for commit triggers. For example, suppose we create a space and then create an index for it in the same transaction. We expect that on success we first run the trigger that commits the space and only then the trigger that commits the index, not vice versa. That said, reverse the order of commit triggers as part of the preparations for transactional DDL.
-
Vladimir Davydov authored
Changes done to an altered space while a new index is being built or the format is being checked are propagated via an on_replace trigger. The problem is that there may be transactions that started before the alter request. Their working set can't be checked, so we simply abort them. We can't abort transactions that have reached WAL, so we also call wal_sync() to flush all pending WAL requests. This is a yielding operation, and we call it even if there are no transactions that need to be flushed. As a result, a vinyl space alter yields unconditionally, even if the space is empty and there are no pending transactions affecting it. This prevents us from implementing transactional DDL. Let's call wal_sync() only if there is actually at least one pending transaction affecting the altered space and waiting for WAL.
-