- Mar 11, 2019
-
Georgy Kirichenko authored
Form a separate transaction for local changes in the case of replication. This is important because we should be able to replicate such changes (e.g. ones made within an on_replace trigger) back. Otherwise local changes would be incorporated into the originating transaction and skipped by the originator replica. Needed for #2798
-
- Mar 07, 2019
-
Vladimir Davydov authored
Fixes commit 8031071e ("Lightweight vclock_create and vclock_copy"). Closes #4033
-
Nikita Pettik authored
The BLOB column type is represented by the SCALAR field type in terms of NoSQL. We attempted to emulate BLOB behaviour, but such efforts turned out not to be good enough. For this reason, we've decided to abandon these attempts and replace it outright with the SCALAR column type. The SCALAR column type acts the same way as it does in NoSQL: it is an aggregator type for the INTEGER, NUMBER and STRING types. So, a column declared with this type can contain values of these three (available in SQL) types. It is worth mentioning that the CAST operator in this case does nothing. Still, we consider BLOB values to be entries encoded in msgpack with the MP_BIN format. To produce such values, they should be represented in the BLOB literal form x'...' (e.g. x'000000'). What is more, there are two built-in functions returning BLOBs: randomblob() and zeroblob(). On the other hand, columns with the STRING NoSQL type don't accept BLOB values. Closes #4019 Closes #4023

@TarantoolBot document
Title: SQL type changes

There are a couple of recently introduced changes to SQL types. Firstly, we've removed support for the DATE/TIME types from the parser due to the confusing behaviour of these types: they were mapped to the NUMBER NoSQL type and had nothing in common with generally accepted DATE/TIME types (like in other DBs). In addition, all built-in functions related to these types (julianday(), date(), time(), datetime(), current_time(), current_date() etc.) are disabled until we reimplement TIME-like types as native NoSQL ones (see issue #3694). Secondly, we've removed the CHAR type (i.e. the alias of VARCHAR and TEXT). The reason is that according to ANSI SQL, CHAR(len) must accept only strings whose length is exactly equal to the one given in the type definition. Obviously, we don't provide such checks now. The VARCHAR and TEXT types are still legal. For the same reason, we've removed the NUMERIC and DECIMAL types, which were aliases of the NUMBER NoSQL type. REAL, FLOAT and DOUBLE still exist as aliases.
Finally, we've renamed the BLOB column type to SCALAR. We've decided that all our attempts to emulate BLOB behaviour using the SCALAR NoSQL type are not good enough, i.e. without a native NoSQL BLOB type there will always be an inconsistency, especially taking into account possible NoSQL-SQL interactions. In SQL the SCALAR type works exactly the same way as in NoSQL: it can store values of the INTEGER, FLOAT and TEXT SQL types at the same time. Also, with this change the behaviour of the CAST operator has been slightly corrected: a cast to SCALAR now doesn't affect the type of the value at all. A couple of examples:

```
CREATE TABLE t1 (a SCALAR PRIMARY KEY);
INSERT INTO t1 VALUES ('1');
SELECT * FROM t1 WHERE a = 1;
-- []
```

The result is an empty set, since column "a" contains the string literal value '1', not the integer value 1.

```
CAST(123 AS SCALAR);   -- Returns 123 (integer)
CAST('abc' AS SCALAR); -- Returns 'abc' (string)
```

Note that in NoSQL values of the BLOB type are defined as ones encoded in msgpack with the MP_BIN format. In SQL there are still a few ways to force this format: declaring a literal in "BLOB" format (x'...') or using one of two built-in functions (randomblob() and zeroblob()). The TEXT and VARCHAR SQL types don't accept BLOB values:

```
CREATE TABLE t (a TEXT PRIMARY KEY);
INSERT INTO t VALUES (randomblob(5));
---
- error: 'Tuple field 1 type does not match one required: expected string'
...
```

BLOB itself is going to be reimplemented in the scope of #3650.
-
Nikita Pettik authored
NUMERIC and DECIMAL were allowed to be specified as column types. But in fact, they were just synonyms for the FLOAT type and were mapped to the NUMBER Tarantool NoSQL type. So, we've decided to remove these types from the parser and bring them back once NUMERIC is implemented as a native type. Part of #4019
-
Nikita Pettik authored
Since no checks related to string length are performed now, it might be misleading to allow specifying this type. Instead, users should rely on the VARCHAR type. Part of #4019
-
Nikita Pettik authored
Currently, there are no native (in Tarantool terms) types to represent time-like values. So, until we add an implementation of those types, it makes no sense to allow specifying them in a table definition. Note that previously they were mapped to the NUMBER type. For the same reason all built-in functions related to DATE/TIME are disabled as well. Part of #4019
-
Vladislav Shpilevoy authored
SWIM - Scalable Weakly-consistent Infection-style Process Group Membership Protocol. It consists of two components: event dissemination and failure detection, and stores in memory a table of known remote hosts - members. Some SWIM implementations also have an additional component: anti-entropy - a periodical broadcast of a random subset of the member table.

The dissemination component spreads over the cluster the changes that occurred to members. Failure detection constantly searches for failed (dead) members. Anti-entropy simply sends all known information about a member at once, so as to synchronize it among all other members in case some events were not disseminated (UDP problems).

Anti-entropy is the most vital component, since it can work without dissemination and failure detection, but they cannot work properly without it. Consider an example: two SWIM nodes, both alive. Nothing happens, so the event list is empty and only pings are sent periodically. Then a third node appears. It knows about one of the existing nodes. How should it learn about the other one? Sure, its known counterpart can try to notify the other one, but it is UDP, so this event can get lost. Anti-entropy is an extremely simple component: it just piggybacks a random part of the member table on each regular round message. In the example above the new node will sooner or later learn about the other existing node via the anti-entropy messages of its known counterpart. This is why anti-entropy is the first component implemented. Part of #3234
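The piggybacking idea above can be sketched in a few lines. This is an illustrative Python model, not the actual C implementation; the function names and the dict-based member table are assumptions made for the example:

```python
import random

def anti_entropy_section(members, max_entries, rnd=random):
    """Pick a random subset of the member table to piggyback
    on a regular round message (the anti-entropy section)."""
    entries = list(members.items())
    rnd.shuffle(entries)
    return dict(entries[:max_entries])

def receive_anti_entropy(members, section):
    """Merge a received anti-entropy section into the local member
    table; previously unknown members are added, so knowledge
    eventually spreads even if individual events were lost."""
    for uri, status in section.items():
        members.setdefault(uri, status)
```

In the three-node example from the commit message, the new node repeatedly merges sections sent by its known counterpart and thereby discovers the remaining member without any dedicated event.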
-
Kirill Shcherbatov authored
In order to give a user the ability to use a delimiter symbol within code, the real delimiter is the user-provided 'delim' plus "\n". Since telnet sends "\r\n" on a line break, the expression delim + "\n" could not be found in the sequence data + delim + "\r\n", so the delimiter feature did not work at all. Added a delim + "\r" check along with delim + "\n", which solves the described problem and does not violate backward compatibility. Closes #2027
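The fix can be modeled with a short sketch (Python for illustration only; the real console code is Lua/C and `statement_complete` is a made-up name). It assumes a non-empty delimiter:

```python
def statement_complete(line, delim):
    """Return the statement body if `line` is terminated by the user
    delimiter, accepting both delim + '\n' (plain clients) and
    delim + '\r\n' (telnet sends CRLF on line break).
    Returns None if the delimiter has not been seen yet."""
    body = line.rstrip("\n")
    if body.endswith(delim + "\r"):          # telnet: delim followed by CR
        return body[:-(len(delim) + 1)]
    if body.endswith(delim):                 # plain client: delim at the end
        return body[:-len(delim)]
    return None
```

Before the fix, only the second check existed, so telnet input (which leaves a "\r" between the delimiter and the "\n") never matched.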
-
Georgy Kirichenko authored
Remove the xstream dependency and use the direct box interface to apply all replication rows. This is a refactoring step needed for transactional replication. Needed for #2798
-
Mergen Imeev authored
The module table.c is not used and should be removed.
-
- Mar 06, 2019
-
Vladimir Davydov authored
The test creates a space, but doesn't drop it, which leads to a box-tap/on_schema_init failure:

| box-tap/trigger_yield.test.lua [ pass ]
| box-tap/on_schema_init.test.lua [ fail ]
| Test failed! Output from reject file box-tap/on_schema_init.reject:
| TAP version 13
| 1..7
| ok - on_schema_init trigger set
| ok - system spaces are accessible
| ok - before_replace triggers
| ok - on_replace triggers
| ok - set on_replace trigger
| ok - on_schema_init trigger works
|
| Last 15 lines of Tarantool Log file [Instance "app_server"][/Users/travis/build/tarantool/tarantool/test/var/002_box-tap/on_schema_init.test.lua.tarantool.log]:
| 2019-03-06 17:00:12.057 [87410] main/102/on_schema_init.test.lua F> Space 'test' already exists

Fix this.
-
Serge Petrenko authored
This patch introduces an on_schema_init trigger. The trigger may be set before box.cfg() is called and is fired during box.cfg() right after prototypes of system spaces, such as _space, are created. This makes it possible to set triggers on system spaces before any other non-system data is recovered. For example, it is possible to set an on_replace trigger on _space which will work even during recovery. Part of #3159

@TarantoolBot document
Title: document box.ctl.on_schema_init triggers

on_schema_init triggers are set before the first call to box.cfg() and are fired during box.cfg() before user data recovery starts. To set the trigger, say:
```
box.ctl.on_schema_init(new_trig, old_trig)
```
where `old_trig` may be omitted. This will replace `old_trig` with `new_trig`. Such triggers let you, for example, set triggers on system spaces before recovery of any data, so that the triggers are fired even during recovery. For instance, such triggers make it possible to change a specific space's storage engine or make a replicated space replica-local on a freshly bootstrapped replica. If you want to change the storage engine of the space `space_name` to `vinyl`, you may say:
```
function trig(old, new)
    if new[3] == 'space_name' and new[4] ~= 'vinyl' then
        return new:update{{'=', 4, 'vinyl'}}
    end
end
```
Such a trigger may be set on `_space` as a `before_replace` trigger. Thanks to `on_schema_init` triggers, it will happen before any non-system spaces are recovered, so the trigger will work for all user-created spaces:
```
box.ctl.on_schema_init(function()
    box.space._space:before_replace(trig)
end)
```
Note that the above steps are done before the initial `box.cfg{}` call. Otherwise the spaces will already be recovered by the time you set any triggers. Now you can say `box.cfg{replication='master_uri', ...}` and the replica will have the space `space_name` with the same contents as on the master, but with the `vinyl` storage engine.
-
Vladislav Shpilevoy authored
SWIM wants to allow binding to zero ports so that the kernel can choose any free port automatically. This is needed mainly for tests. A zero port means that the real port is known only after bind() has been called, and getsockname() should be used to get it. SWIM uses the sio library for such low-level APIs. This is why that function is added to sio. Needed for #3234
-
Kirill Shcherbatov authored
Before commit d9f82b17 ("More than one row in fixheader. Zstd compression"), xrow_header_decode treated everything until 'end' as the packet body, while currently it allows a packet to end before 'end'. iproto_msg_decode may receive invalid msgpack, but it still assumes that xrow_header_decode sets an error in such a case and uses an assert to test it, which is not so. Introduced a new boolean flag to control the routine's behaviour. When the flag is set, xrow_header_decode raises a 'packet body' error unless the packet ends exactly at 'end'. @locker: renamed ensure_package_read to end_is_exact; fixed comments. Closes #3900
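The effect of the flag can be shown with a toy model (illustrative Python, not the real msgpack decoder; the one-byte length "header" is an assumption made to keep the example small):

```python
def decode_packet(buf, end, end_is_exact=False):
    """Toy decoder: buf[0] declares the body length. Without
    end_is_exact, trailing bytes before 'end' are silently allowed;
    with it, the packet must end exactly at 'end' or a 'packet body'
    error is raised instead of tripping an assert downstream."""
    stop = 1 + buf[0]
    if stop > end:
        raise ValueError("packet body: truncated packet")
    if end_is_exact and stop != end:
        raise ValueError("packet body: packet does not end at 'end'")
    return bytes(buf[1:stop])
```

With the flag set, malformed input produces a proper error for the caller rather than leaving the caller's assumption (error always set) unmet.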
-
- Mar 05, 2019
-
Vladimir Davydov authored
A Vinyl transaction may yield while having a non-empty write set. This opens a time window for the instance to switch to read-only mode. Since we check ro flag only before executing a DML request, the transaction would successfully commit in such a case, breaking the assumption that no writes are possible on an instance after box.cfg{read_only=true} returns. In particular, this breaks master-replica switching logic. Fix this by aborting all local rw transactions before switching to read-only mode. Note, remote rw transactions must not be aborted, because they ignore ro flag. Closes #4016
-
Vladimir Davydov authored
We will use this callback to abort rw transactions in Vinyl when an instance is switched to read-only mode. Needed for #4016
-
Vladimir Davydov authored
Currently, we add a transaction to the list of writers when executing a DML request, i.e. in vy_tx_set. The problem is that a transaction can yield on read before calling vy_tx_set, e.g. to check a uniqueness constraint, which opens a time window when the transaction is not yet on the list but will surely proceed to DML after it continues execution. If we need to abort writers in this time window, we'll miss it. To prevent this, let's add a transaction to the list of writers in vy_tx_begin_statement. Note, after this patch, when a transaction is aborted for DDL, it may have an empty write set - this happens if tx_manager_abort_writers is called between vy_tx_begin_statement and vy_tx_set. Hence we have to remove the corresponding assertion from tx_manager_abort_writers. Needed for #4016
-
Vladimir Davydov authored
Rename vy_tx_rollback_to_savepoint to vy_tx_rollback_statement and vy_tx_savepoint to vy_tx_begin_statement, because soon we will do some extra work there. Needed for #4016
-
- Mar 04, 2019
-
Stanislav Zudin authored
Adds collation analysis to the creation of a composite key for index tuples. The keys of a secondary index consist of the parts defined for the index itself combined with the parts defined for the primary key. The duplicate parts are ignored, but the search for duplicates didn't take the collation into consideration: if a non-unique secondary index contained primary key columns, their parts from the primary key were omitted. This caused an issue. @locker: comments, renames. Closes #3537
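The corrected deduplication rule can be sketched as follows (an illustrative Python model, not the actual key_def code; parts are modeled as (field_no, collation) pairs and `key_def_merge` is a hypothetical name):

```python
def key_def_merge(secondary_parts, primary_parts):
    """Append primary-key parts to a secondary index definition,
    skipping duplicates. A part counts as a duplicate only when both
    the field number AND the collation match; the same field with a
    different collation must be kept."""
    parts = list(secondary_parts)
    seen = set(parts)
    for part in primary_parts:
        if part not in seen:
            parts.append(part)
            seen.add(part)
    return parts
```

Before the fix, comparison effectively ignored the collation, so a primary-key part with a different collation on the same field was wrongly dropped.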
-
Cyrill Gorcunov authored
When building the "tags" target we scan the whole working directory, which is redundant. In particular, the .git, .pc and patches directories certainly should not be scanned.
-
Nikita Pettik authored
This file contains unused functions and dead code. Let's remove them. Follow-up #3542
-
Ivan Koptelov authored
If a utf-8 string is passed to built-in functions such as LIKE, LENGTH etc. and it contains the '\0' symbol, it is assumed to be the end of the string. This approach is considered inappropriate. Let's fix it: treat '\0' as just another utf-8 symbol and process strings containing it in full. Consider examples: LENGTH(CHAR(65,00,65)) == 3, LIKE(CHAR(65,00,65), CHAR(65,00,66)) == False. The patch also changes the way we count the length of utf-8 strings. Previously we processed each byte of the string. Now we use the following algorithm. Starting from the first byte of the string, we determine what kind of byte it is: the first byte of a 1, 2, 3 or 4 byte sequence. Then we skip the corresponding number of bytes and increment the symbol length (e.g. 2 bytes for a 2 byte sequence). If the current byte is not a valid first byte of any sequence, we skip it and increment the symbol length. Note that the new approach might increase the performance of LENGTH(), INSTR() and TRIM(). Closes #3542 @TarantoolBot document Title: null-term is now treated as a usual character in str funcs User-visible behavior of sql functions dealing with strings changes as described in the commit message.
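The counting algorithm described above can be sketched directly (illustrative Python, not the actual C code; the leading-byte ranges follow the standard utf-8 encoding):

```python
def utf8_length(data: bytes) -> int:
    """Count symbols in a utf-8 byte string, treating '\\0' as a
    regular one-byte symbol. The leading byte of each sequence
    determines how many bytes to skip; an invalid leading byte is
    skipped and counted as one symbol."""
    i, length = 0, 0
    while i < len(data):
        b = data[i]
        if b < 0x80:
            step = 1              # ASCII, including '\0'
        elif 0xC0 <= b < 0xE0:
            step = 2              # leading byte of a 2-byte sequence
        elif 0xE0 <= b < 0xF0:
            step = 3              # leading byte of a 3-byte sequence
        elif 0xF0 <= b < 0xF8:
            step = 4              # leading byte of a 4-byte sequence
        else:
            step = 1              # invalid leading byte: count it, move on
        i += step
        length += 1
    return length
```

This reproduces the example from the commit message: the three bytes 65, 00, 65 give a length of 3, since the NUL byte is counted like any other one-byte symbol.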
-
- Mar 02, 2019
-
Cyrill Gorcunov authored
Suitable for those who are using quilt for development.
-
- Mar 01, 2019
-
Kirill Shcherbatov authored
Introduced a new JSON_TOKEN_ANY json token that makes it possible to perform an anonymous lookup in marked tree nodes. This feature is required to implement multikey indexes. Since a token accepted by the parser becomes available to users, an additional server-side check is introduced so that an error is raised on an attempt to create a multikey index. Needed for #1257
-
Vladimir Davydov authored
Follow-up:
- 5993e149 ("vinyl: sanitize full/empty key stmt detection")
- 4273ec52 ("box: introduce JSON Indexes")
-
Vladimir Davydov authored
Historically, we use tuple_field_count to check whether a statement represents an empty key (match all) or a full key (point lookup): if the number of fields in a tuple is greater than or equal to the number of parts in a key definition, it can be used as a full key; if the number of fields is zero, the statement represents an empty key. While this used to be correct not so long ago, the appearance of JSON indexes changed the rules of the game: now a tuple can have nested indexed fields, so the same field number may appear in the key definition multiple times. This means tuple_field_count can be less than the number of key parts and hence the full key check won't work for a statement representing a tuple. Actually, any tuple in vinyl can be used as a full key, as it has all key parts by definition; there's no need to use tuple_field_count for such statements - we only need to do that for statements representing keys. Keeping that in mind, let's introduce helpers for checking whether a statement can be used as a full/empty key and use them throughout the code.
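The corrected predicates boil down to a couple of lines (illustrative Python, not the actual vinyl helpers; the names and boolean parameters are assumptions for the sketch):

```python
def stmt_is_empty_key(stmt_is_key: bool, field_count: int) -> bool:
    """An empty key (match-all) is a key statement with no fields."""
    return stmt_is_key and field_count == 0

def stmt_is_full_key(stmt_is_key: bool, field_count: int,
                     key_part_count: int) -> bool:
    """A tuple contains all indexed fields by definition, so it can
    always serve as a full key (point lookup). A key statement
    qualifies only if it carries at least as many fields as there
    are key parts."""
    return (not stmt_is_key) or field_count >= key_part_count
```

The crucial change is the first branch: a tuple is a full key regardless of tuple_field_count, which may legitimately be smaller than the part count once JSON paths let one field back several parts.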
-
Alexander Turenko authored
Fix test_run:cmd('set variable ...') for string values (PR #146). It is needed for enabling the use_unix_sockets_iproto option.
-
Vladislav Shpilevoy authored
The mhash 'random' method is supposed to return a valid node id given an arbitrary integer, likely generated randomly. But on some values it was returning the 'end' marker even though the container was not empty. This was because of confusion between mh_size() and mh_end(). mh_size() is the real number of objects stored in the hash, while mh_end() is the hash capacity, or 'number of buckets' as it is named. Generally the capacity is bigger than the size, and sometimes this led to a situation like this:

size = 1
capacity = 4
rnd = 3

[0] [1] [2] [3]
 -   -   x   -

When the code iterates only 'size' times, looking for an element starting from the 'rnd' position, it does not find anything. It should iterate 'capacity' times instead.
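The fixed scan can be modeled like this (illustrative Python, not the real mhash macros; occupied buckets hold a value, empty ones hold None):

```python
def mhash_random(buckets, rnd):
    """Scan up to len(buckets) slots (the capacity, i.e. mh_end()),
    starting from rnd, and return the index of the first occupied
    slot. Iterating only mh_size() times could stop before reaching
    the element, wrongly returning the 'end' marker."""
    capacity = len(buckets)
    for i in range(capacity):
        idx = (rnd + i) % capacity
        if buckets[idx] is not None:
            return idx
    return capacity  # 'end' marker: the container really is empty
```

With the bucket layout from the commit message (one element at slot 2, rnd = 3), a scan bounded by size = 1 checks only slot 3 and misses; a scan bounded by the capacity wraps around and finds slot 2.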
-
Vladislav Shpilevoy authored
The SWIM module API is going to provide a set of clear and pure functions with appropriately placed const qualifiers. It wants to use sio_strfaddr() to give users an easy way to get a pointer to the URI of a SWIM member stored in const memory. This requires a two-line modification of the sio module.
-
- Feb 28, 2019
-
Ilya Kosarev authored
Now there is a new member in box.stat.net() called "CONNECTIONS", which is the number of active iproto connections. Closes #3905 @TarantoolBot document Title: box.stat.net Update the documentation for box.stat.net to reflect the addition of the field reporting the number of iproto connections.
-
Vladimir Davydov authored
The patch fixes the following test failure:

| --- app/socket.result Mon Feb 25 17:32:49 2019
| +++ app/socket.reject Mon Feb 25 17:39:51 2019
| @@ -2827,7 +2827,7 @@
| ...
| echo_fiber ~= nil
| ---
| -- true
| +- false
| ...
| client:write('hello')
| ---

This happens because we don't wait for echo_fiber to start. Use a channel to make sure it does. Also, increase the read/write timeouts from 0.1 up to 5 seconds - this won't increase the test runtime, but it will make the test more robust. Closes #4022
-
Vladimir Davydov authored
This patch fixes the following test failure:

| --- box/iproto_stress.result Tue Dec 25 09:56:54 2018
| +++ box/iproto_stress.reject Tue Dec 25 10:12:22 2018
| @@ -80,7 +80,7 @@
| ...
| n_workers -- 0
| ---
| -- 0
| +- 340
| ...
| n_errors -- 0
| ---
| @@ -93,5 +93,3 @@
| ---
| ...
| box.cfg{net_msg_max = net_msg_max}
| ----
| -...

The problem is that the test is quite cpu intensive, so if the host is heavily loaded (as is often the case when tests are run on Travis CI), it may take a few minutes to complete, while the timeout is set to 10 seconds. To fix it, let's:
- Increase the timeout up to 60 seconds and use test_run.wait_cond instead of a homebrew loop.
- Decrease the number of fibers from 400 down to 100 and adjust box.cfg.net_msg_max accordingly.

Closes #3911
-
Konstantin Osipov authored
-
Vladimir Davydov authored
If a key is frequently updated, iteration to the next key stored in the memory level can take quite a while, because:
- In the case of a GE/GT iterator, vy_mem_iterator_next_key will have to iterate the tree from left to right to skip older key versions.
- In the case of an LE/LT iterator, vy_mem_iterator_find_lsn will have to iterate the tree from right to left to find the newest key version visible in the read view.

To avoid that, let's fall back on a key lookup if we failed to find an appropriate statement after one iteration, because in this case there's a good chance that there are more statements for this key. This should be fine since a lookup in a memory tree is pretty cheap.
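The GE/GT case can be sketched over a sorted list of (key, lsn) statements (an illustrative Python model, not the actual B-tree code; `next_key` is a made-up name):

```python
import bisect

def next_key(stmts, pos):
    """Advance to the first statement whose key is greater than
    stmts[pos][0]. Take one cheap step forward; if the same key is
    still there (i.e. it has many versions), fall back on a binary
    search instead of walking the whole run of old versions."""
    key = stmts[pos][0]
    pos += 1                          # one cheap step forward
    if pos < len(stmts) and stmts[pos][0] == key:
        # Many versions of the same key: look up the key's end instead.
        pos = bisect.bisect_right(stmts, (key, float("inf")))
    return pos
```

For a key with thousands of versions this replaces a linear skip with a single logarithmic lookup, while the common case (few versions) still pays only for one step.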
-
Vladimir Davydov authored
- Don't pass iterator_type to vy_mem_iterator_seek and the functions called by it. Instead pass only a key and jump to the first statement following the key according to the iterator search criteria. Turns out this is enough for the memory iterator implementation.
- Fold the EQ check into vy_mem_iterator_seek to avoid code duplication.
- Drop vy_mem_iterator_start and use vy_mem_iterator_seek directly.
-
Vladimir Davydov authored
This patch fixes the following test failure:

| --- box/push.result Thu Jan 24 13:10:04 2019
| +++ var/001_box/push.result Thu Jan 24 13:13:08 2019
| @@ -536,17 +536,3 @@
| ---
| ...
| chan_disconnected:get()
| ----
| -- true
| -...
| -chan_push:put(true)
| ----
| -- true
| -...
| -chan_push:get()
| ----
| -- Session is closed
| -...
| -box.schema.func.drop('do_long_and_push')
| ----
| -...

The problem occurs because the main fiber may close the connection before do_long_and_push sets the session.on_disconnect trigger, in which case chan_disconnected:get() will never return. Fix this by setting the trigger in the main fiber and adding another channel to wait for the do_long_and_push function to start. Also, don't forget to clear the trigger once the test is complete. Fixes commit 43af2de2 ("session: outdate a session of a closed connection"). Closes #3947
-
Kirill Shcherbatov authored
Due to the fact that, in the case of multikey indexes, the size of the field map may depend on a particular tuple, the tuple_init_field_map function has been reworked to allocate a field map of the required size and return it. Needed for #1257
-
- Feb 27, 2019
-
Alexander Turenko authored
* Added basic luacov support.
* Added the use_unix_sockets_iproto option.
* Fixed TARANTOOL_SRC_DIR on >=tarantool-2.1.1-322-g3f5f59bb5.
  - It is important for app-tap/http_client.test.lua, which fails now.
* Renamed pre_cleanup to pretest_clean.
* pretest_clean: clean up the _cluster space.
-
Alexander Turenko authored
The bug was introduced in d735b6bf (move 'uri' lib to src/lib/).
-
Alexander Turenko authored
lcov reports the following warnings:

Cannot open source file src/uri.rl
Cannot open source file src/uri.c

coveralls-lcov then fails with this message:

coveralls-lcov --service-name travis-ci --service-job-id 498721113 --repo-token [FILTERED] coverage.info
/var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:63:in `initialize': No such file or directory @ rb_sysopen - /tarantool/src/lib/uri/CMakeFiles/uri.dir/src/uri.c (Errno::ENOENT)
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:63:in `open'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:63:in `generate_source_file'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:16:in `block in convert'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:15:in `each'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/converter.rb:15:in `convert'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/lib/coveralls/lcov/runner.rb:68:in `run'
	from /var/lib/gems/2.3.0/gems/coveralls-lcov-1.5.1/bin/coveralls-lcov:5:in `<top (required)>'
	from /usr/local/bin/coveralls-lcov:22:in `load'
	from /usr/local/bin/coveralls-lcov:22:in `<main>'

So the coverage target in Travis-CI fails and coverage is not reported to coveralls.io. The bug was introduced in d735b6bf (move 'uri' lib to src/lib/).
-