- Apr 02, 2019
-
-
Mergen Imeev authored
Before this commit, the luaL_tofield() function threw a Lua exception when an error occurred. This patch changes that behavior: now the function sets the diag via diag_set() and returns -1 on error. Needed for #3505
-
Mergen Imeev authored
This field has become unused and should be removed. Part of #3965
-
- Apr 01, 2019
-
-
Ivan Koptelov authored
Sometimes this test failed because collation 'c' was not dropped automatically. Now it is always dropped manually, so the problem no longer occurs.
-
Ivan Koptelov authored
Before this patch we had the instr() SQL function. After the patch it is renamed to position() for better ANSI compatibility. A few other things have changed as well: the argument order, the allowed argument types, and the use of collations.

Note: after the patch the position() syntax still differs from ANSI. The desirable syntax is POSITION(substring IN string). It cannot be implemented right now because our SQL grammar lacks expression types: we have only the basic 'expr' type, so an unambiguous rule for POSITION ... IN cannot be written. To solve this we would have to refactor the grammar and add something like 'string_expr' (as is done in the grammars of other DBs).

Workaround for #3933

@TarantoolBot document
Title: instr() is replaced with position()
The name and the order of the arguments have changed for better ANSI compatibility. Before: instr(haystack, needle). After: position(needle, haystack). Type checking became stricter: before, it was possible to call the function with INTEGER arguments or with arguments of different types; now both arguments must have the same type and be either text or binary strings. Before the patch, collations were not taken into account during the search. Now both implicit (column) collations and explicit ones (via a COLLATE expression) are used, and the single collation used by the function is determined according to the ANSI "type combination" rules.
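A minimal console sketch of the new call convention (assuming box.execute() as the SQL entry point, which replaced box.sql.execute() around this series; the older entry point would behave the same way here):

```
-- position(needle, haystack): note the swapped argument order
-- compared to the old instr(haystack, needle).
box.cfg{}
box.execute([[SELECT POSITION('bc', 'abcd');]])   -- returns 2 (1-based)
-- Both arguments must now be of the same string type; mixing TEXT
-- with VARBINARY or passing INTEGER is rejected with an error.
box.execute([[SELECT POSITION(3, 'abcd');]])      -- expected to fail
```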
-
Ivan Koptelov authored
Before the patch, collation determination in SQL functions was too narrow: the arguments were scanned from left to right until an argument with a collation was encountered, and that collation was used. Now the collation of every argument is considered, and the collation actually used by the function is determined according to the ANSI compatibility ("type combination") rules.
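A hypothetical illustration (assuming box.execute() and the built-in "unicode_ci" collation): an explicit COLLATE on either argument, not only the leftmost one, now participates in choosing the collation used by the whole call:

```
box.cfg{}
-- The case-insensitive collation attached to the second argument is
-- picked up by the combination rules, so the lowercase needle matches
-- the uppercase haystack.
box.execute([[SELECT POSITION('b', 'ABC' COLLATE "unicode_ci");]])  -- returns 2
```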
-
Vladimir Davydov authored
It's an independent piece of code that is definitely worth moving from vy_scheduler to vy_lsm internals anyway. Besides, having it wrapped up in a separate function will make it easier to patch.
-
- Feb 13, 2019
-
-
Mergen Imeev authored
Due to the removal of box.sql.execute(), it makes sense to completely remove box.sql. This patch moves the SQL statistics to box.stat and removes box.sql.debug(). Part of #3505
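A short, hedged check of the new location (the exact accessor name is an assumption; the commit only states that the statistics moved under box.stat):

```
box.cfg{}
-- box.sql.debug() is gone; SQL statistics are assumed to be reachable
-- under box.stat, e.g. as box.stat.sql().
box.stat.sql()
```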
-
Mergen Imeev authored
Currently, functions sql_execute() and sql_prepare_and_execute() set the ER_SQL_EXECUTE code for all errors that occur during the execution of a SQL command. This is considered incorrect because some of these errors may have their own error code. After this patch, only errors without an error code will have the error code ER_SQL_EXECUTE. Part of #3505
-
Mergen Imeev authored
Currently, if the count_changes pragma is enabled, the INSERT, REPLACE and UPDATE statements return the number of changes at execution time. This patch sets the INTEGER type for this result. Follow-up for #3832. Needed for #3505
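A sketch of the visible behaviour, assuming box.execute() and the pragma assignment syntax accepted by the SQL front end of that time:

```
box.cfg{}
box.execute([[PRAGMA count_changes = 1;]])
box.execute([[CREATE TABLE t (id INT PRIMARY KEY);]])
-- With the pragma on, DML reports the number of changed rows; after
-- this patch that count is typed as INTEGER in the result set.
box.execute([[INSERT INTO t VALUES (1), (2);]])
```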
-
- Mar 29, 2019
-
-
Vladislav Shpilevoy authored
* fix some obvious errors in the swim test utils;
* fix a bug with NULL URI garbage on recfg;
* fix typos in a comment and in a log message in swim.c;
* do not start any timers in swim_cfg. Indeed, the round timer should start only when at least one new member other than self is added, and that is already done in swim_new_member;
* log not only the round beginning, but each round step - it helps in debugging and does not affect production anyway because these logs are verbose;
* in SWIM's event loop, log the new watch value instead of the old one - it turned out that the new value is more useful for debugging;
* log 'process <name> component' inside the swim_process_<name>() functions. It is needed for failure detection, where a log line like 'process failure detection' says nothing - it is much better to say 'process ping from' or 'process ack';
* in the swim tests, use swim_cluster_wait_...(timeout) instead of swim_cluster_wait_...(max_steps). The step count restriction was useful for anti-entropy, where it equals the number of round steps, but that is no longer the case once failure detection appears: replies to failure detection requests do not depend on the SWIM heartbeat and affect the step count in a non-trivial way, which makes tests much harder to write, debug and support.
Follow-up for 03b9a6e9
-
- Mar 28, 2019
-
-
Vladimir Davydov authored
This reverts commit c2de45c4. Not needed anymore as the DDL bug it was supposed to work around has been finally fixed. Follow-up #3420
-
Vladimir Davydov authored
A DDL operation creates a new struct space container, moving unaffected indexes from the old container, then destroying it. The problem is there may be a DML request for this space, which was passed the old container in the argument list and then yielded on disk read. When it resumes, it may try to dereference the old space container, which may have already been destroyed. This will most certainly result in a crash.

To address this problem, we introduce a new space callback, invalidate, which is called for the old space on space_cache_replace(). In case of vinyl, this callback aborts all transactions involving the space. To prevent a DML request from dereferencing a destroyed space, we also make the iterator code check the current transaction state after each yield and return an error if it was aborted. This should make any DML request bail out early without dereferencing the space anymore.

Closes #3420
-
Vladimir Davydov authored
We need to abort all transactions writing to an altered space when a new index is built. Currently, we use the write set to look up such transactions, but it isn't quite correct, because a transaction could yield on disk read before inserting a statement into the write set. To address this problem, this patch adds vy_tx->last_stmt_space, which points to the space affected by the last prepared transaction. Now, tx_manager_abort_writers_for_ddl will look not only at the write set, but also at this variable to check if it needs to abort a transaction. Needed for #3420
-
Vladimir Davydov authored
The 'restore' method, which is implemented by the txw, cache, and memory sources, restores an iterator position after a yield in case the source has changed. It is passed the last key and is supposed to position the iterator to the statement following that key, provided it had to reposition the iterator at all. For all kinds of sources, it first checks the source version and bails out early if it hasn't changed since the last time the iterator was used. If it has changed, it either looks up the statement following the given last key or tries to advance the iterator statement by statement (in case of a cache iterator, to avoid extra cache lookups).

Currently, the method returns 1 if the iterator position has actually changed as a result of the call, 0 otherwise. That is, it returns 0 even if the version has changed and it had to perform a full lookup, but landed on the same statement. This is confusing, because in this case the caller has to check whether it has to advance the iterator even though it doesn't need to, as the iterator is already positioned at the next key - see the vy_read_iterator_scan_* family of functions.

Let's simplify the restoration rules - make the 'restore' method return 1 if it had to restore the iterator position, irrespective of whether the position has actually changed or not. This means that when the method returns 1, the caller knows that the iterator is already positioned properly and doesn't need to be advanced. If it returns 0, the iterator is positioned at the same statement as before and the caller needs to check whether to advance it. This simplifies the iterator code by removing position checking, which would turn real nasty once multikey indexes are introduced. Note, comments to the 'restore' methods already conform to this behavior (they weren't quite correct before this patch).

There's a catch though - vy_read_iterator_restore_mem relies on the old behavior to skip the cache source in case the current key gets updated during a yield. However, it's easy to fix that without introducing any extra overhead - we just need to check if the cache source is at the front of the iterator and clear its history if it is.

BTW it turned out that this behavior wasn't covered by tests - when I removed the line invalidating the cache source from vy_read_iterator_restore_mem, all tests passed as usual. So while we are at it, let's also add a test case covering this piece of code.
-
Vladimir Davydov authored
A few changes to make the function more straightforward:
- Move bloom checking and LSN filtering out of the 'do_seek' helper. Make the helper do just one simple task - look up the first one in a series of statements matching the given search criteria.
- Fold iterator type and key substitution into the 'seek' method, similarly to how we did it for other iterators.
- Clean up EQ checks. Use the original iterator type and key where appropriate to remove extra checks in the calling methods. Don't check EQ in the 'seek' method in case it was checked by 'do_seek'.
- Add some comments.
-
Vladimir Davydov authored
It's equivalent to (itr->search_started && itr->curr_stmt == NULL).
-
Vladimir Davydov authored
Currently, vy_cache_iterator->curr_stmt is updated by top-level iterator functions - next, skip, restore - which results in code duplication and spreads core logic among multiple places. To reduce the amount of code and make it generally easier to follow, this patch moves the updates to low level functions - step, seek. It also makes the seek method return the stop flag, which makes it similar to step, thus making the code more consistent.
-
Vladimir Davydov authored
We substitute iterator_type and key before each call to 'seek' method and check eq constraint after it. Let's fold it to reduce code duplication.
-
Vladimir Davydov authored
We substitute iterator_type and key before each call to 'seek' method and check eq constraint after it. Let's fold it to reduce code duplication.
-
Vladimir Davydov authored
We abort transactions when switching to read-only mode, building a new index, or reverting a statement after a failed WAL write. It's no use to allow an aborted transaction to proceed as usual until commit - we should fail it as early as we can to avoid wasting resources. Currently, we do it in a rather abrupt way - by an assertion :-) This patch makes any DML/DQL operation fail gracefully instead. Closes #4070
-
Vladimir Davydov authored
It is supposed to abort an iterator that read a statement reverted after a failed WAL write. However, we will abort such an iterator anyway - see vy_tx_abort_readers(). So let's drop the useless flag. It turned out that we never tested such a case (try grepping "read view is aborted" error). So let's also add a functional test for it.
-
Vladimir Davydov authored
When a few read intervals are merged, we must fix up tx memory stats. Closes #4071
-
Ivan Koptelov authored
Before the patch, collations with no strength set used tertiary strength, but that was not easy to see, because box.space._collation:select{} would return [1, 'unicode', 1, 'ICU', '', {}] for such collations. After the patch the default value is set explicitly, so the user observes [1, 'unicode', 1, 'ICU', '', {strength = 'tertiary'}].

Closes #3573

@TarantoolBot document
Title: default collation strength is explicit tertiary now
Tertiary strength was already the default strength for collations before the patch, but it was implicit: [1, 'unicode', 1, 'ICU', '', {}]. After the patch it has simply become explicit: [1, 'unicode', 1, 'ICU', '', {strength = 'tertiary'}].
Also please fix this: https://tarantool.io/en/doc/2.1/book/box/data_model/#collations. There is a line saying "unicode collation observes all weights, from L1 to Ln (identical)". It was not true, and now this fact simply becomes obvious.
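The observation from the docbot request, as a console session (output shape as described above; a sketch, not verbatim output):

```
box.cfg{}
-- Collations that previously showed an empty options table now carry
-- an explicit {strength = 'tertiary'} in their options.
box.space._collation:select{}
```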
-
Vladislav Shpilevoy authored
swim_upsert_member is a function that adds new members or updates existing ones using info received from other cluster participants. At the moment it is quite simple and straightforward: either create a new member and return it, or update an existing member and return it. But things are going to change in the failure detection and dissemination components. A couple of examples showing that a member should be returned separately from the success/error status:
* To prevent undead members, failure detection forbids adding dead members. Otherwise a dead member would be added and removed back and forth by different components infinitely. Upsert for such members should return NULL, but it is not an error - it is a normal function of the protocol;
* When the dissemination component receives a UUID update of an existing member, but with a too old incarnation, it does not apply that update. On the other hand, it cannot return that old member from upsert because its UUID differs from the one in the 'request'. In such a case it should be OK to return NULL without deeming it an error.
Part of #3234
-
lenkis authored
-
lenkis authored
-
lenkis authored
Fixes #3362.
-
lenkis authored
-
Konstantin Osipov authored
Speed up txn_is_distributed() check by keeping track of the number of local rows of the transaction.
-
Konstantin Osipov authored
We need to keep count of different kinds of rows in a transaction:
- rows which already exist in some WAL and are replayed locally. These used to be called n_remote_rows and are renamed to n_applier_rows;
- rows which were created locally, on this server (previously called n_local_rows, renamed to n_new_rows).
Of the latter, we need to distinguish between GROUP_ID=LOCAL rows, i.e. rows which need not be replicated, and GROUP_ID=REPLICA rows, which need to be replicated. For example, a remote transaction can fire local triggers which generate local rows. If these triggers generate rows which need to be replicated, the transaction has to be aborted. In a subsequent patch I plan to add n_local_rows, which tracks the number of new rows with GROUP_ID=LOCAL, and use it in the txn_is_distributed() check.
-
Georgy Kirichenko authored
Disallow changes to non-local spaces while applying a replication stream. As we do not support distributed transactions yet, we cannot provide transactional replication for such side effects unless they are NOPed. Needed for: #2798 Follow-up for: 27283deb
-
Alexander Turenko authored
The primary reason for this change is to keep compatibility of the 1.10 release series with tarantool/stat-0.3.1, which expects each box.stat.net.<...> and box.stat.net().<...> item to be a table. This commit changes the CONNECTIONS metric to be a table with a 'current' field, which in turn contains the current number of connections. Fixes #4039.

@TarantoolBot document
Title: box.stat.net.CONNECTIONS becomes a table
The format of box.stat.net.CONNECTIONS and box.stat.net().CONNECTIONS is changed in order to keep all items being tables, because tarantool/stat-0.3.1 expects them to be tables (see [1] for more information).

Example of box.stat.net() **before** this commit:

```
tarantool> box.stat.net()
---
- SENT:
    total: 0
    rps: 0
  CONNECTIONS: 0
  RECEIVED:
    total: 0
    rps: 0
...
```

And after:

```
tarantool> box.stat.net()
---
- SENT:
    total: 0
    rps: 0
  CONNECTIONS:
    current: 0
  RECEIVED:
    total: 0
    rps: 0
...
```

Look at the comment to lbox_stat_net_call() (see the linked commit) for the meaning of the total/rps/current fields.

[1]: https://github.com/tarantool/tarantool/issues/4039
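A one-line usage note for client code following the new shape:

```
-- The current connection count is now read through the 'current' field:
box.stat.net.CONNECTIONS.current
```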
-
Kirill Shcherbatov authored
We must use unsigned long long numbers when building ffi types to avoid losing precision. Follow-up for f5721edc (test: check hints corner cases). Closes #4080
-
Alexander Turenko authored
Added the necessary cleanup, because the test fails when both memtx and vinyl configurations are run on the same test-run worker. Aside from that, the test is disabled on vinyl, because it is engine-agnostic and only checks iproto. This is a fix for commit 7676b2b1 ('sql: make SQL_BIND optional in an iproto request'). Follows up #4077.
-
- Mar 27, 2019
-
-
Georgy Kirichenko authored
The applier fetches incoming rows to form a transaction and then applies it. Rows are fetched and stored on the fiber gc region until the last transaction row (the one with is_commit set) is fetched. After the fetch, a multi-row transaction is applied within txn_begin/txn_commit/txn_rollback boundaries. At this time we cannot apply single-row transactions within such boundaries because of DDL, which does not support non-autocommit transactions. Closes: #2798 Needed for: #980
-
Kirill Shcherbatov authored
Introduced a new sql_normalize_name routine performing SQL name conversion to case-normal form via Unicode character folding. For example, ß is converted to SS. The result is similar to that of the SQL UPPER function. Closes #3931
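A hedged sketch of the user-visible effect (assuming box.execute() as the SQL entry point): an unquoted identifier is brought to case-normal form with Unicode folding, so the folded spelling is assumed to name the same table:

```
box.cfg{}
-- 'straße' is assumed to normalize to 'STRASSE' (ß folds to SS), so
-- both spellings below would refer to the same table.
box.execute([[CREATE TABLE straße (id INT PRIMARY KEY);]])
box.execute([[SELECT * FROM STRASSE;]])
```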
-
Kirill Shcherbatov authored
Refactored the sqlExpr routine as sql_expr_new and reworked it to set a diag message in case of a memory allocation error. Also performed some additional name refactoring in adjacent places. This change is necessary because the sqlExpr body has a sqlNormalizeName call that will be changed in subsequent patches.

After this patch there are basically two ways of processing and forwarding errors:
- use diag only. It works for all the places outside src/box/sql, and for some functions inside it, for example sql_expr_new();
- use the global flags Parse.is_aborted and sql.mallocFailed.
It is easy to see that some places use both of them implicitly, for example sql_expr_new() and every other place which uses the SQLite memory allocators together with diag. But it is OK until the former is removed. What is more important is that at least one of these two methods should be consistent and complete in every function, and match the declared behaviour. For example, it is incorrect to declare a function as setting flags on an error but in fact set only diag, or vice versa, to declare that it sets a diag but factually set only the flags. Part of #3931
-
Kirill Shcherbatov authored
Refactored the triggerStepAllocate routine as sql_trigger_step_new and reworked it to use diag_set in case of a memory allocation error. Also performed some additional name refactoring in adjacent places. This change is necessary because the sql_trigger_step_allocate body has a sqlNormalizeName call that will be changed in subsequent patches. Part of #3931
-
Kirill Shcherbatov authored
Refactored sqlNameFromToken routine as sql_name_from_token and reworked it to use diag_set in case of memory allocation error. This change is necessary because the sql_name_from_token body has a sqlNormalizeName call that will be changed in subsequent patches. Part of #3931
-
Kirill Shcherbatov authored
Refactored the sqlIdListAppend routine as sql_id_list_append and reworked it to use diag_set in case of a memory allocation error. This change is necessary because the sql_id_list_append body has a sqlNameFromToken call that will be changed in subsequent patches. This patch belongs to a series of preparatory patches that introduce the use of Tarantool errors in the call tree that includes sqlNormalizeName, since this call can later return errors. This patch is not self-sufficient: its sqlNameFromToken call remains non-Tarantool (for now), which means that if sqlNameFromToken fails in sql_id_list_append, no diag message is created. Part of #3931
-