- Nov 19, 2018
-
-
Kirill Shcherbatov authored
The new tuple_field_go_to_path routine is used in tuple_field_raw_by_path to retrieve data from a field by JSON path. We will need this routine exported in the future to access data by a JSON path specified in a key_part. Needed for #1012
-
Kirill Shcherbatov authored
Introduced a new key_def_parts_are_sequential routine that tests whether the specified key_def parts index sequential fields. This will be useful once JSON paths are introduced, since they add another complication: fields with JSON paths can't be 'sequential' in this sense. Needed for #1012
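The check can be sketched in Python as follows (the real routine operates on a C key_def struct; the list-of-fieldnos representation here is purely illustrative):

```python
def key_def_parts_are_sequential(fieldnos):
    """Return True if each key part indexes the tuple field
    immediately following the previous part's field."""
    return all(b == a + 1 for a, b in zip(fieldnos, fieldnos[1:]))
```

A single part is trivially sequential, while a gap in field numbers breaks the property.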
-
Kirill Shcherbatov authored
Refactored the key_def_find routine to take a key_part as its second argument and introduced a key_def_find_by_fieldno helper for scenarios where no key_part exists. The new API is more convenient for the complex key_parts that will appear with the introduction of JSON paths. Needed for #1012
-
Nikita Pettik authored
After the patch that introduced the "none" collation (a953051e), box.internal.bootstrap() started to fail due to the inability to drop the mentioned collation. Let's turn off system triggers for the _collation space so that it can be completely purged during bootstrap.
-
Vladimir Davydov authored
-
Vladimir Davydov authored
-
Olga Arkhangelskaia authored
box.cfg() updates only those options that have actually changed. However, for replication this is not always true: box.cfg{replication = x} and box.cfg{replication = {x}} are treated differently, and as a result replication is restarted. This patch fixes that behaviour. Closes #3711
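The idea of the fix can be modeled roughly like this (a sketch with hypothetical names; the actual change lives in Tarantool's Lua configuration code):

```python
def normalize_replication(value):
    # box.cfg{replication = 'uri'} and box.cfg{replication = {'uri'}}
    # denote the same configuration: normalize both to a list.
    if value is None:
        return []
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]

def replication_needs_restart(old, new):
    # Restart replication only when the normalized values differ.
    return normalize_replication(old) != normalize_replication(new)
```

With normalization in place, passing the same URI as a scalar or as a single-element table no longer triggers a spurious restart.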
-
- Nov 16, 2018
-
-
Vladimir Davydov authored
Currently, the run_count_per_level index option is applied to each LSM tree level. As a result, we may end up storing each key run_count_per_level times at the last level alone, which would result in prohibitive space amplification. To avoid that, let's ignore run_count_per_level for the last level. Note that we have to tweak quite a few vinyl tests, because they implicitly relied on the fact that producing run_count_per_level dumps would never trigger compaction. Closes #3657
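The changed compaction trigger can be sketched as below (a simplification; the real vinyl scheduler weighs many more factors):

```python
def level_needs_compaction(run_count, run_count_per_level, is_last_level):
    # The last level ignores run_count_per_level: keeping several runs
    # there could store each key up to run_count_per_level times,
    # i.e. prohibitive space amplification. Compact it down to one run.
    if is_last_level:
        return run_count > 1
    return run_count > run_count_per_level
```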
-
- Nov 15, 2018
-
-
Kirill Yukhin authored
-
Yaroslav Dynnikov authored
I. Fix tarantoolctl rocks install hanging in a restricted-network corner case. A customer configured two rocks servers: an offline one (file:///path/to/rocks) and the default online one (rocks.tarantool.org). He tries to run `rocks install http 1.0.5-1`. The online server is unavailable due to his local network policy, but the rock is available offline. Despite this, luarocks still tries to fetch the manifest online, which results in a 30 second hang since network access is restricted. This change aborts scanning once an exact match is found.
II. Remove cyclic dependencies. This is required to embed luarocks into tarantool, as the current tarantool preloader can't preload cyclic dependencies: there should be a unidirectional dependency graph with a predictable order. Note: as a consequence of this patch, operating systems other than unix-compatible ones are no longer supported, because the dependency graph had to be resolved manually to get a predictable require() order.
III. Use digest.md5_hex to compute md5 digests instead of openssl. luarocks can calculate md5 with the 'md5' rock if it's present, but tarantool doesn't ship it and instead provides the 'digest' module. That's why luarocks falls back to the 'openssl' binary to calculate md5 digests. This patch allows luarocks to use our internal digest module.
-
Vladimir Davydov authored
-
Vladimir Davydov authored
-
Mergen Imeev authored
Before this patch the region wasn't truncated after box.snapshot(), so some tests failed due to a "memory leak". To fix that, let's truncate the region in vinyl and use malloc() in memtx (we can't clean up the region there, as objects allocated when a checkpoint is started are used until the checkpoint is completed). The latter isn't a big deal, because box.snapshot() isn't a hot path and we do other allocations with malloc() anyway. Closes #3732
-
Nikita Pettik authored
Before this patch our SQL implementation relied on strange rules when comparing strings with different collations:
- if either operand has an explicit collating function assignment using the postfix COLLATE operator, then the explicit collating function is used for comparison, with precedence to the collating function of the left operand;
- if either operand is a column, then the collating function of that column is used, with precedence to the left operand.
The main concern with this implementation is that the collation of the left operand is forced over the right one (even if both operands come with an explicit COLLATE clause). This contradicts ANSI SQL and seems quite misleading, since if the user simply swaps LHS and RHS, the result of the query may change. Let's introduce restrictions concerning collation compatibility. Collations of LHS and RHS are compatible (i.e. no "Illegal mix of collations" error is thrown) if:
- one of the collations is mentioned alongside an explicit COLLATE clause, which forces this collation over the other one; it is allowed to have the same forced collations;
- both collations are derived from table columns and they are the same;
- one collation is derived from a table column and the other one is not specified (i.e. COLL_NONE).
The compound SELECT operators UNION, INTERSECT and EXCEPT perform implicit comparisons between values, hence all the new rules stated above are applied to the parts of a compound SELECT as well. Otherwise, an error is raised. In other words, before this patch queries like the one below were allowed:
SELECT 'abc' COLLATE binary UNION SELECT 'ABC' COLLATE "unicode_ci";
---
- - ['ABC']
  - ['abc']
...
If we swap the collations, we get a different result:
SELECT 'ABC' COLLATE "unicode_ci" UNION SELECT 'abc' COLLATE BINARY
---
- - ['abc']
...
Now such queries are illegal. Closes #3185
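The compatibility rules might be modeled like this (a sketch only; the strength labels are illustrative, not Tarantool's actual enum):

```python
FORCED, DERIVED, NONE = "forced", "derived", "none"

def collation_for_comparison(lhs, rhs):
    """Pick the collation for comparing two operands, each given as a
    (strength, name) pair, or raise on an illegal mix."""
    (ls, lname), (rs, rname) = lhs, rhs
    if ls == FORCED or rs == FORCED:
        # An explicit COLLATE clause wins; two forced collations
        # are compatible only if they are the same.
        if ls == FORCED and rs == FORCED and lname != rname:
            raise ValueError("Illegal mix of collations")
        return lname if ls == FORCED else rname
    if ls == DERIVED and rs == DERIVED:
        # Two column-derived collations must match exactly.
        if lname != rname:
            raise ValueError("Illegal mix of collations")
        return lname
    # A column collation beats an unspecified (COLL_NONE) one.
    return lname if ls == DERIVED else rname
```

Under these rules the UNION example with two different forced collations raises an error instead of silently preferring the left operand.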
-
Nikita Pettik authored
This patch introduces two new collation sequences: "none" and "binary". Despite the fact that they use the same comparison algorithm (simple byte-by-byte comparison), they don't mean the same thing. The "binary" collation gets into the format only if the user explicitly asks for it: either by specifying this collation in the space format manually or by adding a <COLLATE BINARY> clause to a column definition within a CREATE TABLE statement. The "none" collation is used when the user doesn't specify any collation at all. The "none" collation always comes with id == 0, and that can't be changed (since its id is widely used under the hood as an indicator of the absence of a collation). The difference between these collations is vital for ANSI SQL: mixing "binary" with other collations is prohibited, while the "none" collation can be used alongside others. In this respect the current patch extends the list of available collations: now not only ICU collations are allowed, but also BINARY. Note that some SQL queries have changed their query plan. That is due to the fact that our parser allows using a <COLLATE> clause with numeric fields:
CREATE TABLE (id INT PRIMARY KEY);
SELECT id COLLATE "binary" ...
In this example the collation of the LHS (the id column) is NULL, but the collation of the RHS is "binary". Before this patch both collations were NULL, hence the query planner may no longer allow the use of certain indexes. On the other hand, this feature is obviously broken anyway, so that doesn't seem to be a big deal. Needed for #3185
-
Nikita Pettik authored
Since until now we didn't have a real binary collation, it was allowed to write its name in different cases, for instance: BiNaRY, "bInArY", "BINARY" etc. All these names were valid and referred to the binary collation. However, we are going to introduce a real entry in the _collation space to represent the binary collation. Thus, from now on we allow only the standard "binary" name. Needed for #3185
-
Nikita Pettik authored
We don't need to add an explicit COLLATE "BINARY" clause, since the binary collation is the default anyway. On the other hand, it may cause confusion due to ambiguity when comparing two terms. Needed for #3185
-
Nikita Pettik authored
changes() is a legacy function which came from SQLite. However, its name is not perfect, since it reports the "number of rows affected", not the "number of rows changed". Moreover, to make it a real row count, we now reset the counter when the operation is neither DML nor DDL. For example, before this patch the behavior was:
INSERT INTO t VALUES (1), (2), (3);
START TRANSACTION;
SELECT changes();
---
- - [3]
...
As one can see, the row counter remained unchanged since the last INSERT operation. Now START TRANSACTION sets it to 0, and as a consequence row_count() also returns 0. Closes #2181
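In spirit, the counter now behaves like this sketch (class and statement-kind names are hypothetical):

```python
class ChangesCounter:
    """Track the row count reported by changes()/row_count()."""
    def __init__(self):
        self.value = 0

    def on_statement(self, kind, affected_rows=0):
        if kind in ("dml", "ddl"):
            self.value = affected_rows
        else:
            # Non-DML/DDL statements (SELECT, START TRANSACTION, ...)
            # now reset the counter instead of leaving a stale value.
            self.value = 0

    def changes(self):
        return self.value
```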
-
Nikita Pettik authored
Part of #2181
-
Nikita Pettik authored
In our SQL implementation REPLACE acts as DELETE + INSERT, so we should account for it as two row changes. Needed for #2181
-
Nikita Pettik authored
We have an agreement that each successful DDL operation returns 1 (one) as its row count (via the IProto protocol or the changes() SQL function), regardless of the number of other objects created (e.g. indexes, sequences, FK constraints etc). Needed for #2181
-
N.Tatunov authored
GLOB is a legacy extension for LIKE from SQLite. As we want our SQL to be close to ANSI SQL & LIKE to depend on collations, we do not want to support it. This patch totally removes it from Tarantool along with any mentions of it. Part of #3589 Part of #3572
-
N.Tatunov authored
Currently the function that compares a pattern and a string for the GLOB & LIKE operators doesn't work properly. It uses an ICU reading function that was assumed to have different return codes, and the implementation of the comparison termination doesn't take some special cases into account, hence in those cases it works improperly. With the patch applied, an error is returned when there is an invalid UTF-8 symbol in the pattern, and a pattern containing only valid UTF-8 symbols will not match a string that contains an invalid symbol. Closes #3251 Closes #3334 Part of #3572
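The intended behavior for invalid UTF-8 can be sketched as follows (an illustrative model only; the real code works over ICU on C byte buffers, and `sql_like` here is a hypothetical name):

```python
import re

def sql_like(pattern: bytes, value: bytes) -> bool:
    """LIKE over UTF-8 byte strings: '%' matches any run of characters,
    '_' matches exactly one. Per the patch: an invalid pattern is an
    error, and an invalid value simply never matches."""
    try:
        pat = pattern.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("invalid UTF-8 in LIKE pattern")
    try:
        text = value.decode("utf-8")
    except UnicodeDecodeError:
        return False
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pat
    )
    return re.fullmatch(regex, text, re.DOTALL) is not None
```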
-
- Nov 14, 2018
-
-
Nikita Pettik authored
After introducing separate method in space's vtab to fetch next rowid value, lets use it in SQL internals. This allows us to fix incorrect results of queries involving storing equal tuples in ephemeral spaces. Closes #3297
-
Nikita Pettik authored
Ephemeral spaces are extensively used in SQL to store intermediate results of query processing. To keep things simple, they feature only one unique index (the primary one) which covers all fields. However, an ephemeral space can be used to store non-unique entries. In this case, one additional field is added to the end of the stored data: [field1, ... fieldn, rowid]. Note that it can't be added to the beginning of the tuple, since the data in an ephemeral space may need to be kept sorted. Previously, index_max() was used to generate a proper rowid. However, that is obviously the wrong way to do it. Hence, let's add a simple integer counter to the memtx space (ephemeral spaces are valid only for the memtx engine) and introduce a vtab method to fetch the next rowid value. Needed for #3297
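A toy model of the scheme (illustrative only; the real ephemeral space is a memtx C structure):

```python
class EphemeralSpace:
    """Duplicates are allowed by appending a rowid as the LAST field:
    [field1, ..., fieldn, rowid]. The rowid comes from a monotonic
    per-space counter rather than from index_max()."""
    def __init__(self):
        self._next_rowid = 0
        self._rows = []

    def insert(self, *fields):
        self._rows.append((*fields, self._next_rowid))
        self._next_rowid += 1

    def select_sorted(self):
        # Because the rowid is the last field, sorting still orders
        # by the data fields first; this is why the rowid can't go
        # at the beginning of the tuple.
        return sorted(self._rows)
```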
-
- Nov 13, 2018
-
-
Georgy Kirichenko authored
Return a valid Lua error if something fails during SQL function creation. Closes #3724
-
Serge Petrenko authored
In box_cfg() we have a call to gc_set_wal_watcher(), which creates pipes between 'wal' and 'tx' under the hood using cbus_pair(). While the pipes are being created, the fiber calling gc_set_wal_watcher() will process all the messages coming to the 'tx' thread from iproto. This is wrong, since we have a separate fiber pool to handle iproto messages, and background fibers shouldn't participate in processing these messages. For example, this causes occasional credential corruption in the fiber executing box_cfg(). Since the tx fiber pool is already created by the time gc_set_wal_watcher() is called, we may forbid message processing for the fiber which calls the function; one of the tx fiber pool fibers will wake us up when the pipes are created. Closes #3779
-
- Nov 09, 2018
-
-
Mergen Imeev authored
Test sql/iproto.test.lua fails due to its result-file being a bit outdated. Follow up #2618
-
Vladislav Shpilevoy authored
It makes no sense to store it here. SQL transaction specific things shall be taken from global txn object, as any transaction specific things. Follow up #2618
-
Mergen Imeev authored
According to the documentation, some JDBC functions have the ability to return all ids that were generated by an executed INSERT statement. This patch gives a way to implement such functionality: after this patch all ids autogenerated during VDBE execution will be saved and returned through IPROTO. Closes #2618
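The contract can be illustrated with a minimal sketch (hypothetical names; the real implementation collects the ids inside the VDBE and ships them in IPROTO metadata):

```python
class AutoincrementTable:
    """Collect the ids autogenerated during one INSERT statement so
    they can be handed back to the client."""
    def __init__(self):
        self._next_id = 1
        self._rows = {}

    def insert_many(self, values):
        generated = []
        for v in values:
            rid = self._next_id
            self._next_id += 1
            self._rows[rid] = v
            generated.append(rid)
        # The list of generated ids accompanies the statement result.
        return generated
```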
-
Vladislav Shpilevoy authored
Now txn_commit is judge, jury and executioner: it both commits and rolls back data, and collects it by calling fiber_gc, which destroys the region. But SQL wants to use some transactional data after commit, namely the autogenerated identifiers: a list of sequence values generated for autoincrement columns and explicit sequence:next() calls. It is possible to store the list in malloced memory inside the Vdbe, but that complicates deallocation. It is much more convenient to store all transactional data on the transaction memory region, so it is freed together by fiber_gc. After this patch is applied, the Vdbe takes care of txn memory deallocation in a finalizer routine. Between commit and finalization, transactional data can be serialized anywhere it is needed. Needed for #2618
-
- Nov 07, 2018
-
-
Sergei Voronezhskii authored
Closes: #3761
-
- Nov 05, 2018
-
-
Vladislav Shpilevoy authored
Closes #3790
-
- Nov 03, 2018
-
-
Vladimir Davydov authored
-
Vladimir Davydov authored
-
Alexander Turenko authored
FreeBSD 10.4 has no libdl.so. Fixes #3750.
-
Alexander Turenko authored
FreeBSD does not include headers recursively, so we need to include it explicitly at least for using IPPROTO_UDP macro. Thanks Po-Chuan Hsieh (@sunpoet) for the fix proposal (PR #3739). Fixes #3677.
-
Alexander Turenko authored
-
- Nov 02, 2018
-
-
Mergen Imeev authored
If a field isn't defined by the space format, then in the case of multiple indexes the field option is_nullable was the same as for the last index that defined it. This is wrong, as it should be 'true' only if it is 'true' for all indexes that define it. Closes #3744.
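The corrected rule amounts to a logical AND over the indexes (a sketch with hypothetical names):

```python
def field_is_nullable(format_nullable, index_nullables):
    """is_nullable for a field: the space format, when it defines the
    field, is authoritative; otherwise the field is nullable only if
    EVERY index defining it allows NULL, not just the last one."""
    if format_nullable is not None:
        return format_nullable
    return all(index_nullables)
```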
-
Nikita Pettik authored
Let's evaluate an expression's type during the processing of the expression's AST and code generation. This allows us to calculate the data types of the resulting columns and export them as IProto meta alongside the column names. Correct types are also returned for binding parameters. Note that the NULL literal has type "BOOLEAN". This was done on purpose: different DBs interpret NULL's type in different ways (some of them use INT; others, VARCHAR; still others, UNKNOWN). We've decided that NULL is rather of type "BOOLEAN", since NULL is a kind of subset of the "BOOLEAN" values: any comparison with NULL results in neither TRUE nor FALSE, but in NULL. Part of #2620
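The three-valued logic behind that reasoning, as a minimal sketch (None stands in for SQL NULL):

```python
def sql_eq(a, b):
    """Three-valued SQL equality: any comparison involving NULL yields
    NULL (neither True nor False), which is the rationale for giving
    the NULL literal the BOOLEAN type in the reported metadata."""
    if a is None or b is None:
        return None
    return a == b
```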
-