- Aug 23, 2019
-
Alexander Turenko authored
app-tap.tarantoolctl.test.lua fails after 17df9edf ('tarantoolctl: allow to start instances with delayed box.cfg{}'). This commit fixes the test case that checked that an error is reported when box.cfg() is not called in an instance script. Follows up #4435. Fixes #4448.
-
- Aug 22, 2019
-
Yaroslav Dynnikov authored
This change is related to #4391. The objective was to collect additional information about modules, but that is hard to do without changing the API. This patch makes it possible to monkey-patch report generation and achieve the same result without interfering with the daemon's behavior.
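For illustration, a minimal Lua sketch of the monkey-patching pattern the message refers to. The daemon table below is a stand-in (in a real setup you would require() the daemon module, whose name is not shown here), and the report fields are assumptions:
```
-- Stand-in for the daemon module; the real module is not named here.
local daemon = {
    generate_feedback = function()
        return {tarantool_version = _TARANTOOL}
    end,
}

-- Monkey-patch: wrap report generation to add module information
-- without interfering with the daemon's own scheduling or sending.
local original_generate = daemon.generate_feedback
daemon.generate_feedback = function(...)
    local report = original_generate(...)
    report.modules = {myapp = {version = '1.0.0'}}  -- extra module information
    return report
end
```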
-
Max Melentiev authored
There is a problem with calculating the .msg_namelen field of the msghdr struct. Instead of .msg_name = &sa with .msg_namelen = sizeof(sa.sun_family) + strlen(sd_unix_path), msg_namelen must be set to sizeof(sa), a larger value than the current invalid one. It works on Linux, but when I tried to enable this feature for macOS it didn't (maybe because of a different order of fields in the struct). Instead of fixing the calculation, I've replaced the original sendmsg call with sendto, because it's a convenient shortcut which simplifies the code and can prevent such mistakes. Required for #4436
-
Max Melentiev authored
`tarantoolctl start` patches box.cfg two times: 1) before the init script, to set default values and enforce some others; 2) after the init script, to prevent changing pid_file at runtime. The second patching fails if an init file does not call box.cfg{} before it finishes. This can take place in apps with managed instances which receive their configuration from an external server.

This patch moves the second patching into the box.cfg wrapper created during the first patching. The second patching is thus performed only after box.cfg{} has been invoked, so it does not fail anymore. However, there is a relatively minor flaw for applications that invoke box.cfg{} after the init script has finished: `tarantoolctl start` goes to background only when box.cfg{} is called. This is not the case for daemon management systems like systemd, as they handle backgrounding on their side.

Fixes #4435

@TarantoolBot document
Title: tarantoolctl allows to start instances without a box.cfg{} call
tarantoolctl now works for instances without box.cfg{} or with a delayed box.cfg{} call. These can be managed instances which receive their configuration from an external server. For such instances `tarantoolctl start` goes to background once box.cfg{} is called, so it will wait until the options for box.cfg are received. However, this is not the case for daemon management systems like systemd, as they handle backgrounding on their side.
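Below is a minimal Lua sketch of the wrapping idea described above. It is an illustration, not tarantoolctl's actual code; the default pid file name is an assumption:
```
local default_pid_file = 'tarantool.pid'  -- assumption for the sketch

-- First patching: wrap box.cfg so defaults are enforced whenever the
-- instance script (or the app itself, later) finally calls it.
local orig_cfg = box.cfg
box.cfg = function(cfg)
    cfg = cfg or {}
    cfg.pid_file = cfg.pid_file or default_pid_file
    local res = orig_cfg(cfg)
    -- Second patching happens only now, after box.cfg{} has really been
    -- invoked: silently strip any attempt to change pid_file at runtime.
    local configured_cfg = box.cfg
    box.cfg = function(new_cfg)
        new_cfg = new_cfg or {}
        new_cfg.pid_file = nil
        return configured_cfg(new_cfg)
    end
    return res
end
```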
-
Nikita Pettik authored
This function implements a common way of precise comparison between unsigned integer and floating point (double) values. Currently it is used in tuple comparators, but we need the same thing in SQL. Hence, let's move it to the header containing the set of utilities.
-
Nikita Pettik authored
To compare floating point values and integers in SQL, the functions compare_uint_float() and compare_int_float() are used. Unfortunately, they contain a bug connected with checking a border case: it is not correct to cast UINT64_MAX (2^64 - 1) to double. The proper way is to use exp2(64) (i.e. 2^64 as a double) or a predefined floating point constant. Rather than fix a function which in turn may contain other tricky places, let's use the already verified double_compare_uint64() instead, so that we have a unified way of integer<->float comparison.
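A short LuaJIT illustration of the border case (run in the tarantool console): UINT64_MAX has no exact double representation, so casting it rounds up to 2^64.
```
local u64_max = 18446744073709551615ULL   -- UINT64_MAX, i.e. 2^64 - 1
print(tonumber(u64_max))                  -- ~1.8446744073709552e+19
print(tonumber(u64_max) == 2^64)          -- true: the cast lands exactly on 2^64
```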
-
Alexander Turenko authored
`apt-get update <...>` fails on Debian Buster on the docker_bootstrap goal (see #4331 for a similar issue). Added a description of how to change dependencies in .travis.mk.
-
Nikita Pettik authored
Closes #4422

@TarantoolBot document
Title: Introduce <WITH ENGINE> clause for CREATE TABLE statement
To allow the user to specify the engine as a per-table option, the CREATE TABLE statement has been extended with an optional <WITH ENGINE = engine_name> clause. This clause comes at the end of the CREATE TABLE statement. For instance:

CREATE TABLE t_vinyl (id INT PRIMARY KEY) WITH ENGINE = 'vinyl';

The engine name is considered to be a string literal, ergo it should be enclosed in single quotation marks and be lower-case. Note that an engine specified in the WITH ENGINE clause overrides the default engine, which is set via 'pragma sql_default_engine'.
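For reference, a hedged example of issuing the statement from the Lua console, assuming the box.execute() SQL entry point and the pragma mentioned above:
```
box.cfg{}
box.execute([[pragma sql_default_engine='memtx']])
box.execute([[CREATE TABLE t_vinyl (id INT PRIMARY KEY) WITH ENGINE = 'vinyl']])
-- t_vinyl is created with the vinyl engine despite the memtx default
```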
-
Nikita Pettik authored
Error logging in engine_find() seems to be redundant: the error message is displayed twice (since its callers always push an error on top). Let's remove this duplication.
-
Nikita Pettik authored
Name duplicates are allowed for savepoints (both in our SQL implementation and in the ANSI specification). ANSI SQL states that the previous savepoint with the same name should be deleted. What is more, our documentation confirms this fact and says that "...it is released before the new savepoint is set." Unfortunately, that is not what happens: currently the old savepoint remains in the list. For instance:

SAVEPOINT t;
SAVEPOINT t;
RELEASE SAVEPOINT t;
RELEASE SAVEPOINT t; -- no error is raised

Let's fix this and remove the old savepoint from the list.
-
Nikita Pettik authored
This allows us to completely remove the SQL-specific struct Savepoint and use the original struct txn_savepoint instead.
-
Nikita Pettik authored
This procedure is processed in several steps. First, we add a name to struct txn_savepoint, since we should be capable of operating on named savepoints (which in turn is an SQL feature); anonymous savepoints (in the sense of name absence) remain valid as well. Then, we add a list (implemented as a stailq) of savepoints to struct txn: it allows us to find a savepoint by its name. Finally, we patch the rollback-to/release savepoint routines: for rollback, the tail of the list containing savepoints is cut off (but the subject of the rollback routine remains in the list); for release, we cut the tail including the node being released.
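A hedged illustration of the rollback-to vs. release semantics described above, using SQL savepoints from the Lua console (assuming box.execute() and that the statements run inside one transaction in the same fiber):
```
box.execute([[START TRANSACTION]])
box.execute([[SAVEPOINT a]])
box.execute([[SAVEPOINT b]])
-- Rollback cuts the savepoints created after 'a'; 'a' itself stays in the list.
box.execute([[ROLLBACK TO SAVEPOINT a]])
-- Release cuts the tail of the list including 'a' itself.
box.execute([[RELEASE SAVEPOINT a]])
box.execute([[COMMIT]])
```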
-
Nikita Pettik authored
We are going to merge struct psql_txn with struct txn as a part of the SQL integration into NoSQL, so let's move the counter of deferred foreign key violations directly to struct txn.
-
Serge Petrenko authored
Closes #4413

@TarantoolBot document
Title: update operations on decimal fields.
tuple:update and space:update now support decimal operands for arithmetic operations ('+' and '-'). The syntax is as usual:
```
d = box.tuple.new(decimal.new('1'))
---
...
d:update{{'+', 1, decimal.new('0.5')}}
---
- [1.5]
...
```
Insertion ('!') and assignment ('=') are also supported:
```
a = decimal.new('1')
---
...
b = decimal.new('1e10')
---
...
c = decimal.new('1e-10')
---
...
d = box.tuple.new{5, a, 6, b, 7, c, "string"}
---
...
d
---
- [5, 1, 6, 10000000000, 7, 0.0000000001, 'string']
...
d:update{{'!', 3, dec.new('1234.5678')}}
---
- [5, 1, 1234.5678, 6, 10000000000, 7, 0.0000000001, 'string']
...
d:update{{'=', -1, dec.new('0.12345678910111213')}}
---
- [5, 1, 6, 10000000000, 7, 0.0000000001, 0.12345678910111213]
...
```
When performing an arithmetic operation ('+', '-') where either the updated field or the operand is decimal, the result will be decimal. When both the updated field and the operand are decimal, the result will, of course, be decimal.
-
Serge Petrenko authored
Closes #4333

@TarantoolBot document
Title: Document decimal field type.
Decimals may now be stored in spaces. A corresponding field type is introduced: 'decimal'. Decimal values are also allowed in 'scalar', 'any' and 'number' fields. The 'decimal' field type is appropriate for both memtx HASH and TREE indices, as well as for the vinyl TREE index.

To create an index 'pk' over a decimal field, say:
```
tarantool> box.space.test:create_index('pk', {parts={1, 'decimal'}})
---
- unique: true
  parts:
  - type: decimal
    is_nullable: false
    fieldno: 1
  id: 0
  space_id: 512
  type: TREE
  name: pk
...
```
Now you can insert some decimal values:
```
tarantool> for i = 1,10 do
         > box.space.test:insert{decimal.new((i-5)/10)}
         > end
---
...
tarantool> box.space.test:select{}
---
- - [-0.4]
  - [-0.3]
  - [-0.2]
  - [-0.1]
  - [0]
  - [0.1]
  - [0.2]
  - [0.3]
  - [0.4]
  - [0.5]
...
```
Decimals may also be inserted into `scalar` and `number` fields. In this case all the number values are sorted correctly:
```
tarantool> box.schema.space.create('test')
tarantool> box.space.test:create_index('pk', {parts={1, 'number'}})
tarantool> box.space.test:insert{-1.0001, 'number'}
---
- [-1.0001, 'number']
...
tarantool> box.space.test:insert{decimal.new(-1.00001), 'decimal'}
---
- [-1.00001, 'decimal']
...
tarantool> box.space.test:insert{-1, 'number'}
---
- [-1, 'number']
...
tarantool> box.space.test:insert{decimal.new(-0.999), 'decimal'}
---
- [-0.999, 'decimal']
...
tarantool> box.space.test:insert{-0.998, 'number'}
---
- [-0.998, 'number']
...
tarantool> box.space.test:insert{-0.9, 'number'}
---
- [-0.9, 'number']
...
tarantool> box.space.test:insert{-0.95, 'number'}
---
- [-0.95, 'number']
...
tarantool> box.space.test:insert{decimal.new(-0.92), 'decimal'}
---
- [-0.92, 'decimal']
...
tarantool> box.space.test:insert{decimal.new(-0.971), 'decimal'}
---
- [-0.971, 'decimal']
...
tarantool> box.space.test:select{}
---
- - [-1.0001, 'number']
  - [-1.00001, 'decimal']
  - [-1, 'number']
  - [-0.999, 'decimal']
  - [-0.998, 'number']
  - [-0.971, 'decimal']
  - [-0.95, 'number']
  - [-0.92, 'decimal']
  - [-0.9, 'number']
...
```
Uniqueness is also preserved between decimals and other number types:
```
tarantool> box.space.test:insert{-0.92}
---
- error: Duplicate key exists in unique index 'pk' in space 'test'
...
tarantool> box.space.test:insert{decimal.new(-0.9)}
---
- error: Duplicate key exists in unique index 'pk' in space 'test'
...
```
You can also set decimal fields in space format:
```
tarantool> _ = box.schema.space.create('test')
---
...
tarantool> _ = box.space.test:create_index('pk')
---
...
tarantool> box.space.test:format{{name='id', type='unsigned'}, {name='balance', type='decimal'}}
---
...
tarantool> box.space.test:insert{1}
---
- error: Tuple field 2 required by space format is missing
...
tarantool> box.space.test:insert{1, 'string'}
---
- error: 'Tuple field 2 type does not match one required by operation: expected decimal'
...
tarantool> box.space.test:insert{1, 1.2345}
---
- error: 'Tuple field 2 type does not match one required by operation: expected decimal'
...
tarantool> box.space.test:insert{1, decimal.new('1337.420')}
---
- [1, 1337.420]
...
```
-
Serge Petrenko authored
Update the decNumber library, add methods to convert decimals to uint64_t and int64_t, and add unit tests. Also replace the decimal_round() function with decimal_round_with_mode() to allow setting the rounding mode. We need to round with mode DEC_ROUND_DOWN in to_int64 conversions in order to be consistent with double-to-int conversions. This will be needed to compute hints for decimal fields. Prerequisite #4333
-
Serge Petrenko authored
This patch adds the methods necessary to encode and decode decimals to MsgPack. The MsgPack EXT type (MP_EXT), together with a new extension type MP_DECIMAL, is used as the record header. The decimal MsgPack representation looks like this:

+--------+-------------------+------------+===============+
| MP_EXT | length (optional) | MP_DECIMAL | PackedDecimal |
+--------+-------------------+------------+===============+

The whole record may be encoded and decoded with mp_encode_decimal() and mp_decode_decimal(). This is equivalent to performing mp_encode_extl()/mp_decode_extl() on the first 3 fields and decimal_pack()/decimal_unpack() on the PackedDecimal field. It is also possible to encode and decode decimals to/from MsgPack in Lua, which means you can insert decimals into spaces, but only into unindexed fields for now.

Follow up #692
Part of #4333
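A minimal Lua sketch of the round-trip, assuming the built-in msgpack module picks up the new MP_DECIMAL support as the message states:
```
local msgpack = require('msgpack')
local decimal = require('decimal')

local d = decimal.new('1.000000000000000000000000000000000001')
local encoded = msgpack.encode(d)        -- MP_EXT header + MP_DECIMAL + PackedDecimal
local decoded = msgpack.decode(encoded)
assert(decoded == d)                     -- the value survives the round-trip
```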
-
- Aug 21, 2019
-
Serge Petrenko authored
Prior to this patch, format checking was broken for 'i' (integer) and 'N' (big-endian integer): pickle.pack() rejected negative integers with these formats. Fix this.
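A short console illustration of the fixed behavior (a hedged sketch, not taken from the test):
```
local pickle = require('pickle')
-- Before the fix, negative integers were rejected for these formats;
-- now they pack fine.
local native = pickle.pack('i', -1)   -- 'i': integer
local big    = pickle.pack('N', -1)   -- 'N': big-endian integer
print(#native, #big)                  -- sizes of the packed representations
```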
-
Serge Petrenko authored
When a number having a positive exponent is encoded, the internal decPackedFromNumber function returns a negative scale, which differs from the scale returned by decimal_scale(). This leads to errors in decoding. Account for a negative scale in decimal_pack() and decimal_unpack(). Follow-up #692
-
Serge Petrenko authored
Previously, decimal comparison with nil failed with the following error: `expected decimal, number or string as 2 argument`. Fix this: throw a more verbose error in the case of '>', '<', '>=', '<=' and fix the equality check. Follow-up #692
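A hedged console sketch of the behavior this fixes; the exact post-fix results are an assumption based on the message:
```
local decimal = require('decimal')
local d = decimal.new('1.1')
-- Equality with nil should no longer raise; the values are simply not equal.
print(d == nil)
-- Ordered comparisons with nil still fail, but with a more verbose error.
print(pcall(function() return d > nil end))
```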
-
Mergen Imeev authored
Hold libcurl-7.65.3. This version is not affected by the following issues:
* #4180 ('httpc: redirects are broken with libcurl-7.30 and older');
* #4389 ('libcurl memory leak');
* #4397 ('HTTPS seem to be unstable').

After this patch libcurl will be statically linked when the ENABLE_BUNDLED_LIBCURL option is set. This option is set by default.

Closes #4318

@TarantoolBot document
Title: Tarantool dependency list was changed
* Added build dependencies: autoconf, automake, libtool, zlib-devel (zlib1g-dev on Debian).
* Added runtime dependencies: zlib (zlib1g on Debian).
* Removed build dependencies: libcurl-devel (libcurl4-openssl-dev on Debian).
* Removed runtime dependencies: curl.

The reason is that we now use a compiled-in libcurl: we don't depend on a system libcurl, but inherit its dependencies.
-
Alexander Turenko authored
This is a workaround for a systemd-nss issue: https://github.com/systemd/systemd/issues/9585

The following error is observed in app-tap/pwd.test.lua on Fedora 29 (glibc-2.28-26.fc29, systemd-239-12.git8bca462.fc29) when tarantool is linked with libcurl w/o GSS-API support:

| builtin/pwd.lua:169: getpwall failed [errno 2]: No such file or directory

Such a tarantool build lacks the libselinux.so.1 transitive dependency (tarantool -> libcurl.so.4 -> libgssapi_krb5.so.2 -> libkrb5support.so.0 -> libselinux.so.1), and strace shows the following calls when pwd.getpwall() is invoked the first time:

| openat(AT_FDCWD, "/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 7A
| <...>
| access("/etc/selinux/config", F_OK) = -1 ENOENT (No such file or directory)

This looks like a part of the libselinux initialization code and is invoked during execution of the last ffi.C.getpwent() call, which returns `nil` as a result and leaves errno set to ENOENT. Our pwd module sets errno to zero before the getpwent() call and expects that it will be preserved if no unrecoverable errors occur. It seems this expectation is not met due to the systemd-nss issue linked above.

The second and subsequent getpwall() calls succeed, so the commit adds an extra getpwall() call during pwd module load. This workaround is disabled on FreeBSD due to another issue: #4428 ('getpwall() hangs on FreeBSD 12').

See also the previous related commit: efccac69 ('lua: fix error handling in getpwall and getgrall').

Follows up #3766. Part of #4318.
-
Alexander V. Tikhonov authored
Added a static build using a Dockerfile on CentOS 7, for the release commit criteria only. Added a cleanup of cmake-generated CMakeCache.txt files and CMakeFiles directories, to avoid the locally created cmake setup failing inside the Docker container after the whole tarantool path is copied into it. Added testing to the static build: it runs only when the RUN_TESTS environment variable is set to a non-empty value, and is used in the gitlab-ci job to run the tests after the build. Closes #3668
-
- Aug 20, 2019
-
Vladimir Davydov authored
We remove an LSM tree from the scheduler queues as soon as it is dropped, even though the tree may hang around for a while after that, e.g. because it is pinned by an iterator. As a result, once an index is dropped, it won't be dumped anymore - its memory level will simply disappear without a trace. This is okay for now, but to implement snapshot iterators we must make sure that an index will stay valid as long as there's an iterator that references it. That said, let's delay removal of an index from the scheduler queues until it is about to be destroyed.
-
Vladimir Davydov authored
There's no reason to use a special method instead of the generic space_execute_dml for applying rows received from a master during the initial join stage. Moreover, using the special method results in not running the space.before_replace trigger, which makes it impossible to, for example, update the space engine on a replica; see the on_schema_init test of the replication test suite. So this patch removes the special method altogether and makes the code that used it switch to space_execute_dml. Closes #4417
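A hedged Lua sketch of the use case mentioned above: a replica overrides the engine of a space it receives during join by installing a before_replace trigger on _space from box.ctl.on_schema_init(). The space name and replication URI are placeholders:
```
box.ctl.on_schema_init(function()
    box.space._space:before_replace(function(old, new)
        -- In a _space tuple, field 3 is the space name and field 4 is the engine.
        if new ~= nil and new[3] == 'test' then
            return new:update{{'=', 4, 'vinyl'}}
        end
        -- returning nothing leaves other operations unchanged
    end)
end)
box.cfg{replication = 'replicator:password@master:3301'}  -- placeholder URI
```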
-
Vladimir Davydov authored
We must enable SMALL_DELAYED_FREE_MODE to safely use a memtx snapshot iterator. Currently, we do that in checkpoint-related callbacks, but if we want to reuse snapshot iterators for other purposes, e.g. feeding a read view to a newly joined replica, we'd better hide this code behind snapshot iterator constructors.
-
Vladimir Davydov authored
Currently, to prevent an index from going away while it is being written to a snapshot, we postpone memtx_gc_task's free() invocation until checkpointing is complete, see commit 94de0a08 ("Don't take schema lock for checkpointing"). This works fine, but makes it rather difficult to reuse snapshot iterators for other purposes, e.g. feeding a consistent read view to a newly joined replica. Let's instead use index reference counting for pinning indexes for checkpointing. A reference is taken in a snapshot iterator constructor and released when the snapshot iterator is destroyed.
-
Vladimir Davydov authored
This fake LSN counter, which is used for assigning LSNs to Vinyl statements during the initial join stage, was introduced a long time ago, when LSNs were used as identifiers for lsregion allocations and hence were supposed to grow strictly monotonically with each new transaction. Later on, they were reused for assigning unique LSNs to identify indexes in vylog. These days, we don't need initial join LSNs to be unique, as we switched to generations for lsregion allocations while in vylog we now use LSNs only as an incarnation counter, not as a unique identifier. That said, let's zap vy_env::join_lsn and simply assign 0 to all statements received during the initial join stage. To achieve that, we just need to relax an assertion in vy_tx_commit() and remove the assumption that an LSN can't be zero in the write iterator implementation.
-
Vladimir Davydov authored
vinyl_iterator keeps a reference to the LSM tree it was created for until it is destroyed, which may take indefinitely long in case the iterator is used in Lua. Actually, we don't need to keep a reference to the index for the whole iterator lifetime, because the iterator_next() wrapper guarantees that iterator->next won't be called for a dropped index. What we need to do is keep a reference while we are yielding on a disk read, similarly to vinyl_index_get(). Currently, pinning an index for an indefinitely long time is harmless, because an LSM tree is exempted from dump/compaction as soon as it is dropped, so we just pin some memory, that's all. However, the following patches are going to enable dump/compaction for dropped but pinned indexes in order to implement a snapshot iterator, so we'd better relax the dependency of an iterator on an index now. While we are at it, let's remove the env and lsm members of the vinyl_iterator struct: lsm can be accessed via the vy_read_iterator embedded in the struct, while env is only needed to access iterator_pool, so we'd better store a pointer to the pool in vinyl_iterator instead.
-
Kirill Shcherbatov authored
The SQL_FUNC_SLOCHNG flag was useful for datetime functions, which are currently not supported, so it can be removed. Needed for #2200, #4113, #2233
-
Kirill Shcherbatov authored
Renamed the OP_Function opcode to OP_BuiltinFunction in order to introduce a new OP_Function operation with a new meaning: the new OP_Function will call Tarantool functions via the new port-based API, while the legacy OP_BuiltinFunction remains an efficient implementation of the SQL built-in functions. Needed for #2200, #4113, #2233
-
Kirill Shcherbatov authored
Tarantool's SQL engine generates different VDBE bytecode for ..COUNT(*).. and ..COUNT(fieldname).. operations: the first one produces a lightweight OP_Count operation that uses a native mechanism to report the count of records in an index, while the second one pessimistically opens a space read iterator and uses the COUNT aggregate function. A helper routine, is_simple_count, decides whether such an optimisation is correct. It used to use the SQL_FUNC_COUNT flag to mark a dummy (non-functional) function entry with 0 arguments. This patch changes the SQL_FUNC_COUNT semantics: now it is a marker of any COUNT function, while is_simple_count relies on the count of arguments to distinguish aggregate and non-aggregate functions. Needed for #2200, #4113, #2233
-
Kirill Shcherbatov authored
A new dispatcher function, trim_func, calls the corresponding trim_ function implementation depending on argc, the count of arguments. This is an important step towards getting rid of function name overloading, which is required to replace the FuncDef cache with Tarantool's function cache. Needed for #2200, #4113, #2233
-
Kirill Shcherbatov authored
This patch does two things: it renames the existing scalar min/max functions and reserves names for them in the NoSQL cache. Moreover, it is an important step towards getting rid of function name overloading, which is required to replace the FuncDef cache with Tarantool's function cache. Closes #4405 Needed for #2200, #4113, #2233

@TarantoolBot document
Title: Scalar functions MIN/MAX are renamed to LEAST/GREATEST
The MIN/MAX functions are typically used only as aggregate functions in other RDBMSs (MSSQL, PostgreSQL, MySQL, Oracle), while Tarantool's SQLite legacy code also used them as the GREATEST/LEAST scalar functions. Now this is fixed.
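A brief usage note (hedged, assuming the box.execute() SQL entry point):
```
-- The scalar variants now go by LEAST/GREATEST...
box.execute([[SELECT LEAST(1, 2, 3), GREATEST(1, 2, 3)]])  -- row: [1, 3]
-- ...while MIN()/MAX() remain available as aggregate functions only.
```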
-
Kirill Shcherbatov authored
The SQL_PreferBuiltin flag is redundant (because built-in names are forbidden for UDFs), so we may remove it. Needed for #4113, #2200, #2233
-
Kirill Shcherbatov authored
Vdbe field ref is a dynamic index over tuple fields, storing offsets to each field and filling the offset array on demand. It is heavily used in SQL, because SQL strongly relies on fast and repetitive access to any field, not only indexed ones. There is an optimisation for the case when a requested field fieldno is indexed and the tuple itself stores the offset to the field in its own small field map used by indexes: vdbe_field_ref then uses that map to retrieve the offset value without decoding anything. But when SQL requests any field > fieldno, vdbe_field_ref decodes the tuple from the beginning in the worst case, even having previously accessed fieldno, although it could start decoding from the latter. The updated vdbe_field_ref fetcher class uses a bitmask of initialized slots to use pre-calculated offsets when possible. This speeds up SQL in some corner cases and doesn't hurt performance in general scenarios. Closes #4267
-
- Aug 19, 2019
-
Mons Anderson authored
-
- Aug 16, 2019
-
Konstantin Osipov authored
Before this patch, the snapshot interval was set randomly within the checkpoint_interval period. However, after box.snapshot(), the next snapshot was scheduled exactly checkpoint_interval seconds from the current time. Many orchestration scripts snapshot the entire cluster right after deployment to take a backup. This kills the randomness, since all instances begin to count the next checkpoint time from the current time. Randomize the next checkpoint time after a manual snapshot as well. Fixes gh-4432
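A minimal Lua sketch of the scheduling idea (not the server's actual code): pick the next checkpoint at a random point within the interval instead of exactly checkpoint_interval seconds from now, including right after box.snapshot().
```
-- Randomize within the interval so instances don't all checkpoint in sync.
local function next_checkpoint_time(now, checkpoint_interval)
    return now + checkpoint_interval * math.random()
end
```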
-
Alexander Turenko authored
pretest_clean: preserve GREATEST and LEAST built-in functions. Needed for #4405.
-