- Feb 21, 2020
-
-
Alexander V. Tikhonov authored
Our S3-based repositories now mirror the packagecloud.io repository structure. This will allow us to migrate from packagecloud.io without overly complicated redirection rules on the web server serving download.tarantool.org. Deploy source packages (*.src.rpm) into a separate 'SRPM' repository, like packagecloud.io does. Changed the repository signing key from a subkey to the public key and moved it to the gitlab-ci environment. Follows up #3380
-
- Feb 20, 2020
-
-
Cyrill Gorcunov authored
To look similar to txn_complete. Acked-by:
Konstantin Osipov <kostja.osipov@gmail.com> Acked-by:
Nikita Pettik <korablev@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
Cyrill Gorcunov authored
Use the on_complete prefix instead of on_done, since 'done' is too general while we're trying to complete a write procedure. It is also more consistent with the txn_complete name. Acked-by:
Konstantin Osipov <kostja.osipov@gmail.com> Acked-by:
Nikita Pettik <korablev@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
Cyrill Gorcunov authored
We currently return an int64_t whose value is only ever 0 or -1; there is no need for such a big return type, a plain int is enough. Acked-by:
Nikita Pettik <korablev@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
Cyrill Gorcunov authored
Using void explicitly in functions that take no arguments allows the compiler to optimize the code a bit and not assume there might be variadic arguments. Moreover, in commit e070cc4d we dropped the arguments from txn_begin() but didn't update vy_scheduler.c. The compiler didn't complain because it assumed there were varargs. Acked-by:
Konstantin Osipov <kostja.osipov@gmail.com> Acked-by:
Nikita Pettik <korablev@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com>
-
Alexander V. Tikhonov authored
Found that on 19.02.2020 the APT repositories with packages for Ubuntu 18.10 Cosmic were removed from the Ubuntu archive:

E: The repository 'http://security.ubuntu.com/ubuntu cosmic-security Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu cosmic Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu cosmic-updates Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu cosmic-backports Release' does not have a Release file.

Also found the half-a-year-old announcement of the Ubuntu 18.10 Cosmic EOL: https://fridge.ubuntu.com/2019/07/19/ubuntu-18-10-cosmic-cuttlefish-end-of-life-reached-on-july-18-2019/ Removed Ubuntu 18.10 Cosmic from the gitlab-ci and travis-ci testing.
-
Vladislav Shpilevoy authored
The server used to crash when any option argument was passed with its value concatenated to it, like '-lvalue' and '-evalue' instead of '-l value' and '-e value'. However, this is a valid way of writing values, and the server should not have crashed regardless of their validity. The bug was in the usage of the 'optind' global variable from the getopt() function family. It is not supposed to be used for getting an option's value: it points at the next argv to parse, and the next argv is not the value of the current option, as was the case with '-lvalue' and '-evalue'. For getting the current option's value there is the 'optarg' variable. Closes #4775
-
Vladislav Shpilevoy authored
There was a bug that float +/- float could result in infinity even if the result fit into a double. It was fixed by storing a double or a float depending on the result value. But that fix didn't take the result field type into account. That led to a bug when a double field +/- a value fit the float range and could be stored as a float, resulting in an error at an attempt to create a tuple. Now if a field type is double in the tuple format, a double is always stored, even if the value fits the float range. Follow-up #4701
-
Vladislav Shpilevoy authored
Currently xrow_update sizeof + store are used to calculate the result tuple's size, preallocate it as one monolithic memory block, and save the update tree into it. Sizeof was expected to return the exact memory size needed for the tuple. But this is not workable when the result size of a field depends on its format type, and therefore on its position in the tuple. In that case sizeof would need to care about the tuple format and walk the format tree just like store does now, or the found JSON tree nodes would have to be saved into struct xrow_update_op during the sizeof calculation. All of this would make the sizeof code more complex. The patch makes it possible for sizeof to return the maximal needed size. So, for example, a floating point field now returns the size needed to encode a double, and then store can encode either a double or a float. Follow-up #4701
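The sizeof/store split can be sketched as follows (a simplified model with hypothetical names, not the real xrow_update code): sizeof() returns an upper bound — always room for a MsgPack double — while store() picks the actual encoding. The real MsgPack format stores payloads big-endian; that detail is ignored here for brevity.

```c
#include <math.h>
#include <stddef.h>
#include <string.h>

/* sizeof() side: always reserve the maximum - a tag byte plus an 8-byte
 * double payload - so the result buffer can be preallocated as one block. */
size_t
field_sizeof(void)
{
	return 1 + sizeof(double);
}

/* store() side: encode a 4-byte float when the value survives the
 * round-trip, an 8-byte double otherwise. Returns the size actually used. */
size_t
field_store(unsigned char *out, double v)
{
	float f = (float)v;
	if (!isinf(f) && (double)f == v) {
		out[0] = 0xca; /* MsgPack float32 tag */
		memcpy(out + 1, &f, sizeof(f));
		return 1 + sizeof(f);
	}
	out[0] = 0xcb; /* MsgPack float64 tag */
	memcpy(out + 1, &v, sizeof(v));
	return 1 + sizeof(v);
}
```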
-
Vladislav Shpilevoy authored
The tuple format is now passed to the xrow_update routines. It is going to be used for two purposes:

- find the field types of the result tuple's fields, to decide whether a floating point value should be saved with single or double precision;
- in the future, use the format and the tuple offset map to find target fields in O(1), without decoding anything; this may be especially useful for JSON updates of indexed fields.

For the types, the format is passed to the *_store() functions. The types can't be calculated earlier, because the '!' and '#' operations change the field order: even if a type were calculated while applying operations for field N, an operation '!' or '#' on a field < N would make this calculation useless. Follow-up #4701
-
Vladislav Shpilevoy authored
Before the patch there were the rules:

* float +/- double = double
* double +/- double = double
* float +/- float = float

The rules were applied regardless of the values. That led to a problem: float + float exceeding the maximal float value could fit into a double, but was stored as infinity. The patch makes it so that if a floating point arithmetic result fits into a float, it is stored as a float, and otherwise as a double, regardless of the initial types. This also saves some memory for cases when doubles can be stored as floats, taking 4 fewer bytes. These cases are rare, though, because almost any non-integer value stored in a double has a long garbage tail in its fraction. Closes #4701
-
- Feb 19, 2020
-
-
Vladislav Shpilevoy authored
os.setenv() and os.environ() are the Lua API for extern char **environ; and int setenv(); - The Open Group's standardized access points for environment variables. But there is not a word in the standard about environ never changing, and programs can't rely on that. For example, addition of a new variable may cause a realloc of the whole environ array, and therefore change its pointer value. That was exactly the case in os.environ(): it was using the value of the environ array remembered when Tarantool started, and os.setenv() could realloc the array and turn the saved pointer into garbage. Closes #4733
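The safe pattern is to re-read the `environ` global on every access rather than caching it; an illustrative helper (not Tarantool's actual code) might look like this:

```c
#include <stdlib.h>
#include <string.h>

extern char **environ;

/* environ is not a stable pointer: setenv() may realloc the whole array,
 * so any pointer to it cached at startup can become dangling. This lookup
 * re-reads the global on every call. */
const char *
env_lookup(const char *name)
{
	size_t n = strlen(name);
	for (char **e = environ; *e != NULL; e++) {
		if (strncmp(*e, name, n) == 0 && (*e)[n] == '=')
			return *e + n + 1;
	}
	return NULL;
}
```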
-
Kirill Yukhin authored
Revert "build: introduce LUAJIT_ENABLE_PAIRSMM flag" Related to #4770
-
- Feb 18, 2020
-
-
Alexander V. Tikhonov authored
Enabled Tarantool performance testing on Gitlab-CI for the release/master branches and branches named '*-perf'. For this purpose, the 'perf' and 'cleanup' stages were added into the Gitlab-CI pipeline. Performance testing supports the following benchmarks:

- cbench
- linkbench
- nosqlbench (hash and tree Tarantool run modes)
- sysbench
- tpcc
- ycsb (hash and tree Tarantool run modes)

The benchmarks use scripts from the repository: http://github.com/tarantool/bench-run Performance testing uses docker images built with the docker files from the bench-run repository:

- perf/ubuntu-bionic:perf_master -- parent image with benchmarks only
- perf_tmp/ubuntu-bionic:perf_<commit_SHA> -- child images used for testing Tarantool sources

@Totktonada: Harness and workloads are to be reviewed.
-
Oleg Babin authored
After 7fd6c809 (buffer: port static allocator to Lua), uri started to use static_allocator, a cyclic buffer that is also used in several other modules. However, the situation when the uri.format output is a zero-length string was not handled properly, and ffi.string could return data previously written into the static buffer, because it uses the first zero byte as the string terminator. To prevent this, let's pass the result length explicitly. Closes #4779
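The bug in miniature, as an illustrative C sketch (hypothetical names, not the real allocator): results share one static buffer and no terminating NUL is written, so a reader that scans for the first zero byte — which is what ffi.string(buf) does when called without a length — picks up stale bytes from the previous user.

```c
#include <string.h>

/* One static buffer reused by every caller; zero-initialized at start. */
static char shared_buf[64];

/* Write a result into the shared buffer WITHOUT a trailing NUL and
 * return its length. If the result is empty, the buffer still holds
 * whatever the previous caller wrote. */
size_t
format_into_buf(const char *s)
{
	size_t len = strlen(s);
	memcpy(shared_buf, s, len);
	return len;
}
```

Returning the length explicitly (and reading exactly that many bytes) sidesteps the stale-data problem entirely.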
-
- Feb 17, 2020
-
-
Oleg Babin authored
Some of our users want a native method to check whether a specified value is a decimal. This patch introduces an 'is_decimal' check in the 'decimal' module. Closes #4623 @TarantoolBot document Title: decimal.is_decimal The is_decimal check function returns true if the specified value is a decimal and false otherwise.
-
- Feb 15, 2020
-
-
Olga Arkhangelskaia authored
When json.decode is used with 2 arguments, the 2nd argument leaks into the json configuration of the instance. Moreover, due to the current serializer.cfg implementation it remains invisible when checking the settings via the json.cfg table. This fixes commit 6508ddb7 ('json: fix stack-use-after-scope in json_decode()'). Closes #4761
-
Vladislav Shpilevoy authored
box_process_call/eval() check at the end whether there is an active transaction. If there is, it is rolled back and an error is set. But the rollback is not needed anymore, because at the end of the request the fiber is stopped anyway, and its unfinished transaction is rolled back. Just setting the error is enough. Follow-up #4662
-
Vladislav Shpilevoy authored
Fiber.storage was not deleted when created in a fiber started from the thread pool used by IProto requests. The problem was that fiber.storage was created and deleted in Lua land only, assuming that only Lua-born fibers could have it. But in fact any fiber can create a Lua storage, including the ones used to serve IProto requests. Failure to delete the storage led to the possibility of meeting a non-empty fiber.storage at the beginning of an IProto request, and to the memory held by the storage not being freed until its explicit nullification. Now the storage destructor works for any fiber which managed to create the storage. The destructor unrefs and nullifies the storage. For the destructor's purposes, the fiber.on_stop triggers were reworked: now they can be called multiple times during a fiber's lifetime, after every request done by that fiber. Closes #4662 Closes #3462 @TarantoolBot document Title: Clarify fiber.storage lifetime Fiber.storage is a Lua table created when it is first accessed. The site says it is deleted when the fiber is canceled via fiber:cancel(). But that is not the full truth. Fiber.storage is destroyed when the fiber is finished, regardless of how it finished - via :cancel() or via a 'return' from the fiber's function. Moreover, from that moment the storage is cleaned up even for pooled fibers used to serve IProto requests. Pooled fibers never really die, but nonetheless their storage is cleaned up after each request. That makes it possible to use fiber.storage as a full-featured request-local storage. Fiber.storage may be created for a fiber no matter how the fiber itself was created - from C or from Lua. For example, a fiber could be created in C using fiber_new(), then insert into a space which has Lua on_replace triggers, and one of the triggers could create fiber.storage. That storage will be deleted when the fiber is stopped. Another place where fiber.storage may be created is the replication applier fiber. The applier has a fiber from which it applies transactions from a remote instance. In case the applier fiber somehow creates a fiber.storage (for example, from a space trigger again), the storage won't be deleted until the applier fiber is stopped.
-
- Feb 14, 2020
-
-
Vladislav Shpilevoy authored
Fiber.storage is a table available from anywhere in the fiber. It is destroyed after the fiber's function is finished. That provides a reliable fiber-local storage, similar to thread-local in C/C++. But there is a problem: the storage may be created via one struct lua_State and destroyed via another. Here is an example:

    function test_storage()
        fiber.self().storage.key = 100
    end
    box.schema.func.create('test_storage')
    _ = fiber.create(function() box.func.test_storage:call() end)

There are 3 struct lua_State here: tarantool_L - the global, always-alive state; L1 - the Lua coroutine of the fiber created by fiber.create(); L2 - the Lua coroutine created by that fiber to execute test_storage(). Fiber.storage is created on the stack of L2 and referenced by the global LUA_REGISTRYINDEX. Then it is unreferenced from L1 when the fiber is being destroyed. That is generally ok as long as the storage object lives in LUA_REGISTRYINDEX, which is shared by all Lua states. But by the time the storage is being destroyed, the original L2 may already be deleted, and only tarantool_L is guaranteed to be alive. So this patch makes the unref of the storage go via the reliable tarantool_L. Needed for #4662
-
Cyrill Gorcunov authored
Every new error introduced into the error engine causes a massive update in the test even if only one key is added. To minimize the diff output, it is better to print them in sorted order. Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by:
Vladislav Shpilevoy <v.shpilevoy@tarantool.org> Reviewed-by:
Alexander Turenko <alexander.turenko@tarantool.org>
-
Sergey Kaplun authored
We already have a 12Kb thread-safe static buffer in `lib/small/small/static.h` that can be used instead of the 16Kb BSS buffer in `src/lib/core/backtrace.cc` for the backtrace payload. Closes #4650
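The idea of such a buffer can be sketched as a cyclic static allocator (hypothetical simplified code in the spirit of small/static.h, not the real implementation): one fixed buffer, with allocations wrapping back to the start when the tail is exhausted. Long-lived pointers into it may be silently overwritten, which is exactly why it suits short-lived payloads like a backtrace.

```c
#include <stddef.h>

enum { STATIC_BUF_SIZE = 12 * 1024 };
static char static_buf[STATIC_BUF_SIZE];
static size_t static_pos;

/* Allocate from the static buffer, wrapping to the start when the
 * remaining tail is too small. Never frees: old allocations are simply
 * reused over time. */
void *
static_alloc(size_t size)
{
	if (size > STATIC_BUF_SIZE)
		return NULL; /* can never fit */
	if (static_pos + size > STATIC_BUF_SIZE)
		static_pos = 0; /* wrap: start reusing the buffer */
	void *p = static_buf + static_pos;
	static_pos += size;
	return p;
}
```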
-
- Feb 12, 2020
-
-
Nikita Pettik authored
During decoding of a value fetched from a space's field, FP representation was forced if the field type was NUMBER. It was so since NUMBER used to substitute for the DOUBLE field type (in fact, NUMBER mimicked DOUBLE). Since DOUBLE is now a separate field type, there's no such necessity anymore. Hence from now on, integers from a NUMBER field are treated as integers. Implemented by Mergen Imeev <imeevma@gmail.com> Closes #4233 @TarantoolBot document Title: NUMBER column type changes From now on, NUMBER behaves in the same way as in NoSQL Tarantool. Previously, NUMBER was rather a synonym of what DOUBLE now means: it forced a floating point representation of values, even if they were integers. A few examples:

1) CAST operation.

Obsolete behaviour:

SELECT CAST(922337206854774800 AS NUMBER), CAST(5 AS NUMBER) / 10;
---
rows:
- [922337206854774784, 0.5]

New behaviour:

SELECT CAST(922337206854774800 AS NUMBER), CAST(5 AS NUMBER) / 10;
---
rows:
- [922337206854774800, 0]

Obsolete behaviour:

SELECT CAST(true AS NUMBER);
---
- null
- 'Type mismatch: can not convert TRUE to number'
...

New behaviour:

SELECT CAST(true AS NUMBER);
---
rows:
- [1]
...

CAST from boolean to NUMBER is allowed since it is allowed to convert booleans to integers; in turn, NUMBER comprises the integer type.

2) Preserving the integer representation.

Obsolete behaviour:

CREATE TABLE t (n NUMBER PRIMARY KEY);
INSERT INTO t VALUES (3), (-4), (5.0);
SELECT n, n/10 FROM t;
---
rows:
- [-4, -0.4]
- [3, 0.3]
- [5, 0.5]

New behaviour:

SELECT n, n/10 FROM t;
---
rows:
- [-4, 0]
- [3, 0]
- [5, 0.5]
-
Nikita Pettik authored
The NUMBER type is supposed to include values of both integer and FP types. Hence, if a numeric value is cast to NUMBER, it remains unchanged. Before this patch, a cast to NUMBER always forced a floating point representation. Furthermore, a CAST of blob values to NUMBER always produced a floating point result, even if the blob value had a precise integer representation. Since NUMBER no longer implies only FP values, let's fix this and use vdbe_mem_numerify(), which provides a unified way of casting to the NUMBER type. Part of #4233 Closes #4463
-
Nikita Pettik authored
Fix the codestyle and a comment; allow conversion from boolean to number (since it is legal to convert a boolean to an integer, and in turn the number type completely includes the integer type). Note that currently sqlVdbeMemNumerify() is never called, so the changes applied to it can't be tested. It is going to be used in further patches. Part of #4233
-
Nikita Pettik authored
Arithmetic operations are implemented by the OP_Add, OP_Subtract etc. VDBE opcodes, which consist of almost the same internal logic: depending on the types of the operands (integer or FP), the execution flow jumps to one of two branches. The branch responsible for floating point operations ends with the following code:

1668  if (((type1|type2)&MEM_Real)==0 && !bIntint) {
1669      mem_apply_integer_type(pOut);
1670  }

At least one of type1 and type2 is supposed to be MEM_Real here: otherwise, the execution flow either hits the branch processing integer arithmetic operations, or VDBE execution is aborted with ER_SQL_TYPE_MISMATCH. Thus, the condition in the 'if' clause always evaluates to false, so mem_apply_integer_type() is never called. Let's remove this dead code. Implemented by Mergen Imeev <imeevma@tarantool.org>
-
- Feb 06, 2020
-
-
Chris Sosnin authored
We should first check that the primary key is not NULL. Closes #4745
-
Nikita Pettik authored
The names of bindings are stored in an array indexed from 1 (see struct Vdbe->pVList). So to get the name of the i-th value to be bound, one should call sqlVListNumToName(list, i+1), not sqlVListNumToName(list, i). For this reason, the names of binding parameters returned in the meta-information in response to a :prepare() call were shifted by one. Let's fix it and calculate the position of a binding parameter taking the 1-based indexing into consideration. Closes #4760
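The off-by-one in miniature, with hypothetical stand-ins for struct Vdbe->pVList and sqlVListNumToName() (the real structures are more involved): the name list is 1-based, so the i-th 0-based parameter's name lives at position i + 1.

```c
#include <stddef.h>

/* Slot 0 of the name list is unused: names start at index 1. */
static const char *vlist[] = {NULL, ":a", ":b", ":c"};

/* num is a 1-based position, as in the real VList API. */
const char *
vlist_num_to_name(int num)
{
	if (num < 1 || num > 3)
		return NULL;
	return vlist[num];
}

/* i is a 0-based parameter index; the fix is passing i + 1, not i. */
const char *
bind_param_name(int i)
{
	return vlist_num_to_name(i + 1);
}
```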
-
- Feb 05, 2020
-
-
Leonid Vasiliev authored
LuaJIT records traces while interpreting Lua bytecode (when it's hot enough) in order to compile the corresponding execution flow to machine code. A Lua/C call aborts trace recording, but an FFI call does not abort it per se. If code inside an FFI call yields to another fiber while a trace is being recorded, and the new current fiber is interpreting Lua bytecode too, then unrelated instructions will be recorded into the current trace. In short, we should not yield the current fiber inside an FFI call. There is another problem. Machine code of a compiled trace may sink a value from a Lua state down to a host register, change it, and write it back only at trace exit. So the interpreter state may be outdated during compiled trace execution. A Lua/C call aborts a trace, so the code inside a callee always sees an actual interpreter state. An FFI call, however, can be turned into a single machine CALL instruction in the compiled code, and if the callee accesses a Lua state, it may see an irrelevant value. In short, we should not access a Lua state directly or reenter the interpreter from an FFI call. The box.rollback_to_savepoint() function may yield, and another fiber will be scheduled for execution. If that fiber touches a Lua state, it may see an inconsistent state, and the behaviour will be undefined. Note that <struct txn>.id starts from 1, because we lean on this fact to use luaL_toint64(), which does not distinguish an unexpected Lua type from cdata<int64_t> with a zero value. This assumption already exists: the code that prepares arguments for the 'on_commit' triggers uses luaL_toint64() too (see lbox_txn_pairs()). Fixes #4427 Co-authored-by:
Alexander Turenko <alexander.turenko@tarantool.org> Reviewed-by:
Igor Munkin <imun@tarantool.org>
-
- Feb 04, 2020
-
-
Alexander V. Tikhonov authored
We're going to use an S3 compatible storage for the Deb and RPM repositories instead of the packagecloud.io service. The main reason is that packagecloud.io provides a limited amount of storage, which is not enough for keeping all packages (without regular pruning of old versions). Note: at the moment packages are still pushed to packagecloud.io from Travis-CI; disabling this is out of scope of this patch. This patch implements saving of packages on an S3 compatible storage and regeneration of the repository metadata. The layout is a bit different from the one we have on packagecloud.io.

packagecloud.io:

| - 1.10
| - 2.1
| - 2.2
| - ...

S3 compatible storage:

| - live
|   | - 1.10
|   | - 2.1
|   | - 2.2
|   | - ...
| - release
|   | - 1.10
|   | - 2.1
|   | - 2.2
|   | - ...

Both the 'live' and 'release' repositories track the release branches (named <major>.<minor>) and the master branch. The difference is that 'live' is updated on every push, while 'release' is only for tagged versions (<major>.<minor>.<patch>.0). Packages are also built on '*-full-ci' branches, but only for testing purposes: they aren't pushed anywhere. The core logic is in the tools/update_repo.sh script, which implements the following flow:

- create metadata for the new packages
- fetch the relevant metadata from the S3 storage
- push the new packages to the S3 storage
- merge and push the updated metadata to the S3 storage

The script uses 'createrepo' for RPM repositories and 'reprepro' for Deb repositories. Closes #3380
-
- Jan 29, 2020
-
-
Mergen Imeev authored
This patch makes the INSTEAD OF DELETE trigger work for every row in a VIEW. Prior to this patch, it worked only once for each group of non-unique rows. Also, this patch adds tests to check that the INSTEAD OF UPDATE trigger works for every row in a VIEW. Closes #4740
-
Kirill Yukhin authored
Revert "Free all slabs on region reset" commit. Closes #4736
-
- Jan 24, 2020
-
-
Serge Petrenko authored
Update the decNumber library to silence the build warning produced for a too-long integer constant.
-
- Jan 21, 2020
-
-
Cyrill Gorcunov authored
Test multireturn in the lua output mode and the lack of a parameter in the '\set output <...>' command. Co-developed-by:
Alexander Turenko <alexander.turenko@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by:
Alexander Turenko <alexander.turenko@tarantool.org>
-
Cyrill Gorcunov authored
If the output format is not specified, we should exit with a more readable error message. Fixes #4638 Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by:
Alexander Turenko <alexander.turenko@tarantool.org>
-
Cyrill Gorcunov authored
Currently we handle only the first member of a multireturn statement. Fix it by processing each element separately. n.b.: while at this file, add vim settings.

| tarantool> \set output lua
| true;
| tarantool> 1,2,3,4
| 1, 2, 3, 4;

Fixes #4604 Reported-by:
Alexander Turenko <alexander.turenko@tarantool.org> Signed-off-by:
Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by:
Alexander Turenko <alexander.turenko@tarantool.org>
-
Vladislav Shpilevoy authored
A transaction adds a redo log for each statement. The log is an xrow header. Some requests don't have a header (local requests), some do (from a remote client, from replication). When a request had a header, it was written to the WAL as is. But requests from remote clients have an xrow header that is barely filled: most of its fields are default values, usually 0, including the group id. Indeed, remote clients should not care about setting such deep system fields. That led to a problem when a space had the local group id (!= 0), but it was ignored, because in a request header from a remote client the group id was the default (== 0). In summary, it was possible to force Tarantool to replicate a replica-local space. Now the group id setting is server-authoritative: box always sets it regardless of what is present in an xrow header received from a client. Thanks Kostja Osipov (@kostja) for the diagnostics and the solution. Closes #4729
-
- Jan 20, 2020
-
-
Nikita Pettik authored
Accidentally, the assert in sql_type_result() checking argument types was changed in 64745b10, so that now it may fail even on correct values. Let's revert this change. Closes #4728
-
- Jan 17, 2020
-
-
Chris Sosnin authored
'pragma collation_list' uses the _collation space, although the user may have no access to it. Thus, we replace it with the corresponding view. Closes #4713
-
- Jan 16, 2020
-
-
Oleg Babin authored
Usually functions return the pair `nil, err`, and it is expected that err is a string. Let's make the behaviour of the error object closer to a string and define the __concat metamethod. The case of the error "error_mt.__concat(): neither of args is an error" is not covered by tests because of #4723. Closes #4489
-