- May 21, 2019
-
-
Vladimir Davydov authored
The autoincrement code was written when there were no nested fields. Now it isn't enough to just skip to the autoincrement field - we also need to descend deeper if key_part->path is set. Note, the code expects the nested field to be present and set to NULL. That is, if the field path is [1].a.b, the tuple must have all intermediate fields set: {{a = {b = box.NULL}}} (using box.NULL is mandatory to create a tuple like that in Lua). Closes #4210
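A minimal Lua sketch of the behavior described above, assuming a sequence attached to a JSON-path index part as this commit enables (the space and field names are illustrative):

```lua
-- Hypothetical example: primary index over the nested field path [1].a.b,
-- with an attached sequence (autoincrement).
local s = box.schema.space.create('test')
s:create_index('pk', {
    parts = {{field = 1, type = 'unsigned', path = 'a.b'}},
    sequence = true,
})
-- All intermediate fields must be present; box.NULL marks the value
-- to be generated by the sequence:
s:insert{{a = {b = box.NULL}}}
```

The sequence value should replace box.NULL at the nested path; omitting the intermediate map (e.g. inserting `{}`) is not supported.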
-
Vladimir Davydov authored
Closes #4009 @TarantoolBot document Title: Sequence can now be set for an index part other than the first Initially one could attach a sequence (aka autoincrement) only to the first index part. Now it's possible to attach a sequence to any primary index part. The part must still be of integer type, though. Syntax: ``` box.schema.space.create('test') box.space.test:create_index('primary', { parts = {{1, 'string'}, {2, 'unsigned'}, {3, 'unsigned'}}, sequence = true, sequence_part = 2 }) box.space.test:insert{'a', box.null, 1} -- inserts {'a', 1, 1} ``` Note, the `sequence_part` option is 1-based. If `sequence_part` is omitted, 1 is used, which ensures backward compatibility with the original behavior. One can also attach a sequence to another index part using `index.alter` (the code below continues the example above): ``` box.space.test.index.primary:alter{sequence_part = 3} box.space.test:insert{'a', 1, box.null, 'x'} -- inserts {'a', 1, 2, 'x'} ```
-
Vladimir Davydov authored
A check was missing in index.alter. This resulted in an attempt to drop the sequence attached to the altered index even if the sequence was not modified. Closes #4214
-
Vladimir Davydov authored
When schema.lua was introduced, there was no such thing as a space format, and we had to access tuple fields by number. Now we can use human-readable names. Let's do it - this should improve code readability. A note about box/alter.test.lua: for some reason it clears the format of the _space and _index system spaces, which apparently breaks our assumption about field names. Let's zap those pointless test cases.
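For context, a short sketch of the difference (the space and field names are hypothetical): once a space format is defined, tuple fields can be accessed by name instead of by number:

```lua
-- Hypothetical example: field access by number vs. by name.
local s = box.schema.space.create('user', {
    format = {{name = 'id', type = 'unsigned'},
              {name = 'login', type = 'string'}},
})
s:create_index('pk')
local t = s:insert{1, 'alice'}
-- Old style: access by field number.
assert(t[2] == 'alice')
-- New style: access by field name, more readable.
assert(t.login == 'alice')
```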
-
- May 20, 2019
-
-
Alexander Turenko authored
Updated small submodule with the corresponding fix.
-
Vladislav Shpilevoy authored
Background. Coio provides a way to schedule execution of arbitrary tasks in worker threads. A task consists of a function to execute and a custom destructor. To push a task, the function coio_task_post(task, timeout) was used. When the function returns 0, a caller can obtain a result and should free the task manually. But the trick is that if the timeout was 0, the task was posted in a detached state. A detached task frees its memory automatically regardless of the coio_task_post() result, and does not even yield. Such a task object can't be accessed, let alone freed manually. coio_getaddrinfo() used coio_task_post() and freed the task when the latter returned 0. That led to a double free when the timeout was set to 0. The bug was introduced in 800cec73 in an attempt to avoid yielding in say_logrotate, because it is not fiber-safe. Now there are two functions: coio_task_execute(task, timeout), which never detaches a task completed successfully, and coio_task_post(task), which posts a task in a detached state. Closes #4209
-
Vladislav Shpilevoy authored
According to the Open Group standard, the hints argument of getaddrinfo() is optional - it can be NULL. When it is NULL, hints is assumed to contain 0 in ai_flags, ai_socktype, and ai_protocol, and AF_UNSPEC in ai_family. See The Open Group Base Specifications.
-
Alexander Turenko authored
The result file of the test app-tap/init_script.test.lua was not updated in 549140b3 ('box/memtx: Allow to skip tuple memory from coredump'). Follow up #3509.
-
Alexander V. Tikhonov authored
Made fixes:
- Added the CMAKE_EXTRA_PARAMS environment variable to the docker container runs to enable the -DENABLE_LTO=ON/OFF cmake option.
- Added the CC/CXX environment variables to the docker container runs to set clang for cmake. The additional {CC,CXX}_FOR_BUILD environment variables were postponed, because we don't run cross-compilation at the moment; for more info see: https://docs.travis-ci.com/user/languages/cpp/#choosing-compilers-to-test-against
- Changed the LTO docker image to 'debian-buster', because LTO needs newer versions of packages; for more information see commit f9e28ce4 ('Add LTO support').
- Fixed the sources to avoid build failures with GCC and LTO:
  1) src/box/memtx_rtree.c: in mp_decode_rect, 'c' may be used uninitialized [-Werror=maybe-uninitialized].
  2) src/box/sql/func.c: in quoteFunc, 'b' (declared in src/box/sql/vdbeapi.c) may be used uninitialized [-Werror=maybe-uninitialized].
  3) src/box/tuple_update.c: in update_read_ops, 'field_no' may be used uninitialized [-Werror=maybe-uninitialized].
  4) src/httpc.c: in httpc_set_verbose, call to '_curl_easy_setopt_err_long' declared with attribute warning: curl_easy_setopt expects a long argument for this option [-Werror].
  5) src/lua/httpc.c: in luaT_httpc_request, 'parser.http_major' and 'parser.http_minor' (struct http_parser parser) may be used uninitialized [-Werror=maybe-uninitialized].
Closes #4215
-
Cyrill Gorcunov authored
If there is a huge amount of tuples, all that memory goes into the coredump file even if we don't need it for problem investigation. As a result a coredump may blow up to gigabytes in size. Let's allow excluding this memory from dumping via the box.cfg::strip_core boolean parameter. Note that the tuple arena is used not only for the tuples themselves but for memtx->index_extent_pool and memtx->iterator_pool as well, so they are affected too. Fixes #3509 @TarantoolBot document Title: Document box.cfg.strip_core When Tarantool runs under a heavy load, the memory allocated for tuples may be huge in size, and to keep this memory out of the `coredump` file the `box.cfg.strip_core` parameter should be set to `true`. The default value is `false`.
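The documented option can be sketched as follows:

```lua
-- Exclude the tuple arena (and the memtx extent/iterator pools that
-- live in it) from coredump files; the default is false.
box.cfg{strip_core = true}
```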
-
Vladislav Shpilevoy authored
A negative size led to an assertion failure. The commit adds a check for a negative size. Closes #4224
-
Alexander Turenko authored
box_process_join() and box_process_subscribe() use coio_write_xrow(), which calls coio_writev_timeout() under the hood. If a socket would block on write(), the function calls ev_io_start() to wake the fiber up when the socket becomes ready for writing. This code assumes that the watcher (struct ev_io) is initialized as a coio watcher, i.e. that coio_create() has been called. The reason why the code worked before is that coio_write_xrow() in box_process_{join,subscribe}() writes a small piece of data, so the situation when a socket write buffer has less free space than needed is rare. Fixes #4110.
-
- May 18, 2019
-
-
Georgy Kirichenko authored
-
- May 17, 2019
-
-
Mergen Imeev authored
This patch replaces schema_find_id() with box_space_id_by_name() in SQL. box_space_id_by_name() is more specialized and, unlike schema_find_id(), checks whether the user has sufficient rights. Closes #3570
-
Mergen Imeev authored
This patch stops the parser as soon as an error occurs. Prior to this patch, one error could be replaced by another, since the parser could continue working even after an error occurred. For example: box.execute("insert into not_exist values(1) a") The first error is "Space 'NOT_EXIST' does not exist", but "Syntax error near 'a'" was displayed. After this patch, the first error is displayed. Closes #3964 Closes #4195
-
Alexander Turenko authored
Fixes #4194.
-
Georgy Kirichenko authored
Since we now enforce the applier row order, we don't need to reacquire the schema latch after a DDL statement. Follow-up for: 056deb2c
-
- May 16, 2019
-
-
Alexander Turenko authored
Support more than 60 parallel jobs (#82, PR #171).
-
- May 15, 2019
-
-
Vladislav Shpilevoy authored
crypto.lua is a public module using OpenSSL directly. But now lib/crypto encapsulates OpenSSL with additional checks and a similar but more conforming API. This allows replacing the OpenSSL cipher in crypto.lua with lib/crypto methods.
-
Vladislav Shpilevoy authored
The OpenSSL API is quite complex and hard to follow; additionally it is very unstable. Encoding/decoding via OpenSSL methods usually consists of multiple calls of a lot of functions. This patch wraps the OpenSSL API with one that is easier to use and conforms to the Tarantool code style, in the scope of the crypto library. The traditional OpenSSL API is wrapped as well, in the form of a crypto_stream object, so the OpenSSL API is not cut off. Besides struct crypto_stream, the library provides struct crypto_codec, which encapsulates all the steps of the encryption logic in two short functions: crypto_codec_encrypt/decrypt(iv, in, in_size, out, out_size) A caller can create a needed codec via crypto_codec_new, which now supports all the same algorithms as the crypto.lua module. Needed for #3234
-
Vladislav Shpilevoy authored
Tarantool has a strict rule for naming methods of libraries - use the library name as a prefix. For crypto lib methods it should be 'crypto_', not 'tnt_'.
-
Vladislav Shpilevoy authored
Crypto in the Tarantool core was implemented and used very poorly until now. It was just one tiny file with one-line wrappers around the OpenSSL API. Despite being small and simple, it provided a powerful interface to the Lua land, used by the public and documented Lua 'crypto' module. Now the time has come when OpenSSL crypto features are wanted on a lower level and with a richer API, in the core library SWIM written in C. This patch moves the crypto wrappers into a separate library in src/lib, and drops some methods from the header file because they are never used from C and are needed for exporting only. Needed for #3234
-
Vladislav Shpilevoy authored
-
Vladislav Shpilevoy authored
msgpack.decode() internally uses a 'const char *' variable to decode msgpack, but for some reason expects only 'char *' as input. This commit allows passing 'const char *' as well.
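A short sketch of the fixed behavior, assuming the pointer-plus-size form of msgpack.decode() described above:

```lua
local msgpack = require('msgpack')
local ffi = require('ffi')

local data = msgpack.encode({1, 2, 3})
-- A 'const char *' pointer is now accepted alongside 'char *':
local ptr = ffi.cast('const char *', data)
local obj = msgpack.decode(ptr, #data)
assert(obj[1] == 1 and obj[3] == 3)
```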
-
Vladislav Shpilevoy authored
Before the patch the msgpack Lua module provided a method encode() able to take a custom buffer to encode into. But it had to be of type 'struct ibuf', which made it impossible to use buffer.IBUF_SHARED as a buffer, because its type is 'struct ibuf *'. Strangely, FFI can't convert between these types automatically. This commit allows using 'struct ibuf *' as well, and moves this functionality into a function in utils.h. Now both the msgpack and merger modules can use an ibuf directly and by pointer.
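A minimal sketch of what the change enables, assuming encode() returns the number of bytes written as in the existing buffer-based API:

```lua
local msgpack = require('msgpack')
local buffer = require('buffer')

-- buffer.IBUF_SHARED is of type 'struct ibuf *'; after this change
-- it can be passed to msgpack.encode() directly:
local ibuf = buffer.IBUF_SHARED
ibuf:reset()
local size = msgpack.encode({'a', 'b'}, ibuf)
local obj = msgpack.decode(ibuf.rpos, size)
assert(obj[2] == 'b')
```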
-
Vladislav Shpilevoy authored
swim_quit() notifies all the members that this instance has left the cluster. Strangely, all except self. It is not a real bug, but showing the 'left' status in the self struct swim_member would obviously be more correct than 'alive'. It is possible that the self struct swim_member was referenced by a user - this is how 'self' can stay available after the SWIM instance deletion. Part of #3234
-
Vladislav Shpilevoy authored
SWIM internally tries to avoid unnecessary close+socket+bind calls on reconfiguration if a new URI is the same as the old one. The SWIM transport compares <IP, port> pairs and, if they are equal, does nothing. But if a port is 0, it is not a real port, but a sign for the kernel to find any free port on the IP address. In such a case the SWIM transport retrieves and saves the real port after bind(). When the same URI is specified again, the transport compares two addresses: the old <IP, auto found port> and the new <IP, 0>, sees that they are 'different', and rebinds. This is obviously unnecessary, because the new URI covers the old one. This commit avoids rebinding when the new IP == the old IP and the new port is 0. Part of #3234
-
Vladislav Shpilevoy authored
uint16 was used in the public SWIM C API as the type for payload size, to emphasize its small value. But it is not useful in Lua, because the Lua API has to explicitly check whether a number overflows the uint16 maximal value, and return the same error as in the case when the value is < uint16_max but > payload_size_max. So the main motivation of this patch is to avoid unnecessary checks in Lua and error message duplication. Internally the payload size is still uint16.
-
Vladislav Shpilevoy authored
swim_info() was a function to dump SWIM instance info to a Lua table without explicit usage of Lua. But now all the info can be taken from 1) the self member and the member API, 2) cached cfg options stored as a Lua table in a forthcoming Lua API - this is how box.cfg.<index> works.
-
Alexander Turenko authored
- Fix killing of servers at crash (PR #167). - Show logs for a non-default server failed at start (#159, PR #168). - Fix TAP13 hung test reporting (#155, PR #169). - Fix false positive internal error detection (PR #170).
-
- May 14, 2019
-
-
Ilya Konyukhov authored
Right now there is only one option configurable for the http client, CURLMOPT_MAXCONNECTS. It can be set up like this: > httpc = require('http.client').new({max_connections = 16}) Basically, this option tells curl to maintain this many connections in the cache during the client instance lifetime. Caching connections is very useful when the user mostly requests the same hosts. When the connection cache is full, all connections are waiting for responses and a new request comes in, curl creates a new connection, starts the request and then drops the first available connection to keep the connection cache size right. There is one side effect: when a tcp connection is closed, the system actually moves it into the TIME_WAIT state, and for some time the resources for this socket can't be reused (usually 60 seconds). When the user wants to do lots of requests simultaneously (to the same host), curl ends up creating and dropping lots of connections, which is not very efficient. When this load is high enough, sockets won't be able to recover from TIME_WAIT in time and the system may run out of available sockets, which reduces performance. And the user currently cannot control or limit this behaviour. The solution is to add a new binding for the CURLMOPT_MAX_TOTAL_CONNECTIONS option. This option tells curl to hold a new connection until there is one available (a request is finished). Only after that will curl either drop and create a new connection or reuse an old one. This patch passes this option through to the curl instance. It defaults to -1, which means that there is no limit. To create a client with this option set up, the user needs to set the max_total_connections option like this: > httpc = require('http.client').new({max_connections = 8, max_total_connections = 8}) In general this option is useful when doing requests mostly to the same hosts; otherwise the defaults should be enough.
The CURLMOPT_MAX_TOTAL_CONNECTIONS option was added in curl 7.30.0, so if the curl version is below 7.30.0, this option is simply ignored. https://curl.haxx.se/changes.html#7_30_0 Also, this patch adjusts the default for the CURLMOPT_MAXCONNECTS option to 0, which means that for every new easy handle curl will enlarge its max cache size by 4. See the option docs for more: https://curl.haxx.se/libcurl/c/CURLMOPT_MAXCONNECTS.html Fixes #3945
-
- May 13, 2019
-
-
Vladislav Shpilevoy authored
See details in the small repository commit. In summary, it looks like a GCC bug. Fixed with a workaround.
-
Vladislav Shpilevoy authored
The SIO library provides a wrapper for getnameinfo able to stringify Unix socket addresses. But it does not care about the limited Tarantool stack and allocates buffers for getnameinfo() right on it - ~1Kb. Besides, after a successful getnameinfo() the result is copied into another static buffer. This patch optimizes sio_strfaddr() for the most common case - AF_INET, when 32 bytes is more than enough for any IP:Port pair - and writes the result into the target buffer directly. The main motivation behind this commit is that SWIM makes active use of sio_strfaddr() for logging - for each received/sent message it writes a couple of addresses into the log. It does this in verbose mode, but the say() function arguments are still evaluated even when the active log level is lower than verbose.
-
Vladislav Shpilevoy authored
This patch harnesses the freshly introduced static memory allocator to eliminate wasteful usage of BSS memory. This commit frees 11Kb per thread.
-
Vladislav Shpilevoy authored
Before the patch Tarantool had a thread- and C-file-local array of 4 static buffers, each 1028 bytes. It provided an API, tt_static_buf(), allowing to return them one by one in a cycle. Firstly, it consumed a total of 200Kb of BSS memory in summary over all C files using these buffers. Obviously, it was a bug and was not made intentionally. The buffers were supposed to be one process-global array. Secondly, even if the bug above had been fixed somehow, sometimes a slightly bigger buffer would be needed. For example, to store a UDP packet - ~1.5Kb. This commit replaces these 4 buffers with the small/static allocator, which does basically the same, but in a more granular and flexible way. This commit frees ~188Kb of the BSS section. The main motivation for this commit is a wish to use a single global out-of-stack buffer to read UDP packets into in the SWIM library, and on the other hand not to pad out the BSS section with a new SWIM-special static buffer. Now SWIM uses the stack for this, and the incoming cryptography SWIM component will need more.
-
Vladimir Davydov authored
Currently, we set multikey_idx to multikey_frame->idx for the field corresponding to the multikey_frame itself. This is wrong, because this field doesn't have any indirection in the field map - we simply store offset to the multikey array there. It works by a happy coincidence - the frame has index -1 and we treat -1 as no-multikey case, see MULTIKEY_NONE. Should we change MULTIKEY_NONE to e.g. -2 or INT_MAX, we would get a crash because of it. So let's move the code setting multikey_idx before initializing multikey_frame in tuple_format_iterator_next().
-
Vladimir Davydov authored
Solely to improve code readability. No functional changes. Suggested by @kostja.
-
Vladimir Davydov authored
In case of multikey indexes, we use vy_entry.hint to store the multikey array entry index instead of a comparison hint. So all we need to do is patch all places where a statement is inserted so that, in case the key definition is multikey, we iterate over all multikey indexes and insert an entry for each of them. The rest will be done automatically, as vinyl stores and compares vy_entry objects, which have hints built-in, while comparators and other generic functions have already been patched to treat hints as multikey indexes. There are just a few places we need to patch: - vy_tx_set, which inserts a statement into a transaction write set. - vy_build_insert_stmt, which is used to fill the new index on index creation and DDL recovery. - vy_build_on_replace, which forwards modifications done to the space during index creation to the new index. - vy_check_is_unique_secondary, which checks a secondary index for conflicts on insertion of a new statement. - vy_tx_handle_deferred_delete, which generates deferred DELETE statements if the old tuple is found in memory or in cache. - vy_deferred_delete_on_replace, which applies deferred DELETEs on compaction. Plus, we need to teach vy_get_by_secondary_tuple to match a full multikey tuple to a partial multikey tuple or a key, which implies iterating over all multikey indexes of the full tuple and comparing them to the corresponding entries of the partial tuple. We already have tests that check this functionality for memtx. Enable and tweak them a little so that they can be used for vinyl as well.
-
Vladimir Davydov authored
Currently, we completely ignore vy_entry.hint while writing a run file, because they only contain auxiliary information for tuple comparison. However, soon we will use hints to store multikey offsets, which is mandatory for extracting keys and hence writing secondary run files. So this patch propagates vy_entry.hint as multikey offset to tuple_bloom and tuple_extract_key in vy_run implementation.
-
Vladimir Davydov authored
Currently, we construct a field map for a vinyl surrogate DELETE statement by hand, which works fine as long as field maps don't have extents. Once multikey indexes are introduced, there will be extents hence we must switch to field_map_builder.
-