- Feb 07, 2019
-
-
Serge Petrenko authored
On replica subscribe the master checks that the replica's cluster id matches its own and disallows replication in case of a mismatch. This behaviour blocks the implementation of anonymous replicas, which shouldn't pollute the _cluster space and could accumulate changes from multiple clusters at once. So let's move the check to the replica and let it decide which action to take in case of a mismatch. Needed for #3186 Closes #3704
-
Stanislav Zudin authored
The VDBE returns an error if a LIMIT or OFFSET expression evaluates to a negative integer value. If an expression in the LIMIT clause can't be converted into an integer without data loss, the VDBE returns SQL_TARANTOOL_ERROR with the message "Only positive integers are allowed in the LIMIT clause" instead of SQLITE_MISMATCH. The same applies to the OFFSET clause. Closes #3467
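A minimal sketch of the resulting behaviour through the box.sql.execute() entry point of that period (the table name t is hypothetical):
```
box.sql.execute([[CREATE TABLE t (id INTEGER PRIMARY KEY)]])
-- a LIMIT expression evaluating to a negative integer is now rejected:
box.sql.execute([[SELECT * FROM t LIMIT -1]])
-- error: Only positive integers are allowed in the LIMIT clause
-- a LIMIT that loses data when cast to an integer is rejected as well:
box.sql.execute([[SELECT * FROM t LIMIT 1.5]])
```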
-
- Feb 06, 2019
-
-
Vladimir Davydov authored
Historically, when considering splitting or coalescing a range or updating compaction priority, we use sizes of compressed runs (see bytes_compressed). This makes the algorithms dependent on whether compression is used and how effective it is, which is weird, because compression is a way of storing data on disk - it shouldn't affect the way data is partitioned. E.g. if we turned off compression at the first LSM tree level, which would make sense because it's relatively small, we would affect the compaction algorithm because of that. So let's use uncompressed run sizes when considering range tree transformations.
-
Serge Petrenko authored
After the patch that made os.exit() execute on_shutdown triggers (see commit 6dc4c8d7), we relied on on_shutdown triggers to break the ev_loop and exit tarantool. However, there is an auxiliary event loop which is run in tarantool_lua_run_script() to reschedule the fiber executing chunks of code passed via the -e option and running interactive mode. This event loop is started only to execute interactive mode and doesn't exist during execution of the -e chunks. Make sure we don't start it if os.exit() was already executed in one of the chunks. Closes #3966
-
Serge Petrenko authored
In case a fiber joining another fiber gets cancelled, it stays suspended forever and never finishes joining. This happens because fiber_cancel() wakes the fiber and removes it from all execution queues. Fix this by adding the fiber back to the wakeup queue of the joined fiber after each yield. Closes #3948
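A minimal Lua sketch of the scenario being fixed (fiber names are illustrative):
```
fiber = require('fiber')
worker = fiber.new(function() fiber.sleep(10) end)
worker:set_joinable(true)
joiner = fiber.create(function() worker:join() end)
-- cancelling a fiber blocked in join() used to leave it suspended
-- forever; after the fix it is put back on the wakeup queue of the
-- joined fiber and eventually finishes joining
joiner:cancel()
```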
-
Serge Petrenko authored
Start showing downstream status for relays in "follow" state. Also refactor lbox_pushrelay to unify code for different relay states. Closes #3904
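A sketch of where the new state shows up, on the master side:
```
-- each connected replica now reports downstream.status = 'follow'
-- while its relay is streaming rows
for id, replica in pairs(box.info.replication) do
    if replica.downstream ~= nil then
        print(id, replica.downstream.status)
    end
end
```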
-
- Feb 05, 2019
-
-
Konstantin Osipov authored
Initially tuple_field_* getters were placed in tuple_format.h to avoid including tuple_format.h in tuple.h. Now we include tuple_format.h in tuple.h anyway, so move the code where it belongs. Besides, a bunch of new getters have been added to tuple.h since then, so the code has rotted a bit. This is a preparation for an overhaul of tuple_field_* getters naming.
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
We use tuple_field_raw_ prefix for other similar members.
-
Konstantin Osipov authored
Use cached tuple data and format in tuple_hash.c.
-
Konstantin Osipov authored
-
Konstantin Osipov authored
Add a comment explaining the logic behind intermediate lookups in the json_tree_lookup_path() function.
-
- Feb 04, 2019
-
-
Konstantin Osipov authored
-
Vladimir Davydov authored
The patch adds the missing -fPIC option for clang, without which the msgpuck library might fail to compile.
-
Kirill Shcherbatov authored
Implemented a more convenient interface for creating an index by JSON path. Instead of specifying fieldno and a relative path it is now possible to pass a full JSON path to the data. Closes #1012 @TarantoolBot document Title: Indexes by JSON path Sometimes field data has a complex document structure. When this structure is consistent across the whole space, you can create an index by JSON path. Example:
s = box.schema.space.create('sample')
format = {{'id', 'unsigned'}, {'data', 'map'}}
s:format(format)
-- explicit JSON index creation
age_idx = s:create_index('age', {parts = {{2, 'number', path = "age"}}})
-- user-friendly syntax for JSON index creation
parts = {{'data.FIO["fname"]', 'str'}, {'data.FIO["sname"]', 'str'}, {'data.age', 'number'}}
info_idx = s:create_index('info', {parts = parts})
s:insert({1, {FIO={fname="James", sname="Bond"}, age=35}})
-
Kirill Shcherbatov authored
tuple_field_by_part looks up the tuple_field corresponding to the given key part in tuple_format in order to quickly retrieve the offset of indexed data from the tuple field map. For regular indexes this operation is blazing fast, however for JSON indexes it is not, as we have to parse the path to data and then do multiple lookups in a JSON tree. Since tuple_field_by_part is used by comparators, we should strive to make this routine as fast as possible for all kinds of indexes. This patch introduces an optimization that is supposed to make tuple_field_by_part for JSON indexes as fast as it is for regular indexes in most cases. We do that by caching the offset slot right in key_part. There's a catch here, however - we create a new format whenever an index is dropped or created and we don't reindex old tuples. As a result, there may be several generations of tuples in the same space, all using different formats, while there's only one key_def used for comparison. To overcome this problem, we introduce the notion of tuple_format epoch. This is a counter incremented each time a new format is created. We store it in tuple_format and key_def, and we only use the offset slot cached in a key_def if its epoch coincides with the epoch of the tuple format. If they don't match, we look up the tuple_field as before and then update the cached value along with the epoch of the tuple format. Part of #1012
-
Kirill Shcherbatov authored
Introduced a has_json_path flag for the compare, hash and extract function templates (which are really hot) to make it possible to skip looking at the path field for flat indexes without any JSON paths. Part of #1012
-
Kirill Shcherbatov authored
The new JSON indexes allow indexing document content. First, new key_part fields path and path_len are introduced, representing the JSON path string specified by the user. The modified tuple_format_use_key_part routine constructs the corresponding tuple_fields chain in the tuple_format::fields tree for indexed data. The resulting tree is used for type checking and for allocating offset slots of indexed fields. Then the refined tuple_init_field_map routine parses the tuple msgpack in depth using a stack allocated on the region and initializes the field map with the corresponding tuple_format::field, if any. Finally, to perform memory allocation for vinyl secondary keys restored from extracted keys loaded from disk without a field tree traversal, the format::min_tuple_size field is introduced - the size of a tuple of this tuple_format as if all leaf fields were zero. Example: To create a new JSON index, specify the path to the document data as a part of key_part:
parts = {{3, 'str', path = '.FIO.fname', is_nullable = false}}
idx = s:create_index('json_idx', {parts = parts})
idx:select("Ivanov")
Part of #1012
-
Kirill Shcherbatov authored
Introduced a new function tuple_field_raw_by_path used to get tuple fields by field index and a relative JSON path. This routine uses tuple_format's field_map if possible. It will be further extended to use JSON indexes. The old tuple_field_raw_by_path routine, which used to work with full JSON paths, is renamed to tuple_field_raw_by_full_path. Its return value type is changed to const char * because the other similar functions tuple_field_raw and tuple_field_by_part_raw use this convention. Got rid of reporting the error position for the 'invalid JSON path' error in lbox_tuple_field_by_path, because we can't extend the other routines to behave this way, which would make the API inconsistent; moreover, such errors are useless and confusing. Needed for #1012
-
Kirill Shcherbatov authored
The msgpack dependency has been updated because the new version introduces the new mp_stack class which we will use to parse a tuple without recursion when initializing the field map. Needed for #1012
-
- Jan 30, 2019
-
-
Serge Petrenko authored
Move a call to tarantool_free() to the end of main(). We needn't call atexit() at all anymore, since we've implemented on_shutdown triggers and patched os.exit() so that when exiting not due to a fatal signal (when no cleanup routines are called anyway) control always reaches a call to tarantool_free().
-
Serge Petrenko authored
Make os.exit() call tarantool_exit(), just like the signal handler does. Now on_shutdown triggers are not run only when a fatal signal is received. Closes #1607 @TarantoolBot document Title: Document box.ctl.on_shutdown triggers on_shutdown triggers may be set similarly to space:on_replace triggers: ``` box.ctl.on_shutdown(new_trigger, old_trigger) ``` The triggers will be run when tarantool exits due to receiving one of the signals `SIGTERM`, `SIGINT`, `SIGHUP` or when the user executes `os.exit()`. Note that the triggers will not be run if tarantool receives a fatal signal: `SIGSEGV`, `SIGABRT` or any signal causing immediate program termination.
-
Serge Petrenko authored
Add on_shutdown triggers which are run by a preallocated fiber on shutdown, and make it possible to register them via box.ctl.on_shutdown(). Make use of the new triggers: a dedicated on_shutdown trigger now breaks the event loop instead of doing it explicitly from the signal handler. This trigger is run last, so that all other on_shutdown triggers may yield, sleep and so on. Also make sure we can register lbox_triggers without a push_event function in case we don't need one. Part of #1607
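A minimal sketch of registering such a trigger (the log message and the sleep are illustrative only):
```
-- hypothetical trigger: on_shutdown triggers may yield, because the
-- loop-breaking trigger added by this patch is run last
box.ctl.on_shutdown(function()
    require('log').info('flushing state before shutdown')
    require('fiber').sleep(0.1) -- yielding here is allowed
end)
```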
-
Stanislav Zudin authored
The "box.sql.execute('values(blob)')" causes an accert in the expression processing, because the parser doesn't distinguish the keyword "BLOB" from the binary value (in the form X'hex'). This fix adds an additional checks in the SQL grammar. Thus the expressions such as "VALUES(BLOB)", "SELECT FLOAT" and so on are treated as a syntax errors. Closes #3888
-
- Jan 29, 2019
-
-
Mergen Imeev authored
Currently, the sql_response_dump() function puts data into an already created map. Moving the map creation to sql_response_dump() simplifies the code and allows us to use sql_response_dump() as one of the port_sql methods. Needed for #3505
-
Vladimir Davydov authored
The buffer is defined in a nested {} block. This gives the compiler the liberty to overwrite it once the block has been executed, which would be incorrect since the content of the buffer is used outside the {} block. This results in box/hash and vinyl/bloom test failures when tarantool is compiled in release mode. Fix this by moving the buffer definition to the beginning of the function. Fixes commit 0dfd99c4 ("tuple: fix hashing of integer numbers").
-
Vladimir Davydov authored
Integer numbers stored in tuples as MP_FLOAT/MP_DOUBLE are hashed differently from integer numbers stored as MP_INT/MP_UINT. This breaks select() for memtx hash indexes and vinyl indexes (the latter use bloom filters). Fix this by converting MP_FLOAT/MP_DOUBLE to MP_INT/MP_UINT before hashing if the value can be stored as an integer. This is consistent with the behavior of tuple comparators, which treat MP_FLOAT and MP_INT as equal in case they represent the same number. Closes #3907
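A sketch of a lookup that the patch fixes, assuming a memtx hash index over a 'number' field (space and index names are hypothetical):
```
s = box.schema.space.create('test')
s:create_index('pk', {type = 'hash', parts = {{1, 'number'}}})
s:replace({1})          -- the key is stored as MP_UINT
s.index.pk:select(1.0)  -- 1.0 arrives as MP_DOUBLE; before the fix it hashed
                        -- differently from 1 and the lookup could miss
```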
-
- Jan 25, 2019
-
-
Vladimir Davydov authored
In contrast to the TX thread, the WAL thread performs garbage collection synchronously, blocking all concurrent writes. We expected file removal to happen instantly so we didn't bother to offload this job to eio threads. However, it turned out that sometimes removal of a single xlog file can take 50 or even 100 ms. If there are a dozen files to be removed, this means a second-long delay and 'too long WAL write' warnings. To fix this issue, let's make WAL garbage collection fully asynchronous. Simply submit a job to eio and assume it will successfully complete sooner or later. This means that if unlink() fails for some reason, we will log an error and never retry file removal until the server is restarted. Not a big deal. We can live with it assuming unlink() doesn't normally fail. Closes #3938
-
Vladimir Davydov authored
We build the checkpoint list from the list of memtx snap files. So to ensure that it is always possible to recover from any checkpoint present in box.info.gc() output, we abort garbage collection if we fail to unlink a snap file. This introduces extra complexity to the garbage collection code, which makes it difficult to make WAL file removal fully asynchronous. Actually, it looks like we are being way too overcautious here, because unlink() doesn't normally fail so an error while removing a snap file is highly unlikely to occur. Besides, even if it happens, it still won't be critical, because we never delete the last checkpoint, which is usually used for backups/recovery. So let's simplify the code by removing that check. Needed for #3938
-
Kirill Yukhin authored
Since under heavy load with SQL queries ephemeral spaces might be used extensively, it is possible to run out of tuple_formats for such spaces. This occurs because a tuple_format is not immediately deleted when an ephemeral space is dropped. Its removal is postponed instead and triggered only when tuple memory is exhausted. Since there's no way to alter an ephemeral space's format, let's re-use formats for multiple ephemeral spaces in case they're identical. Closes #3924
-
- Jan 24, 2019
-
-
Kirill Yukhin authored
This is a trivial patch which sets the error kind if an ephemeral space cannot be created due to Tarantool's backend (e.g. there's no more memory or formats available).
-
Kirill Yukhin authored
Before the patch, when an ephemeral space was created, the is_temporary flag was set only after the space was actually created, which in turn led to the corresponding flag of tuple_format being set to `false`. So, heavy load using ephemeral spaces (almost any SQL query) combined with snapshotting at the same time might lead to OOM, since tuples of ephemeral spaces were not marked as temporary and were not gc-ed. The patch sets the flag in the space definition.
-
Kirill Yukhin authored
There were three extra fields of tuple_format which were set up after it was created. Fix that by extending the tuple_format constructor with three new arguments: engine, is_temporary, exact_field_count.
-
Vladimir Davydov authored
Currently, if we encounter an unknown key while parsing a .run, .index, or .vylog file we raise an error. As a result, if we add a new key to any of those entities, we will break forward compatibility although there's actually no reason for that. To avoid that, let's silently ignore unknown keys, as we do in case of xrow header keys.
-
Vladimir Davydov authored
Upon LSM tree dump completion, we iterate over all ranges of the LSM tree to update their priority and their position in the compaction heap. Since typically we need to update all ranges, we'd better use the update_all heap method instead of updating the heap entries one by one.
-
Nikita Pettik authored
SQLite discards the type and collation of the IN operator when it comes with only one operand. This leads to different results between a straight comparison using the '=' operator and IN:
SELECT x FROM t1 WHERE x IN (1.0); -- Result is empty set
SELECT x FROM t1 WHERE x = 1.0;    -- ['1']
Let's remove this strange ignorance and always take into consideration the types and collations of the operands. Closes #3934
-
Alexander Turenko authored
* Fixed wait_vclock() LSN problem with nil handling (#3895). * Enabled HangWatcher under --long. * Show the result file for a hung test once at the end. * Show a diff against the result file for a hung test.
-
- Jan 16, 2019
-
-
Vladimir Davydov authored
In order to estimate the space amplification of a vinyl database, we need to know the size of data stored at the last LSM tree level. So this patch adds such a counter both per index and globally. Per-index it is reported under disk.last_level, in rows, bytes, bytes after compression, and pages, just like any other disk counter. Globally it is reported in bytes only, under disk.data_compacted. Note, to be consistent with disk.data, it doesn't include the last level of secondary indexes.
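A sketch of where the new counters are exposed, assuming a vinyl space test with a primary index pk (the per-row/byte breakdown follows the other disk counters):
```
print(box.stat.vinyl().disk.data_compacted)           -- global, in bytes
last = box.space.test.index.pk:stat().disk.last_level -- per-index counter
print(last.rows, last.bytes, last.bytes_compressed, last.pages)
```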
-
Vladimir Davydov authored
This patch adds dump_time and compaction_time to the scheduler section of global vinyl statistics, and disk.dump.time and disk.compaction.time to per-index statistics. They report the total time spent doing dump and compaction tasks, respectively, and can be useful for estimating the average disk write rate, which is required for compaction-aware throttling.
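A sketch of reading the new timing counters, again assuming a vinyl space test with a primary index pk:
```
vy = box.stat.vinyl()
print(vy.scheduler.dump_time, vy.scheduler.compaction_time)  -- global totals
istat = box.space.test.index.pk:stat()
print(istat.disk.dump.time, istat.disk.compaction.time)      -- per-index totals
```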
-