- Oct 05, 2018
-
-
Alexander Turenko authored
Fixed warnings from -Wunused-parameter, -Wunused-variable, -Wunused-but-set-variable. Fixed -Wsometimes-uninitialized on clang in where.c. Removed -Wall -Wextra -Werror from SQL CMakeLists.txt, because it is set by tarantool itself and is redundant. Fixes #3238.
-
Kirill Yukhin authored
-
Alexander Turenko authored
Added the MAKE_BUILD_TYPE=RelWithDebInfoWError option, which means enabling -DNDEBUG=1, -O2 and -Wall -Wextra -Werror. This ensures we have a clean release build without warnings. Fixed the -Wunused-variable and -Wunused-parameter warnings that were found. Part of #3238.
-
AKhatskevich authored
Closes #3696
-
AKhatskevich authored
Part of #3696
-
AKhatskevich authored
Part of #3696
-
Kirill Yukhin authored
An uninitialized variable led to crashes in non-debug modes.
-
- Oct 04, 2018
-
-
Kirill Yukhin authored
In Tarantool, opening and positioning a cursor for writing makes no sense. So refactor the SQL code to:
- Create ephemeral tables without any cursor machinery. A new op-code returns a register containing a plain pointer to the ephemeral space.
- Replace the OpenRead/OpenWrite opcodes with a single IteratorOpen op-code, which establishes a new cursor with the intention of subsequent reads from the space. This opcode accepts both a plain pointer (in the P4 operand) and a register containing a pointer to the ephemeral space (in P3).
- Fix the query scheduler and DML routines thoroughly.
Closes #3182
Part of #2362
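
For illustration, a minimal sketch of how such an opcode might dispatch on its operands. The types and names here are hypothetical stand-ins, not Tarantool's actual VDBE internals:

    /* Hypothetical, simplified model of an IteratorOpen-style opcode. */
    #include <stddef.h>

    struct space;                      /* opaque target space */
    struct mem { struct space *ptr; }; /* VDBE register holding a pointer */

    struct op {
        struct space *p4_space; /* plain pointer operand (P4), or NULL */
        int p3_reg;             /* register index holding a pointer (P3) */
    };

    static struct space *
    iterator_open_space(const struct op *op, struct mem *regs)
    {
        /* Prefer the compile-time pointer in P4; fall back to the
         * run-time pointer stored in register P3, as used for
         * ephemeral spaces created during query execution. */
        if (op->p4_space != NULL)
            return op->p4_space;
        return regs[op->p3_reg].ptr;
    }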
-
- Oct 03, 2018
-
-
Vladislav Shpilevoy authored
Closes #3709
-
Vladimir Davydov authored
-
Olga Arkhangelskaia authored
Patch fixes behavior when a replica tries to connect to the same master more than once. In case it is the initial configuration, we raise an exception. If it is not the initial config, we print the error and disconnect the applier. @locker: minor test cleanup. Closes #3610
-
Vladimir Davydov authored
They are only used to set corresponding members of vy_quota, vy_run_env, and vy_scheduler when vy_env is created. No point in keeping them around all the time.
-
Vladimir Davydov authored
Currently, we create a quota object with the limit maximized, and only set the configured limit when local recovery is complete, so as to make sure that no dump is triggered during recovery. As a result, we have to store the configured limit in vy_env::memory, which looks ugly, because this member is never used afterwards.

Let's introduce a new method vy_quota_enable to enable quota so that we can set the limit right on quota object construction. This implies that we add a boolean flag to vy_quota and only check the limit if it is set.

There's another reason to add such a method. Soon we will implement quota consumption rate limiting. Rate limiting requires a periodic timer that would replenish quota. It only makes sense to start such a timer upon recovery completion, which again leads us to an explicit method for enabling quota.

vy_env::memory will be removed by the following patch along with a few other pointless members of vy_env. Needed for #1862
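
A minimal sketch of the described design. The names follow the message, but the fields and bodies are illustrative, not the actual vinyl code:

    #include <stdbool.h>
    #include <stddef.h>

    struct vy_quota {
        bool enabled;   /* the limit is only checked once quota is enabled */
        size_t limit;   /* configured memory limit, set on construction */
        size_t used;    /* current consumption */
    };

    static void
    vy_quota_enable(struct vy_quota *q)
    {
        /* Called on recovery completion: from now on the limit is
         * enforced, so further consumption may trigger a dump. */
        q->enabled = true;
    }

    static bool
    vy_quota_may_use(const struct vy_quota *q, size_t size)
    {
        /* During recovery (enabled == false) consumption is unlimited. */
        return !q->enabled || q->used + size <= q->limit;
    }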
-
Vladimir Davydov authored
Using fiber_cond as a wait queue isn't very convenient, because:
- It doesn't allow us to put a spuriously woken up fiber back to the same position in the queue where it was, thus violating fairness.
- It doesn't allow us to check whether we actually need to wake up a fiber or it will have to go back to sleep anyway as it needs more memory than currently available.
- It doesn't allow us to implement a multi-queue approach where fibers that have different priorities are put to different queues.
So let's rewrite the wait queue with plain rlist and fiber_yield. Needed for #1862
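
The gist of the change, as a self-contained sketch: the real code uses Tarantool's intrusive rlist and the fiber API, while this only models the queue discipline the message describes:

    #include <stdbool.h>
    #include <stddef.h>

    struct waiter {
        struct waiter *prev, *next; /* intrusive FIFO queue link */
        size_t need;                /* how much memory the fiber wants */
    };

    /* Only resume the head waiter if its request now fits; otherwise
     * leave it in place - something a bare fiber_cond cannot express. */
    static bool
    should_wake(const struct waiter *head, size_t available)
    {
        return head != NULL && head->need <= available;
    }

    /* A spuriously woken fiber can be re-linked at its old position
     * (before its former successor) instead of at the tail,
     * preserving fairness. */
    static void
    requeue_before(struct waiter *w, struct waiter *successor)
    {
        w->next = successor;
        w->prev = successor->prev;
        if (successor->prev != NULL)
            successor->prev->next = w;
        successor->prev = w;
    }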
-
Vladimir Davydov authored
There's a sanity check in vinyl_engine_prepare, which checks if the transaction size is less than the configured limit and fails without waiting for quota if it isn't. Let's move this check to vy_quota_use, because it's really the business of the quota object. This implies that vy_quota_use has to set diag to differentiate this error from a timeout.
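
Roughly, the moved check amounts to the following hedged sketch (the diag call is shown schematically, and the names are illustrative):

    #include <stddef.h>

    /* Sketch only: fail fast if a transaction can never fit under the
     * limit, instead of waiting for quota; a distinct diag error lets
     * the caller tell this apart from a quota timeout. */
    static int
    quota_use(size_t limit, size_t tx_size)
    {
        if (tx_size > limit) {
            /* diag_set(ClientError, ...); -- schematic */
            return -1;
        }
        /* otherwise consume quota, waiting up to the timeout */
        return 0;
    }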
-
Vladimir Davydov authored
The refactoring is targeted at facilitating the introduction of rate limiting within the quota class. It moves code blocks around, factors out some blocks into functions, and improves comments. No functional changes. Needed for #1862
-
Vladimir Davydov authored
It turned out that throttling isn't going to be as simple as maintaining the write rate below the estimated dump bandwidth, because we also need to take into account whether compaction keeps up with dumps. Tracking compaction progress isn't a trivial task, and mixing it into a module responsible for resource limiting, which vy_quota is, doesn't seem to be a good idea. Let's factor out the related code into a separate module and call it vy_regulator.

Currently, the new module only keeps track of the write rate and the dump bandwidth and sets the memory watermark accordingly, but soon we will extend it to configure throttling as well.

Since write rate and dump bandwidth are now a part of the regulator subsystem, this patch renames the 'quota' entry of box.stat.vinyl() to 'regulator'. It also removes 'quota.usage' and 'quota.limit' altogether, because memory usage is reported under 'memory.level0' while the limit can be read from box.cfg.vinyl_memory, and renames 'use_rate' to 'write_rate', because the latter seems to be a more appropriate name.

Needed for #1862
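
The watermark logic the message describes can be illustrated with back-of-the-envelope arithmetic (an assumption-laden sketch; the actual regulator works with smoothed averages):

    #include <stddef.h>

    /* Pick the memory watermark so that a dump started at the watermark
     * finishes before the hard limit is hit: while level0 is being
     * dumped at dump_bandwidth, writers keep filling memory at
     * write_rate. Solving W + write_rate * (W / dump_bandwidth) <= limit
     * gives W <= limit / (1 + write_rate / dump_bandwidth). */
    static size_t
    compute_watermark(size_t limit, size_t write_rate, size_t dump_bandwidth)
    {
        if (write_rate >= dump_bandwidth)
            return 0; /* dump can't keep up - dump as early as possible */
        return (size_t)((double)limit /
                        (1.0 + (double)write_rate / (double)dump_bandwidth));
    }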
-
- Oct 02, 2018
-
-
Vladimir Davydov authored
There are three places where we start the scheduler fiber and enable the configured memory quota limit: local bootstrap, remote bootstrap, and local recovery completion. I'm planning to add more code there, so let's factor it out now.
-
Nikita Pettik authored
Each column may feature a default value, but this value must be constant. Before this patch, it was allowed to do things like:

CREATE TABLE te14 (s1 INT PRIMARY KEY, s2 INT DEFAULT s1);

which results in an assertion fault on insertion. Let's prohibit identifiers (i.e. column names) in the <DEFAULT> clause. Closes #3695
-
- Oct 01, 2018
-
-
Kirill Shcherbatov authored
Decreased the default limit on the number of compound SELECTs to 30 to prevent stack overflow on most clang builds. Introduced a new pragma sql_compound_select_limit to configure this option on the fly. Closes #3382.

@TarantoolBot document
Title: new pragma sql_compound_select_limit
It is now possible to manually set the maximum number of compound SELECTs. The default value is 30, the maximum is 500. Processing requests with a large number of compound SELECTs under a custom limit may cause stack overflow. Setting sql_compound_select_limit to 0 disables the limit altogether. Example:

\set language sql
pragma sql_compound_select_limit=20
-
- Sep 26, 2018
-
-
Vladimir Davydov authored
When replication is restarted with the same replica set configuration (i.e. box.cfg{replication = box.cfg.replication}), there's a chance that an old relay will be still running on the master at the time when a new applier tries to subscribe. In this case the applier will get an error:

main/152/applier/localhost:62649 I> can't join/subscribe
main/152/applier/localhost:62649 xrow.c:891 E> ER_CFG: Incorrect value for option 'replication': duplicate connection with the same replica UUID

Such an error won't stop the applier - it will keep trying to reconnect:

main/152/applier/localhost:62649 I> will retry every 1.00 second

However, it will stop synchronization so that box.cfg() will return without an error, but leave the replica in the orphan mode:

main/151/console/::1:42606 C> failed to synchronize with 1 out of 1 replicas
main/151/console/::1:42606 C> entering orphan mode
main/151/console/::1:42606 I> set 'replication' configuration option to "localhost:62649"

In a second, the stray relay on the master will probably exit and the applier will manage to subscribe so that the replica will leave the orphan mode:

main/152/applier/localhost:62649 C> leaving orphan mode

This is very annoying, because there's no need to enter the orphan mode in this case - we could as well keep trying to synchronize until the applier finally succeeds to subscribe or replication_sync_timeout is triggered.

So this patch makes appliers enter "loading" state on configuration errors, the same state they enter if they detect that bootstrap hasn't finished yet. This guarantees that configuration errors, like the one above, won't break synchronization and leave the user gaping at the unprovoked orphan mode.

Apart from the issue in question (#3636), this patch also fixes spurious replication-py/multi test failures that happened for exactly the same reason (#3692).

Closes #3636
Closes #3692
-
Vladimir Davydov authored
First, we print "will retry every XX second" to the log after an error message only for socket and system errors, although we keep trying to establish a replication connection after configuration errors as well. Let's print this message for those errors too to avoid confusion.

Second, in case we receive an error in reply to a SUBSCRIBE command, we log "can't read row" instead of "can't join/subscribe". This happens because we switch an applier to the SYNC/FOLLOW state before receiving a reply to the SUBSCRIBE command. Fix this by updating the applier state only after successfully subscribing.

Third, we detect duplicate connections coming from the same replica on the master only after sending a reply to the SUBSCRIBE command, that is in relay_subscribe rather than in box_process_subscribe. This results in "can't read row" being printed to the replica's log even though it's actually a SUBSCRIBE error. Fix this by moving the check to where it actually belongs.
-
- Sep 25, 2018
-
-
Serge Petrenko authored
In some cases no-ops are written to the xlog. They have no effect but are needed to bump the lsn. Some time ago (see commit 89e5b784) such ops were made bodiless, and empty-body requests are not handled in xrow_header_decode(). This leads to recovery errors in a special case: when a multi-statement transaction containing no-ops is written to the xlog, upon recovery from such an xlog all data between the end of the no-op and the start of the next transaction becomes the no-op's body, so, effectively, it is ignored. Here's an example of `tarantoolctl cat` output showing this (BODY contains the next request's data):

---
HEADER:
  lsn: 5
  replica_id: 1
  type: NOP
  timestamp: 1536656270.5092
BODY:
  type: 3
  timestamp: 1536656270.5092
  lsn: 6
  replica_id: 1
---
HEADER:
  type: 0
...

This patch handles no-ops correctly in xrow_header_decode(). @locker: refactored the test case so as not to restart the server for a second time. Closes #3678
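
The shape of the fix can be sketched as follows (simplified; the real xrow_header_decode first parses a MsgPack map of header keys, which is what tells it whether a body follows):

    #include <stddef.h>

    struct xrow_sketch {
        const char *body;
        size_t body_len;
    };

    /* After decoding the header, 'pos' points either at this request's
     * body or, for a bodiless request such as NOP, already at the next
     * request. Only attach a body if the header said one follows;
     * otherwise leave it empty instead of swallowing foreign data. */
    static void
    decode_body(struct xrow_sketch *row, const char *pos,
                const char *end, int has_body)
    {
        if (has_body && pos < end) {
            row->body = pos;
            row->body_len = (size_t)(end - pos);
        } else {
            row->body = NULL;
            row->body_len = 0;
        }
    }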
-
Serge Petrenko authored
If space.before_replace returns the old tuple, the operation turns into no-op, but is still written to WAL as IPROTO_NOP for the sake of replication. Such a request doesn't have a body, and tarantoolctl failed to parse such requests in `tarantoolctl cat` and `tarantoolctl play`. Fix this by checking whether a request has a body. Also skip such requests in `play`, since they have no effect, and, while we're at it, make sure `play` and `cat` do not read excess rows with lsn>=to in case these rows are skipped. Closes #3675
-
Vladimir Davydov authored
-
Vladimir Davydov authored
The parser must not deal with internal Tarantool objects, such as space, index, or key_def, directly, because it (a) violates encapsulation, turning the code into a convoluted mess, and (b) makes it impossible to run the parser and VDBE on different instances, which might come in handy for a cluster SQL implementation. Instead, it should store plain names in the generated VDBE code. It may use objects whose sole purpose is to represent object definitions, such as key_part_def or field_def, though.

This patch does a tiny step in this direction. It replaces key_def with sql_key_info in VDBE arguments. The new structure is a trivial wrapper around an array of key_part_def. It is ref-countable, just like its predecessor KeyInfo, so as to avoid unnecessary memory duplication. Since key_def spread roots deeply into the parser implementation, the new structure has two extra methods:

- sql_key_info_new_from_key_def
- sql_key_info_to_key_def

so that it can be converted to/from a key definition. Note, the latter caches the result so as not to create a new key definition on subsequent function calls.

This partially undoes the work done by commit 501c6e28 ("sql: replace KeyInfo with key_def"). The reason why I'm doing this now is that I want to dispose of the key_def_set_part function, which is used extensively by the parser, because the latter stores key_def directly in a VDBE op. This function has vague semantics and rather obscures construction of a key definition. It will get especially nasty once JSON path indexes are introduced in the Tarantool core. The new struct, sql_key_info, allows us to get rid of most of those calls.

Part of #3319
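
Shape-wise, the new structure can be pictured like this. This is a sketch assembled from the message; the field layout and the key_def_new_from_parts constructor are hypothetical stand-ins:

    #include <stddef.h>

    struct key_def;       /* opaque Tarantool key definition */
    struct key_part_def;  /* plain key part definition */

    struct sql_key_info {
        int refs;                  /* ref-countable, like KeyInfo was */
        struct key_def *key_def;   /* cached conversion result */
        unsigned part_count;
        struct key_part_def *parts;
    };

    /* Hypothetical stand-in for the real constructor. */
    extern struct key_def *
    key_def_new_from_parts(const struct key_part_def *parts, unsigned n);

    /* Convert to a key_def lazily and cache the result so repeated
     * calls don't build a new definition each time. */
    struct key_def *
    sql_key_info_to_key_def(struct sql_key_info *info)
    {
        if (info->key_def == NULL)
            info->key_def = key_def_new_from_parts(info->parts,
                                                   info->part_count);
        return info->key_def;
    }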
-
- Sep 24, 2018
-
-
Roman Khabibov authored
Closes: #3518.
-
Roman Khabibov authored
Part of #3518
-
- Sep 22, 2018
-
-
Vladimir Davydov authored
There are a few tests that create files in the system tmp directory and don't delete them. This is contemptible - tests shouldn't leave any traces on the host. Fix those tests. Closes #3688
-
Vladimir Davydov authored
fio.rmtree should use lstat instead of stat, otherwise it won't be able to remove a directory if there's a symbolic link pointing to a non-existent file. The test case will be added to app/fio.test.lua by the following commit, which is aimed at cleaning up the /tmp directory after running tests.
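
The underlying difference is standard POSIX behavior: stat() follows symlinks and fails with ENOENT on a dangling link, while lstat() reports the link itself. A small standalone demonstration:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct stat st;
        /* Create a symlink pointing at a file that doesn't exist. */
        symlink("/no/such/file", "dangling");
        /* stat() follows the link and fails with ENOENT ... */
        if (stat("dangling", &st) != 0)
            perror("stat");
        /* ... but lstat() sees the link itself, so a recursive
         * remover based on it can still unlink the entry. */
        if (lstat("dangling", &st) == 0)
            printf("lstat ok, S_ISLNK=%d\n", S_ISLNK(st.st_mode));
        unlink("dangling");
        return 0;
    }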
-
Vladimir Davydov authored
Due to a missing privilege revocation in box/errinj, box/access_sysview fails if executed after it. Fixes commit af6b554b ("test: remove universal grants from tests").
-
Vladimir Davydov authored
Closes #3311
-
Vladimir Davydov authored
Currently, there are two ways of creating a new key definition object apart from copying (key_def_dup): either use key_def_new_with_parts, which takes definitions of all key parts and returns a ready-to-use key_def, or allocate an empty key_def with key_def_new and then fill it up with key_def_set_part. The latter method is rather awkward: because of its existence key_def_set_part has to detect if all parts have been set and initialize comparators if so. It is only used in schema_init, which could as well use key_def_new_with_parts without making the code any more difficult to read than it is now. That being said, let us:

- Make schema_init use key_def_new_with_parts.
- Delete key_def_new and bequeath its name to key_def_new_with_parts.
- Simplify key_def_set_part: now it only initializes the given part, while comparators are set by the caller once all parts have been set.

These changes should also make it easier to add a json path to key_part.
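
In effect, construction becomes one-shot instead of incremental; schematically, with hypothetical simplified signatures following the message:

    struct key_part_def { unsigned fieldno; /* type, coll_id, ... */ };
    struct key_def;

    /* One-shot constructor: all parts are known up front, so the
     * comparators can be initialized exactly once at the end of
     * construction, instead of key_def_set_part having to detect
     * "all parts set". */
    extern struct key_def *
    key_def_new(const struct key_part_def *parts, unsigned part_count);

    /* e.g. schema_init building a single-part primary key: */
    static struct key_def *
    make_pk_def(void)
    {
        struct key_part_def part = { .fieldno = 0 };
        return key_def_new(&part, 1);
    }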
-
- Sep 21, 2018
-
-
Vladimir Davydov authored
This reverts commit ea3a2b5f. Once we finally implement json path indexes, more fields that are calculated at run time will have to be added to struct key_part, like path hash or field offset. So it was actually a mistake to remove the key_part_def struct, as it will grow more and more different from key_part. Conceptually, having separate key_part_def and key_part is consistent with other structures, e.g. tuple_field and field_def. That said, let's bring key_part_def back. Sorry for the noise.
-
Sergei Voronezhskii authored
Until the bug in #3420 is fixed
-
Vladimir Davydov authored
The only difference between struct key_part_def and struct key_part is that the former stores only the id of a collation while the latter also stores a pointer to speed up tuple comparisons. It isn't worth keeping a separate struct just because of that. Let's use struct key_part everywhere and assume that key_part->coll is NULL if the part is needed solely for storing a decoded key part definition and isn't NULL if it is used for tuple comparisons (i.e. is attached to a key_def).
-
Kirill Shcherbatov authored
Start using the tuple_field_by_part(_raw) routine in *extract, *compare, and *hash functions. This new function uses key_part to retrieve the field data mentioned in the key_part. For now it is just a wrapper around tuple_field_raw, but with the introduction of JSON paths it will work differently. Needed for #1012
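
As a sketch, the wrapper currently reduces to an ordinary field lookup (hypothetical simplified types and signatures, not the real tuple API):

    #include <stdint.h>

    struct tuple;
    struct key_part { uint32_t fieldno; /* JSON path to come later */ };

    /* Existing accessor: field lookup by plain field number. */
    extern const char *
    tuple_field_raw_sketch(const struct tuple *tuple, uint32_t fieldno);

    /* For now just a wrapper; once key_part carries a JSON path, this
     * becomes the single place that descends into the document. */
    static const char *
    tuple_field_by_part_sketch(const struct tuple *tuple,
                               const struct key_part *part)
    {
        return tuple_field_raw_sketch(tuple, part->fieldno);
    }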
-
Kirill Shcherbatov authored
To introduce JSON indexes we need a changeable key_def containing a key_part definition that will store a JSON path, an offset slot, and a slot epoch in the following patches. Needed for #1012
-
Kirill Yukhin authored
-
Kirill Yukhin authored
-