- Jul 13, 2018
-
-
Kirill Yukhin authored
A new commit in third_party/libyaml downgrades the required CMake version.
-
Ivan Kosenko authored
-
- Jul 12, 2018
-
-
Kirill Shcherbatov authored
Need to update tests following the fixup in upstream commit baf636a74b4b6d055d93e2d01366d6097eb82d90 (Author: Tina Müller <cpan2@tinita.de>, Date: Thu Jun 14 19:27:04 2018 +0200): the closing single quote needs to be indented if it's on its own line. Closes #3275.
-
Kirill Yukhin authored
Closes #3275.
-
Vladislav Shpilevoy authored
Found by @ImeevMA
-
- Jul 10, 2018
-
-
Konstantin Belyavskiy authored
The next checkpoint time is set by the formula: period = self.checkpoint_interval + offset, where the offset is defined as follows: offset = random % self.checkpoint_interval. So the offset must be recalculated whenever the new interval is less than the old one. Closes #3370
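The rescheduling rule above can be sketched as follows (a minimal Python illustration; the class and method names are hypothetical, not Tarantool's actual checkpoint daemon code):

```python
import random

class CheckpointDaemon:
    """Illustrative sketch of rescheduling the next checkpoint."""

    def __init__(self, checkpoint_interval):
        self.checkpoint_interval = checkpoint_interval
        # The offset spreads checkpoints of different instances in time.
        self.offset = random.random() * checkpoint_interval

    def set_checkpoint_interval(self, interval):
        # If the interval shrinks, the old offset may exceed it,
        # so the offset must be recalculated.
        if interval < self.checkpoint_interval:
            self.offset = random.random() * interval
        self.checkpoint_interval = interval

    def next_checkpoint_period(self):
        return self.checkpoint_interval + self.offset

d = CheckpointDaemon(100)
d.set_checkpoint_interval(10)   # interval shrank: offset is recomputed
assert 0 <= d.offset < 10
assert 10 <= d.next_checkpoint_period() < 20
```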
-
Kirill Shcherbatov authored
Now it is possible to specify a number in exponential form via all formats allowed by the JSON standard:
json.decode('{"remained_amount":2.0e+3}')
json.decode('{"remained_amount":2.0E+3}')
json.decode('{"remained_amount":2e+3}')
json.decode('{"remained_amount":2E+3}') <-- fixed
Closes #3514.
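For comparison, Python's standard json module accepts the same four exponent spellings that the JSON standard allows, all decoding to the same value:

```python
import json

# All four exponent spellings from the commit message decode to 2000.0.
docs = ('{"remained_amount":2.0e+3}',
        '{"remained_amount":2.0E+3}',
        '{"remained_amount":2e+3}',
        '{"remained_amount":2E+3}')
for doc in docs:
    assert json.loads(doc)["remained_amount"] == 2000.0
```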
-
- Jul 09, 2018
-
-
Serge Petrenko authored
Schema version is used by both clients and internal modules to check whether there were any updates in spaces and indices. While clients only need to be notified when there is a noticeable change, e.g. a space is removed, internal components also need to be notified when something like space:truncate() happens, because even though this operation doesn't change the space id or any of its indices, it creates a new space object, so all the pointers to the old object have to be updated. Currently both clients and internals share the same schema version, which leads to unnecessary updates on the client side.

Fix this by implementing 2 separate counters for internal and public use: schema_state gets updated on every change, including recreation of the same space object, while schema_version is updated only when there are changes noticeable to clients. Introduce a new AlterOp in alter.cc to update the public schema_version. Now all the internals reference schema_state, while all the clients use schema_version. box.internal.schema_version() returns schema_version (the public one).

Closes: #3414
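The two-counter idea can be sketched like this (an illustrative Python model, not Tarantool's C implementation):

```python
class Schema:
    """Sketch of split version counters for internal vs. client use."""

    def __init__(self):
        self.schema_state = 0    # bumped on every space object recreation
        self.schema_version = 0  # bumped only on client-visible changes

    def on_alter(self, client_visible):
        # Internals always need to re-resolve pointers to space objects.
        self.schema_state += 1
        # Clients refetch schema metadata only on noticeable changes.
        if client_visible:
            self.schema_version += 1

s = Schema()
s.on_alter(client_visible=False)  # e.g. space:truncate(): same id, new object
s.on_alter(client_visible=True)   # e.g. a space is removed
assert (s.schema_state, s.schema_version) == (2, 1)
```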
-
- Jul 05, 2018
-
-
Kirill Shcherbatov authored
Fixed the FreeBSD build: there were conflicting `bitset` types declared in lib/bitset and in _cpuset.h, which is part of the pthread_np.h used on FreeBSD. Resolves #3046.
-
Kirill Yukhin authored
After the read-only flag is dropped, a test space is created successfully, and on the next launch its creation will fail, since the space is not dropped. Drop the space. Closes #3507
-
- Jul 04, 2018
-
-
Serge Petrenko authored
box.session.su(user) left the effective user set to `user` after its execution, which made nested calls to it not work. Fixed this by saving the current effective user and restoring it from the save after the sudo execution. This uncovered a bug in box.schema.user.drop(): it had an unnecessary check for the PRIV_REVOKE privilege, which never gets granted to anyone but admin. Also fixed this by adding one extra box.session.su() call. Closes #3090, #3492
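The save-and-restore pattern that makes nested privilege escalation unwind correctly can be sketched generically (a hypothetical Python model; Tarantool's box.session.su is implemented in C/Lua):

```python
from contextlib import contextmanager

class Session:
    def __init__(self, user):
        self.effective_user = user

    @contextmanager
    def su(self, user):
        # Save the current effective user and restore it afterwards,
        # so that nested su() calls unwind correctly.
        saved = self.effective_user
        self.effective_user = user
        try:
            yield
        finally:
            self.effective_user = saved

s = Session("guest")
with s.su("admin"):
    with s.su("alice"):                   # nested call
        assert s.effective_user == "alice"
    assert s.effective_user == "admin"    # restored from the save
assert s.effective_user == "guest"
```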
-
- Jul 03, 2018
-
-
Konstantin Osipov authored
Before this patch, memtx would silently roll back a multi-statement transaction on yield, switching the session to autocommit mode, and would do nothing in case the yield happened in a sub-statement in auto-commit mode. This could lead to nasty side effects, painful to debug, in malformed Lua programs. Fix by adding a special transaction state, aborted, and entering this state in case of an implicit yield. Check what happens when a sub-statement yields. Check that the yield trigger is removed by a rollback. Fixes gh-2631 Fixes gh-2528
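The idea of an explicit aborted state, instead of a silent rollback, can be sketched as a tiny state machine (an illustrative Python model; names are hypothetical):

```python
from enum import Enum, auto

class TxState(Enum):
    ACTIVE = auto()
    ABORTED = auto()    # new state: entered on implicit yield
    COMMITTED = auto()

class Tx:
    def __init__(self):
        self.state = TxState.ACTIVE

    def on_yield(self):
        # Instead of silently rolling back, mark the transaction
        # aborted so the next commit fails loudly.
        if self.state is TxState.ACTIVE:
            self.state = TxState.ABORTED

    def commit(self):
        if self.state is TxState.ABORTED:
            raise RuntimeError("transaction was aborted by yield")
        self.state = TxState.COMMITTED

tx = Tx()
tx.on_yield()
try:
    tx.commit()
    raise AssertionError("commit of an aborted tx must fail")
except RuntimeError:
    pass
```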
-
- Jun 29, 2018
-
-
Konstantin Osipov authored
fiber->on_yield triggers were not invoked in fiber_call(), which meant that a memtx transaction was not rolled back by fiber.create(). Fixes gh-3493
-
- Jun 28, 2018
-
-
Ilya Markov authored
Bug: while parsing HTTP headers, long header names were truncated to zero length, but their values were not ignored. Fix this by adding a max_header_name_length parameter to the http request. If a header name is longer than this value, the header name is truncated to this length. The default value of max_header_name_length is 32. Do some refactoring, renaming long names in http_parser. Closes #3451
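The truncation behavior can be sketched like this (a hypothetical Python illustration; the real parser is the C http_parser):

```python
MAX_HEADER_NAME_LENGTH = 32  # default from the commit message

def parse_header(line, max_name_len=MAX_HEADER_NAME_LENGTH):
    # Truncate an over-long header name to max_name_len instead of
    # dropping it to zero length while still keeping the value
    # (the old buggy behavior).
    name, _, value = line.partition(":")
    return name.strip()[:max_name_len], value.strip()

name, value = parse_header("X-" + "a" * 100 + ": 1")
assert len(name) == 32 and value == "1"
```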
-
Ilya Markov authored
Bug: the header parser validates the HTTP status line and, besides saving the HTTP status, saves valid characters to the header name, which is wrong. Fix this by skipping the status line after validation without saving it as a header. In scope of #3451
-
Vladimir Davydov authored
If tarantool is stopped while writing a snapshot or a vinyl run file, inprogress files will never be removed. Fix this by collecting those files on recovery completion. Original patch by @IlyaMarkovMipt. Reworked by @locker. Closes #3406
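The cleanup described above can be sketched like this (a hypothetical Python illustration of collecting leftover *.inprogress files on recovery completion; Tarantool's actual garbage collection is implemented in C):

```python
import glob
import os
import tempfile

def collect_inprogress(dirname):
    # On recovery completion, remove leftover *.inprogress files that
    # a crashed writer never renamed to their final names.
    removed = []
    for path in glob.glob(os.path.join(dirname, "*.inprogress")):
        os.remove(path)
        removed.append(os.path.basename(path))
    return sorted(removed)

d = tempfile.mkdtemp()
open(os.path.join(d, "00000000000000000010.run.inprogress"), "w").close()
open(os.path.join(d, "00000000000000000010.run"), "w").close()
assert collect_inprogress(d) == ["00000000000000000010.run.inprogress"]
# Completed files are untouched.
assert os.path.exists(os.path.join(d, "00000000000000000010.run"))
```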
-
Ilya Markov authored
In order to log only files that were actually removed, change the log messages from "removing <name of file>" to "removed <name of file>" in the vy_run_remove_files and xdir_collect_garbage functions. Needed for #3406
-
Konstantin Osipov authored
A minor follow up on the fix for gh-3452 (http.client timeout bug)
-
Ilya Markov authored
The current implementation of http.client relies on a fiber_cond which is set after the request is registered, and doesn't consider that the response may be handled before the fiber_cond is waited on. So we may have the following situation:

1. Register the request in libcurl (curl_multi_add_handle in curl_execute).
2. Receive and process the response; fiber_cond_signal on a cond var that no one waits on.
3. fiber_cond_wait on the cond which was already signaled. Wait until the timeout fires.

In this case the user has to wait for the timeout, though the data was received earlier. Fix this by adding an extra in_progress flag to the curl_request struct. Set this flag to true before registering the request in libcurl, and set it to false when the request is finished, before fiber_cond_signal. When the in_progress flag is false, don't wait on the cond variable. Add one error injection. Closes #3452
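The signal-before-wait race and the flag fix can be sketched with ordinary threading primitives (an illustrative Python model of the in_progress idea, not the libcurl/fiber code):

```python
import threading

class Request:
    def __init__(self):
        self.cond = threading.Condition()
        self.in_progress = True   # set before the request is registered

    def finish(self):
        with self.cond:
            self.in_progress = False   # clear BEFORE signaling
            self.cond.notify()

    def wait(self, timeout):
        with self.cond:
            # If the response already arrived, don't wait at all: this
            # avoids sleeping the full timeout on a missed signal.
            if self.in_progress:
                self.cond.wait(timeout)
            return not self.in_progress

r = Request()
r.finish()                  # response handled before anyone waited
assert r.wait(timeout=5)    # returns immediately instead of sleeping 5s
```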
-
- Jun 27, 2018
-
-
Konstantin Osipov authored
schema_version must be passed to perform_request in 1.9
-
Vladislav Shpilevoy authored
When a connection is closed, some long-poll requests may still be in the TX thread with non-discarded input. If a connection is closed, and then its input is discarded, the connection must not try to read new data. The bug was introduced in f4d66dae by me. Closes #3400
-
- Jun 25, 2018
-
-
Vladimir Davydov authored
If called on a unix socket, bind(2) creates a new file, see unix(7). When we stop a unix tcp server, we should remove that file. Currently, we do it from the tcp server fiber, after the server loop is broken, which happens when the socket is closed, see tcp_server_loop(). This opens a time window for another tcp server to reuse the same path:

1. The main fiber starts a tcp server: s = socket.tcp_server('unix/', sock_path, ...).
2. The main fiber stops the server: s:close(). In the server loop, socket_readable? returns no, and the loop breaks.
3. The main fiber starts a new tcp server using the same path as before. This succeeds, because the socket is closed, so tcp_server_bind_addr() cleans up by itself: socket_bind fails with EADDRINUSE, tcp_connect fails with ECONNREFUSED, the dead unix socket is removed with fio.unlink(addr.port), and socket_bind then succeeds.
4. The old server fiber finally runs fio.unlink(addr.port) and deletes the unix socket file now used by the new server.

In particular, the race results in sporadic failures of the app-tap/console test, which restarts a tcp server using the same file path. To fix this issue, let's close the socket after removing the socket file. This is absolutely legit on any UNIX system, and it eliminates the race shown above, because a new server that tries to bind on the same path as the one already used by a dying server will not receive ECONNREFUSED until the socket fd is closed and hence the file is removed.

A note about the app-tap/console test. After this patch is applied, socket.close() takes a little longer for a unix tcp server, because it yields twice: once for removing the socket file and once for closing the socket file descriptor. As a result, the on_disconnect() trigger left over from the previous test case has time to run after the session.type() check. Actually, those triggers have already been tested and we should have cleared them before proceeding to the next test case. So instead of adding two new on_disconnect checks to the test plan, let's clear the triggers before the session.type() test case and remove 3 on_connect and 5 on_auth checks from the test plan. Closes #3168
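The fixed ordering, remove the socket file first, close the descriptor second, can be demonstrated with a plain AF_UNIX socket (a Python sketch assuming a Unix-like system; the path name is arbitrary):

```python
import os
import socket
import tempfile

# Order matters: unlink the socket file first, close the fd second, so
# a new server binding the same path cannot have its freshly created
# file deleted by the dying server.
path = os.path.join(tempfile.mkdtemp(), "srv.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)            # bind(2) creates the file
os.unlink(path)           # 1. remove the socket file
srv.close()               # 2. close the descriptor
assert not os.path.exists(path)
```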
-
Vladislav Shpilevoy authored
Consider this packet:

    msgpack = require('msgpack')
    data = msgpack.encode(18400000000000000000)..'aaaaaaa'

Tarantool interprets 18400000000000000000 as the size of an incoming iproto request and tries, without any checks, to allocate a buffer of that size. It calculates the needed capacity like this:

    capacity = start_value;
    while (capacity < size)
        capacity *= 2;

Here it is possible that on the i-th iteration 'capacity' < 'size', but 'capacity * 2' overflows 64 bits and becomes < 'size' again, so this loop never ends and occupies 100% CPU. Strictly speaking, overflow is undefined behavior; on the original system it led to nullifying 'capacity'. Such a size is improbable for a real packet, but it can appear as a result of parsing an invalid packet whose first bytes accidentally form a valid MessagePack uint. This is how the bug emerged on a real system. Let's restrict the maximal packet size to 2GB. Closes #3464
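The non-terminating doubling loop can be reproduced by simulating 64-bit unsigned wraparound (a Python sketch; the iteration guard stands in for the infinite spin the C code would exhibit):

```python
MASK = (1 << 64) - 1  # simulate 64-bit unsigned arithmetic

def next_capacity_buggy(start, size, max_iters=200):
    # Doubling loop from the commit message: if capacity * 2 wraps
    # around 64 bits, it can drop below size again, so in C the loop
    # never terminates. Here we detect the wrap and bail out.
    capacity = start
    iters = 0
    while capacity < size:
        capacity = (capacity * 2) & MASK
        iters += 1
        if capacity == 0 or iters > max_iters:
            return None  # wrapped to zero: would spin forever in C
    return capacity

# The size from the malicious packet wraps 64-bit capacity to zero.
assert next_capacity_buggy(16384, 18400000000000000000) is None
# A sane size terminates normally.
assert next_capacity_buggy(16384, 1 << 20) == 1 << 20
```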
-
- Jun 14, 2018
-
-
Vladimir Davydov authored
Since tuples stored in temporary spaces are never written to disk, we can always delete them immediately, even when a snapshot is in progress. Closes #3432
-
Vladimir Davydov authored
-
- Jun 08, 2018
-
-
Alexander Turenko authored
It fixes the following error during tarantool installation from packages on Debian / Ubuntu:

```
Unpacking tarantool (1.9.1.23.gacbd91c-1) ...
dpkg: error processing archive /var/cache/apt/archives/tarantool_1.9.1.23.gacbd91c-1_amd64.deb (--unpack):
 trying to overwrite '/lib/systemd/system/tarantool.service', which is also in package tarantool-common 1.9.1.23.gacbd91c-1
```

The problem is that the tarantool.service file was shipped with both the tarantool-common and tarantool packages. It is a regression after 8925b862. The way to avoid installing / enabling the service file within the tarantool package is to pass the `--name` option to dh_systemd_enable but not pass the service file name. In that case dh_systemd_enable does not find the service file and does not enforce its existence. Hopefully there is a less hacky way to do this, but I haven't found one at the moment.
-
Georgy Kirichenko authored
Use the volatile asm modifier to prevent unwanted and awkward optimizations causing a segfault while backtracing.
-
- Jun 07, 2018
-
-
Alexander Turenko authored
* added --verbose to show output of successful TAP13 tests (#73)
* allow calling create_cluster(), drop_cluster() multiple times (#83)
* support configurations (*.cfg files) in core = app tests
* added return_listen_uri = <boolean> option for create_cluster()
* save and print the tarantool log on failure for core = app tests (#87)
-
Alexander Turenko authored
It is necessary for build on Ubuntu Bionic. Debian bugreport: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=881481 Debhelper commit: https://github.com/Debian/debhelper/commit/740c628a1e571acded7e2aac5d6e7058e61da37f
-
lifemaker authored
-
- Jun 02, 2018
-
-
Konstantin Osipov authored
Fix a compiler warning with clang 6
-
- Jun 01, 2018
-
-
Vladimir Davydov authored
The callback invoked upon compaction completion uses checkpoint_last() to determine whether compacted runs may be deleted: if the max LSN stored in a compacted run (run->dump_lsn) is greater than the LSN of the last checkpoint (gc_lsn) then the run doesn't belong to the last checkpoint and hence is safe to delete, see commit 35db70fa ("vinyl: remove runs not referenced by any checkpoint immediately").

The problem is checkpoint_last() isn't synced with vylog rotation - it returns the signature of the last successfully created memtx snapshot and is updated in memtx_engine_commit_checkpoint() after vylog is rotated. If a compaction task completes after vylog is rotated but before the snap file is renamed, it will assume that the compacted runs do not belong to the last checkpoint, although they do (as they have been appended to the rotated vylog), and delete them.

To eliminate this race, let's use the vylog signature instead of the snap signature in vy_task_compact_complete(). Closes #3437
-
- May 31, 2018
-
-
Vladimir Davydov authored
latch_destroy() and fiber_cond_destroy() are basically no-ops. All they do is check that the latch/cond is not in use. When a global latch or cond object is destroyed at exit, it may still have users, and this is OK since we don't stop fibers at exit. In vinyl this results in the following false-positive assertion failures:

    src/latch.h:81: latch_destroy: Assertion `l->owner == NULL' failed.
    src/fiber_cond.c:49: fiber_cond_destroy: Assertion `rlist_empty(&c->waiters)' failed.

Remove the "destruction" of vy_log::latch to suppress the first one. Wake up all fibers waiting on vy_quota::cond before destruction to suppress the second one. Add some test cases. Closes #3412
-
- May 29, 2018
-
-
Georgy Kirichenko authored
Handle the case when instance_uuid and replicaset_uuid are present in box.cfg and have the same values as those already set. Fixes #3421
-
- May 25, 2018
-
-
Konstantin Osipov authored
replication: make replication_connect_timeout dynamic
-
Konstantin Osipov authored
-
Vladimir Davydov authored
replicaset_sync() returns not only when the instance has synchronized to the connected replicas, but also when some replicas have disconnected and the quorum can't be formed any more. Nevertheless, it always prints that sync has been completed. Fix it. See #3422
-
Vladimir Davydov authored
If a replica disconnects while sync is in progress, box.cfg{} may stop syncing, leaving the instance in 'orphan' mode. This will happen if not enough replicas are connected to form a quorum. This makes sense e.g. on a network error, but not when a replica is loading, because in the latter case it should be up and running quite soon. Let's account for replicas that disconnected because they haven't completed initial configuration yet, and continue syncing if connected + loading > quorum. Closes #3422
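The continue-syncing condition can be sketched as a one-liner, using the expression literally as stated in the message (connected + loading > quorum; the actual code may well use >=, this is only an illustration):

```python
def keep_syncing(connected, loading, quorum):
    # Replicas that are merely loading may still complete initial
    # configuration and bring us up past quorum, so don't give up
    # on them yet. Condition taken verbatim from the commit message.
    return connected + loading > quorum

# 2 connected, 2 still loading, quorum of 3: keep syncing.
assert keep_syncing(connected=2, loading=2, quorum=3)
# 1 connected, 0 loading, quorum of 3: quorum is unreachable, stop.
assert not keep_syncing(connected=1, loading=0, quorum=3)
```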
-
Konstantin Belyavskiy authored
Small refactoring: remove 'enum replica_state', since we reuse a subset of the applier state machine's states to check whether we have achieved replication quorum and hence can leave read-only mode.
-