- Apr 18, 2018
-
-
Konstantin Belyavskiy authored
When bootstrapping a new cluster, any replica from the replica set can be chosen as the leader, but if it is 'read-only', bootstrap will fail with an error. Fix this by excluding read-only replicas from voting, which is done by adding access rights information to the IPROTO_REQUEST_VOTE reply. Closes #3257
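A minimal sketch of the idea, not Tarantool's actual bootstrap code: once each candidate reports whether it is read-only in its vote reply, the leader is chosen only among writable replicas. The names below (ballot, is_ro, vclock_sum) are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified vote reply: whether the replica is writable and a
 * rough measure of how much data it already has. */
struct ballot {
	bool is_ro;
	unsigned vclock_sum;
};

/* Pick the best candidate among replicas that are not read-only. */
static int
choose_leader(const struct ballot *ballots, size_t count)
{
	int leader = -1;
	for (size_t i = 0; i < count; i++) {
		if (ballots[i].is_ro)
			continue; /* a read-only replica cannot bootstrap */
		if (leader < 0 ||
		    ballots[i].vclock_sum > ballots[leader].vclock_sum)
			leader = (int)i;
	}
	return leader; /* -1 means no writable candidate found */
}
```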
-
- Apr 11, 2018
-
-
Georgy Kirichenko authored
An earlier refactoring was incorrect: it missed a rename.
-
Arseny Antonov authored
The reason for the failures is the TLSv1.0/TLSv1.1 brownout on the PyPI side; see [1] for more information.

[1]: pypa/packaging-problems#130
-
- Apr 10, 2018
-
-
Arseny Antonov authored
* Added new coveralls options to sync repo
* Pass travis job to coverage docker
-
- Apr 09, 2018
-
-
Konstantin Belyavskiy authored
If 'box.cfg.read_only' is false, 'replication' lists at least one replica (other than itself), none of them is available at the time of box.cfg execution, and replication_connect_quorum is set to zero, the master displays 'orphan' status instead of 'running', since the logic which changes this state is executed only after a successful connection. Closes #3278
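A simplified sketch of the intended status logic, with illustrative names (replicaset_status, connect_quorum) that are assumptions rather than the real box code: reaching the configured quorum, including a quorum of zero, should be enough to leave 'orphan' and report 'running'.

```c
enum instance_status { STATUS_ORPHAN, STATUS_RUNNING };

static enum instance_status
replicaset_status(int connected_replicas, int connect_quorum)
{
	/* With a quorum of zero no connection is required at all,
	 * so the instance may leave 'orphan' right at box.cfg time. */
	return connected_replicas >= connect_quorum ?
	       STATUS_RUNNING : STATUS_ORPHAN;
}
```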
-
- Apr 07, 2018
-
-
Vladimir Davydov authored
If a fiber waiting for a read task to complete is cancelled, it leaves the read iterator immediately, leaving the read task pending. If the index is dropped before the read task completes, the task will attempt to dereference a deleted run upon completion:

 0 0x560b4007dbbc in print_backtrace+9
 1 0x560b3ff80a1d in _ZL12sig_fatal_cbiP9siginfo_tPv+1e7
 2 0x7f52b09190c0 in __restore_rt+0
 3 0x7f52af6ea30a in bzero+5a
 4 0x560b3ffc7a99 in mempool_free+2a
 5 0x560b3ffcaeb7 in vy_page_read_cb_free+47
 6 0x560b400806a2 in cbus_call_done+3f
 7 0x560b400805ea in cmsg_deliver+30
 8 0x560b40080e4b in cbus_process+51
 9 0x560b4003046b in _ZL10tx_prio_cbP7ev_loopP10ev_watcheri+2b
10 0x560b4023d86e in ev_invoke_pending+ca
11 0x560b4023e772 in ev_run+5a0
12 0x560b3ff822dc in main+5ed
13 0x7f52af6862b1 in __libc_start_main+f1
14 0x560b3ff801da in _start+2a
15 (nil) in +2a

Fix this by elevating the run reference counter for each read task. Note that we currently use vy_run::refs not only as a reference counter, but also as a counter of slices created for the run - see how it is compared to vy_run::compacted_slice_count in vy_task_compact_complete(). Obviously, this isn't going to work anymore. Now we need to count slices created per run in a separate counter, vy_run::slice_count. Anyway, abusing the reference counter to count slices was a rather dubious hack, and it's good to finally get rid of it.
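A minimal self-contained sketch of the reference-counting idea, assuming simplified struct layouts and names (vy_run_ref/vy_run_unref and a bare read_task are illustrative, not the actual vinyl code): the run stays pinned for the whole lifetime of the asynchronous read task, so dropping the index cannot free it from under a pending task.

```c
#include <assert.h>
#include <stdlib.h>

struct vy_run {
	int refs;        /* reference counter */
	int slice_count; /* slices created for this run (separate counter) */
};

static void
vy_run_ref(struct vy_run *run)
{
	run->refs++;
}

static void
vy_run_unref(struct vy_run *run)
{
	assert(run->refs > 0);
	if (--run->refs == 0)
		free(run);
}

struct read_task {
	struct vy_run *run;
};

static struct read_task *
read_task_new(struct vy_run *run)
{
	struct read_task *task = malloc(sizeof(*task));
	if (task == NULL)
		return NULL;
	task->run = run;
	vy_run_ref(run); /* pin the run while the task is in flight */
	return task;
}

static void
read_task_delete(struct read_task *task)
{
	vy_run_unref(task->run); /* safe even if the index was dropped */
	free(task);
}
```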
-
Vladimir Davydov authored
We use ERRINJ_DOUBLE for all other timeout injections. This makes them more flexible as we can inject an arbitrary timeout in tests, not just enable some hard-coded timeout. Besides, it makes tests easier to follow. So let's use ERRINJ_DOUBLE for ERRINJ_VY_READ_PAGE_TIMEOUT too.
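A sketch of what a double-valued injection buys, under assumed names (errinj, dparam) rather than the actual error-injection machinery: the test can set any delay it likes instead of merely switching on a hard-coded one.

```c
#include <unistd.h>

struct errinj {
	double dparam; /* injected timeout in seconds, 0 = disabled */
};

static void
maybe_inject_read_delay(const struct errinj *inj)
{
	if (inj->dparam > 0)
		usleep((useconds_t)(inj->dparam * 1e6));
}
```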
-
Vladimir Davydov authored
If a space has no indexes, index_find() will return NULL, which will be happily dereferenced by on_replace_dd_sequence(). Looks like this bug goes back to the time when we made index_find() exception-free and introduced index_find_xc() wrapper. Fix it and add a test case.
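A self-contained sketch with simplified signatures (not the real box code): the point of the fix is simply to check the index_find() result for NULL before using it.

```c
#include <stddef.h>

struct index { unsigned id; };

struct space {
	struct index **indexes;
	unsigned index_count;
};

/* Returns NULL when the space has no index with the given id. */
static struct index *
index_find(struct space *space, unsigned id)
{
	for (unsigned i = 0; i < space->index_count; i++) {
		if (space->indexes[i]->id == id)
			return space->indexes[i];
	}
	return NULL;
}

static int
check_sequence_space(struct space *space)
{
	struct index *pk = index_find(space, 0);
	if (pk == NULL)
		return -1; /* no indexes at all: report an error, don't crash */
	/* ... safe to use pk below ... */
	return 0;
}
```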
-
- Apr 05, 2018
-
-
Ilya Markov authored
* Stop rewriting the format of the default logger when the syslog option is used.
* Add parsing of the facility option and use the parsed value when formatting messages according to RFC 3164. The possible values and the default value of the syslog facility are taken from nginx (https://nginx.ru/en/docs/syslog.html).
* Move the initialization of the logger type and format function before the initialization of the descriptor in log_XXX_init, so that the format function of the syslog logger can be tested.
Closes gh-3244.
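A small self-contained example of the RFC 3164 framing involved (not the actual Tarantool formatter; the facility value and tag below are illustrative): the priority is facility * 8 + severity and is prepended to the message in angle brackets.

```c
#include <stdio.h>

enum {
	SYSLOG_FACILITY_LOCAL7 = 23, /* illustrative default */
	SYSLOG_SEVERITY_INFO = 6,
};

int
main(void)
{
	int facility = SYSLOG_FACILITY_LOCAL7; /* would come from the option */
	int priority = facility * 8 + SYSLOG_SEVERITY_INFO;
	/* <PRI>TAG: MSG is the minimal RFC 3164 shape shown here. */
	printf("<%d>tarantool: hello from syslog\n", priority);
	return 0;
}
```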
-
- Apr 04, 2018
-
-
Alexander Turenko authored
Filed gh-3311 to remove this export soon. Fixes #3310.
-
- Apr 03, 2018
-
-
Vladimir Davydov authored
If the size of a transaction is greater than the configured memory limit (box.cfg.vinyl_memory), the transaction will hang on commit for 60 seconds (box.cfg.vinyl_timeout) and then fail with the following error message:

  Timed out waiting for Vinyl memory quota

This is confusing. Let's fail such transactions immediately with an OutOfMemory error. Closes #3291
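A minimal sketch of the early check, with assumed names (quota, limit, used are illustrative, not the actual vinyl quota code): if a single request is larger than the whole quota it can never succeed, so it should fail immediately instead of waiting for vinyl_timeout.

```c
#include <stddef.h>

struct quota {
	size_t limit; /* box.cfg.vinyl_memory */
	size_t used;
};

/* 0 = granted, -1 = can never fit (OutOfMemory), 1 = caller should wait. */
static int
quota_try_use(struct quota *q, size_t size)
{
	if (size > q->limit)
		return -1; /* larger than the whole quota: fail right away */
	if (q->used + size > q->limit)
		return 1;  /* wait up to vinyl_timeout for memory to be freed */
	q->used += size;
	return 0;
}
```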
-
- Apr 02, 2018
-
-
Arseny Antonov authored
-
- Mar 30, 2018
-
-
Konstantin Belyavskiy authored
In case of a sudden power loss, if data was not written to the WAL but was already sent to a remote replica, the local instance can't recover properly and we end up with different datasets. Fix this by using the remote replica's data and an LSN comparison. Based on @GeorgyKirichenko's proposal and @locker's race-free check. Closes #3210
-
Konstantin Belyavskiy authored
Stay in orphan (read-only) mode while the local vclock is lower than the master's to make sure that datasets are the same across the replica set. Update the replication/catch test to reflect the change. Suggested by @kostja. Needed for #3210
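An illustrative, self-contained sketch of the check, with a small fixed-size vclock (the real vclock is indexed by replica id and larger): the instance leaves orphan mode only once no component of the local vclock is behind the master's.

```c
#include <stdbool.h>
#include <stdint.h>

#define VCLOCK_MAX 4 /* illustrative; the real limit is larger */

struct vclock {
	int64_t lsn[VCLOCK_MAX];
};

/* True if every component of local is >= the corresponding master one. */
static bool
vclock_caught_up(const struct vclock *local, const struct vclock *master)
{
	for (int i = 0; i < VCLOCK_MAX; i++) {
		if (local->lsn[i] < master->lsn[i])
			return false; /* still behind: stay orphan */
	}
	return true; /* safe to leave read-only (orphan) mode */
}
```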
-
Vladimir Davydov authored
Closes #3148
-
Vladimir Davydov authored
EV_USE_REALTIME and EV_USE_MONOTONIC, which force libev to use clock_gettime, are enabled automatically on Linux, but not on OS X. We used to forcefully enable them for performance reasons, but this broke compilation on certain OS X versions and so they were disabled by commit d36ba279 ("Fix gh-1777: clock_gettime detected but unavailable in macos"). Today we need these features enabled not just for performance, but also to avoid crashes when the time changes on the host - see issue #2527 and commit a6c87bf9 ("Use ev_monotonic_now/time instead of ev_now/time for timeouts"). Fortunately, we have the cmake-defined macro HAVE_CLOCKGETTIME_DECL, which is set if clock_gettime is available. Let's enable EV_USE_REALTIME and EV_USE_MONOTONIC if this macro is defined. Closes #3299
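As a sketch, the change boils down to conditionally defining the libev feature macros; HAVE_CLOCKGETTIME_DECL is the macro named in the commit message, while the exact header this block lives in is an assumption.

```c
/* Enable libev's clock_gettime-based backends only when cmake
 * actually detected clock_gettime on this platform. */
#ifdef HAVE_CLOCKGETTIME_DECL
# define EV_USE_REALTIME  1
# define EV_USE_MONOTONIC 1
#endif
```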
-
- Mar 29, 2018
-
-
Ilya Markov authored
The bug was that, when logging, we passed to the write function a byte count that could exceed the buffer size. This can happen because the log string is formatted with vsnprintf, which returns the number of bytes that would have been written had the buffer been large enough, not the actual number. Fix this by limiting the number of bytes passed to the write function. Closes #3248
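A standalone illustration of the pitfall being fixed (generic C, not the actual say.c code): vsnprintf returns the length the full message would have had, which can exceed the buffer size, so the value must be clamped before being used as the byte count for write().

```c
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

static void
log_line(int fd, const char *fmt, ...)
{
	char buf[128];
	va_list ap;
	va_start(ap, fmt);
	int total = vsnprintf(buf, sizeof(buf), fmt, ap);
	va_end(ap);
	if (total < 0)
		return;
	/* Clamp: never report more bytes than the buffer actually holds. */
	size_t len = (size_t)total < sizeof(buf) ? (size_t)total
						 : sizeof(buf) - 1;
	(void)write(fd, buf, len);
}

int
main(void)
{
	log_line(STDERR_FILENO, "a long message: %s\n",
		 "truncated safely even if it overflows the buffer");
	return 0;
}
```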
-
Ilya Markov authored
* Refactor tests.
* Add ev_async and fiber_cond for thread-safe log_rotate usage.
Follow up #3015
-
Ilya Markov authored
Fix a race condition in the log_rotate test. The test opened the file that log_rotate is supposed to create and read from it. But since log_rotate is executed in a separate thread, the file may not be created, or the log line may not be written yet, by the time the test opens it. Fix this by waiting for the file to be created and for the line to be read.
-
Kirill Shcherbatov authored
Netbox does not need nullability or collation info, but some customers do. Let's fill index parts with these fields. Fixes #3256
-
- Mar 27, 2018
-
-
Georgy Kirichenko authored
* session_run_on_disconnect_triggers is called only if there are corresponding triggers, so move session_storage_cleanup to session_destroy.
* Fix the session storage cleanup path: use "box.session.aggregate_storage[sid]" instead of "session.aggregate_storage[sid]" (which was wrong).
Fixes #3279
-
- Mar 22, 2018
-
-
Konstantin Osipov authored
-
Alec Larson authored
Empty strings should be ignored, rather than throwing an error. Passing only empty strings (or nothing) to `pathjoin` will return '.', which means the current directory. Every path part passed to `pathjoin` is now converted to a string. The `gsub('/+', '/')` call already does what the removed code did, so avoid the unnecessary work. Simply check if the result equals '/' before removing a trailing '/'; the previous check did extra work for no gain.
-
- Mar 21, 2018
-
-
Vladislav Shpilevoy authored
-
- Mar 20, 2018
-
-
Vladislav Shpilevoy authored
It is possible to discard non-sent responses using a special sequence of requests and yields. In detail: if DML requests yield on commit for too long, there are fast read requests, and the network is saturated, then some non-sent DML responses are discarded. Closes #3255
-
Vladislav Shpilevoy authored
A vinyl space can be altered in such a way that the key definitions of its indexes do not change, but their comparators do, because a space format reset can make some indexed fields optional. To be able to update key definitions in place, they must not be used in a worker thread. So let's copy key_defs for a worker and update the index key definitions in place. An alternative is key_def reference counting, but there are open questions about what to do with key_defs in mems, ranges, iterators, runs, and slices. For now, let's do a hotfix for the crash and refactor later. Closes #3229
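A toy sketch of the approach under simplified assumptions (the real key_def carries comparators and key parts, which are elided here): the worker thread gets its own copy of the key definition, so the tx thread is free to update the original in place.

```c
#include <stdlib.h>
#include <string.h>

/* Heavily simplified stand-in for the real key_def. */
struct key_def {
	unsigned part_count;
	/* comparator pointers and key parts elided */
};

/* Give the caller an independent copy it may hand to another thread. */
static struct key_def *
key_def_dup(const struct key_def *def)
{
	struct key_def *copy = malloc(sizeof(*copy));
	if (copy != NULL)
		memcpy(copy, def, sizeof(*copy));
	return copy;
}
```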
-
Vladislav Shpilevoy authored
Use key_def_delete instead. It is safer to use a single destructor everywhere, in case key_def stops being deletable by a simple free() in the future.
-
Konstantin Belyavskiy authored
Under FreeBSD, the getline prototype is not provided by default due to compatibility problems. Get rid of getline (use fgets instead). Based on @locker's proposal. Closes #3217.
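A portable stand-alone example of the replacement pattern (the buffer size and the stdin source are illustrative choices, not the code touched by the commit): fgets reads into a fixed buffer and behaves the same on FreeBSD.

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
	char line[1024];
	while (fgets(line, sizeof(line), stdin) != NULL) {
		/* Strip the trailing newline that getline also kept. */
		line[strcspn(line, "\n")] = '\0';
		printf("read: %s\n", line);
	}
	return 0;
}
```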
-
Konstantin Belyavskiy authored
-
- Mar 13, 2018
-
-
imarkov authored
log_rotate writes its informational message only in plain format, which is inappropriate when the logger is configured to log in json format. Fix this by replacing write with say_info, which is safe because it is an ev signal callback, not a signal handler. Also fix a say_format bug in json formatting: the log message was invalid in several cases. Closes #2987
-
IlyaMarkovMipt authored
Current log rotation is not async-signal-safe. In order to make it so, refactor signal handling with ev_signal. Log rotation for each logger is performed in a separate coio_task to provide asynchronous and thread-safe execution. Relates to #3015
-
imarkov authored
Remove yielding and waiting for task completion in coio_task_post when the timeout is zero. This patch is inspired by log_rotate's need to post a coio task. That post must not yield, because the implementation of multiple loggers works with a linked list of loggers, which is not fiber-safe.
-
Vladislav Shpilevoy authored
-
Vladislav Shpilevoy authored
When a space format is updated, the new minimal field count must be calculated before the new format is constructed, in order to detect that some fields became optional. Part of #3229
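A hedged illustration of the computation (a simplification; the real logic also takes indexed fields into account): the minimal field count is driven by the last non-nullable field, so it has to be recomputed from the new format definition before the format object is built.

```c
#include <stdbool.h>

/* Minimal tuple field count: 1-based index of the last field that is
 * not nullable; everything after it is optional. */
static unsigned
min_field_count(const bool *is_nullable, unsigned field_count)
{
	unsigned min = 0;
	for (unsigned i = 0; i < field_count; i++) {
		if (!is_nullable[i])
			min = i + 1;
	}
	return min;
}
```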
-
- Mar 11, 2018
-
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
- Mar 07, 2018
-
-
Vladimir Davydov authored
When scanning a secondary index, we actually track each tuple in the transaction manager twice - as a part of the interval read from the secondary index and as a point in the primary index when retrieving the full tuple. This bloats the read set - instead of storing just one interval for a range request, we also store each tuple returned by it, which may amount to thousands. There's no point in this extra tracking, because whenever we change a tuple in the primary index, we also update it in all secondary indexes. So let's remove it to save some memory and CPU cycles. This is an alternative fix for #2534. It should also mitigate #3197.
-
Vladimir Davydov authored
We never use vy_point_lookup directly; instead we open vy_read_iterator, which automatically falls back on vy_point_lookup when looking for an exact match (EQ + full key). Because of this, we can't add a new point-lookup-specific argument (we would have to propagate it through the read iterator, which is ugly). Let's call vy_point_lookup directly when we know that vy_read_iterator would fall back on it anyway.
-
Vladimir Davydov authored
This reverts commit a31c2c10. The commit reverted by this patch forces all autocommit SELECTs to open a read view immediately; as a result they can't update the tuple cache. It turned out that one of our customers intensively uses such SELECTs, and disabling the cache for them results in performance degradation. The reason that commit was introduced in the first place was to avoid read set bloating for big SELECTs (e.g. space.count()): currently we track not only read interval boundaries, but also each tuple fetched from the primary index if it is a secondary index that is being scanned. However, it doesn't seem that we really need to do that - tracking an interval read from a secondary index guarantees that the transaction will be aborted if a tuple returned by the iterator is modified, so there's no need to track individual tuples read from the primary index. That said, let's revert this commit and instead remove point lookup tracking when it is a secondary index that is being scanned (done later in the series).
-