- Mar 13, 2018
-
-
IlyaMarkovMipt authored
Current log rotation is not async signal safe. To make it so, refactor signal handling to use ev_signal. Log rotation for each logger is performed in a separate coio_task to provide asynchronous and thread-safe execution. Relates #3015
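The signal-handling change is internal, but rotation itself is visible from Lua. A minimal sketch, assuming the standard log module and its rotate() helper; after this patch the reopen work runs in a per-logger coio task instead of the signal handler:

```lua
local log = require('log')

-- Log to a file so there is something to rotate.
box.cfg{log = 'tarantool.log'}
log.info('before rotation')

-- After the log file has been moved aside (e.g. by logrotate), ask the
-- server to reopen it. The same path is taken on SIGHUP.
log.rotate()
log.info('after rotation')
```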
-
imarkov authored
Remove yielding and waiting for task completion in coio_task_post() when the timeout is zero. This patch is inspired by the needs of log_rotate when it posts a coio task: the post must not yield there, because the implementation of multiple loggers works with a linked list of loggers, which is not fiber-safe.
-
Vladislav Shpilevoy authored
-
Vladislav Shpilevoy authored
When a space format is updated, the new minimal field count must be calculated before the new format is constructed, to check whether some fields have become optional. Part of #3229
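A minimal Lua sketch of the case this check covers: a format update that turns a trailing field into an optional one (space and field names are illustrative):

```lua
box.cfg{}
local s = box.schema.space.create('test', {if_not_exists = true})
s:create_index('pk', {if_not_exists = true})

-- Both fields are mandatory: the minimal field count is 2.
s:format({{name = 'id', type = 'unsigned'},
          {name = 'note', type = 'string'}})

-- Make the second field optional: the minimal field count drops to 1,
-- which the format update has to detect.
s:format({{name = 'id', type = 'unsigned'},
          {name = 'note', type = 'string', is_nullable = true}})
s:replace{1} -- accepted now that 'note' is optional
```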
-
- Mar 11, 2018
-
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
- Mar 07, 2018
-
-
Vladimir Davydov authored
When scanning a secondary index, we actually track each tuple in the transaction manager twice - as a part of the interval read from the secondary index and as a point in the primary index when retrieving the full tuple. This bloats the read set - instead of storing just one interval for a range request, we also store each tuple returned by it, which may amount to thousands. There's no point in this extra tracking, because whenever we change a tuple in the primary index, we also update it in all secondary indexes. So let's remove it to save some memory and CPU cycles. This is an alternative fix for #2534. It should also mitigate #3197.
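An illustrative Lua sketch of the access pattern this affects (space and index names are made up): a range request served by a vinyl secondary index inside a transaction. Previously both the secondary-index interval and every tuple re-fetched from the primary index were tracked; now only the interval is.

```lua
box.cfg{}
local s = box.schema.space.create('orders', {engine = 'vinyl', if_not_exists = true})
s:create_index('pk', {parts = {1, 'unsigned'}, if_not_exists = true})
s:create_index('by_user', {parts = {2, 'unsigned'}, unique = false,
                           if_not_exists = true})
for i = 1, 1000 do s:replace{i, i % 10, 'payload'} end

box.begin()
-- A single range request via the secondary index: one interval in the
-- read set instead of one entry per returned tuple.
local rows = s.index.by_user:select(5)
box.commit()
```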
-
Vladimir Davydov authored
We never use vy_point_lookup directly; instead we open a vy_read_iterator, which automatically falls back on vy_point_lookup when looking for an exact match (EQ + full key). Due to this we can't add a new point-lookup-specific argument (we would have to propagate it through the read iterator, which is ugly). Let's call vy_point_lookup directly when we know that vy_read_iterator will fall back on it anyway.
-
Vladimir Davydov authored
This reverts commit a31c2c10. The commit reverted by this patch forces all autocommit SELECTs to open a read view immediately; as a result they can't update the tuple cache. It turned out that one of our customers uses such SELECTs intensively, and disabling the cache for them results in performance degradation. The reason why that commit was introduced in the first place was to avoid read set bloating for big SELECTs (e.g. space.count()): currently we track not only read interval boundaries, but also each tuple fetched from the primary index if it is a secondary index that is being scanned. However, it doesn't seem that we really need to do that - tracking an interval read from a secondary index guarantees that if a tuple returned by the iterator is modified, the transaction will be aborted, so there's no need to track individual tuples read from the primary index. With that in mind, let's revert this commit and instead remove point lookup tracking in case it is a secondary index that is being scanned (done later in the series).
-
Vladislav Shpilevoy authored
The first reason for the patch is that vinyl iterators are already virtual, so 'if's on constant index attributes (like index->id) can be replaced by a new next() implementation. Currently next() checks index->id to decide whether a primary index lookup is needed. Let's split next() into two functions, primary_next() and secondary_next(), to remove the 'if'. The second reason is that in #2129 the logic of the secondary index lookup becomes much more complex. For example, there is a rough idea not to add statements to the cache before looking them up in the primary index, because after #2129 any tuple read from a secondary index can be dirty. Needed for #2129
-
Konstantin Osipov authored
-
- Mar 06, 2018
-
-
Georgy Kirichenko authored
In most cases a tarantool instance yields on the test_run connection and the current transaction is rolled back, but sometimes the tarantool console already has more input to execute and the select returns different results. Fixes #3145
-
Vladimir Davydov authored
We don't write empty run files anymore. Remove the dead code.
-
Vladimir Davydov authored
The run iterator uses curr_pos (i.e. page number plus offset) as a pointer to the current position. Whenever it needs to get a statement at curr_pos, it calls vy_run_iterator_read(), which allocates a new statement. It doesn't try to cache the last allocated statement, which results in multiple pointless reallocations of the same statement. For instance, vy_run_iterator_next_key() rereads the current statement, then moves to the next key, then calls vy_run_iterator_find_lsn(), which rereads the current statement again. This is just stupid. To avoid that, let's keep vy_run_iterator->curr_stmt in sync with curr_pos. This simplifies the code quite a bit and makes it more efficient.
-
Vladimir Davydov authored
vy_run_iterator_get() remembers the position of the last statement returned by the iterator in curr_stmt_pos. It then uses it to skip a disk read in case it is called again for the same iterator position. The code is left over from the time when iterators had a public virtual method 'get', which could be called several times without advancing the iterator. Nowadays, vy_run_iterator_get() is never called twice for the same iterator position (check coverity scan), so we can zap this logic.
-
Vladimir Davydov authored
vy_run_iterator_load_page() keeps the two most recently read pages. This makes sense, because we often probe a page for a better match. Keeping two pages rather than just one makes sure we won't throw out the current page if probing fails to find a better match. What doesn't make sense though is the cache promotion logic: we keep promoting the page containing the current key. The comment says:

    /*
     * The cache is at least two pages. Ensure that
     * subsequent read keeps the cur_key in the cache
     * by moving its page to the start of LRU list.
     */
    vy_run_iterator_cache_touch(itr, cur_key_page_no);

The comment is quite misleading. The "cache" contains at most two pages. Proudly calling this travesty of a cache LRU is downright ridiculous. Anyway, touching the current page will simply swap the two cached pages if a key history spans less than two pages, resulting in no performance gain or loss whatsoever. However, if a key history spans more than two pages, it will evict a page that is about to be read. That said, let's get rid of this piece of crap.
-
Vladimir Davydov authored
vy_run_iterator_next_key() has to special-case LE/LT for the first and the last page. This is needed, because this function is used by vy_read_iterator_seek() for starting iteration. Actually, there's no point for vy_read_iterator_seek() to use vy_read_iterator_next_key() - vy_read_iterator_next_pos() + vy_read_iterator_find_lsn() would be enough. Bearing this in mind, simplify vy_run_iterator_next_key().
-
Konstantin Osipov authored
-
- Mar 05, 2018
-
-
Vladislav Shpilevoy authored
Inside docker, the 'Connection refused' error turns into 'Cannot assign requested address' - because of that the netbox test, which searches for 'Connection refused' in the logs, fails. Let's search for both. Follow-up for #3164
-
Vladislav Shpilevoy authored
When a reconnect fails, an error is printed to the log. But reconnect can fail again and again, many times, until the connection is closed. Let's print the first error message at the warn log level and the following ones at the verbose log level, until the error changes. Closes #3175
-
Vladislav Shpilevoy authored
If a connection has reconnect_after > 0, then it is never deleted until it is explicitly closed or reconnect_after is reset. This is because the worker fiber of such a connection holds all references while it yields. Fix it by not waiting for the next reconnection inside the state machine - the protocol_sm() function must not be infinite in case of an error. Closes #3164
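A small net.box sketch of the scenario (address and timeout are arbitrary): with reconnect_after set, the worker fiber used to keep the connection object referenced forever unless it was closed explicitly.

```lua
local net_box = require('net.box')

local conn = net_box.connect('127.0.0.1:3301', {
    reconnect_after = 0.5,   -- seconds between reconnect attempts
    wait_connected = false,  -- don't block if the server is down
})

-- ... use conn ...

-- Without an explicit close (or resetting reconnect_after) the connection
-- could never be garbage-collected; closing it stops the worker fiber.
conn:close()
```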
-
Konstantin Osipov authored
-
imarkov authored
When fio.read is called from multiple fibers, it yields before performing the actual read operation, so an ibuf shared among several fibers may be corrupted and the read data gets mixed up. The fix is to create a new ibuf for each fio.read call in case a buffer is not specified. Closes #3187
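An illustrative sketch of the racy pattern (file paths are arbitrary): two fibers reading concurrently through fio. Each read call now gets its own ibuf when no explicit buffer is passed, so the results can no longer get mixed.

```lua
local fio = require('fio')
local fiber = require('fiber')

local f1 = fio.open('/etc/hostname', {'O_RDONLY'})
local f2 = fio.open('/etc/hosts', {'O_RDONLY'})

local results = {}
local function reader(name, f)
    -- read() yields while the I/O runs in a worker thread
    results[name] = f:read(4096)
end

local a = fiber.new(reader, 'hostname', f1)
local b = fiber.new(reader, 'hosts', f2)
a:set_joinable(true)
b:set_joinable(true)
a:join()
b:join()

f1:close()
f2:close()
```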
-
- Mar 01, 2018
-
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
imarkov authored
* Add a test on using run_triggers inside before_replace. The test verifies that other before_replace triggers within the same space are executed, but other triggers are not.
* Add a test on returning the old tuple in before_replace. The test verifies that other before_replace triggers are executed, but the insertion does not take place and other triggers are ignored (see the sketch below).
Closes #3128
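A small sketch of the second case (space name and values are made up): a before_replace trigger that returns the old tuple cancels the statement, so the replace is silently skipped.

```lua
box.cfg{}
local s = box.schema.space.create('t', {if_not_exists = true})
s:create_index('pk', {if_not_exists = true})

s:before_replace(function(old, new)
    if old ~= nil then
        return old -- keep the old tuple, discard the change
    end
    return new     -- no old tuple: let the insert proceed
end)

s:insert{1, 'first'}    -- inserted, there was no old tuple
s:replace{1, 'second'}  -- skipped, {1, 'first'} is kept
assert(s:get{1}[2] == 'first')
```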
-
Konstantin Belyavskiy authored
Fix a problem: it was impossible to connect to a unix binary socket using authentication. To solve this issue, update our uri parser to support the following schema:
login:password@unix/:/path1/path2/path3
Add tests. Closes #2933
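A sketch of the now-supported URI form (socket path and credentials are made-up examples):

```lua
local net_box = require('net.box')

box.cfg{listen = 'unix/:/tmp/tarantool.sock'}
box.schema.user.create('login', {password = 'password', if_not_exists = true})
box.schema.user.grant('login', 'read,write', 'universe', nil,
                      {if_not_exists = true})

-- Credentials followed by a unix domain socket path.
local conn = net_box.connect('login:password@unix/:/tmp/tarantool.sock')
assert(conn:ping())
conn:close()
```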
-
Konstantin Osipov authored
If curl does not have SSL support, don't panic; honor the CURL_FIND_REQUIRED variable and panic only if it's set. Correct the grammar in the error message.
-
Konstantin Belyavskiy authored
On Mac, several old curl versions don't support SSL by default. Try to check for this and print an error message if so. Closes #3065
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
Closes #2666
-
Vladislav Shpilevoy authored
When a field type is specified as a positional (numbered) key and a name as a named key, the type is ignored. For example, in {name = '<name>', '<type>'} the '<type>' is ignored and the resulting space format contains 'any' regardless of <type>. Fix tuple format field parsing to take this case into account. Closes #2895
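A short sketch of the mixed-key case from the message (space name is made up): the name is passed as a named key and the type as a positional one.

```lua
box.cfg{}
local s = box.schema.space.create('mixed', {if_not_exists = true})
s:create_index('pk', {if_not_exists = true})

-- Name as a named key, type as a positional one.
s:format({{name = 'id', 'unsigned'}})

-- Before the fix the positional type was dropped and the field became 'any';
-- with the fix the declared type is honored.
print(s:format()[1].type) -- expected: 'unsigned'
```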
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Vladimir Davydov authored
The ERRINJ_VY_READ_PAGE and ERRINJ_VY_READ_PAGE_TIMEOUT injections are used by the vinyl/errinj test to check that a page read error is handled properly. The cache checks if these injections are enabled and bails out if so. Since commit a31c2c10 ("vinyl: force read view in iterator in autocommit mode"), this is not necessary, because the cache is not used unless SELECT is called from a transaction, and the above-mentioned test doesn't use transactions. So let's remove the checks. If we ever enable the cache for all SELECTs, we can disable the cache in the test with box.cfg.vinyl_cache instead of using error injections.
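For reference, this is roughly how such injections are toggled from the test suite (debug builds only); the surrounding SELECT logic is omitted.

```lua
local errinj = box.error.injection

errinj.set('ERRINJ_VY_READ_PAGE', true)
-- ... run a SELECT that has to read a page from disk and check that the
-- resulting error is handled properly ...
errinj.set('ERRINJ_VY_READ_PAGE', false)
-- ERRINJ_VY_READ_PAGE_TIMEOUT delays the read instead of failing it.
```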
-
Vladimir Davydov authored
Currently, new entries are added to the cache even if the limit is set to 0, which is obviously incorrect. Fix it. Being able to disable the cache this way might be useful for debugging and/or performance evaluation. Closes #3172
-
Vladimir Davydov authored
Make the box.cfg.vinyl_cache option dynamically configurable. If the new value is less than the current cache usage, the call blocks until enough cache entries have been reclaimed to fit in the new limit.
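A minimal sketch of the two cache-related changes above (sizes are arbitrary): the option can now be changed at runtime, and setting it to 0 really disables the cache.

```lua
box.cfg{vinyl_cache = 128 * 1024 * 1024} -- initial limit: 128 MB

-- Shrink the limit at runtime; if the current usage exceeds the new value,
-- the call blocks until enough cache entries have been reclaimed.
box.cfg{vinyl_cache = 32 * 1024 * 1024}

-- Disable the cache entirely, e.g. for debugging or benchmarking.
box.cfg{vinyl_cache = 0}
```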
-
Konstantin Osipov authored
-