- Apr 05, 2017
-
-
lenkis authored
Closes #1488
-
Roman Tsisyk authored
Fixes #2190
-
- Apr 04, 2017
-
-
Georgy Kirichenko authored
Yield to the event loop after every 0.1M restored snapshot rows. Issue #2108
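A minimal sketch of the idea, not the actual recovery code: apply_next_row() is a hypothetical helper, while fiber_sleep() is the real Tarantool call that lets other fibers and the event loop run.

    #include <stdint.h>
    #include "fiber.h"   /* fiber_sleep(), from the Tarantool core */

    /* Hypothetical helper: applies one snapshot row, returns 0 at end of snapshot. */
    int apply_next_row(void);

    static void
    restore_snapshot(void)
    {
        for (uint64_t n = 1; apply_next_row() != 0; n++) {
            if (n % 100000 == 0)
                fiber_sleep(0.0);   /* yield to the event loop every 0.1M rows */
        }
    }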
-
Konstantin Osipov authored
Vinyl expects entry->res to contain the correct signature of the transaction. On success we used to sloppily assign it some non-zero signature of the last committed transaction in the batch, whereas vinyl expects an exact value that grows monotonically with each transaction. Optimistically promote the writer vclock with the replica vclock before the write, in the hope that the same row is never applied twice. To be covered with a test and addressed in a separate patch (gh-2283).
-
Georgy Kirichenko authored
Fix error code and error message for big tuples. Fixed #2236
-
Ilya authored
Fixes #2092
-
Roman Tsisyk authored
Use luaL_checkuint64()/luaL_pushuint64() to work with uint64_t values. Fixes #2096
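As a hedged illustration of the intended usage (these helpers live in the Tarantool tree, around lua/utils.h; check the exact header and prototypes there), a Lua/C function that round-trips a 64-bit value without losing precision to a double:

    #include <lua.h>
    #include <stdint.h>
    #include "lua/utils.h"   /* luaL_checkuint64() / luaL_pushuint64() */

    /* Hypothetical binding: reads argument 1 as uint64_t (Lua number or
     * 64-bit cdata) and pushes it back as a uint64_t cdata. */
    static int
    lbox_echo_u64(struct lua_State *L)
    {
        uint64_t value = luaL_checkuint64(L, 1);
        luaL_pushuint64(L, value);
        return 1;
    }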
-
- Apr 03, 2017
-
-
Vladislav Shpilevoy authored
The dump of an index-wide run can start while compaction of one or more ranges is in progress (see #2209). If we reset max_dump_size and compact_priority in compact_complete, the following can happen:

  compact_prio = x    compact_prio = y > x    compact_prio = 0
         |                     |                      |
         |                     |                      |  => y is lost.
         +---------------------|----------------------+
         |                  compact                   |
         +---------------------|----------------------+
          +------------+
          |    dump    |
          +------------+

Reset compact_priority and max_dump_size in compact_new() and restore the saved values on abort. Needed for #2209
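A schematic sketch of the save-and-restore idea under assumed names (the structs and functions below are illustrative, not the actual vinyl source): the priority consumed by a task is remembered and handed back on abort, so a bump that arrived while the task ran is not clobbered.

    #include <stdint.h>

    struct vy_range { uint32_t compact_priority; };   /* hypothetical shape */

    struct compact_task {
        struct vy_range *range;
        uint32_t saved_priority;
    };

    static void
    compact_new(struct compact_task *task, struct vy_range *range)
    {
        task->range = range;
        task->saved_priority = range->compact_priority;
        range->compact_priority = 0;   /* consume the priority up front */
    }

    static void
    compact_abort(struct compact_task *task)
    {
        /* restore on abort, but keep a larger concurrent bump */
        if (task->range->compact_priority < task->saved_priority)
            task->range->compact_priority = task->saved_priority;
    }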
-
Vladislav Shpilevoy authored
For every page except the last we use its min_key as the left border and the next page's min_key as the right border. The last page has no right neighbour, so there is no way to determine its bounds. This patch adds a right border for the last page to vy_task. It is needed to correctly split index-wide runs into ranges during compaction (#2209).

  Index-wide run
  +-------------------------------------------+
  | page 1 | page 2 | page 3 |       | page N |  <--- max key
  +-------------------------------------------+

  +-----------+-------------+---------------------+-------------+---------+
  |  range 1  |   range 2   |       range 3       |   range 4   | range 5 |
  +-----------+-------------+---------------------+-------------+---------+

Page N uses dump_task.max_written_key as its right border.
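The rule can be written down as a small helper; the names are hypothetical, only the bound selection mirrors the description above.

    /* Hypothetical helper: key bounds of page i in a run with n pages. */
    static void
    page_key_bounds(const char **page_min_key, int n, int i,
                    const char *max_written_key,
                    const char **begin, const char **end)
    {
        *begin = page_min_key[i];
        *end = (i + 1 < n)
               ? page_min_key[i + 1]    /* next page's min_key */
               : max_written_key;       /* last page: saved by the dump task */
    }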
-
Vladislav Shpilevoy authored
First, the vy_range_get_dump_iterator(), to merge frozen in-memory indexes. Second, the vy_range_get_compact_iterator(), to merge frozen in-memory indexes and some runs. Needed for #2209
-
Vladislav Shpilevoy authored
During the transition to the single-mem-per-index architecture we plan to use special ranges to handle index-wide runs. Add a flag to vy_log to distinguish regular ranges from ranges used for index-wide runs. Needed for #2209
-
Vladislav Shpilevoy authored
-
- Mar 31, 2017
-
-
Roman Tsisyk authored
Follow up d08f494e See #2225
-
Georgy Kirichenko authored
Export an API to create key definitions and tuple formats. Now modules can create custom key defs and custom tuple formats to use the optimized tuple_compare.cc functions for fast tuple comparison. Closes #2225
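A hedged sketch of how a C module might use the exported API; the prototypes (box_key_def_new(), box_tuple_compare(), box_key_def_delete()) and the FIELD_TYPE_UNSIGNED constant are recalled from the 1.7-era module.h and should be verified against the installed header.

    #include "module.h"   /* box_key_def_new(), box_tuple_compare(), ... */

    /* Compare two tuples by their first (unsigned) field using a custom key def. */
    static int
    compare_by_first_field(box_tuple_t *a, box_tuple_t *b)
    {
        uint32_t fields[] = { 0 };                    /* part 0: field no. 0 */
        uint32_t types[]  = { FIELD_TYPE_UNSIGNED };  /* assumed exported constant */
        box_key_def_t *key_def = box_key_def_new(fields, types, 1);
        if (key_def == NULL)
            return -1;
        int rc = box_tuple_compare(a, b, key_def);    /* optimized comparator */
        box_key_def_delete(key_def);
        return rc;
    }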
-
Roman Tsisyk authored
Follow up the previous commit.
-
Roman Tsisyk authored
vy_log_record.index_id is actually the LSN from the time of index creation, whereas vy_log_record.iid is the original index_id from the data dictionary. Rename the misleading index_id to index_lsn to match what it actually stores.
-
Vladimir Davydov authored
Instead of providing high-level helpers (vy_collect_garbage(), vy_log_relay(), vy_log_backup()), expose vy_recovery API. This eliminates encapsulation issues (like vy_log having to know how to format run paths).
-
Vladimir Davydov authored
Both of the methods need it, but instead of receiving it as an argument, they store it somewhere within the engine. This eases extraction of vy_recovery from vy_log, done by the next patch.
-
Vladimir Davydov authored
vy_recovery_new() reads xlog, which blocks the current fiber, so we call it from a coeio task. Let's make it use coeio internally.
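The same pattern is available to modules through coio_call(); below is a minimal sketch (blocking_read_log() is hypothetical) of running a blocking read in a worker thread so that only the calling fiber is suspended, not the whole event loop.

    #include <stdarg.h>
    #include <sys/types.h>
    #include "module.h"   /* coio_call() */

    /* Hypothetical worker: runs in a coio thread and may block freely. */
    static ssize_t
    blocking_read_log(va_list ap)
    {
        const char *path = va_arg(ap, const char *);
        (void)path;
        /* ... open, read and parse the log file here ... */
        return 0;
    }

    static int
    recovery_load(const char *path)
    {
        /* coio_call() suspends this fiber until the worker returns. */
        return (int)coio_call(blocking_read_log, path);
    }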
-
Vladimir Davydov authored
Rename *.xctl to *.vylog
-
Vladimir Davydov authored
Moving vylog to the core and naming it xctl in order to store metadata coming from all engines in it turned out to be a premature move, because we don't exactly know what format to use, neither do we have a proper infrastructure yet. Move it back for now. We will get back to it once we figure out how to implement replication groups.
-
Vladimir Davydov authored
Introduce Engine::collectGarbage() and backup() callbacks for vinyl, and call xctl_collect_garbage() and xctl_backup() from them. Also, move xctl_init(), xctl_free(), xctl_{begin,end}_recovery(), xctl_rotate() to vinyl. Needed to turn xctl into vinyl-private metadata log.
-
Vladimir Davydov authored
Introduce Engine::backup() callback and implement it for memtx. Needed to turn xctl into vinyl-private metadata log.
-
Ilya authored
Rewrite the test case in Lua. Fixes #1667
-
Roman Tsisyk authored
* Add comments to all functions and enum members.
* Add XXX_name() functions to get enum names by a code and refactor Lua xlog reader.
* Use NULL instead of "" for missing values in XXX_type_strs to fix Lua xlog reader.
* Add missing strings to iproto_key_name_strs[].
* Rename row_index into page_index.
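The naming/NULL convention looks roughly like this (the enum and table below are made up for illustration, not the actual iproto definitions):

    #include <stddef.h>

    enum demo_key { DEMO_A = 0, DEMO_B = 1, DEMO_UNUSED = 2, DEMO_C = 3, demo_key_MAX };

    static const char *demo_key_strs[] = {
        [DEMO_A] = "A",
        [DEMO_B] = "B",
        [DEMO_UNUSED] = NULL,   /* NULL instead of "" marks a real gap */
        [DEMO_C] = "C",
    };

    /* demo_key_name(): safe lookup that tolerates gaps and bad codes. */
    static inline const char *
    demo_key_name(unsigned code)
    {
        if (code >= demo_key_MAX)
            return NULL;
        return demo_key_strs[code];
    }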
-
- Mar 30, 2017
-
-
Georgy Kirichenko authored
Remove map-in-a-map-in-a-map layout and use xrow->body instead. See #2100
-
Roman Tsisyk authored
Follow up b3b89649
-
Roman Tsisyk authored
-
Roman Tsisyk authored
Follow up 987b20b5 Closes #2258
-
Roman Tsisyk authored
Fixes #2259
-
Alexandr Lyapunov authored
Follow up #2256
-
- Mar 29, 2017
-
-
Ilya authored
Rewrite the test using TAP to avoid interference from test-run. Fixes #2198
-
Alexandr Lyapunov authored
Fixes #2257
-
Georgy Kirichenko authored
Now struct key_def contains only the key definition: field numbers with types and format-specific comparators. struct index_def contains the corresponding key definition plus index parameters (type, name, etc.). Needed for #2225
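Schematically, the split looks like this; a sketch only, the real structs carry more members.

    #include <stdint.h>

    struct key_def;   /* forward declaration for the comparator typedef */
    typedef int (*tuple_compare_t)(const void *tuple_a, const void *tuple_b,
                                   const struct key_def *key_def);

    struct key_part {
        uint32_t fieldno;   /* field number in the tuple */
        uint32_t type;      /* field type code */
    };

    struct key_def {
        /* only what tuple comparison needs */
        struct key_part *parts;
        uint32_t part_count;
        tuple_compare_t tuple_compare;   /* format-specific comparator */
    };

    struct index_def {
        /* index options layered on top of the key definition */
        uint32_t iid;          /* index id within the space */
        char *name;
        uint32_t type;         /* HASH, TREE, BITSET, RTREE */
        struct key_def *key_def;
    };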
-
Georgy Kirichenko authored
Remove unused functions key_list_add_key and key_list_del_key. Prerequisite for #2225
-
Vladimir Davydov authored
A run can be deleted by a concurrent compaction task while a run iterator is reading it via coeio. Currently, we detect this by checking index and range versions after the coeio task completes, which makes vy_run_iterator dependent on vy_range. Due to this dependency, we can't add a run without a range to an iterator, which is required to make in-memory levels per index rather than per range. Remove this dependency by making vy_run_unref() return a boolean flag set if the run was deleted, and using it to abort the run iterator.
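The new contract can be sketched as follows (hypothetical names; only the boolean-return idea comes from the description above):

    #include <stdbool.h>
    #include <stdlib.h>

    struct run { int refs; };

    /* Returns true if this call dropped the last reference and freed the run. */
    static bool
    run_unref(struct run *run)
    {
        if (--run->refs > 0)
            return false;
        free(run);
        return true;
    }

    /* In the iterator, after the coeio read has completed: */
    static int
    run_iterator_resume(struct run *run)
    {
        if (run_unref(run))
            return -1;   /* run was deleted by concurrent compaction: abort */
        /* ... continue iteration ... */
        return 0;
    }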
-
- Mar 28, 2017
-
-
Ilya authored
Fixes #2063
-
Roman Tsisyk authored
-
- Mar 27, 2017
-
-
Konstantin Osipov authored
* remove dead code
* reduce scope of struct recovery *recovery
-
Ilya authored
Fixes #2198
-