- Apr 10, 2017
-
-
bigbes authored
-
Georgy Kirichenko authored
Add box_tuple_compare_with_key to module api. See #2225
-
- Apr 07, 2017
- Apr 06, 2017
-
-
Konstantin Osipov authored
Bikeshed to simplify the merge of gh-1842; another noisy change extracted from gh-1842. No semantic changes.
-
Alexandr Lyapunov authored
* add comments
* move the transaction management code to a single place
* no semantic changes
-
Vladimir Davydov authored
There's an optimization in vy_range_rotate_mem() which makes it delete the active in-memory tree instead of moving it to the frozen list, provided it's empty. However, since commit 8116f209 ("vinyl: pin in-memory trees on tx prepare") an empty in-memory tree may be pinned by an ongoing transaction, in which case deleting it results in a use-after-free in vy_tx_write().
-
Vladimir Davydov authored
-
Vladislav Shpilevoy authored
-
Vladislav Shpilevoy authored
Test is broken by design.
-
Konstantin Osipov authored
-
Vladimir Davydov authored
Pin each in-memory tree that is going to be modified by a transaction in Engine::prepare; unpin it on commit or rollback. Pinned in-memory trees can't be dumped until they are unpinned: dump/compaction task constructors wait until pin_count drops to 0 before handing the task over to a worker thread. Since the constructors rotate the in-memory tree before waiting, and new transactions can only go to active in-memory trees, they won't wait forever, only until all transactions started before the task was scheduled are over.

This is needed to insert statements into in-memory trees on tx prepare, not on tx commit as is done now.

Note: vy_scheduler->checkpoint_lsn is only set after wal_checkpoint(), while we select the in-memory tree to insert a new statement into in Engine::prepare, i.e. before writing to WAL. Therefore, to guarantee snapshot consistency (i.e. that no statements inserted after WAL rotation make it into the snapshot), we have to change the condition triggering rotation of the in-memory tree upon insertion of a statement from

    mem->min_lsn <= checkpoint_lsn

to

    mem->snapshot_version != snapshot_version

(snapshot_version is increased on snapshot before wal_checkpoint()).
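The pin/unpin scheme described above can be sketched as a simple reference count gating the dump task. This is an illustrative model only, not the actual Tarantool API: the struct, field, and function names below are hypothetical stand-ins.

```c
#include <assert.h>

/* Hypothetical sketch of the pinning scheme: each in-memory tree
 * carries a pin_count; a dump task may only be handed to a worker
 * thread once the tree is rotated (frozen) and pin_count is zero. */
struct mem_tree {
    int pin_count;   /* transactions that prepared into this tree */
    int is_active;   /* cleared when the tree is rotated */
};

/* Called from Engine::prepare. */
static void mem_pin(struct mem_tree *mem)   { mem->pin_count++; }

/* Called on commit or rollback. */
static void mem_unpin(struct mem_tree *mem) { mem->pin_count--; }

/* Rotation freezes the tree; new transactions go to a fresh active
 * tree, so pin_count can only decrease afterwards. */
static void mem_rotate(struct mem_tree *mem) { mem->is_active = 0; }

/* A dump task may start only when the frozen tree is unpinned. */
static int dump_may_start(const struct mem_tree *mem)
{
    return !mem->is_active && mem->pin_count == 0;
}
```

Because rotation happens before the wait and pins only drain after rotation, the wait is bounded by the lifetime of already-started transactions.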
-
Konstantin Osipov authored
The Vinyl transaction manager does not allow a transaction to have lsn 0. This is a prerequisite for the patch for gh-1842 (vy_tx_write() rewrite).
-
Konstantin Osipov authored
-
- Apr 05, 2017
-
-
Konstantin Osipov authored
Make struct recovery_journal reusable and employ it during final join, so the engine sees correct LSNs at that stage. Ensure we properly set up recovery_journal when bootstrapping a replica from a remote master and reading the master's write-ahead log. This is necessary for Vinyl, which expects strictly monotonic transaction signatures on commit at all times.
-
bigbes authored
-
Roman Tsisyk authored
+ Travis CI
+ Telegram
+ Slack
+ Google Groups
-
Ilya authored
Fixes #2261
-
Ilya authored
Rewrite this test using TAP to avoid interference from test-run. Fixes #1849
-
Roman Tsisyk authored
Moved to tarantool/doc repository.
-
Roman Tsisyk authored
-
lenkis authored
Closes #1488
-
Roman Tsisyk authored
Fixes #2190
-
Oleg Kovalev authored
-
- Apr 04, 2017
-
-
Georgy Kirichenko authored
Process the event loop once per 0.1M restored snapshot rows. Issue #2108
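The idea of the change above is to keep a long snapshot-recovery loop from starving the event loop by inserting a yield point every 100000 rows. A minimal sketch, with hypothetical names standing in for the real recovery and fiber primitives:

```c
#include <assert.h>

/* Illustrative sketch: walk row_count snapshot rows and count how
 * many times the loop would hand control back to the event loop.
 * In the real code the yield point would run one event-loop
 * iteration instead of incrementing a counter. */
static long restore_rows(long row_count)
{
    long yields = 0;
    for (long i = 1; i <= row_count; i++) {
        /* ... restore row i from the snapshot ... */
        if (i % 100000 == 0)
            yields++;   /* yield point: let other events be processed */
    }
    return yields;
}
```

With 250000 rows this yields twice (after rows 100000 and 200000), so latency-sensitive events are served at most 0.1M rows apart.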
-
Konstantin Osipov authored
Vinyl expects that entry->res contains a correct signature of the transaction. We used to sloppily assign it some non-zero signature of the last committed transaction in the batch in case of success, but Vinyl expects an exact value, growing monotonically for each next transaction.

Optimistically promote the writer vclock with the replica vclock before the write, in the hope that the same row is never applied twice. To be covered with a test and addressed in a separate patch (gh-2283).
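The signature requirement above can be illustrated with a toy model: instead of stamping every transaction in a committed batch with the same last-committed signature, each one gets an exact, strictly monotonic value. The struct and function names here are hypothetical, not the actual journal code.

```c
#include <assert.h>

/* Sketch of per-transaction commit signatures in a WAL batch. */
struct journal_entry {
    long res;   /* commit signature the engine will see */
};

/* Assign each entry in the batch its own signature, growing by one
 * per transaction, instead of the batch-wide last signature. */
static void batch_assign_signatures(struct journal_entry *entries, int count,
                                    long first_signature)
{
    for (int i = 0; i < count; i++)
        entries[i].res = first_signature + i;  /* exact and monotonic */
}
```

A batch of three transactions starting at signature 100 thus commits as 100, 101, 102, never three copies of 102.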
-
Georgy Kirichenko authored
Fix error code and error message for big tuples. Fixed #2236
-
Ilya authored
Fixes #2092
-
Roman Tsisyk authored
Use luaL_checkuint64()/luaL_pushuint64() to work with uint64_t values. Fixes #2096
-
- Apr 03, 2017
-
-
Vladislav Shpilevoy authored
The dump of an index-wide run can start while compaction of one or more ranges is in progress (see #2209). If we reset max_dump_size and compact_priority in compact_complete, the following can happen:

    compact_prio = x    compact_prio = y > x    compact_prio = 0
           |                     |                     |          => y is lost
    +------|---------------------|---------------------+
    |                        compact                    |
    +------|---------------------|---------------------+
                                 +------------+
                                 |    dump    |
                                 +------------+

Reset compact_priority and max_dump_size in compact_new() and restore the saved values on abort.

Needed for #2209
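The save-on-start/restore-on-abort pattern described above can be sketched as follows. This is an illustrative model with hypothetical names, not the actual vinyl range structure:

```c
#include <assert.h>

/* Sketch of a range whose compaction bookkeeping is saved when a
 * compaction task is created and restored if the task aborts, so a
 * priority bump that races with a concurrent dump is not lost. */
struct range_sketch {
    int  compact_priority;
    long max_dump_size;
    /* values captured in compact_new() */
    int  saved_priority;
    long saved_dump_size;
};

/* Task creation: remember the current values, then reset them so
 * concurrent updates accumulate from a clean slate. */
static void compact_new(struct range_sketch *r)
{
    r->saved_priority  = r->compact_priority;
    r->saved_dump_size = r->max_dump_size;
    r->compact_priority = 0;
    r->max_dump_size    = 0;
}

/* Task abort: put the saved values back. */
static void compact_abort(struct range_sketch *r)
{
    r->compact_priority = r->saved_priority;
    r->max_dump_size    = r->saved_dump_size;
}
```

Resetting in compact_new() rather than compact_complete() means a priority raised to y while the task runs survives in the range, instead of being clobbered to 0 at completion.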
-
Vladislav Shpilevoy authored
For all pages except the last we use a page's min_key as its left border and the min_key of the next page as its right border. The last page doesn't have a right neighbour, so there is no way to determine its bounds. This patch adds a right border for the last page to vy_task. This feature is needed to correctly split index-wide runs into ranges during compaction (#2209).

    Index-wide run
    +-------------------------------------------+
    |  page 1  |  page 2  |  page 3  |  page N  | <--- max key
    +-------------------------------------------+

    +-----------+-------------+---------------------+-------------+---------+
    |  range 1  |   range 2   |       range 3       |   range 4   | range 5 |
    +-----------+-------------+---------------------+-------------+---------+

Page N uses dump_task.max_written_key as its right border.
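The border rule above reduces to one line of logic: the right border of page i is the min_key of page i + 1, with the last page falling back to the run-wide max written key. A minimal sketch with hypothetical names:

```c
#include <assert.h>
#include <string.h>

/* Sketch of page border selection in an index-wide run.
 * page_min_keys holds the min_key of each page in order;
 * max_written_key plays the role of dump_task.max_written_key. */
static const char *
page_right_border(const char **page_min_keys, int page_count,
                  int i, const char *max_written_key)
{
    return i + 1 < page_count ? page_min_keys[i + 1] : max_written_key;
}
```

So for pages with min_keys "a", "f", "m" and max written key "z", page 0 is bounded by ["a", "f") and the last page by ["m", "z"].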
-
Vladislav Shpilevoy authored
First, vy_range_get_dump_iterator(), to merge frozen in-memory indexes. Second, vy_range_get_compact_iterator(), to merge frozen in-memory indexes and some runs. Needed for #2209
-
Vladislav Shpilevoy authored
During the transition to the single-mem-per-index architecture, we plan to use special ranges to handle index-wide runs. Add a flag to vy_log to distinguish between regular ranges and ranges used for index-wide runs. Needed for #2209
-
Vladislav Shpilevoy authored
-
- Mar 31, 2017
-
-
Roman Tsisyk authored
Follow-up to d08f494e. See #2225
-