- Sep 21, 2016
-
Georgy Kirichenko authored
-
Vladimir Davydov authored
- check read() result carefully
- fix buf memory leak
- fix run memory leak in case of read error
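Checking read() carefully generally means retrying on EINTR and looping over short reads. A generic sketch of that pattern (illustrative only, not the actual Tarantool code):

```c
#include <errno.h>
#include <unistd.h>

/* Read exactly `count` bytes unless EOF or a real error occurs:
 * retry on EINTR, loop on short reads, return -1 only on error. */
static ssize_t
read_full(int fd, void *buf, size_t count)
{
    size_t done = 0;
    while (done < count) {
        ssize_t n = read(fd, (char *)buf + done, count - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal, retry */
            return -1;      /* real error */
        }
        if (n == 0)
            break;          /* EOF before `count` bytes */
        done += (size_t)n;
    }
    return (ssize_t)done;
}
```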
-
- Sep 20, 2016
-
Roman Tsisyk authored
-
Roman Tsisyk authored
* Merge vy_run_read_page to vy_run_iterator_load_page
* Move struct vy_page code to struct vy_run_iterator section

Needed for #1756
-
Roman Tsisyk authored
Change default value of box.cfg.vinyl.threads to 1. Keep 3 workers in the test suite.
-
Roman Tsisyk authored
-
Roman Tsisyk authored
Wake up scheduler if range dump conditions are met. Fixes #1769
-
Roman Tsisyk authored
Refactor the replication relay to invoke Engine::join() using the TX thread.

Now engines can start their own threads when needed:
* MemTX starts a new cord to read rows from a snapshot file.
* Vinyl should use coeio for disk operations (see #1756).

This patch fixes race conditions between the relay and TX threads for both engines.

Closes #1757
-
Vladimir Davydov authored
1. Consolidate temp file creation, initial run write, and range file rename in a single function, vy_range_write_run(). Use it in both the dump and compact procedures.
2. Do unlink on range file write failure in vy_range_write_run() instead of postponing it until vy_range_delete().
3. #2 allows us to move range->id initialization from vy_range_complete() to vy_range_new(), as an uninitialized ->id was only used to check whether a range file is incomplete and delete it in vy_range_delete().
4. #3 allows us to make index->range_id_max non-atomic, as vy_range_new() is only called from the tx thread (in contrast to vy_range_complete(), which hosted range->id allocation before).
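A minimal sketch of what such a consolidated write helper can look like, per items 1 and 2: write to a temporary file, rename it into place, and unlink it immediately on any failure. The name and signature are illustrative, not the real vy_range_write_run():

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical consolidated write path: temp file -> write -> rename.
 * On failure the temp file is unlinked right away instead of leaving
 * the cleanup to range deletion. */
static int
range_write_run(const char *final_path, const char *data, size_t size)
{
    char tmp_path[256];
    snprintf(tmp_path, sizeof(tmp_path), "%s.inprogress", final_path);

    FILE *f = fopen(tmp_path, "wb");
    if (f == NULL)
        return -1;
    if (fwrite(data, 1, size, f) != size) {
        fclose(f);
        unlink(tmp_path);   /* immediate cleanup on write failure */
        return -1;
    }
    if (fclose(f) != 0) {
        unlink(tmp_path);
        return -1;
    }
    if (rename(tmp_path, final_path) != 0) {
        unlink(tmp_path);
        return -1;
    }
    return 0;
}
```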
-
- Sep 19, 2016
-
Konstantin Osipov authored
* update comments
* reduce the number of forced checkpoints
-
Roman Tsisyk authored
Always check return code from next_key()/next_lsn()/get().
-
Roman Tsisyk authored
A page must always have at least one tuple.
-
Vladimir Davydov authored
Follow-up for d1e4b531 ("vinyl: remove aging from scheduler").
-
Roman Tsisyk authored
Get rid of harmful heuristic.
-
Alexandr Lyapunov authored
-
Vladislav Shpilevoy authored
-
Vladimir Davydov authored
To catch bugs like the one fixed by 05423bcc ("vinyl: fix small run recovery").
-
Vladimir Davydov authored
If the run size is less than ALIGN_POS(sizeof(struct vy_run_info)), vy_range_recover() won't load it. This is a residue of the aligned read/write functionality, the rest of which was removed by commit 7fe24c42 ("All operation will be done over xlog. We don't need to use aligned rw ops"). Remove this last ALIGN_POS.

Steps to reproduce:

  space = box.schema.space.create("vinyl", { engine = 'vinyl' })
  space:create_index('primary', { parts = { 1, 'unsigned' } })
  space:insert({0})
  box.snapshot()
  -- restart server
  space:select()
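A toy model of why the old check was wrong: ALIGN_POS rounds a size up to the I/O alignment, so a run file smaller than the *aligned* header size was skipped on recovery even though it held valid data. The alignment value and stand-in struct below are illustrative, not the real definitions:

```c
#include <stdint.h>

#define ALIGNMENT 512
#define ALIGN_POS(sz) (((sz) + ALIGNMENT - 1) & ~(uint64_t)(ALIGNMENT - 1))

/* Stand-in for struct vy_run_info (24 bytes here). */
struct run_info_stub { uint64_t min_lsn, max_lsn, pages; };

/* Old (buggy) check: rejects any run file shorter than 512 bytes. */
static int run_loadable_old(uint64_t file_size) {
    return file_size >= ALIGN_POS(sizeof(struct run_info_stub));
}

/* Fixed check: only the unaligned header size matters. */
static int run_loadable_new(uint64_t file_size) {
    return file_size >= sizeof(struct run_info_stub);
}
```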
-
Vladimir Davydov authored
We don't really need to init vy_range->min_key each time we insert it into a tree, as we do now. It's enough to do it once in a range's lifetime: when the range is created, i.e. on recovery or on split. While we're at it, let's not panic if we fail to allocate a tuple for min_key, but fail gracefully instead.
-
Vladimir Davydov authored
grep_log() now checks a bigger chunk of log by default (64K vs 2K), so that debug tracing shouldn't normally break it. If it does, one can increase grep_log's bytes argument manually to check even more.
-
Georgy Kirichenko authored
-
Georgy Kirichenko authored
-
Vladimir Davydov authored
-
Roman Tsisyk authored
Follow-up for 3dc06ff8 and b7724150
-
Alexandr Lyapunov authored
Also removed vlsn from vy_run_iterator_search
-
- Sep 16, 2016
-
Vladimir Davydov authored
We shouldn't link *empty* mems back to the original range, but we don't link *non-empty* ones due to a typo.
-
Roman Tsisyk authored
-
Nick Zavaritsky authored
-
Roman Tsisyk authored
-
Roman Tsisyk authored
-
Roman Tsisyk authored
* Replace vy_run->page_cache with LRU cache in vy_run_iterator
* Remove vy_run_iterator_lock_page()/vy_run_iterator_unlock_page()
* Refactor struct vy_page to remove dangling pointers to vy_page_info
* Fix error handling in vy_run_iterator and vy_merge_iterator
* Remove special optimization for case when key == page->min_key
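A minimal sketch of an LRU page cache like the one described in the first item: a fixed array of slots with a logical clock, evicting the least-recently-used slot on a miss. All names and sizes are illustrative:

```c
#define CACHE_SLOTS 4

struct lru_cache {
    int      page_no[CACHE_SLOTS];  /* -1 = empty slot */
    unsigned stamp[CACHE_SLOTS];    /* last-use time */
    unsigned clock;
};

/* Returns 1 on a cache hit; on a miss, "loads" the page into the
 * least-recently-used slot and returns 0. */
static int
cache_get(struct lru_cache *c, int page_no)
{
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (c->page_no[i] == page_no) {
            c->stamp[i] = ++c->clock;   /* touch on hit */
            return 1;
        }
        if (c->stamp[i] < c->stamp[victim])
            victim = i;                 /* track LRU slot */
    }
    c->page_no[victim] = page_no;       /* evict + load */
    c->stamp[victim] = ++c->clock;
    return 0;
}
```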
-
Roman Tsisyk authored
-
Vladimir Davydov authored
Currently, a dropped index might have compaction scheduled at scheduler stop, in which case its ranges are inter-linked and cannot be easily freed (a dumb free of all ranges would result in a double free). So let's drop the final index cleanup for now. We need to rework it anyway, because we never free active indexes on shutdown, although we should, to keep valgrind happy.
-
Vladimir Davydov authored
Currently, compaction works as follows:

1. worker: write the old range's runs and shadow indexes to disk, creating as many new ranges as necessary (min 1)
2. tx: redistribute the active memory index of the old range among the new ranges
3. tx: replace the old range with the new ones and delete the old one

Such a design has a serious drawback: redistribution (step #2) scales linearly with the number of tuples and hence may take too long. So this patch reworks the compaction procedure. Now it looks like this:

1. tx: create new ranges and insert them into the tree instead of the old range; in order not to break lookups, link the mem and run of the old range to the new ones
2. worker: write the mem and run of the old range to disk, creating runs for each of the new ranges
3. tx: unlink the old range's mem and run from the new ranges and delete it

An old range is either split in two parts by the (approximate) median key or not split at all, depending on its size. Note, we don't split a range if it hasn't been compacted at least once.

This breaks assumptions of the vinyl/split test, so I disable it for now. I will rework it later, perhaps after sanitizing the scheduler.

Closes #1745
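The new three-step procedure can be modeled with a toy sketch: new ranges keep a pointer to the old range's in-memory index so lookups stay correct while the worker writes data to disk, and the link is dropped in step 3. All structures and names below are illustrative:

```c
#include <stddef.h>

struct mem_stub   { int tuples; };
struct new_range_stub {
    struct mem_stub  own_mem;
    struct mem_stub *shadow_mem;  /* borrowed from the old range, or NULL */
};

/* Step 1 (tx): link the old range's mem to a new range. */
static void
link_shadow(struct new_range_stub *r, struct mem_stub *old_mem)
{
    r->shadow_mem = old_mem;
}

/* Lookups must see both the new range's own mem and the shadow. */
static int
visible_tuples(const struct new_range_stub *r)
{
    return r->own_mem.tuples +
           (r->shadow_mem != NULL ? r->shadow_mem->tuples : 0);
}

/* Step 3 (tx): after the worker has persisted the data, drop the link. */
static void
unlink_shadow(struct new_range_stub *r)
{
    r->shadow_mem = NULL;
}
```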
-
- Sep 15, 2016
-
Konstantin Osipov authored
Reorder the branches of the squash loop to begin with REPLACE; this makes reasoning about the loop a whole lot easier.
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
-
bigbes authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-