- Sep 19, 2016
-
-
Roman Tsisyk authored
Always check return code from next_key()/next_lsn()/get().
-
Roman Tsisyk authored
A page must always have at least one tuple.
-
Vladimir Davydov authored
Follow-up for d1e4b531 ("vinyl: remove aging from scheduler").
-
Roman Tsisyk authored
Get rid of a harmful heuristic.
-
Alexandr Lyapunov authored
-
Vladislav Shpilevoy authored
-
Vladimir Davydov authored
To catch bugs like the one fixed by 05423bcc ("vinyl: fix small run recovery").
-
Vladimir Davydov authored
If a run's size is less than ALIGN_POS(sizeof(struct vy_run_info)), vy_range_recover() won't load it. This is a residue of the aligned read/write functionality, the rest of which was removed by commit 7fe24c42 ("All operation will be done over xlog. We don't need to use aligned rw ops"). Remove this last ALIGN_POS.
Steps to reproduce:
  space = box.schema.space.create("vinyl", { engine = 'vinyl' })
  space:create_index('primary', { parts = { 1, 'unsigned' } })
  space:insert({0})
  box.snapshot()
  -- restart server
  space:select()
-
Vladimir Davydov authored
We don't really need to init vy_range->min_key each time we insert the range into the tree, as we do now. It's enough to do it once in the range's lifetime: when the range is created, i.e. on recovery or on split. While we're at it, let's not panic if we fail to allocate a tuple for min_key, but fail gracefully.
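The idea can be sketched in Python (hypothetical names; the real code is C inside vinyl): build min_key exactly once when the range is created, and return an error instead of panicking when allocation fails.

```python
class RangeCreateError(Exception):
    """Raised instead of panicking when min_key cannot be built."""

def make_range(first_tuple, alloc_tuple):
    """Create a range, initializing min_key exactly once.

    alloc_tuple is a hypothetical allocator that may return None on
    failure; instead of aborting, we fail gracefully with an error.
    min_key is then immutable for the range's lifetime: re-inserting
    the range into the tree must not re-initialize it.
    """
    min_key = alloc_tuple(first_tuple)
    if min_key is None:
        raise RangeCreateError("failed to allocate min_key tuple")
    return {"min_key": min_key}
```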
-
Vladimir Davydov authored
grep_log() now checks a bigger chunk of log by default (64K vs 2K), so that debug tracing shouldn't normally break it. If it does, one can increase grep_log's bytes argument manually to check even more.
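The helper's behavior, as described above, amounts to searching a bounded tail of the log; a minimal Python sketch (the 64K default comes from the message, the file layout and return value are assumptions):

```python
def grep_log(path, pattern, bytes_=65536):
    """Search the last `bytes_` bytes of a log file for `pattern`.

    The default window is 64K, large enough that ordinary debug
    tracing does not push the interesting line out of view; callers
    can pass a bigger `bytes_` to check even more of the log.
    Returns the first matching line, or None.
    """
    with open(path, "rb") as f:
        f.seek(0, 2)                       # jump to end of file
        f.seek(max(0, f.tell() - bytes_))  # back up at most bytes_
        tail = f.read().decode("utf-8", errors="replace")
    for line in tail.splitlines():
        if pattern in line:
            return line
    return None
```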
-
Georgy Kirichenko authored
-
Georgy Kirichenko authored
-
Vladimir Davydov authored
-
Roman Tsisyk authored
Follow-up for 3dc06ff8 and b7724150.
-
Alexandr Lyapunov authored
Also remove vlsn from vy_run_iterator_search.
-
- Sep 16, 2016
-
-
Vladimir Davydov authored
We shouldn't link *empty* mems back to the original range but, due to a typo, we currently don't link *non-empty* ones.
-
Roman Tsisyk authored
-
Nick Zavaritsky authored
-
Roman Tsisyk authored
-
Roman Tsisyk authored
-
Roman Tsisyk authored
* Replace vy_run->page_cache with an LRU cache in vy_run_iterator
* Remove vy_run_iterator_lock_page()/vy_run_iterator_unlock_page()
* Refactor struct vy_page to remove dangling pointers to vy_page_info
* Fix error handling in vy_run_iterator and vy_merge_iterator
* Remove the special optimization for the case when key == page->min_key
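The LRU replacement mentioned above can be modeled with Python's OrderedDict (a toy sketch of the policy, not the C implementation; names are illustrative):

```python
from collections import OrderedDict

class PageCache:
    """Tiny LRU cache keyed by page number, modeling the
    per-iterator page cache that replaces vy_run->page_cache."""

    def __init__(self, capacity, load_page):
        self.capacity = capacity
        self.load_page = load_page  # called on a cache miss
        self.pages = OrderedDict()

    def get(self, page_no):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)  # mark most recently used
            return self.pages[page_no]
        page = self.load_page(page_no)
        self.pages[page_no] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        return page
```

A bounded cache like this avoids the explicit lock/unlock pairing the removed vy_run_iterator_lock_page()/vy_run_iterator_unlock_page() calls required.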
-
Roman Tsisyk authored
-
Vladimir Davydov authored
Currently, a dropped index might have compaction scheduled at scheduler stop, in which case its ranges are inter-linked and cannot be easily freed (a naive free of all ranges would result in a double free). So let's drop the final index cleanup for now. We need to rework it anyway, because we never free active indexes on shutdown, although we should, to keep valgrind happy.
-
Vladimir Davydov authored
Currently, compaction works as follows:
1. worker: write the old range's runs and shadow indexes to disk, creating as many new ranges as necessary (at least one)
2. tx: redistribute the active memory index of the old range among the new ranges
3. tx: replace the old range with the new ones and delete it
Such a design has a serious drawback: redistribution (step 2) scales linearly with the number of tuples and hence may take too long. So this patch reworks the compaction procedure. Now it looks like this:
1. tx: create new ranges and insert them into the tree in place of the old range; in order not to break lookups, link the mem and run of the old range to the new ones
2. worker: write the mem and run of the old range to disk, creating runs for each of the new ranges
3. tx: unlink the old range's mem and run from the new ranges and delete it
An old range is either split in two parts by the (approximate) median key or not split at all, depending on its size. Note, we don't split a range if it hasn't been compacted at least once. This breaks assumptions of the vinyl/split test, so I disable it for now. I will rework it later, perhaps after sanitizing the scheduler.
Closes #1745
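The size-dependent split decision described above can be sketched as a Python toy (the real procedure operates on on-disk runs and in-memory indexes in C; the threshold here is an assumption for illustration):

```python
def split_range(keys, min_size=4):
    """Split a sorted list of keys in two by the approximate median,
    or not at all if the range is too small.

    Returns a list of new key ranges: either one (no split) or two
    halves divided at the middle element, mirroring the split-by-
    approximate-median-key behavior of range compaction.
    """
    if len(keys) < min_size:
        return [keys]              # too small: one new range, no split
    mid = len(keys) // 2           # approximate median position
    return [keys[:mid], keys[mid:]]
```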
-
- Sep 15, 2016
-
-
Konstantin Osipov authored
Reorder the branches of the squash loop to begin with REPLACE; this makes reasoning about the loop a whole lot easier.
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
-
bigbes authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
Always use vy_tuple_key_part(key, 0) == NULL as the indicator of a NULL key. Before this patch, some of the edge-case optimizations in the run, mem and txv iterators didn't work, since they compared the input key with NULL, whereas it was set to a special key with zero parts. @todo: this is yet another discrepancy stemming from the distinct tuple formats in memtx and vinyl.
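The invariant can be sketched in Python, with keys modeled as lists of parts (the real check is vy_tuple_key_part(key, 0) == NULL in C):

```python
def key_part(key, n):
    """Return part n of a key, or None past the end -- the analogue
    of vy_tuple_key_part() returning NULL."""
    return key[n] if n < len(key) else None

def is_null_key(key):
    """A key with zero parts matches everything. Before the fix,
    some iterators compared the key object itself against NULL and
    so missed the case where a zero-part key object was passed
    instead of an actual NULL."""
    return key_part(key, 0) is None
```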
-
bigbes authored
-
Konstantin Osipov authored
-
Georgy Kirichenko authored
-
- Sep 14, 2016
-
-
Konstantin Osipov authored
-
bigbes authored
-
Konstantin Osipov authored
Index page size and range size are storage-level properties and cannot change during an index's lifetime. Initialize these properties from the global defaults at index creation time; this way they are not affected by a restart with different global options. Remove "engine-level" options support from Lua: all options are known to us now that we ditched third-party engines.
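The behavior change amounts to snapshotting defaults at creation time; a Python sketch with hypothetical names and values:

```python
# Global defaults, as they might stand when the server starts.
DEFAULTS = {"page_size": 8192, "range_size": 1024 * 1024}

def create_index(overrides=None):
    """Copy the global defaults into the index at creation time.

    The returned options are fixed for the index's lifetime, so a
    later restart with different global options (modeled below by
    mutating DEFAULTS) does not affect an existing index.
    """
    opts = dict(DEFAULTS)
    if overrides:
        opts.update(overrides)
    return opts

idx = create_index()
DEFAULTS["page_size"] = 4096   # simulate a restart with new globals
# idx keeps the page_size it was created with
```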
-
Vladislav Shpilevoy authored
Closes #1738
-
Vladislav Shpilevoy authored
Closes #1737
-
Konstantin Osipov authored
s/static inline/static/ is easier than merging an old branch with a runaway trunk.
-