- Sep 16, 2016
-
-
Vladimir Davydov authored
Currently, a dropped index might have compaction scheduled when the scheduler stops, in which case its ranges are inter-linked and cannot be easily freed (a dumb free of all ranges would result in a double free). So let's drop the final index cleanup for now. We need to rework it anyway, because we never free active indexes on shutdown, although we should, to make valgrind happy.
-
Vladimir Davydov authored
Currently, compaction works as follows:
1. worker: write the old range's runs and shadow indexes to disk, creating as many new ranges as necessary (at least 1)
2. tx: redistribute the active memory index of the old range among the new ranges
3. tx: replace the old range with the new ones and delete the old one
Such a design has a serious drawback: redistribution (step #2) scales linearly with the number of tuples and hence may take too long. So this patch reworks the compaction procedure. Now it looks like this:
1. tx: create new ranges and insert them into the tree in place of the old range; in order not to break lookups, link the mem and run of the old range to the new ones
2. worker: write the mem and run of the old range to disk, creating a run for each of the new ranges
3. tx: unlink the old range's mem and run from the new ranges and delete it
An old range is either split in two parts by the (approximate) median key or not split at all, depending on its size. Note, we don't split a range unless it has been compacted at least once. This breaks assumptions of the vinyl/split test, so I disable it for now. I will rework it later, perhaps after sanitizing the scheduler. Closes #1745
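For illustration, a minimal C sketch of the reworked three-step flow under stated assumptions: the struct layout and the names range_split(), range_write_runs() and range_unlink_parent() are hypothetical stand-ins and do not mirror the actual vinyl sources.

```c
#include <stdio.h>

struct range {
	int begin, end;       /* key interval covered by this range */
	struct range *parent; /* old range whose mem and run we still read */
	int has_own_run;      /* set once step 2 has written our own run */
};

static size_t
range_split(struct range *old, struct range out[2])
{
	/* Simplified split decision: the real code looks at the range size
	 * and whether it has been compacted before. */
	size_t n = (old->end - old->begin > 1) ? 2 : 1;
	int median = old->begin + (old->end - old->begin) / 2;
	/* Step 1 (tx): new ranges replace the old one in the tree; they keep
	 * a link to the old range so lookups keep working. */
	out[0] = (struct range){ old->begin, n == 2 ? median : old->end, old, 0 };
	if (n == 2)
		out[1] = (struct range){ median, old->end, old, 0 };
	return n;
}

/* Step 2 (worker): write the old range's mem and run to disk, producing a
 * run for each new range (modeled as a flag here). */
static void
range_write_runs(struct range *ranges, size_t n)
{
	for (size_t i = 0; i < n; i++)
		ranges[i].has_own_run = 1;
}

/* Step 3 (tx): unlink the old range from the new ones and delete it. */
static void
range_unlink_parent(struct range *ranges, size_t n)
{
	for (size_t i = 0; i < n; i++)
		ranges[i].parent = NULL;
}

int
main(void)
{
	struct range old = { 0, 100, NULL, 1 };
	struct range fresh[2];
	size_t n = range_split(&old, fresh);
	range_write_runs(fresh, n);
	range_unlink_parent(fresh, n);
	for (size_t i = 0; i < n; i++)
		printf("range [%d, %d) own_run=%d\n",
		       fresh[i].begin, fresh[i].end, fresh[i].has_own_run);
	return 0;
}
```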
-
- Sep 15, 2016
-
-
Konstantin Osipov authored
Reorder the branches of the squash loop to begin with REPLACE; this makes reasoning about the loop a whole lot easier.
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
-
bigbes authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
-
Konstantin Osipov authored
Always use vy_tuple_key_part(key, 0) == NULL as the indicator of a NULL key. Before this patch, some of the edge-case optimizations in the run, mem and txv iterators didn't work, since they compared the input key with NULL, whereas it was actually set to a special key with zero parts. @todo: this is yet another discrepancy stemming from the distinct tuple formats in memtx and vinyl.
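A small self-contained sketch of the convention, assuming a toy vy_tuple layout; the real vy_tuple and vy_tuple_key_part() in vinyl look different, only the zero-parts convention is the point here.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the real vinyl types. */
struct vy_tuple {
	int part_count;
	const char *parts[8];
};

static const char *
vy_tuple_key_part(const struct vy_tuple *key, int n)
{
	return n < key->part_count ? key->parts[n] : NULL;
}

/* The convention this commit enforces: a key with zero parts is the NULL
 * key, so check its first part rather than the key pointer itself. */
static bool
vy_key_is_null(const struct vy_tuple *key)
{
	return vy_tuple_key_part(key, 0) == NULL;
}

int
main(void)
{
	struct vy_tuple empty = { 0, { NULL } };
	struct vy_tuple one = { 1, { "abc" } };
	printf("zero-part key is NULL key: %d\n", vy_key_is_null(&empty));
	printf("one-part key is NULL key:  %d\n", vy_key_is_null(&one));
	return 0;
}
```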
-
bigbes authored
-
Konstantin Osipov authored
-
Georgy Kirichenko authored
-
- Sep 14, 2016
-
-
Konstantin Osipov authored
-
bigbes authored
-
Konstantin Osipov authored
Index page size and range size are storage-level properties and cannot change during the index lifetime. Initialize these properties from global defaults at index creation time; this way they are not affected by a restart with different global options. Remove "engine-level" options support from Lua; all options are known to us now that we have ditched third-party engines.
-
Vladislav Shpilevoy authored
Closes #1738
-
Vladislav Shpilevoy authored
Closes #1737
-
Konstantin Osipov authored
s/static inline/static/g is easier than merging an old branch with a runaway trunk
-
Georgy Kirichenko authored
ops
-
Georgy Kirichenko authored
-
Vladimir Davydov authored
Do a proper rollback on dump/compaction failure.
-
Ruben Ayrapetyan authored
Now, names of related interfaces, global variables and fields are based on "iterator". This is pure refactoring without any functional changes. Closes #1694
-
- Sep 13, 2016
-
-
bigbes authored
-
bigbes authored
-
Konstantin Osipov authored
The range iterator open function used an obsolete indicator for the special (NULL) key, which led to selection of the wrong range in a :max() query. The bug required compaction to trigger it, since while there was only one range, max() naturally always got the last one. After compaction, range iterator open could yield a random range for the special key. Update errinj.test.lua to always trigger compaction.
-
Konstantin Osipov authored
-
Roman Tsisyk authored
Remove all ranges from the scheduler before deleting an index. Follow-up to 43ca199e.
-
Konstantin Osipov authored
The merge iterator didn't take into account the iterator direction when merging its sources and always returned the smallest tuple among the alternatives. If the iterator order is LT or LE, the biggest tuple should be returned instead.
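A minimal sketch of direction-aware merging on plain integer keys; the names and types here are illustrative, not the actual merge iterator code.

```c
#include <stdio.h>

enum iterator_order { ITER_GE, ITER_GT, ITER_LE, ITER_LT };

/* Pick the next candidate among source heads, honoring iterator order:
 * GE/GT take the smallest key, LE/LT the biggest. */
static int
merge_pick(const int *candidates, int n, enum iterator_order order)
{
	int reverse = (order == ITER_LE || order == ITER_LT);
	int best = candidates[0];
	for (int i = 1; i < n; i++) {
		int better = reverse ? candidates[i] > best
				     : candidates[i] < best;
		if (better)
			best = candidates[i];
	}
	return best;
}

int
main(void)
{
	int keys[] = { 7, 3, 9 };
	printf("forward (GE) pick: %d\n", merge_pick(keys, 3, ITER_GE)); /* 3 */
	printf("reverse (LT) pick: %d\n", merge_pick(keys, 3, ITER_LT)); /* 9 */
	return 0;
}
```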
-
Vladimir Davydov authored
We don't remove a dead index's ranges from the scheduler heap after scheduling a drop task for it; we only do it from the drop task itself. As a result, we can happily go and schedule another task for an index that is already scheduled for destruction. This can result in the index->ref == 1 assertion being violated in vy_task_drop_execute(), or in memory corruption induced by a use-after-free of a vy_index struct.
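A hypothetical sketch of the fix as described: evict a dying index's ranges from the scheduler as soon as the drop is scheduled, so no further dump/compact task can target it. The heap is modeled as a plain array and all names are stand-ins, not the real vinyl scheduler API.

```c
#include <stdbool.h>
#include <stdio.h>

struct vy_index_stub { bool is_dropped; };
struct range_stub { struct vy_index_stub *index; };

/* Toy "scheduler heap": just an array of range pointers. */
static struct range_stub *sched_heap[16];
static int sched_heap_len;

/* Remove every range of @index from the scheduler. Pre-patch this only
 * happened inside the drop task; the fix is to do it up front. */
static void
scheduler_remove_index_ranges(struct vy_index_stub *index)
{
	int dst = 0;
	for (int i = 0; i < sched_heap_len; i++) {
		if (sched_heap[i]->index != index)
			sched_heap[dst++] = sched_heap[i];
	}
	sched_heap_len = dst;
}

static void
schedule_index_drop(struct vy_index_stub *index)
{
	index->is_dropped = true;
	/* Evict ranges now, so no dump/compact task can pick this index. */
	scheduler_remove_index_ranges(index);
}

int
main(void)
{
	struct vy_index_stub idx = { false };
	struct range_stub r1 = { &idx }, r2 = { &idx };
	sched_heap[sched_heap_len++] = &r1;
	sched_heap[sched_heap_len++] = &r2;
	schedule_index_drop(&idx);
	printf("ranges left in heap: %d\n", sched_heap_len); /* 0 */
	return 0;
}
```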
-
Vladimir Davydov authored
Switch range->i[0] and range->i[1] from built-in structures to pointers.
-
Roman Tsisyk authored
-
- Sep 12, 2016
-
-
Konstantin Osipov authored
* add a test case based on error injection
* fix a few release mode warnings
* the test case is still failing; it depends on the introduction of shadow memory indexes
-
Roman Tsisyk authored
Use global queues for scheduling instead of per-index. Fixes #1708
-
Konstantin Osipov authored
-
Konstantin Osipov authored
Update test-run (fixes the sporadic console handshake bug when running tests). Fixes gh-1163.
-
Vladislav Shpilevoy authored
-
Nick Zavaritsky authored
-
Svyatoslav Feldsherov authored
Closes #1599
-
Nick Zavaritsky authored
-
bigbes authored
-