  1. Sep 20, 2016
    • vinyl: follow up box.cfg.vinyl.threads = 1 · 2f4a1c5d
      Roman Tsisyk authored
    • vinyl: refactor vy_run_iterator_load_page · c5b3b4c4
      Roman Tsisyk authored
      * Merge vy_run_read_page into vy_run_iterator_load_page
      * Move the struct vy_page code to the struct vy_run_iterator section

      Needed for #1756
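      A minimal sketch of what such a consolidated page loader can look
      like, assuming a simplified struct vy_page; the names and layout
      below are illustrative, not the actual Tarantool definitions:

          #include <stdlib.h>
          #include <unistd.h>

          /* Illustrative stand-in for the real vy_page. */
          struct vy_page {
              size_t size; /* size of the page on disk */
              char *data;  /* raw page bytes read from the run file */
          };

          /* Read one page of a run file in a single helper instead of
           * splitting the work between a reader and a loader function. */
          static struct vy_page *
          vy_page_load(int fd, off_t offset, size_t size)
          {
              struct vy_page *page = malloc(sizeof(*page));
              if (page == NULL)
                  return NULL;
              page->size = size;
              page->data = malloc(size);
              if (page->data == NULL ||
                  pread(fd, page->data, size, offset) != (ssize_t)size) {
                  free(page->data);
                  free(page);
                  return NULL;
              }
              return page;
          }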
    • vinyl: start only one worker thread by default · 1f58bc7f
      Roman Tsisyk authored
      Change the default value of box.cfg.vinyl.threads to 1.
      Keep 3 workers in the test suite.
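      In C terms, the effect of such a default looks roughly like the
      hedged sketch below; vy_worker_f, vy_start_workers, and
      VY_DEFAULT_THREADS are invented names, not Tarantool's actual code:

          #include <pthread.h>

          #define VY_DEFAULT_THREADS 1 /* the new default: one worker */

          /* Hypothetical worker loop; dump/compact tasks would run here. */
          static void *
          vy_worker_f(void *arg)
          {
              (void)arg;
              return NULL;
          }

          /* Start n workers; unconfigured callers fall back to the default. */
          static int
          vy_start_workers(pthread_t *tids, int n)
          {
              if (n <= 0)
                  n = VY_DEFAULT_THREADS;
              for (int i = 0; i < n; i++) {
                  if (pthread_create(&tids[i], NULL, vy_worker_f, NULL) != 0)
                      return -1;
              }
              return n;
          }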
    • vinyl: disable scheduler during recovery · e128864d
      Roman Tsisyk authored
    • vinyl: remove fixed timeout from scheduler · d793f06e
      Roman Tsisyk authored
      Wake up the scheduler as soon as the range dump conditions are met,
      instead of polling on a fixed timeout.

      Fixes #1769
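      A hedged sketch of the wake-on-demand idea, assuming a pthread
      condition variable and an invented dump watermark; the real
      scheduler's fields and thresholds differ:

          #include <pthread.h>
          #include <stddef.h>

          /* Invented fields: the real scheduler tracks more state. */
          struct vy_scheduler {
              pthread_mutex_t mutex;
              pthread_cond_t cond;
              size_t mem_used;       /* bytes sitting in in-memory indexes */
              size_t dump_watermark; /* dump condition: mem_used >= watermark */
          };

          /* Called on every write: wake the scheduler only when a dump
           * is actually due, instead of letting it poll on a timeout. */
          static void
          vy_quota_use(struct vy_scheduler *s, size_t bytes)
          {
              pthread_mutex_lock(&s->mutex);
              s->mem_used += bytes;
              if (s->mem_used >= s->dump_watermark)
                  pthread_cond_signal(&s->cond);
              pthread_mutex_unlock(&s->mutex);
          }

          /* Scheduler side: sleep until a dump condition is met. */
          static void
          vy_scheduler_wait(struct vy_scheduler *s)
          {
              pthread_mutex_lock(&s->mutex);
              while (s->mem_used < s->dump_watermark)
                  pthread_cond_wait(&s->cond, &s->mutex);
              pthread_mutex_unlock(&s->mutex);
          }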
    • Process initial replication join in TX thread · dba97493
      Roman Tsisyk authored
      Refactor the replication relay to invoke Engine::join() on the TX
      thread. Now engines can start their own threads when needed:

      * MemTX starts a new cord to read rows from a snapshot file.
      * Vinyl should use coeio for disk operations (see #1756).

      This patch fixes race conditions between the relay and TX threads
      for both engines.

      Closes #1757
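      A rough model of the new flow, in plain C rather than the engine
      layer's C++; every name below is illustrative, and the pthread
      stands in for a Tarantool cord:

          #include <pthread.h>

          /* Illustrative engine vtable: join runs on the TX thread and
           * the engine decides how to do its own I/O. */
          struct engine {
              const char *name;
              int (*join)(struct engine *engine);
          };

          /* Hypothetical MemTX snapshot reader running in its own thread. */
          static void *
          memtx_snapshot_reader(void *arg)
          {
              (void)arg;
              /* read rows from the snapshot file and feed them to the relay */
              return NULL;
          }

          /* MemTX's join: spawn a dedicated thread (standing in for a
           * cord) so the relay no longer touches engine state directly. */
          static int
          memtx_join(struct engine *engine)
          {
              (void)engine;
              pthread_t reader;
              if (pthread_create(&reader, NULL, memtx_snapshot_reader, NULL) != 0)
                  return -1;
              return pthread_join(reader, NULL);
          }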
    • vinyl: cleanup range file creation · a46865db
      Vladimir Davydov authored
       1. Consolidate temp file creation, the initial run write, and the
          range file rename in a single function, vy_range_write_run().
          Use it in both the dump and compact procedures.

       2. Unlink the range file on write failure in vy_range_write_run()
          instead of postponing it until vy_range_delete().

       3. #2 allows us to move range->id initialization from
          vy_range_complete() to vy_range_new(), as an uninitialized ->id
          was only used to check whether a range file is incomplete and
          delete it in vy_range_delete().

       4. #3 allows us to make index->range_id_max non-atomic, as
          vy_range_new() is only called from the tx thread (in contrast
          to vy_range_complete(), which hosted range->id allocation
          before).
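      A minimal sketch of the write-temp-then-rename pattern this
      consolidates, assuming a flat buffer and a simplified signature;
      the real function writes structured run data:

          #include <stdio.h>

          /* Write a run to "<path>.inprogress", then rename it into
           * place. On any failure the temp file is removed immediately,
           * so no incomplete file survives until vy_range_delete(). */
          static int
          vy_range_write_run(const char *path, const void *data, size_t size)
          {
              char tmp[1024];
              snprintf(tmp, sizeof(tmp), "%s.inprogress", path);

              FILE *f = fopen(tmp, "wb");
              if (f == NULL)
                  return -1;
              size_t written = fwrite(data, 1, size, f);
              int close_rc = fclose(f);
              if (written != size || close_rc != 0 || rename(tmp, path) != 0) {
                  remove(tmp); /* unlink right away on failure */
                  return -1;
              }
              return 0;
          }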
  2. Sep 16, 2016
    • vinyl: fix a typo in vy_range_compact_abort · 0aa23cc1
      Vladimir Davydov authored
      We shouldn't link *empty* mems back to the original range, but due
      to the typo we don't link *non-empty* ones either.
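      Schematically, the shape of such a typo, with invented names; a
      negated emptiness check links exactly the wrong set of mems:

          #include <stdbool.h>

          struct vy_mem { bool is_empty; };

          /* Decide whether a mem must be linked back on compaction abort. */
          static bool
          must_relink(const struct vy_mem *mem)
          {
              /* typo'd version: return mem->is_empty;  (links only empty mems) */
              return !mem->is_empty; /* fixed: link back only non-empty mems */
          }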
    • e4dbae1b
      Roman Tsisyk authored
    • Fix unused variable warning · 25a47a56
      Nick Zavaritsky authored
    • 3dc06ff8
      Roman Tsisyk authored
    • vinyl: remove vy_run->page_cache · a8cb3c4f
      Roman Tsisyk authored
      * Replace vy_run->page_cache with an LRU cache in vy_run_iterator
      * Remove vy_run_iterator_lock_page()/vy_run_iterator_unlock_page()
      * Refactor struct vy_page to remove dangling pointers to vy_page_info
      * Fix error handling in vy_run_iterator and vy_merge_iterator
      * Remove the special optimization for the case when key == page->min_key
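      A toy LRU to illustrate the replacement, assuming a tiny fixed
      capacity and linear lookup; the iterator's real cache is keyed by
      page number within the run and differs in the details:

          #include <stdlib.h>

          #define LRU_CAPACITY 2

          struct page_cache {
              int page_no[LRU_CAPACITY]; /* page number per slot */
              void *page[LRU_CAPACITY];  /* decoded page, front = hottest */
              int count;
          };

          /* Return the cached page or NULL; on a hit, promote the page
           * to the front so it is evicted last. */
          static void *
          page_cache_get(struct page_cache *c, int page_no)
          {
              for (int i = 0; i < c->count; i++) {
                  if (c->page_no[i] == page_no) {
                      void *p = c->page[i];
                      for (int j = i; j > 0; j--) { /* keep LRU order */
                          c->page_no[j] = c->page_no[j - 1];
                          c->page[j] = c->page[j - 1];
                      }
                      c->page_no[0] = page_no;
                      c->page[0] = p;
                      return p;
                  }
              }
              return NULL;
          }

          /* Insert a freshly loaded page, evicting the coldest one. */
          static void
          page_cache_put(struct page_cache *c, int page_no, void *page)
          {
              if (c->count == LRU_CAPACITY)
                  free(c->page[--c->count]); /* evict the coldest page */
              for (int j = c->count; j > 0; j--) {
                  c->page_no[j] = c->page_no[j - 1];
                  c->page[j] = c->page[j - 1];
              }
              c->page_no[0] = page_no;
              c->page[0] = page;
              c->count++;
          }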
    • vinyl: remove unused vy_tmp_mem_iterator · 800155ba
      Roman Tsisyk authored
    • vinyl: do not free dropped indexes on sched stop · ed71319a
      Vladimir Davydov authored
      Currently, a dropped index might still have compaction scheduled
      when the scheduler stops, in which case its ranges are inter-linked
      and cannot be easily freed (a dumb free of all ranges would result
      in a double free). So let's drop the final index cleanup for now.
      We need to rework it anyway, because we never free active indexes
      on shutdown, although we should in order to make valgrind happy.
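      In miniature, why the dumb free breaks, under the assumption that
      compaction leaves two ranges pointing at one shared mem; the
      structures below are invented for illustration:

          #include <stdbool.h>
          #include <stdlib.h>

          struct vy_mem { char *data; };

          /* During compaction a new range borrows the old range's mem,
           * so two ranges can point at the same mem at scheduler stop. */
          struct vy_range {
              struct vy_mem *mem;
              bool owns_mem;
          };

          static void
          vy_range_delete(struct vy_range *range)
          {
              /* Freeing range->mem unconditionally would free a shared
               * mem twice; only the owning range may release it. */
              if (range->owns_mem)
                  free(range->mem);
          }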
    • vinyl: rework range compaction · 993f410a
      Vladimir Davydov authored
      Currently, compaction works as follows:
       1. worker: write the old range's runs and shadow indexes to disk,
          creating as many new ranges as necessary (min 1)
       2. tx: redistribute the active memory index of the old range among
          the new ranges
       3. tx: replace the old range with the new ones and delete it

      Such a design has a serious drawback: redistribution (step #2)
      scales linearly with the number of tuples and hence may take too
      long.

      So this patch reworks the compaction procedure. Now it looks like
      this:
       1. tx: create new ranges and insert them into the tree in place of
          the old range; in order not to break lookups, link the mem and
          run of the old range to the new ones
       2. worker: write the mem and run of the old range to disk,
          creating a run for each new range
       3. tx: unlink the old range's mem and run from the new ranges and
          delete it

      An old range is either split in two parts by the (approximate)
      median key or not split at all, depending on its size.

      Note, we don't split a range unless it has been compacted at least
      once. This breaks the assumptions of the vinyl/split test, so I am
      disabling it for now; I will rework it later, perhaps after
      sanitizing the scheduler.

      Closes #1745
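      A schematic of steps 1 and 3 of the new procedure, assuming
      invented "shadow" fields; it only shows how lookups stay valid
      while the worker writes the new runs:

          #include <stddef.h>

          struct vy_mem { int dummy; }; /* in-memory index (simplified) */
          struct vy_run { int dummy; }; /* on-disk run (simplified) */

          struct vy_range {
              struct vy_mem *mem;        /* own mem */
              struct vy_run *run;        /* own run */
              struct vy_mem *shadow_mem; /* borrowed from the old range */
              struct vy_run *shadow_run;
          };

          /* Step 1 (tx): make the old range's data visible through a
           * new range, so lookups keep working during compaction. */
          static void
          vy_range_link_old(struct vy_range *new_range,
                            struct vy_range *old_range)
          {
              new_range->shadow_mem = old_range->mem;
              new_range->shadow_run = old_range->run;
          }

          /* Step 3 (tx): after the worker has written the new runs,
           * drop the borrowed links; the old range can now be deleted. */
          static void
          vy_range_unlink_old(struct vy_range *new_range)
          {
              new_range->shadow_mem = NULL;
              new_range->shadow_run = NULL;
          }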