    5e340b6e
    memtx: drop UNCHANGED (get = get_raw) index vtab optimization
    Vladimir Davydov authored
    We use a special, less efficient index vtab if a space can store
    compressed tuples. The problem is that looking at the space definition
    isn't enough to figure out whether the space contains compressed
    tuples: since we don't rebuild tuples on alter, compressed tuples may
    be left over from before an alter operation that disabled compression.
    To update an index vtab dynamically, we implement some complicated
    logic, but it's buggy (it results in a test failure in EE). Fixing it
    requires non-trivial effort, because a vtab may change after index
    creation (when the space format is updated).
    
    Let's drop this optimization altogether for now and use the same vtab
    for both compressed and uncompressed indexes. We might return to this
    issue in the future, but first we need to run benchmarks to check
    whether the optimization is worth the complexity. Possible ways we
    could resurrect it:
     - Call get_raw from get directly (without function pointer), inline
       memtx_prepare_result_tuple, and move is_compressed flag to struct
       tuple for better cache locality.
     - Rebuild all tuples on space alter and use a different vtab for
       compressed indexes.
    
    NO_DOC=bug fix
    NO_TEST=enterprise
    NO_CHANGELOG=unreleased