  1. Mar 31, 2018
    • Add value field to _schema space · ddd8a259
      Kirill Yukhin authored
      _schema is represented as a key-value storage for various values
      common to Tarantool, such as the next id for space creation. SQL
      requires the format to be fully specified for the columns being
      accessed. Unfortunately, _schema is inserted into _space before
      _space's format is set, and since DD triggers are disabled during
      upgrade, the format of the _schema tuple in _space stays the same.
      So, set a nullable value field in upgrade, regenerate the initial
      snapshot, and update the tests.
      Also, since _schema's tuple in _space is not updated, relax the
      fieldno check in sql.c.
  2. Mar 30, 2018
  3. Mar 29, 2018
    • Merge branch '1.10' into 2.0 · 3780f5ab
      Vladislav Shpilevoy authored
    • Fix net.box test · 405446e0
      Vladislav Shpilevoy authored
    • vinyl: fix discrepancy between vy_log.tx_size and actual tx len · 94569f65
      Vladimir Davydov authored
      When a vylog transaction is rolled back, we always reset vy_log.tx_size.
      Generally speaking, this is incorrect, as rollback doesn't necessarily
      remove all pending records from the tx buffer - there may still be
      records committed with vy_log_tx_try_commit() that were left in the
      buffer due to write errors. We don't roll back such records, but we
      still reset tx_size, which leads to a discrepancy between vy_log.tx_size
      and the actual length of the vy_log.tx list, which eventually results
      in an assertion failure:

        src/box/vy_log.c:698: vy_log_flush: Assertion `i < vy_log.tx_size' failed.

      We need vy_log.tx_size to allocate an xrow_header array of the proper
      size so that we can flush pending vylog records to disk. This isn't a
      hot path, because vylog operations are rare. Besides, we iterate over
      all records anyway to fill the xrow_header array. So let's remove
      vy_log.tx_size altogether and instead calculate the vy_log.tx list
      length in place.
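
      A minimal, self-contained C sketch of the idea (toy types, not the
      actual vy_log code): derive the transaction length by walking the
      list at flush time instead of maintaining a separate counter that
      can go out of sync on rollback.

        /* Toy singly-linked list standing in for vy_log.tx. */
        struct record {
            struct record *next;
        };

        /* Count pending records right before allocating the flush
         * array - no tx_size bookkeeping to get wrong. */
        static int
        tx_len(const struct record *head)
        {
            int len = 0;
            for (const struct record *r = head; r != NULL; r = r->next)
                len++;
            return len;
        }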
    • vinyl: use rlist for iterating over objects recovered from vylog · 197e1ef0
      Vladimir Davydov authored
      Currently, we use mh_foreach, but each object is on an rlist, which
      is better suited for iteration.
    • index: add abort_create virtual method · 7dee93a0
      Vladimir Davydov authored
      The new method is called if index creation fails, either due to a WAL
      write error or a build error. It will be used by Vinyl to purge the
      prepared LSM tree from the vylog.
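
      A minimal C sketch of the virtual-method pattern (the abort_create
      name matches the commit; the surrounding types are illustrative):

        struct index;

        /* Per-engine dispatch table; abort_create is invoked when
         * index creation fails after the index object was created. */
        struct index_vtab {
            void (*abort_create)(struct index *index);
        };

        struct index {
            const struct index_vtab *vtab;
        };

        static void
        index_abort_create(struct index *index)
        {
            if (index->vtab->abort_create != NULL)
                index->vtab->abort_create(index);
        }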
    • Merge branch '1.10' into 2.0 · 4d5349e9
      Vladislav Shpilevoy authored
    • Fix net.box test · d9e254f8
      Vladislav Shpilevoy authored
    • cfg: Add constraints on box.cfg params · 8b61be60
      IlyaMarkovMipt authored
      Introduce limitations on combinations of box.cfg parameters:
      * Add a restriction on log type "file" combined with log_nonblock=true.
      * Add a restriction on log type "syslog" combined with log_format=json.
      * Each restriction raises an error when violated.
      * Change the log_nonblock default value to nil, which means the
        default depends on the type of logger.
      * Add the box_getb function for reading boolean parameters from cfg.

      Relates #3014 #3072
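
      A hypothetical C sketch of this kind of cross-parameter validation
      (check_log_cfg and its parameters are illustrative, not the actual
      box.cfg code):

        #include <string.h>

        /* Return -1 if the combination of logger settings is one the
         * commit forbids, 0 otherwise. */
        static int
        check_log_cfg(const char *log_type, int log_nonblock,
                      const char *log_format)
        {
            /* log_nonblock=true is disallowed for a plain file logger. */
            if (strcmp(log_type, "file") == 0 && log_nonblock)
                return -1;
            /* log_format=json is disallowed for syslog. */
            if (strcmp(log_type, "syslog") == 0 &&
                strcmp(log_format, "json") == 0)
                return -1;
            return 0;
        }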
    • Merge branch '1.9' into 1.10 · 97cc085f
      Konstantin Osipov authored
    • log: Fix logging large objects · 5ab4581d
      Ilya Markov authored
      The bug was that, when logging, we passed to the write function a
      number of bytes that could exceed the size of the buffer. This could
      happen because the log string is formatted with vsnprintf, which
      returns the number of bytes that would have been written had the
      buffer been large enough, not the actual number written.

      Fix this by limiting the number of bytes passed to the write
      function.

      Close #3248
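
      A minimal C sketch of the fix pattern (log_line is illustrative,
      not the actual say.c code):

        #include <stdarg.h>
        #include <stdio.h>
        #include <unistd.h>

        static void
        log_line(int fd, const char *fmt, ...)
        {
            char buf[1024];
            va_list ap;
            va_start(ap, fmt);
            int len = vsnprintf(buf, sizeof(buf), fmt, ap);
            va_end(ap);
            if (len < 0)
                return; /* formatting error */
            /* vsnprintf() returns the would-be length, which may
             * exceed the buffer; clamp it before calling write(). */
            if (len >= (int)sizeof(buf))
                len = sizeof(buf) - 1;
            ssize_t rc = write(fd, buf, len);
            (void)rc;
        }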
    • Merge branch '1.9' into 1.10 · 180af15f
      Konstantin Osipov authored
    • vinyl: improve latency stat · f3a84293
      Vladimir Davydov authored
      To facilitate performance analysis, let's report not only the 99th
      percentile, but also the 50th, 75th, 90th, and 95th. Also, let's add
      microsecond-granular buckets to the latency histogram.

      Closes #3207
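
      A self-contained C sketch of reading a percentile out of a latency
      histogram (the bucket layout and names are illustrative):

        /* Return the upper bound of the bucket containing the p-th
         * percentile (0 < p <= 100), given per-bucket counts aligned
         * with per-bucket upper bounds in microseconds. */
        static unsigned long
        percentile(const unsigned long *bounds_us,
                   const unsigned long *counts, int nbuckets, int p)
        {
            unsigned long total = 0;
            for (int i = 0; i < nbuckets; i++)
                total += counts[i];
            unsigned long rank = (total * p + 99) / 100; /* ceil */
            unsigned long seen = 0;
            for (int i = 0; i < nbuckets; i++) {
                seen += counts[i];
                if (seen >= rank)
                    return bounds_us[i];
            }
            return bounds_us[nbuckets - 1];
        }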
    • say: Fix log_rotate · 26a4effe
      Ilya Markov authored
      * Refactor tests.
      * Add ev_async and fiber_cond for thread-safe log_rotate usage.

      Follow up #3015
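
      A minimal libev sketch of the wake-up pattern (illustrative;
      ev_async delivers the wakeup, and the actual fix also uses a
      fiber_cond so the caller can wait for completion):

        #include <ev.h>
        #include <stdio.h>

        static ev_async rotate_async;

        /* Runs in the loop thread, so the log file is only ever
         * reopened from a single thread. */
        static void
        rotate_cb(struct ev_loop *loop, ev_async *w, int revents)
        {
            (void)w;
            (void)revents;
            puts("rotate the log here");
            ev_break(loop, EVBREAK_ALL);
        }

        int
        main(void)
        {
            struct ev_loop *loop = EV_DEFAULT;
            ev_async_init(&rotate_async, rotate_cb);
            ev_async_start(loop, &rotate_async);
            /* Any thread (e.g. a signal handler's thread) may send. */
            ev_async_send(loop, &rotate_async);
            ev_run(loop, 0);
            return 0;
        }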
    • log: Fix logger.test.lua · d0dcc8b9
      Ilya Markov authored
      Fix a race condition in the test on log_rotate.
      The test opened the file that must be created by log_rotate and read
      from it. But as log_rotate is executed in a separate thread, the file
      may not be created, or the log line may not be written yet, by the
      time the test opens it.

      Fix this by waiting for the file to be created and then reading the
      line.
    • netbox: deprecate console support · bd06e32a
      Vladislav Shpilevoy authored
      Print a warning about that. After a while the console support will
      be deleted from netbox.
    • console: do not use netbox for console text connections · 1730c538
      Vladislav Shpilevoy authored
      Netbox console support complicates both netbox and console. Let's
      use sockets directly for the text protocol.

      Part of #2677
    • netbox: allow to create a netbox connection from existing socket · d2468dac
      Vladislav Shpilevoy authored
      It is needed to create a binary console connection when a socket has
      already been created and a greeting has been read and decoded.
    • Merge branch '1.10' into 2.0 · dd240de1
      Konstantin Osipov authored
    • bloom: drop spectrum · bc859dce
      Vladimir Davydov authored
      As was pointed out earlier, the bloom spectrum concept is rather
      dubious: its overhead for a reasonable false positive rate is about
      10 bytes per record, while storing all hashes in an array takes only
      4 bytes per record. So one can stash all the hashes and count the
      records first, then create the optimal bloom filter and add all the
      hashes to it.
    • bloom: optimize tuple bloom filter size · 4357bcf3
      Vladimir Davydov authored
      When we check if a multi-part key is hashed in a bloom filter, we
      check all its sub keys as well, so the resulting false positive rate
      equals the product of the false positive rates of the bloom filters
      created for each sub key.

      The false positive rate of a bloom filter is given by the formula:

        f = (1 - exp(-kn/m)) ^ k

      where m is the number of bits in the bloom filter, k is the number of
      hash functions, and n is the number of elements hashed in the filter.
      By varying n, we can estimate the false positive rate of an existing
      bloom filter when used for a greater number of elements; in other
      words, we can estimate the false positive rate of a bloom filter
      created for checking sub keys when used for checking full keys.

      Knowing this, we can adjust the target false positive rate of a bloom
      filter used for checking keys of a particular length based on the
      false positive rates of the bloom filters used for checking its sub
      keys. This reduces the number of hash functions required to conform
      to the configured false positive rate and hence the bloom filter
      size.

      Follow-up #3177
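
      A self-contained C sketch of the estimate (a direct transcription of
      the formula above, not the actual bloom.c code):

        #include <math.h>

        /* False positive rate of a bloom filter with m bits and k hash
         * functions after n elements have been hashed into it. */
        static double
        bloom_fpr(double m, double k, double n)
        {
            return pow(1.0 - exp(-k * n / m), k);
        }

      For example, re-evaluating bloom_fpr() with a larger n estimates how
      a filter built for sub keys behaves when consulted for full keys.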
    • vinyl: introduce bloom filters for partial key lookups · fc654aaf
      Vladimir Davydov authored
      Currently, we store and use a bloom filter only for full-key lookups.
      However, there are use cases when we can also benefit from
      maintaining bloom filters for partial keys - see #3177 for an
      example. So this patch replaces the current full-key bloom filter
      with a multipart one, which is basically a set of bloom filters, one
      for each partial key. Old bloom filters stored on disk are recovered
      as is, so users will see the benefit of this patch only after a major
      compaction takes place.

      When a key or tuple is checked against a multipart bloom filter, we
      check all its partial keys to reduce the false positive rate.
      Nevertheless, there's no size optimization as of now. E.g. even if
      the cardinality of a partial key is the same as of the full key, we
      will still store two full-sized bloom filters, although we could
      probably save some space in this case by assuming that checking
      against the bloom filter corresponding to a partial key reduces the
      false positive rate of full-key lookups. This is addressed later in
      the series.

      Before this patch we used a bloom spectrum object to construct a
      bloom filter. A bloom spectrum is basically a set of bloom filters
      ranging in size. The point of using a spectrum is that we don't know
      what the run size will be while we are writing it, so we create 10
      bloom filters and choose the best of them after we are done. With the
      default bloom fpr of 0.05 that is a 10-byte overhead per record,
      which seems to be OK. However, if we try to optimize other parameters
      as well, e.g. the number of hash functions, the cost of a spectrum
      becomes prohibitive. The funny thing is that a tuple hash is only 4
      bytes long, which means that if we stored all hashes in an array and
      built a bloom filter after we'd written a run, we would reduce the
      memory footprint by more than half! And that would only slightly
      increase the run write time, as scanning a memory map of hashes and
      constructing a bloom filter is cheap in comparison to merging runs.
      Putting it all together: this patch stops using the bloom spectrum;
      instead, we stash all hashes in a new bloom builder object and use
      them to build a perfect bloom filter after the run has been written
      and we know the cardinality of each partial key.

      Closes #3177
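
      A toy C sketch of the multipart check (a real implementation derives
      one hash per partial key with proper hash functions; this only shows
      the control flow):

        #include <stdbool.h>
        #include <stdint.h>

        /* Toy single-probe bloom filter: 64 bits, one hash. */
        struct bloom {
            uint64_t bits;
        };

        static bool
        bloom_maybe_has(const struct bloom *b, uint32_t hash)
        {
            return (b->bits >> (hash % 64)) & 1;
        }

        /* A tuple may match only if every partial-key filter says
         * "maybe", so the false positive rates multiply. */
        static bool
        multipart_maybe_has(const struct bloom *parts, int n_parts,
                            const uint32_t *part_hashes)
        {
            for (int i = 0; i < n_parts; i++) {
                if (!bloom_maybe_has(&parts[i], part_hashes[i]))
                    return false;
            }
            return true;
        }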
    • bloom: rename bloom_possible_has to bloom_maybe_has · f03fd4db
      Vladimir Davydov authored
      Suggested by @kostja
    • bloom: use malloc for bitmap allocations · 78df5acd
      Vladimir Davydov authored
      There's absolutely no point in using mmap() instead of malloc() for
      bitmap allocation - malloc() will fall back on mmap() anyway,
      provided the allocation is large enough.

      Note about the unit test: since we don't round the bloom filter size
      up to a multiple of the page size anymore, we have to use a more
      sophisticated hash function for the test to pass.
    • test: vinyl/layout: fix bloom filter filtering in output · 88c4c19a
      Vladimir Davydov authored
      We filter out bloom filters, because they depend on the ICU version
      and hence the test output may vary from one platform to another (see
      commit 0a37ccad "Filter out bloom_filter in vinyl/layout.test.lua").
      However, using test_run for this is unreliable, because a bloom
      string can contain newline characters and hence be split into
      multiple lines in console output, in which case the filter won't
      work. Fix this by filtering bloom_filter manually.
    • sql: remove unnecessary SCopy opcodes · 6733a400
      Bulat Niatshin authored
      Since the OP_NoConflict opcode appears in INSERT/UPDATE VDBE listings
      only when a UNIQUE constraint check can't be handled by Tarantool
      (after pushing 2255 to 2.0), the related OP_SCopy should appear in a
      VDBE listing only when OP_NoConflict is present. This patch contains
      a small fix for that.

      Fix for #2255
    • sql: correct confusing message · 09a0dc5c
      Hollow authored
      Required to fix the following message:

      "- error: expressions prohibited in PRIMARY KEY and UNIQUE constraints"

      Currently this message appears even when the user tries to CREATE a
      non-unique functional index, to which this error message does not
      apply. Hence the message was corrected to the proper one.

      Closes #3236
    • sql: do not bless tuples · 71051d23
      Vladislav Shpilevoy authored
      Tuple blessing is generally needed for Lua. Bless references a
      tuple before returning it, to be able to unref it when the next
      result is returned and the previous one has already gone into Lua or
      another public API.

      For internal usage there is no need to bless a tuple:
      - to just get a tuple field you can skip even a simple ref, because
        tuple_field_...() does not yield, and the tuple cannot be
        deleted during it;
      - for internal SQL iterators you simply reference the new tuple
        and unref the previous one yourself, with no bless.
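
      A toy C sketch of the rule in the second bullet (tuple_ref() and
      tuple_unref() here are simplified stand-ins, no bless involved):

        #include <assert.h>
        #include <stddef.h>

        struct tuple {
            int refs;
        };

        static void
        tuple_ref(struct tuple *t)
        {
            t->refs++;
        }

        static void
        tuple_unref(struct tuple *t)
        {
            assert(t->refs > 0);
            t->refs--;
        }

        /* Advance an internal iterator: take a reference to the new
         * tuple first, then drop the reference to the previous one. */
        static void
        iterator_advance(struct tuple **prev, struct tuple *next)
        {
            if (next != NULL)
                tuple_ref(next);
            if (*prev != NULL)
                tuple_unref(*prev);
            *prev = next;
        }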
    • sql: rework OP_OpenWrite/OpenRead · d61e4b2a
      Nikita Pettik authored
      After the new SQL DDL implementation has been introduced, the
      OP_OpenWrite, OP_OpenRead and OP_ReopenIdx opcodes can be refactored.

      Firstly, if the schema versions at compile time and at runtime don't
      match, finish VDBE execution with an appropriate error message. The
      exception is the situation when the fifth operand is set to
      OPFLAG_FRESH_PTR, which means that the space pointer has been fetched
      at runtime right before that.

      Secondly, there is no need to fetch the number of columns of an index
      from KeyInfo: the iterator yields the full tuple, so it is always
      equal to the number of fields in the whole space.

      Finally, we can now always pass a space pointer to these opcodes
      regardless of the DML routine. In the case of the OP_ReopenIdx
      opcode, the space and index of the given cursor are checked for
      equality with those given in the arguments. If they match, the
      opcode becomes a no-op.
    • sql: rework code generation for DDL routine · 795d9050
      Nikita Pettik authored
      Now that new opcodes to operate on system spaces have been
      introduced, there is no more need to open cursors on such tables for
      insert/delete operations. In addition, this allows us to get rid of
      system space lookups in the SQL data dictionary. Moreover, DDL
      previously relied on nested parsing to make deletions from system
      spaces, but during nested parsing it is impossible to convert a
      space id to a space pointer (i.e. emit the OP_SIDtoPtr opcode)
      without overhead or "hacks". So, nested parsing has been replaced
      with hardcoded sequences of opcodes implementing the same logic.

      Closes #3252
    • sql: introduce opcodes to operate on system spaces · cbbb3a2e
      Nikita Pettik authored
      Since it is impossible to use space pointers during execution of DDL
      (after any DDL the schema may change, and pointers fetched at compile
      time can expire), special opcodes to operate on system spaces have
      been introduced: OP_SInsert and OP_SDelete. They take a space id as
      an argument and perform a space lookup each time before an insertion
      or deletion. However, sometimes it is still required to iterate
      through a space (e.g. to satisfy a WHERE clause) during DDL. Since
      cursors now rely on pointers to spaces, it is required to convert a
      space id to a space pointer during VDBE execution. Hence, another
      opcode is added: OP_SIDtoPtr. Finally, existing opcodes which are
      invoked only during DDL have also been refactored.

      Part of #3252
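
      A self-contained toy C sketch of the lookup-on-every-execution idea
      behind OP_SInsert (all names and the schema table are illustrative,
      not the actual VDBE code):

        #include <stddef.h>
        #include <stdint.h>

        struct space {
            uint32_t id;
        };

        /* Toy schema: in reality this is a data dictionary lookup,
         * and any DDL may have changed the dictionary. */
        static struct space schema[2] = {{280}, {281}};

        static struct space *
        space_by_id(uint32_t id)
        {
            for (size_t i = 0; i < 2; i++) {
                if (schema[i].id == id)
                    return &schema[i];
            }
            return NULL;
        }

        /* OP_SInsert-style step: resolve the id on every execution
         * instead of caching a pointer that DDL could invalidate. */
        static int
        sinsert_step(uint32_t space_id)
        {
            struct space *space = space_by_id(space_id);
            if (space == NULL)
                return -1; /* space dropped by an intervening DDL */
            /* ... insert into *space ... */
            return 0;
        }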
    • sql: pass space pointer to OP_OpenRead/OpenWrite · 1f24d461
      Nikita Pettik authored
      Originally in SQLite, to open a table (i.e. a btree) it was required
      to pass the root page number to the OP_OpenRead or OP_OpenWrite
      opcode as an argument. However, now there are only Tarantool spaces,
      and nothing prevents us from operating directly on pointers to them.
      Thus, to pass pointers from compile time to runtime, the OP_LoadPtr
      opcode has been introduced. It fetches a pointer from P4 and stores
      it in the register specified by P2.
      It is worth mentioning that pointers can expire after schema changes
      (i.e. after a DDL routine). For this reason, the schema version is
      saved in the VDBE at compile time and checked each time a cursor is
      opened.

      Part of #3252
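
      A minimal C sketch of the version guard (struct and field names are
      illustrative):

        #include <stdbool.h>
        #include <stdint.h>

        struct vdbe {
            /* Schema version saved at compile time. */
            uint32_t schema_version;
        };

        /* Checked before every cursor open: a pointer loaded by
         * OP_LoadPtr is only trusted while the schema version the
         * statement was compiled against still matches. */
        static bool
        vdbe_ptr_still_valid(const struct vdbe *vdbe,
                             uint32_t current_schema_version)
        {
            return vdbe->schema_version == current_schema_version;
        }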
    • sql: replace pgnoRoot with struct space in BtCursor · 07e10012
      Nikita Pettik authored
      Instead of passing an encoded space id and index id to the SQL
      bindings, pointers to the space and index are saved in the cursor
      and passed implicitly. Space and index lookups happen once, during
      the execution of OP_OpenRead/OP_OpenWrite. Moreover, with a struct
      space at hand it has become possible to remove several
      wrapper-function calls on insertions and deletions by invoking
      box_process_rw() directly.

      Closes #3122