  1. Mar 04, 2020
    • sql: support constraint drop · 85adac03
      Roman Khabibov authored
      Extend the <ALTER TABLE> statement to drop table constraints by their
      names.
      
      Closes #4120
      
      @TarantoolBot document
      Title: Drop table constraints in SQL
      It is now possible to drop table constraints (PRIMARY KEY,
      UNIQUE, FOREIGN KEY, CHECK) by name using the
      <ALTER TABLE table_name DROP CONSTRAINT constraint_name>
      statement.
      
      For example:
      
      tarantool> box.execute([[CREATE TABLE test (
                                   a INTEGER PRIMARY KEY,
                                   b INTEGER,
                                   CONSTRAINT cnstr CHECK (a >= 0)
                              );]])
      ---
      - row_count: 1
      ...
      
      tarantool> box.execute('ALTER TABLE test DROP CONSTRAINT cnstr;')
      ---
      - row_count: 1
      ...
      
      The same works for all the other constraint types.
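
      For instance, a named UNIQUE constraint can be dropped the same way.
      A minimal sketch (the table and constraint names below are
      illustrative, not taken from the patch):

      tarantool> box.execute([[CREATE TABLE test2 (
                                   a INTEGER PRIMARY KEY,
                                   b INTEGER,
                                   CONSTRAINT uniq UNIQUE (b)
                              );]])
      ---
      - row_count: 1
      ...

      tarantool> box.execute('ALTER TABLE test2 DROP CONSTRAINT uniq;')
      ---
      - row_count: 1
      ...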
      85adac03
    • sql: don't select from _index during parsing · 4bdcb3fa
      Roman Khabibov authored
      Remove the box_index_by_name() call from the parser to avoid selects
      during parsing. Add the ability to choose an index during VDBE code
      compilation; it will be used to find the tuple to drop from a
      system space.
      
      Needed for #4120
      4bdcb3fa
    • sql: improve "no such constraint" error message · 7d558ae8
      Roman Khabibov authored
      Clarify the error message to make it more useful for the user. Add the
      name of the space in which the constraint being dropped was not found.
      
      Part of #4120
      7d558ae8
  2. Mar 03, 2020
    • replication: fix rebootstrap in case the instance is listed in box.cfg.replication · dbcfaf70
      Serge Petrenko authored
      When checking whether a rejoin is needed, the replica loops through all
      the instances in box.cfg.replication, which makes it believe that there
      is a master holding the files it needs, since it counts itself just like
      all the other instances.
      So make the replica skip itself when looking for an instance that holds
      the files it needs and when determining whether a rebootstrap is needed.
      
      We already have a test for the issue, but it missed the bug due to the
      replica.lua replication settings. Fix replica.lua to optionally include
      the instance itself in box.cfg.replication, so that the corresponding
      test works correctly.
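
      A minimal sketch of the kind of configuration affected (the URIs and
      ports are made up for illustration): the instance's own listen URI
      appears in box.cfg.replication alongside the other instances.

      ```lua
      -- Hypothetical replica configuration: the instance lists its own URI
      -- in box.cfg.replication, which used to confuse the rebootstrap check.
      box.cfg{
          listen      = 3302,
          replication = {'localhost:3301', -- master
                         'localhost:3302'} -- this instance itself
      }
      ```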
      
      Closes #4759
      dbcfaf70
  3. Mar 02, 2020
    • replication: do not relay rows coming from a remote instance back to it · ed2e1430
      Serge Petrenko authored
      We have a mechanism for restoring rows originating from an instance that
      suffered a sudden power loss: remote masters resend the instance's rows
      received before a certain point in time, defined by the remote master's
      vclock at the moment of subscribe.
      However, this is useful only during initial replication configuration,
      when an instance has just recovered, so that it can receive what it has
      relayed but hasn't synced to disk.
      In other cases, when an instance is operating normally and master-master
      replication is configured, the mechanism described above may lead to the
      instance re-applying its own rows, coming from a master it has just
      subscribed to.
      To fix the problem, do not relay rows coming from a remote instance if
      that instance has already recovered.
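
      A minimal sketch of the master-master setup where the problem could show
      up (the URIs are illustrative): each instance subscribes to the other, so
      without the fix a master could relay the subscriber's own rows back to it.

      ```lua
      -- Hypothetical master-master configuration (illustrative URIs).
      -- Instance A:
      box.cfg{listen = 3301, replication = {'localhost:3302'}}
      -- Instance B:
      box.cfg{listen = 3302, replication = {'localhost:3301'}}
      ```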
      
      Closes #4739
      ed2e1430
    • replication: implement an instance id filter for relay · 45de9907
      Serge Petrenko authored
      Add a filter for relay to skip rows coming from unwanted instances.
      A list of instance ids whose rows the replica doesn't want to fetch is
      encoded together with the SUBSCRIBE request under a freshly introduced
      key, IPROTO_ID_FILTER.
      
      Filtering rows is needed to prevent an instance from fetching its own
      rows from a remote master, which is useful on initial configuration and
      harmful on resubscribe.
      
      Prerequisite #4739, #3294
      
      @TarantoolBot document
      
      Title: document new binary protocol key and subscribe request changes
      
      Add the key `IPROTO_ID_FILTER = 0x51` to the internals reference.
      This is an optional key used in the SUBSCRIBE request, followed by an
      array of ids of instances whose rows won't be relayed to the replica.

      The SUBSCRIBE request is supplemented with an optional field of the
      following structure:
      ```
      +====================+
      |      ID_FILTER     |
      |   0x51 : ID LIST   |
      | MP_INT : MP_ARRAY  |
      |                    |
      +====================+
      ```
      The field is encoded only when the id list is not empty.
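
      A rough illustration of how such a key/value pair could be encoded. This
      is only a sketch using Tarantool's msgpack Lua module and made-up
      instance ids; it is not the actual applier code:

      ```lua
      local msgpack = require('msgpack')
      -- Encode the optional SUBSCRIBE field: key 0x51 (IPROTO_ID_FILTER)
      -- mapped to an MP_ARRAY of instance ids whose rows must not be relayed.
      local id_filter = msgpack.encode({[0x51] = {2, 3}})
      ```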
      45de9907
    • wal: warn when trying to write a record with a broken lsn · e0750262
      Serge Petrenko authored
      There is an assertion in vclock_follow `lsn > prev_lsn`, which doesn't
      fire in release builds, of course. Let's at least warn the user on an
      attempt to write a record with a duplicate or otherwise broken lsn, and
      not follow such an lsn.
      
      Follow-up #4739
      e0750262
    • box: expose box_is_orphan method · 7b83b73d
      Serge Petrenko authored
      The is_orphan status check is needed by the applier in order to tell
      the relay whether to send the instance's own rows back or not.
      
      Prerequisite #4739
      7b83b73d
  4. Feb 28, 2020
  5. Feb 27, 2020
    • test: unit/popen · 40a51647
      Cyrill Gorcunov authored
      
      Basic tests for popen engine
      
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      40a51647
    • popen: introduce a backend engine · f58cb606
      Cyrill Gorcunov authored
      
      In this patch we introduce the popen backend engine, which provides
      a way to execute external programs and communicate with their
      stdin/stdout/stderr streams.
      
      It is possible to run a child process with:

       a) completely closed stdX descriptors
       b) /dev/null provided as the appropriate stdX
       c) a new transport passed into the child (currently we use
          pipes for this sake, but may extend to ttys/sockets)
       d) stdX inherited from the parent, i.e. nothing done
      
      On tarantool start we create the @popen_pids_map hash, which maps the
      PIDs of created processes to popen_handle structures; this structure
      keeps everything needed to control and communicate with the children.
      The hash allows us to find a child process quickly from inside
      a signal handler.

      Each handle is linked into the @popen_head list, which is needed to be
      able to destroy child processes on exit (i.e. when we exit tarantool
      and need to clean up the used resources).
      
      Every new process is born by a vfork() call - we can't use fork()
      because the at_fork() handlers in libeio cause a deadlock on an
      internal mutex. Thus the caller waits until vfork() finishes its work
      and runs exec (or exits with an error).
      
      Because child processes run without any limitations, they can exit by
      themselves or be killed by some third party (say, a user of the
      hardware node), so we need to watch their state, which is done by
      setting a hook with the ev_child_start() helper. This helper allows us
      to catch SIGCHLD when a child exits or is signaled and to unregister
      it from the pool of currently running children.
      Note that the libev wait() reaps the child zombie by itself. Another
      interesting detail is that libev catches the signal asynchronously,
      but our SIGCHLD hook is called synchronously, before the child is
      reaped.
      
      This engine provides the following API:
       - popen_init
      	to initialize engine
       - popen_free
      	to finalize engine and free all resources
      	allocated so far
       - popen_new
      	to create a new child process and start it
       - popen_delete
      	to release resources occupied and
      	terminate a child process
       - popen_stat
      	to fetch statistics about a child process
       - popen_command
      	to fetch command line string formerly used
      	on the popen object creation
       - popen_write_timeout
      	to write data into child's stdin with
      	timeout
       - popen_read_timeout
      	to read data from child's stdout/stderr
      	with timeout
       - popen_state
      	to fetch state (alive, exited or killed) and
      	exit code of a child process
       - popen_state_str
      	to get state of a child process in string
      	form, for Lua usage mostly
       - popen_send_signal
      	to send signal to a child process (for
      	example to kill it)
      
      Known issues to fix in next series:
      
       - environment variables for non-linux systems do not support
         inheritance for now due to lack of testing on my side;
      
       - for linux-based systems we use the popen2 system call, passing the
         O_CLOEXEC flag, so that two concurrent popen_create calls
         would not affect each other via pipe inheritance (while
         currently we don't have a case where concurrent calls could
         happen, as far as I know, it is still better to be on the safe
         side from the beginning);
      
       - there are some files (such as xlogs) which tarantool opens
         for its own needs without setting the O_CLOEXEC flag, and they
         get propagated to child processes; for linux-based systems
         we use the close_inherited_fds helper, which walks over the opened
         files of a process and closes them. But for other targets
         like Mach-O or FreeBSD this helper is simply zapped, because
         I don't have such machines to experiment with; we should
         investigate this in more detail later, once the base
         code is merged in;
      
       - we need to consider the case where we will be using piping for
         descriptors (for example, we might be writing into the stdin
         of a child from another pipe; for this sake we could use the
         splice() syscall, which is going to be way faster than copying
         data between processes inside the kernel). Still, the question
         is -- do we really need it? Since we use internal flags
         in the popen handle, it should not be a big problem to extend
         this interface;

         this particular feature is considered to have a very low
         priority, but I left it here just so as not to forget.
      
      Part-of #4031
      
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      f58cb606
    • coio: export helpers · c81d9aa4
      Cyrill Gorcunov authored
      
      There is no reason to hide these functions. In particular,
      we will use these helpers in the popen code.
      
      Part-of #4031
      
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      c81d9aa4
    • test: add clean up for box/access test · e33216af
      Maria authored
      The commit e8009f41 ('box: user.grant
      error should be versatile') did not do proper clean-up: it grants
      non-default privileges for user 'guest' and does not revoke them at the
      end. That caused occasional failures of other tests, all with the same
      error saying user 'guest' already had access on universe.
      
      This case should be handled by test-run in the future, see [1].
      
      [1]: https://github.com/tarantool/test-run/issues/156
      
      Follows up #714
      e33216af
    • test: stabilize flaky fiber memory leak detection · d6cf327f
      Alexander Turenko authored
      After the #4736 regression fix (in fact it just reverts the new logic in
      small) it is again possible that a fiber's region holds memory for
      a while, but releases it eventually. When the used memory exceeds the
      128 KiB threshold, fiber_gc() puts 'garbage' slabs back into the
      slab_cache and subtracts them from the region_used() metric. But until
      this point those slabs are accounted for in region_used() and so in the
      fiber.info() metrics.
      
      This commit fixes flakiness of test cases of the following kind:
      
       | fiber.info()[fiber.self().id()].memory.used -- should be zero
       | <...workload...>
       | fiber.info()[fiber.self().id()].memory.used -- should be zero
      
      The problem is that the first `<...>.memory.used` value may be non-zero.
      It depends on the previous tests that were executed on this tarantool
      instance.
      
      The obvious way to solve it would be to print the differences between
      the `<...>.memory.used` values before and after a workload instead of
      absolute values. This, however, does not work, because the first slab in
      a region can be almost fully used at the point where a test case starts,
      and the next slab will be acquired from the slab_cache. This means that
      the previous slab becomes 'garbage' and will not be collected until the
      128 KiB threshold is exceeded: the latter `<...>.memory.used` check will
      return a bigger value than the former one. However, if the threshold
      is reached during the workload, the latter check may show a smaller
      value than the former one. In short, the test case would be unstable
      after this change.
      
      It is resolved by restarting the tarantool instance before such test
      cases to ensure that there are no 'garbage' slabs in the current fiber's
      region.
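
      A minimal sketch of such a test case preamble (assuming a test-run based
      test; the exact commands are illustrative):

      ```lua
      -- Restart the instance so the current fiber's region holds no
      -- 'garbage' slabs left over from previous test cases.
      test_run = require('test_run').new()
      test_run:cmd('restart server default')
      fiber = require('fiber')
      fiber.info()[fiber.self().id()].memory.used -- expected to be 0 now
      ```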
      
      Note: this works only if a test case reserves only one slab at a time:
      otherwise some memory may be held after the case (and so a
      memory check after the workload will fail). However, it seems that our
      cases are small enough not to trigger this situation.
      
      A call to region_free() would be enough, but we have no Lua API for it.
      
      Fixes #4750.
      d6cf327f
  6. Feb 25, 2020
  7. Feb 24, 2020
    • upgrade: fix generated sequence upgrade from 2.1 · 6d45a41e
      Vladislav Shpilevoy authored
      The bug was in an attempt to update a record in _space_sequence
      in-place, to add field path and number. This was not properly
      supported by the system space's trigger, and was banned in the
      previous patch of this series.
      
      But delete + tuple update + insert work fine. The patch uses them.
      
      To test it, the old disabled and heavily outdated
      xlog/upgrade.test.lua was replaced with a smaller analogue, which
      is supposed to be created separately for each upgrade bug,
      according to the new policy of creating test files.
      
      The patch tries to make it easy to add new upgrade tests and
      snapshots. A new test should consist of a fill.lua script to
      populate the spaces, a snapshot, the needed xlogs, and a .test.lua file.
      The fill script and the binaries should be in a folder named after the
      test file, located in a version folder, like this:
      
       xlog/
       |
       + <test_name>.test.lua
       |
       +- upgrade/
          |
          +- <version>/
          |   |
          |   +-<test_name>/
          |     |
          |     +- fill.lua
          |     +- *.snap
          |     +- *.xlog
      
      The version folder is supposed to say explicitly what version the files
      in there have.
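
      A minimal sketch of what such a fill.lua could look like (purely
      illustrative; the space name and data are made up):

      ```lua
      -- fill.lua: populate a space and leave a snapshot behind for the
      -- upgrade test to load.
      box.cfg{}
      local s = box.schema.space.create('test')
      s:create_index('pk')
      s:replace{1, 'value to be checked after upgrade'}
      box.snapshot()
      ```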
      
      Closes #4771
      6d45a41e
    • box: forbid to update/replace _space_sequence · 1a84b80e
      Vladislav Shpilevoy authored
      It does not work for generated sequences anyway. Proper support of
      update would complicate the code and wouldn't give anything useful.
      
      Part of #4771
      1a84b80e
    • upgrade: add missing sys triggers off and erasure · e1c7d25f
      Vladislav Shpilevoy authored
      Before doing anything, box.internal.bootstrap() turns off system
      space triggers, because it is likely to make some hard changes
      violating existing rules. It also erases the data from all system
      spaces to fill them from scratch.
      
      Each time a new space was added, its erasure and the turning-off of
      its triggers should have been added explicitly here. As a result,
      sometimes it was not done, by accident. For example, triggers
      were not turned off for _sequence_data, _sequence,
      _space_sequence.
      
      Content removal wasn't done for _space_sequence.
      
      The patch introduces a generic solution which does not require manually
      patching in trigger manipulation and truncation anymore.
      
      The bug was discovered while working on #4771, although it is not
      related.
      e1c7d25f
    • fiber: leak slab if unable to bring prots back · 8d53fadc
      Cyrill Gorcunov authored
      
      In case we are unable to revert the guard page back to
      read|write, we should never use such a slab again.

      Initially I thought of just putting a panic here and
      exiting, but that is too destructive. I think it is better to
      print an error and continue. If the node admin ignores
      this message, then at some point in the future there won't
      be any slabs left for use and creating new fibers will be
      prohibited.
      
      In the future (hopefully a near one) we plan to drop
      guard pages to prevent VMA fracturing and use
      stack marks instead.
      
      Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      8d53fadc
    • fiber: set diagnostics at madvise/mprotect failure · c6752297
      Cyrill Gorcunov authored
      
      Both madvise and mprotect calls can fail for various
      reasons, mostly because of a lack of free memory in the
      system.
      
      We log such cases via the say_x helpers, but this is not enough.
      In particular, tarantool/memcached relies on the diag error being
      set to detect an error condition:
      
       | expire_fiber = fiber_new(name, memcached_expire_loop);
       | const box_error_t *err = box_error_last();
       | if (err) {
       |	say_error("Can't start the expire fiber");
       |	say_error("%s", box_error_message(err));
       |	return -1;
       | }
      
      Thus let's use the diag_set() helper here and, instead of macros,
      use inline functions for better readability.
      
      Fixes #4722
      
      Reported-by: Alexander Turenko <alexander.turenko@tarantool.org>
      Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      c6752297
    • app: verify unix socket path length in socket.tcp_server() · aae93514
      Chris Sosnin authored
      Providing a socket pathname longer than UNIX_PATH_MAX to
      socket.tcp_server() does not cause any error: lbox_socket_local_resolve
      just truncates the name according to the limit, causing bad behavior
      (tarantool will try to access a socket which doesn't exist). Thus, let's
      verify that the pathname can fit into the buffer.
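
      A minimal sketch of the expected behavior after the fix (the path and
      handler are illustrative):

      ```lua
      local socket = require('socket')
      -- A unix socket path longer than UNIX_PATH_MAX should now be rejected
      -- instead of being silently truncated.
      local too_long = '/tmp/' .. string.rep('x', 200) .. '.sock'
      local ok, res = pcall(socket.tcp_server, 'unix/', too_long,
                            function() end)
      -- The call is expected to fail (ok == false or a nil server handle)
      -- rather than listen on a truncated path.
      ```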
      
      Closes #4634
      aae93514
  8. Feb 21, 2020
    • gitlab-ci: adjust base URL of RPM/Deb repositories · 4dee6890
      Alexander V. Tikhonov authored
      Our S3-based repositories now reflect the packagecloud.io repository
      structure.

      This will allow us to migrate from packagecloud.io without overly
      complicating the redirection rules on the web server serving
      download.tarantool.org.
      
      Deploy source packages (*.src.rpm) into a separate 'SRPM' repository,
      like packagecloud.io does.
      
      Changed the repository signing key from its subkey to the public key
      and moved it to the gitlab-ci environment.
      
      Follows up #3380
      4dee6890
  9. Feb 20, 2020
    • 70e18f5d
    • box/journal: sanitize completion naming · 66de0fe6
      Cyrill Gorcunov authored
      
      Use the on_complete prefix instead of on_done, since 'done' is too
      general, while we're trying to complete a write procedure. It is
      also more consistent with the txn_complete name.
      
      Acked-by: Konstantin Osipov <kostja.osipov@gmail.com>
      Acked-by: Nikita Pettik <korablev@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      66de0fe6
    • box/journal: use plain int for return value · 54ab4bf7
      Cyrill Gorcunov authored
      
      We're currently returning an int64_t with values 0 or -1; there is
      no need for such a big return type, a plain int is enough.
      
      Acked-by: Nikita Pettik <korablev@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      54ab4bf7
    • box/txn: fix void args mess · 11dec7dd
      Cyrill Gorcunov authored
      
      Using void explicitly in functions which take
      no arguments allows the compiler to optimize the code a bit and
      not assume that there might be variadic args.

      Moreover, in commit e070cc4d we dropped the arguments
      from txn_begin but didn't update vy_scheduler.c.
      The compiler didn't complain because it assumed
      there were varargs.
      
      Acked-by: Konstantin Osipov <kostja.osipov@gmail.com>
      Acked-by: Nikita Pettik <korablev@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      11dec7dd
    • travis-ci/gitlab-ci: remove Ubuntu 18.10 Cosmic · 961e8c5f
      Alexander V. Tikhonov authored
      Found that on 19.02.2020 the APT repositories with packages
      for Ubuntu 18.10 Cosmic were removed from the Ubuntu archive:
      
       E: The repository 'http://security.ubuntu.com/ubuntu cosmic-security Release' does not have a Release file.
       E: The repository 'http://archive.ubuntu.com/ubuntu cosmic Release' does not have a Release file.
       E: The repository 'http://archive.ubuntu.com/ubuntu cosmic-updates Release' does not have a Release file.
       E: The repository 'http://archive.ubuntu.com/ubuntu cosmic-backports Release' does not have a Release file.
      
      Also found the half-year-old announcement about the Ubuntu 18.10
      Cosmic EOL:
       https://fridge.ubuntu.com/2019/07/19/ubuntu-18-10-cosmic-cuttlefish-end-of-life-reached-on-july-18-2019/
      
      Removed Ubuntu 18.10 Cosmic from the gitlab-ci and
      travis-ci testing.
      961e8c5f
    • app: handle concatenated argv name-value correctly · 29cfd564
      Vladislav Shpilevoy authored
      The server used to crash when any option argument was passed with
      a value concatenated to it, like this: '-lvalue', '-evalue'
      instead of '-l value' and '-e value'.
      
      However this is a valid way of writing values, and it should not
      have crashed regardless of its validity.
      
      The bug was in the usage of the 'optind' global variable from the
      getopt() function family. It is not supposed to be used for getting an
      option's value. It points to the next argv to parse. The next argv !=
      the value of the current argv, as it was with '-lvalue' and '-evalue'.

      For getting the current value there is the 'optarg' variable.
      
      Closes #4775
      29cfd564
    • tuple: use field type in update of a float/double · daad1cb0
      Vladislav Shpilevoy authored
      There was a bug that float +/- float could result in infinity
      even if the result fits into a double. It was fixed by storing a double
      or a float depending on the result value. But it didn't take the result
      field type into account. That led to a bug when a double field +/- a
      value fit into the float range and could be stored as a float, resulting
      in an error at an attempt to create a tuple.

      Now if a field type is double in the tuple format, a double will always
      be stored, even if the value fits into the float range.
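
      A minimal sketch of the affected scenario (the space name and values are
      illustrative): with a 'double'-typed field in the format, the update
      result is now always stored as a double, so building the result tuple
      does not fail.

      ```lua
      local s = box.schema.space.create('dtest', {
          format = {{'id', 'unsigned'}, {'value', 'double'}},
      })
      s:create_index('pk')
      s:replace{1, 1.5}
      -- The '+' result stays a double because the field type is double.
      s:update(1, {{'+', 2, 2.5}})
      ```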
      
      Follow-up #4701
      daad1cb0
    • tuple: allow xrow_update sizeof reserve more memory · ce557f8d
      Vladislav Shpilevoy authored
      Currently xrow_update sizeof + store are used to calculate the
      result tuple's size, preallocate it as a monolithic memory block,
      and save the update tree into it.
      
      Sizeof was expected to return the exact memory size needed for the
      tuple. But this is not useful when the result size of a field depends
      on its format type, and therefore on its position in the tuple,
      because in that case sizeof would need to care about the tuple format
      and walk the format tree just like store does now. Alternatively, the
      found json tree nodes would need to be saved into struct
      xrow_update_op during the sizeof calculation. All of this would make
      the sizeof code more complex.
      
      The patch makes it possible for sizeof to return the maximal
      needed size. So, for example, a floating point field size now
      returns the size needed to encode a double. Then store can
      encode either a double or a float.
      
      Follow-up #4701
      ce557f8d
    • tuple: pass tuple format to xrow_update_*_store() · 55cb1957
      Vladislav Shpilevoy authored
      The tuple format is now passed to the xrow_update routines. It is going
      to be used for two purposes:
      
      - Find field types of the result tuple fields. It will be used to
        decide whether a floating point value should be saved with
        single or double precision;
      
      - In the future, use the format and the tuple offset map to find target
        fields in O(1), without decoding anything. This may be especially
        useful for JSON updates of indexed fields.
      
      For the types, the format is passed to the *_store() functions. Types
      can't be calculated earlier, because the '!' and '#' operations change
      the field order. Even if a type were calculated while applying the
      operations for field N, a '!' or '#' operation on a field < N would
      make this calculation useless.
      
      Follow-up #4701
      55cb1957
    • tuple: don't truncate float in :update() · fef4fdfc
      Vladislav Shpilevoy authored
      Before the patch the rules were:
      * float +/- double = double
      * double +/- double = double
      * float +/- float = float
      
      The rules were applied regardless of the values. That led to a problem
      when float + float, exceeding the maximal float value, could fit into
      a double but was stored as an infinity.

      The patch makes it so that if a floating point arithmetic operation
      result fits into a float, it is stored as a float; otherwise as a
      double, regardless of the initial types.

      This also saves some memory for cases when doubles can be
      stored as floats, and therefore take 4 fewer bytes. Although
      these cases are rare, because any non-integer value stored in a
      double may have a long garbage tail in its fraction.
      
      Closes #4701
      fef4fdfc
  10. Feb 19, 2020
    • app: os.setenv() affects os.environ() · 954d4bdc
      Vladislav Shpilevoy authored
      os.setenv() and os.environ() are Lua API for
      
          extern char **environ;
          int setenv();
      
      The Open Group standardized these access points for environment
      variables. But there is not a word saying that environ never
      changes. Programs can't rely on that. For example, the addition of
      a new variable may cause a realloc of the whole environ array, and
      therefore a change of its pointer value. That was exactly the case
      with os.environ() - it was using the value of the environ array
      remembered when Tarantool started.

      And os.setenv() could realloc the array and turn the saved pointer
      into garbage.
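
      A minimal sketch of the expected behavior after the fix (the variable
      name is illustrative):

      ```lua
      -- os.setenv() changes are now visible through os.environ(), even if
      -- adding the variable caused the environ array to be reallocated.
      os.setenv('MY_TEST_VAR', '42')
      assert(os.environ()['MY_TEST_VAR'] == '42')
      ```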
      
      Closes #4733
      954d4bdc
    • luajit: bump new version · 04dd6f43
      Kirill Yukhin authored
      Revert "build: introduce LUAJIT_ENABLE_PAIRSMM flag"
      
      Related to #4770
      04dd6f43
  11. Feb 18, 2020
    • gitlab-ci: enable performance testing · 87c68344
      Alexander V. Tikhonov authored
      Enabled Tarantool performance testing on Gitlab-CI for release/master
      branches and "*-perf" named branches. For this purpose 'perf' and
      'cleanup' stages were added to the Gitlab-CI pipeline.

      Performance testing supports the following benchmarks:
      
      - cbench
      - linkbench
      - nosqlbench (hash and tree Tarantool run modes)
      - sysbench
      - tpcc
      - ycsb (hash and tree Tarantool run modes)
      
      Benchmarks use scripts from the repository:
      http://github.com/tarantool/bench-run

      Performance testing uses docker images built with docker files from the
      bench-run repository:
      
      - perf/ubuntu-bionic:perf_master           -- parent image with
                                                    benchmarks only
      - perf_tmp/ubuntu-bionic:perf_<commit_SHA> -- child images used for
                                                    testing Tarantool sources
      
      @Totktonada: Harness and workloads are to be reviewed.
      87c68344
    • lua: handle uri.format empty input properly · 57f6fc93
      Oleg Babin authored
      After 7fd6c809
      (buffer: port static allocator to Lua) uri started to use the
      static_allocator - a cyclic buffer that is also used in
      several modules.

      However, the situation when the uri.format output is a zero-length
      string was not handled properly, and ffi.string could return data that
      was previously written to the static buffer, because it uses the first
      zero byte as a string terminator.

      To prevent such a situation, let's pass the result length explicitly.
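
      A minimal sketch of the case in question (the empty table input is
      illustrative):

      ```lua
      local uri = require('uri')
      -- An input that formats to a zero-length string must not leak whatever
      -- was previously written to the static buffer.
      local s = uri.format({})
      assert(s == '')
      ```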
      
      Closes #4779
      57f6fc93