  1. Apr 27, 2024
    • Magomed Kostoev's avatar
      perf: reduce the BPS tree perftest dataset · e279070a
      Magomed Kostoev authored
      Since performance benchmarks for three additional flavors of the
      BPS tree have been introduced, the number of tests in this suite
      has increased to 228. Given that some tests work with datasets of
      10M entries, the time required to run them has increased
      significantly.
      
      Mitigate this by reducing the test datasets.
      
      NO_DOC=perf test
      NO_TEST=perf test
      NO_CHANGELOG=perf test
      e279070a
    • Magomed Kostoev's avatar
      perf: add new BPS tree variations benchmarks · 76ff4029
      Magomed Kostoev authored
      These add three new configs to be tested in the benchmarks: tree with
      child cardinalities enabled, with inner cardinality enabled and with
      both of these.
      
      Along the way, simplified the performance analysis by reducing the
      memory allocation overhead (the memory is not required to be
      zero-initialized) and by moving the test tree build into a separate
      function.
      
      NO_DOC=perf test
      NO_TEST=perf test
      NO_CHANGELOG=perf test
      76ff4029
    • Magomed Kostoev's avatar
      bps: add 2-way support for logarithmic offsets · bfe83ac8
      Magomed Kostoev authored
      The current tree does not allow finding the offset of an element or
      creating an iterator to an element based on its offset. This patch
      fixes this by expanding the data structure with additional
      information, subtree cardinalities, and introducing methods that
      use it.

      A subtree cardinality is the number of elements in the subtree. For
      example, the cardinality of a leaf block is the count of elements
      in it (effectively it equals leaf.header.size), and the cardinality
      of an inner block is the sum of the cardinalities of its children.
      
      The patch includes two selectable ways to store this information:
      `BPS_INNER_CARD` and `BPS_INNER_CHILD_CARDS`.

      The first implementation stores the block cardinality in each inner
      block. It has minimal memory overhead (it just introduces a new
      64-bit field in `struct bps_inner`), but offset calculation is not
      that fast, since in order to find the offset of a particular child
      of an inner node we have to look into each of its preceding
      children.

      The second one stores an array of children cardinalities in inner
      blocks. The memory overhead of this implementation is noticeable,
      since it significantly decreases the children capacity of inner
      blocks: the max child count in an inner block drops from 42 to 25
      for a tree of 8-byte elements with 512-byte blocks and from 25 to
      18 for a tree of 16-byte elements with 512-byte blocks. Offset
      calculations are faster though.
      
      It's possible (though impractical) to enable both solutions: the
      tree will use the best way to perform each offset-based task, but
      will have to maintain both the children cardinality arrays and the
      inner blocks' own cardinalities.
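
      Below is a minimal Lua sketch of the offset-descent idea behind
      `BPS_INNER_CHILD_CARDS`. The real implementation lives in C inside
      bps_tree.h; the node layout here is purely illustrative.

      ```lua
      -- Hypothetical node layout: inner blocks carry `children` and a
      -- parallel `child_cards` array, leaf blocks carry `elems`.
      local function element_at(node, offset)
          while not node.is_leaf do
              local i = 1
              -- Skip whole subtrees that lie entirely before the offset.
              while offset >= node.child_cards[i] do
                  offset = offset - node.child_cards[i]
                  i = i + 1
              end
              node = node.children[i]
          end
          return node.elems[offset + 1] -- offsets are zero-based
      end
      ```

      With `BPS_INNER_CARD` there is no per-child array, so the same step
      has to sum the cardinalities of the preceding children by visiting
      them, which is why offset calculations are slower there.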
      
      Along with the theoretical support this patch introduces a bunch of
      functions using it:
      - `iterator_at(t, offset)`: gives an iterator to an element of a tree
        or tree view by its offset;
      - `find_get_offset(t, key, offset_ptr)`: the same as `find` but also
        provides to the user the offset of the found element in the output
        parameter;
      - `[lower|upper]_bound[_elem]_get_offset(t, key, exact, offset_ptr)`:
        the same as upper/lower bound functions but provide to the user the
        offset to the found position (end of the tree included).
      - `insert_get_offset(t, new_elem, replaced, offset_ptr)`: the same as
        `insert`, but also provides the offset to the inserted element.
      - `delete_get_offset(t, elem, offset_ptr)`: same as `delete`, but also
        returns offset to the deleted element prior to the deletion in the
        output parameter.
      
      Another new function introduced is bps_tree_view_debug_check(t). It
      is similar to bps_tree_debug_check(t), but is applicable to tree
      views. It's used to adapt the default tree view tests to the new
      tree variations.
      
      Each new implementation is covered by the old tree tests (these now
      support several tree variations selected with a C definition; the
      definitions are specified in test/unit/CMakeLists.txt).

      A new offset-API test is introduced (it covers both tree
      variations: BPS_INNER_CARD and BPS_INNER_CHILD_CARDS).
      
      Part of #8204
      
      NO_DOC=internal
      NO_CHANGELOG=internal
      bfe83ac8
    • Magomed Kostoev's avatar
      test: prepare BPS tree tests for new tree flavors · 8ecf3cdc
      Magomed Kostoev authored
      New BPS tree flavors are to be introduced and tested with the
      existing test suite. There are a bunch of problems though:
      1. The white box test uses magic constants to perform its checks;
         it is better to use the constants defined by bps_tree.h instead.
      2. The bps_tree.cc test itself is not TAP-compatible; fix this by
         introducing more assertions.
      3. The bps_tree_iterator.c test is not TAP-compatible either; it
         uses the result file to check some cases. Let's remove the
         manual printing tests and modify the automated ones to cover the
         removed cases.

      Along the way, performed minor bps_tree.cc test refactoring.
      
      NO_DOC=test update
      NO_CHANGELOG=test update
      8ecf3cdc
    • Magomed Kostoev's avatar
      bps: refactor the debug check functions · b5f6e0b0
      Magomed Kostoev authored
      Checkpatch does not permit modifying several parts of the inner
      debug check functions, complaining about too deep indentation. The
      modification will be required later to implement the LogN offset in
      the BPS tree, so this patch refactors the functions and introduces
      a helper function for this: bps_tree_debug_insert_and_move_next.
      
      The refactored functions are:
      - bps_tree_debug_check_insert_and_move_to_right_inner
      - bps_tree_debug_check_insert_and_move_to_left_inner
      - bps_tree_debug_check_insert_and_move_to_right_leaf
      - bps_tree_debug_check_insert_and_move_to_left_leaf
      
      NO_DOC=refactoring
      NO_TEST=refactoring
      NO_CHANGELOG=refactoring
      b5f6e0b0
  2. Apr 24, 2024
    • Alexander Turenko's avatar
      popen: add timeout for :wait() · 735b0dce
      Alexander Turenko authored
      This commit solves several problems:
      
      * Eliminates polling with fiber sleeps for a process status in `:wait()`.
        Now the method waits for libev's SIGCHLD watcher (via a fiber cond).
      * Fixes use-after-free and crash/infinite hang in `:wait()` when the
        handle is closed from another fiber.
      * Adds `timeout` parameter to `:wait()`.
      
      Popen handles are not reference counted, so the code that waits for
      process completion needs to be a bit tricky to avoid accessing
      possibly freed memory. I guess things would be simpler if we
      implemented refcounting on the handles, but the same set of
      problems is generally solved on the lua/popen side (it tracks
      `:close()` calls), and I don't see enough motivation to rearrange
      it. At least, not until we create the handles not only from Lua.
      
      Fixes #4915
      Fixes #7653
      Fixes #4916
      
      @TarantoolBot document
      Title: popen: :wait() now has the timeout parameter
      
      Usage example:
      
      ```lua
      local ph = popen.new(<...>)
      local res, err = ph:wait({timeout = 1})
      
      if res == nil then
          -- Timeout is reached.
          assert(err.type == 'TimedOut')
          <...>
      end
      ```
      
      Also `:wait()` now has defined behavior when the popen handle is closed
      from another fiber: the method returns the `ChannelIsClosed` error.
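
      A minimal sketch of the close-during-wait case; the error type name
      follows the paragraph above, and the concrete command is arbitrary:

      ```lua
      local fiber = require('fiber')
      local popen = require('popen')

      local ph = popen.shell('sleep 100', 'r')

      -- Close the handle from another fiber while :wait() is in
      -- progress.
      fiber.new(function()
          fiber.sleep(0.1)
          ph:close()
      end)

      local res, err = ph:wait()
      if res == nil then
          -- The handle was closed, not timed out.
          assert(err.type == 'ChannelIsClosed')
      end
      ```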
      
      Both updates should have 'Since X.Y.Z' marks in the documentation
      to allow users to decide whether to use the new features based on
      which Tarantool releases the calling code should support. IOW, a
      user may lean on the defined close-during-wait behavior or decide
      not to. The same is true for the new timeout option.
      
      See the `lbox_popen_wait()` comment for the updated formal description
      of the `<popen handle>:wait(<...>)` method.
      735b0dce
    • Andrey Saranchin's avatar
      test: fix flaky persistent triggers test · 927e3516
      Andrey Saranchin authored
      The replication test of persistent triggers waited only for the
      persistent triggers to arrive on the replica, so the replica tried
      to write to a space which had not been created there yet. Let's
      wait for all changes to arrive to make the test stable.
      
      Closes #9967
      
      NO_CHANGELOG=test
      NO_DOC=test
      927e3516
  3. Apr 23, 2024
    • Georgiy Lebedev's avatar
      netbox: close transport after stopping worker loop and wait for the stop · fcf7f5c4
      Georgiy Lebedev authored
      
      Currently, we close the transport from
      `luaT_netbox_transport_stop`, and we do not wait for the worker
      fiber to stop. This causes several problems.
      
      Firstly, the worker can switch context by yielding (`coio_wait`) or
      entering the Lua VM (`netbox_on_state_change`). During a context switch,
      the connection can get closed. When the connection is closed, its receive
      buffer is reset. If there was some pending response that was partially
      retrieved (e.g., a large select), then after resetting the buffer we will
      read some inconsistent data. We must not allow this to happen, so let's
      check for this case after returning from places where the worker can switch
      context. In between closing the connection and cancelling the connection's
      worker, an `on_disconnect` trigger can be called, which, in turn, can
      also yield, returning control to the worker before it gets cancelled.
      
      Secondly, when the worker enters the Lua VM, garbage collection can be
      triggered and the connection owning the worker could get closed
      unexpectedly to the worker.
      
      The fundamental source of these problems is that we close the transport
      before the worker's loop stops. Instead, we should close it after the
      worker's loop stops. In `luaT_netbox_transport_stop`, we should only cancel
      the worker, and either wait for the worker to stop, if we are not executing
      on it, or otherwise throw an exception (`luaL_testcancel`) to stop the
      worker's loop. The user will still have the opportunity to catch
      this exception and prevent stoppage of the worker at their own
      risk. To safeguard against this scenario, we will now keep the
      `is_closing` flag enabled once `luaT_netbox_transport_stop` is
      called and never disable it.
      
      There also still remains a special case of the connection getting garbage
      collected, when it is impossible to stop the worker's loop, since we cannot
      join the worker (yielding is forbidden from finalizers), and an exception
      will not go past the finalizer. However, this case is safe, since the
      connection is not going to be used by this point, so the worker can simply
      stop on its own at some point. The only thing we need to account for is
      that we cannot wait for the worker to stop: we can reuse the `wait` option
      of `luaT_netbox_transport_stop` for this.
      
      Closes #9621
      Closes #9826
      
      NO_DOC=<bugfix>
      
      Co-authored-by: default avatarVladimir Davydov <vdavydov@tarantool.org>
      fcf7f5c4
    • Nikolay Shirokovskiy's avatar
      box: add details to DML update/upsert specific errors · 5602c28d
      Nikolay Shirokovskiy authored
      Add the following UPDATE error payload fields:
      - space name
      - space id
      - index name
      - index id
      - tuple (tuple value on the moment of update)
      - ops (update operations)
      
      Add the following UPSERT error payload fields for invalid operation
      syntax (a sketch of reading these fields follows the lists):
      - space name
      - space id
      - ops (upsert operations)
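
      A minimal sketch of reading these fields from a caught error. The
      exact payload field names below are assumptions based on the lists
      above, and `test` is a hypothetical space:

      ```lua
      local log = require('log')

      local ok, err = pcall(function()
          box.space.test:update({1}, {{'=', 2, 'value'}})
      end)
      if not ok then
          local payload = err:unpack()
          -- Assumed field names: space, space_id, index, index_id,
          -- tuple, ops.
          log.info('space=%s space_id=%s', tostring(payload.space),
                   tostring(payload.space_id))
      end
      ```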
      
      Closes #7223
      
      NO_DOC=minor
      5602c28d
    • Nikolay Shirokovskiy's avatar
      box: add details to DML CANT_UPDATE_PRIMARY_KEY error · f9c7f89a
      Nikolay Shirokovskiy authored
      Add the following error payload fields:
      - space name
      - space id
      - old tuple
      - new tuple
      
      Part of #7223
      
      NO_CHANGELOG=unfinished
      NO_DOC=minor
      f9c7f89a
    • Nikolay Shirokovskiy's avatar
      box: allow to pass NULL for payload only error field · ffb36790
      Nikolay Shirokovskiy authored
      In this case the payload field will be omitted. We are going to use
      it with the CANT_UPDATE_PRIMARY_KEY error.
      
      Follows up #7223
      
      NO_CHANGELOG=internal
      NO_DOC=internal
      ffb36790
    • Nikolay Shirokovskiy's avatar
      box: add details to DML errors related to tuple validation · 40136d9e
      Nikolay Shirokovskiy authored
      Add the following error payload fields:
      - space name
      - space id
      - index name (where index is involved)
      - index id (where index is involved)
      - tuple
      
      Part of #7223
      
      NO_CHANGELOG=unfinished
      NO_DOC=minor
      40136d9e
    • Nikolay Shirokovskiy's avatar
      box: make replace_check_dup set diag · 1c2b14c2
      Nikolay Shirokovskiy authored
      Part of #7223
      
      NO_TEST=refactoring
      NO_CHANGELOG=refactoring
      NO_DOC=refactoring
      1c2b14c2
    • Nikolay Shirokovskiy's avatar
      box: panic on OOM on tuple validation · 0e392c29
      Nikolay Shirokovskiy authored
      Part of #7223
      
      NO_TEST=refactoring
      NO_CHANGELOG=refactoring
      NO_DOC=refactoring
      0e392c29
    • Nikolay Shirokovskiy's avatar
      box: add details to DML errors related to key validation · 8fc59b10
      Nikolay Shirokovskiy authored
      Add the following error payload fields:
      - space name
      - space id
      - index name
      - index id
      - key
      
      Part of #7223
      
      NO_CHANGELOG=unfinished
      NO_DOC=minor
      8fc59b10
    • Nikolay Shirokovskiy's avatar
      box: refactor index point lookup validation · a30d1418
      Nikolay Shirokovskiy authored
      Add an index uniqueness check to `exact_key_validate`. Also, while
      at it, let's drop the dead `index_find_.*xc` and the excess
      `exact_key_validate_nullable`.
      
      Part of #7223
      
      NO_TEST=refactoring
      NO_CHANGELOG=refactoring
      NO_DOC=refactoring
      a30d1418
  4. Apr 16, 2024
    • Sergey Ostanevich's avatar
      changelog: cleanup 3.1.0 changelogs · 7fd530f6
      Sergey Ostanevich authored
      Remove all changelogs reported in release notes for 3.1.0.
      
      NO_CHANGELOG=changelog
      NO_DOC=changelog
      NO_TEST=changelog
      7fd530f6
    • Aleksandr Lyapunov's avatar
      memtx: fix a bug with mvcc and exclude_null option · 14e21297
      Aleksandr Lyapunov authored
      Before this patch the MVCC engine expected that if index_replace
      sets `result` to NULL, then index_replace also sets `successor` to
      something (NULL or an existing tuple, depending on the index type).
      That looked fine because by contract `successor` is set when a true
      insertion happened.

      Unfortunately it was not considered that when an index has a part
      with the `exclude_null` option, the insertion can be silently
      skipped and thus `successor` may remain unset. The later access to
      it was actually UB.

      Fix it by explicitly checking tuple_key_is_excluded and handling
      this case correctly.

      Note that logically `index_replace` should return a flag indicating
      whether the new tuple was filtered out (excluded) by key_def. But
      on the other hand this flag is required only for MVCC, while the
      function already has lots of arguments and it's very cheap to
      determine this flag right from memtx_tx, so I decided to make the
      simplest possible patch.
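
      A minimal Lua sketch of the configuration that hits this code path;
      the space and index names are hypothetical, and MVCC must be
      enabled:

      ```lua
      box.cfg{memtx_use_mvcc_engine = true}

      local s = box.schema.space.create('test', {if_not_exists = true})
      s:create_index('pk', {if_not_exists = true})
      -- An index part with exclude_null: tuples with a null in this
      -- field are silently skipped by the index, so index_replace
      -- performs no true insertion and does not set `successor`.
      s:create_index('sk', {
          parts = {{2, 'unsigned', is_nullable = true,
                    exclude_null = true}},
          if_not_exists = true,
      })

      box.begin()
      s:replace{1, box.NULL} -- excluded from 'sk'
      box.commit()
      ```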
      
      NO_DOC=bugfix
      14e21297
  5. Apr 15, 2024
    • Sergey Vorontsov's avatar
      ci: add timeouts for workflow jobs · da682276
      Sergey Vorontsov authored
      By default, each job in a workflow can run for up to 6 hours of the
      execution time. If a job reaches this limit, the job is terminated by
      GitHub automatically and fails to complete. This patch sets job timeouts
      to 60 minutes to avoid waiting for jobs to complete for 6 hours.
      
      NO_DOC=ci
      NO_TEST=ci
      NO_CHANGELOG=ci
      da682276
    • Alexander Turenko's avatar
      build: use VK S3 for icu4c and zziplib archives · 03445e6b
      Alexander Turenko authored
      The CI/CD builds are performed on VK Cloud virtual machines, so the
      access to VK S3 is more reliable than to GitHub archives.
      
      In fact, we experience periodic download problems with source
      archives on GitHub in Tarantool Enterprise Edition builds in CI/CD,
      and that is the motivation to back up the archives on our side. The
      problems have appeared quite frequently over the last few days.

      The download problems are neither on the VK Cloud side nor on the
      GitHub side. The packet loss is somewhere in the middle. I don't
      know the exact reason for now.
      
      NO_DOC=no user-visible changes
      NO_CHANGELOG=see NO_DOC
      NO_TEST=see NO_DOC
      03445e6b
    • Gleb Kashkin's avatar
      config: add warnings on skipped URIs · bc18a054
      Gleb Kashkin authored
      There is a function `find_suitable_uri()` that basically looks for
      a URI with a non-zero address and port. This patch adds logging
      that makes it easy to understand whether a URI was skipped and why.
      Note that the additional logging is disabled by default and happens
      only during sharding and replicaset configuration.
      
      Closes #9644
      
      NO_DOC=internal
      NO_TEST=hard to test internal logging
      bc18a054
    • Gleb Kashkin's avatar
      config: add subj URI to config verification error · 54e90083
      Gleb Kashkin authored
      Before this patch, it was quite difficult to determine which URI in
      the config was unsuitable.
      Now the subject URI is included in the error body.
      
      NO_DOC=bugfix
      
      Part of #9644
      54e90083
    • Andrey Saranchin's avatar
      trigger: introduce persistent triggers · 4c81aba8
      Andrey Saranchin authored
      After this patch, functions with a non-empty `trigger` option will
      be inserted into the trigger registry. The option specifies the
      event names in which the function will be set as a trigger.

      When a function is created, it is set as a trigger without any
      checks - it can replace an existing one. When it is deleted, it can
      delete a trigger it hasn't set - for example, this can happen when
      a user manually replaces a persistent trigger. So, it is
      recommended to use different names (or even separate namespaces)
      for persistent and regular triggers.

      Note that both persistent and non-persistent funcs can be used as
      triggers. Also, when the triggers are called, access rights are
      not checked.
      
      Closes #8663
      
      @TarantoolBot document
      Title: Document `trigger` option of `box.schema.func.create`.
      
      The new option `trigger` allows creating persistent triggers. The
      option can be a string or an array of strings - the name (or names)
      of the events in which the trigger will be set.

      Each function created with `box.schema.func.create` has its own
      tuple in the system space `_func`. When a tuple with a non-empty
      `trigger` field is inserted, the function is set into the events
      listed by this option in the event registry; the function name is
      used as the trigger name. When such a tuple is deleted from
      `_func`, the triggers are deleted.

      When a function is created, it is set as a trigger without any
      checks - it can replace an existing one. When it is deleted, it can
      delete a trigger it hasn't set - for example, this can happen when
      a user manually replaces a persistent trigger. So, it is
      recommended to use different names (or even separate namespaces)
      for persistent and regular triggers.

      When a function is called as a trigger, access rights are ignored,
      so, effectively, every user that can trigger the event has access
      to your function, but only as a trigger.

      Since the space `_func` is not temporary, the persistent triggers
      will be set again after Tarantool is restarted. Also, since the
      space is not local, the persistent triggers will be replicated, so
      the user has to manually ensure that triggers (for example,
      `before_replace` or `before_commit`) run only on the master node,
      if the application requires such logic.
      
      Example of a persistent trigger on a single node:
      ```lua
      box.cfg{}
      
      -- Create spaces
      box.schema.space.create('cdc')
      box.space.cdc:create_index('pk')
      box.schema.space.create('my_space1')
      box.space.my_space1:create_index('pk')
      box.schema.space.create('my_space2')
      box.space.my_space2:create_index('pk')
      
      -- Set triggers
      local body = 'function(old, new) box.space.cdc:auto_increment{old, new} end'
      local events = {
          'box.space.my_space1.on_replace',
          'box.space.my_space2.on_replace'
      }
      -- Set the function as a trigger for two events at the same time.
      box.schema.func.create('example.space_trigger', {body = body, trigger=events})
      
      -- Some replaces
      box.space.my_space1:replace{0, 'v1'}
      box.space.my_space2:replace{0, 0}
      box.space.my_space1:replace{0, 'v2'}
      box.space.my_space2:replace{0, 1}
      print(box.space.cdc:fselect{})
      ```
      
      Here, restart Tarantool to check if the trigger will be restored.
      
      ```lua
      box.cfg{}
      box.space.my_space1:replace{1, 'v1'}
      box.space.my_space2:replace{1, 0}
      box.space.my_space1:replace{1, 'v2'}
      box.space.my_space2:replace{1, 1}
      print(box.space.cdc:fselect{})
      ```
      
      The output shows that all replaces were captured.
      
      Example of a persistent trigger in a cluster. In this scenario, the
      before_replace trigger is not idempotent, so it must be applied
      only once - on the actual replace, not during replication. For this
      purpose, `box.session.type()` can be used.
      ```lua
      -- instance1.lua
      
      local fiber = require('fiber')
      box.cfg{}
      box.schema.user.grant('guest', 'super')
      local body = [[
          function(old_tuple, new_tuple)
              -- Convert kilograms into grams
              if box.session.type() ~= 'applier' then
                  return box.tuple.new{new_tuple[1], new_tuple[2] * 1000}
              end
          end
      ]]
      local event = 'box.space.weights.before_replace'
      box.schema.func.create('example.replicated_trigger', {body = body, trigger = event})
      box.schema.space.create('weights')
      box.space.weights:format({
          {name = 'name', type = 'string'},
          {name = 'gramms', type = 'unsigned'},
      })
      box.space.weights:create_index('primary', {parts = {'name'}})
      box.cfg{listen = 3301, replication = {3301, 3302}}
      
      box.ctl.wait_rw()
      
      box.space.weights:replace{'elephant', 4000}
      box.space.weights:replace{'crocodile', 600}
      
      -- Wait for another instance
      while box.space.weights:count() ~= 4 do
          fiber.sleep(0)
      end
      print(box.space.weights:fselect{})
      ```
      
      Another instance:
      
      ```lua
      -- instance2.lua
      local fiber = require('fiber')
      box.cfg{listen = 3302, replication = {3301, 3302}}
      
      box.ctl.wait_rw()
      
      box.space.weights:replace{'cat', 6}
      box.space.weights:replace{'dog', 10}
      
      -- Wait for another instance
      while box.space.weights:count() ~= 4 do
          fiber.sleep(0)
      end
      print(box.space.weights:fselect{})
      ```
      
      Output of both instances:
      ```
      +-----------+-------+
      |   name    |gramms |
      +-----------+-------+
      |   "cat"   | 6000  |
      |"crocodile"|600000 |
      |   "dog"   | 10000 |
      |"elephant" |4000000|
      +-----------+-------+
      ```
      
      We see that the trigger was applied exactly once for each tuple.
      
      I would also point out that when the trigger is fired, it pins the
      function, so it's better not to use persistent triggers for
      intensive events if the trigger yields (if the trigger doesn't
      yield, the problem won't be encountered at all). But if one faces
      such a problem, they can manually drop the trigger via the
      `trigger` module, wait a while for the trigger to finish its
      execution, and only then drop the function.
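
      A minimal sketch of that order, assuming the trigger from the
      single-node example above and a `trigger.del(event, name)`
      signature:

      ```lua
      local trigger = require('trigger')
      local fiber = require('fiber')

      -- 1. Drop the trigger from the event registry first.
      trigger.del('box.space.my_space1.on_replace',
                  'example.space_trigger')
      trigger.del('box.space.my_space2.on_replace',
                  'example.space_trigger')

      -- 2. Give in-flight trigger invocations some time to finish.
      fiber.sleep(1)

      -- 3. Only then drop the function itself.
      box.schema.func.drop('example.space_trigger')
      ```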
      4c81aba8
    • Andrey Saranchin's avatar
      trigger: support arbitrary func_adapter in trigger module · bb7b210d
      Andrey Saranchin authored
      The patch drops the invariant that all triggers are Lua ones. It is
      needed to introduce persistent triggers, which will be implemented
      with another func_adapter descendant.
      
      Part of #8663
      
      NO_TEST=see next commit
      NO_CHANGELOG=no behavior changes
      NO_DOC=no behavior changes
      bb7b210d
    • Andrey Saranchin's avatar
      func: introduce new option trigger · fafb063c
      Andrey Saranchin authored
      This option will be used as an event name for the function - when
      its tuple is replaced in the _func space, the function will be
      inserted into the trigger registry. The trigger option specifies
      the name of the event in which the trigger will be set; one can
      also pass an array of strings to set the trigger in several events.

      Along the way, fix the inappropriate error thrown when the
      `param_list` option of a function contains an object of invalid
      type (only strings are expected): ER_FIELD_TYPE, which was used
      there, is about a tuple, not an array.
      
      Part of #8663
      
      NO_CHANGELOG=later
      NO_DOC=later
      fafb063c
    • Andrey Saranchin's avatar
      func_adapter: introduce func_adapter_func · b1a63dfe
      Andrey Saranchin authored
      The commit introduces a new func_adapter implementation. The new
      `func_adapter_func` is a wrapper over `func`, but unlike `func`, it
      is always called without any access checks and can be called
      without passing ports for arguments and returned values.
      
      Part of #8663
      
      NO_CHANGELOG=internal
      NO_DOC=internal
      b1a63dfe
    • Andrey Saranchin's avatar
      port_lua: allow to dump several times · a1e5ab81
      Andrey Saranchin authored
      The commit gives port_lua the ability to be dumped several times.
      It's pretty cheap because under the hood lua_pushvalue is used,
      which copies only trivial objects and pushes references to heavy
      ones.
      
      NO_CHANGELOG=internal
      NO_DOC=internal
      a1e5ab81
    • Andrey Saranchin's avatar
      func_adapter_lua: allow to pass arguments on Lua stack from fiber · 4fd9f807
      Andrey Saranchin authored
      Currently, passing arguments on the Lua stack from a fiber to
      func_adapter_lua leads to problems, so let's manually check if
      `args` is a `port_lua`, and, if so, check whether its Lua stack is
      the same as the one stored in the fiber. If it is, we create a new
      one. This logic allows calling func_adapter_lua without thinking
      about sharing the same Lua stack.

      We could create a new Lua stack on every func_adapter call, but
      it's costly. For example, now one empty on_replace trigger slows
      Tarantool down by 27%, but without the optimization the slowdown is
      about 42%.
      
      NO_CHANGELOG=internal
      NO_DOC=internal
      4fd9f807
    • Andrey Saranchin's avatar
      bootstrap: disable dd checks on box.internal.bootstrap · 9ea0e369
      Andrey Saranchin authored
      Lately we restricted users' access to system spaces. So, in order
      to allow the schema upgrade to modify them, data dictionary checks
      are disabled for the current fiber when the upgrade is started and
      enabled back when it is completed. But we forgot to do the same for
      `box.internal.bootstrap`, so now it fails on an access check. The
      commit fixes this mistake.
      
      NO_TEST=not tested tool
      NO_CHANGELOG=internal
      NO_DOC=internal
      9ea0e369
    • Andrey Saranchin's avatar
      trigger: do not return handler from trigger.del · c54adc1e
      Andrey Saranchin authored
      It was a mistake to return the handler from trigger deletion - if
      the handler is not a Lua object, its lifetime is independent of
      Lua, and we basically return an object from its destructor. Let's
      just return nothing, because checking whether a trigger was
      actually deleted seems to be a misuse of our trigger paradigm.
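
      A minimal sketch of the new behavior, assuming the
      `trigger.set(event, name, handler)` and `trigger.del(event, name)`
      signatures:

      ```lua
      local trigger = require('trigger')

      trigger.set('myapp.myevent', 'mytrigger', function() end)

      -- The deleted handler is no longer returned.
      assert(trigger.del('myapp.myevent', 'mytrigger') == nil)
      ```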
      
      NO_CHANGELOG=not documented behavior
      NO_DOC=bugfix
      c54adc1e
  6. Apr 12, 2024
    • Mergen Imeev's avatar
      connpool: instances, replicasets and groups filter · a8164c77
      Mergen Imeev authored
      Follow-up #9842
      
      NO_CHANGELOG=the experimental module is not released yet
      
      @TarantoolBot document
      Title: The `instances`, `replicasets` and `groups` options
      
      Three new options were introduced in the `connpool.filter()` and
      `connpool.call()` functions: `instances`, `replicasets` and
      `groups` (a usage sketch follows the list).
      1) The `instances` option is a list of instance names to which the
         filtered instances should belong.
      2) The `replicasets` option is a list of replicaset names to which
         the filtered instances should belong.
      3) The `groups` option is a list of group names to which the
         filtered instances should belong.
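
      A minimal usage sketch, assuming `connpool.filter()` takes a single
      options table; the instance, replicaset and group names are
      hypothetical:

      ```lua
      local connpool = require('experimental.connpool')

      -- Only instances that belong to the listed names are considered.
      local candidates = connpool.filter({
          groups = {'group-001'},
          replicasets = {'replicaset-001'},
          instances = {'instance-001', 'instance-002'},
      })
      ```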
      a8164c77
    • Mergen Imeev's avatar
      connpool: prefer_ro and prefer_rw for call() · 60fdffbb
      Mergen Imeev authored
      This patch introduces "prefer_ro" and "prefer_rw" values for the "mode"
      option for the "call()" function.
      
      Closes #9930
      
      @TarantoolBot document
      Title: The `mode` option for `call()` and `filter()`
      
      The new `mode` option is now supported by the `call()` and
      `filter()` functions from the `experimental.connpool` module. This
      option allows filtering candidates based on their read-only status.
      
      The `filter()` function supports three values of the `mode` option:
      1) `nil` means that the `read_only` status of the instance is not
         checked;
      2) `ro` means that only instances with `read_only == true` are
         considered;
      3) `rw` means that only instances with `read_only == false` are
         considered.
      
      The `call()` function supports five values of the `mode` option:
      1) `nil` means that the `read_only` status of the instance is not
         checked when an instance is selected to execute `call()`;
      2) `ro` means that only instances with `read_only == true` are
         considered when an instance is selected to execute `call()`;
      3) `rw` means that only instances with `read_only == false` are
         considered when an instance is selected to execute `call()`;
      4) `prefer_ro` means that `call()` will be executed on instances
         with `read_only == false` only if it is not possible to execute
         it on instances with `read_only == true`;
      5) `prefer_rw` means that `call()` will be executed on instances
         with `read_only == true` only if it is not possible to execute
         it on instances with `read_only == false`.
      
      Note that if this option is not `nil`, a connection will be attempted to
      each instance in the config if a connection does not exist. This means
      that any of these functions can potentially block for a maximum of
      `<number of instances> * 10` seconds.
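
      A minimal sketch, assuming a `connpool.call(func_name, args, opts)`
      signature; `my_func` and its arguments are hypothetical:

      ```lua
      local connpool = require('experimental.connpool')

      -- Prefer read-only instances, fall back to a writable one.
      local res = connpool.call('my_func', {1, 2}, {mode = 'prefer_ro'})

      -- Consider only writable instances.
      local res_rw = connpool.call('my_func', nil, {mode = 'rw'})
      ```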
      60fdffbb
    • Mergen Imeev's avatar
      connpool: mode option for call() · c0a43139
      Mergen Imeev authored
      This patch adds a "mode" option to experimental.connpool.call().
      This option allows executing a function on instances with the
      desired RO status.
      
      Part of #9930
      
      NO_DOC=will be added later
      NO_CHANGELOG=will be added later
      c0a43139
    • Mergen Imeev's avatar
      connpool: mode option for filter() · 19787331
      Mergen Imeev authored
      This patch adds a "mode" option to experimental.connpool.filter().
      This option allows filtering instances based on their RO status.
      
      Part of #9930
      
      NO_DOC=will be added later
      NO_CHANGELOG=will be added later
      19787331
  7. Apr 11, 2024
    • Mergen Imeev's avatar
      connpool: fix connection error description · 441a2dd3
      Mergen Imeev authored
      This patch corrects the connection error description in
      "experimental.connpool.connect()".
      
      NO_DOC=fix error description
      NO_CHANGELOG=fix error description
      441a2dd3
    • Sergey Kaplun's avatar
      luajit: bump new version · db351d3b
      Sergey Kaplun authored
      * ci: bump version of actions/checkout
      * test: fix typo in the link to the issue
      * test: refactor CMake macro LibRealPath
      * test: move LibRealPath to the separate module
      * test: more cautious usage of LD_PRELOAD for ASan
      * test: fix lj-802-panic-at-mcode-protfail GCC+ASan
      * ci: execute LuaJIT tests with GCC 10 and ASAN
      * cmake: replace prove with CTest
      * Prevent down-recursion for side traces.
      * Handle stack reallocation in debug.setmetatable() and
        lua_setmetatable().
      * profilers: print user-friendly errors
      
      Closes #5994
      Closes #9595
      Closes #9217
      Closes #9656
      
      NO_DOC=LuaJIT submodule bump
      NO_TEST=LuaJIT submodule bump
      db351d3b
    • Nikolay Shirokovskiy's avatar
      box: add error field 'name' with name corresponding to code · 2bf95792
      Nikolay Shirokovskiy authored
      Closes #9875
      
      @TarantoolBot document
      Title: Add error field 'name'
      Product: Tarantool
      Since: 3.1
      
      The value is equal to the name used on creation.
      
      ```
       tarantool> box.error.new(box.error.ILLEGAL_PARAMS, 'foo').name
      ---
      - ILLEGAL_PARAMS
      ...
      ```
      2bf95792
    • Georgy Moshkin's avatar
      box: fix memory leak when dropping temporary spaces · a90819bd
      Georgy Moshkin authored
      Before this fix we used to leak memory when dropping
      fully-temporary spaces. The problem was that the memory is only
      freed in the on_commit callback of the transaction, but dropping a
      temporary space is a NOP (the metadata is not persisted), and the
      on_commit triggers were not invoked for NOP transactions.
      
      Closes #9296
      
      NO_DOC=bug fix
      a90819bd
    • Oleg Chaplashkin's avatar
      test: bump test-run to new version · 4466deaf
      Oleg Chaplashkin authored
      Bump test-run to new version with the following improvements:
      
      - Bump luatest to 1.0.1-5-g105c69d [1]
      - tap13: fix worker fail on failed TAP13 parsing [2]
      
      [1] tarantool/test-run@ed5b623
      [2] tarantool/test-run@7c1a0a7
      
      NO_DOC=test
      NO_TEST=test
      NO_CHANGELOG=test
      4466deaf