  1. Oct 16, 2020
    • Cyrill Gorcunov's avatar
      .gitignore: add coverity, patches and special ignore · 7a7d2e9e
      Cyrill Gorcunov authored
      
       - coverity stands for coverity scanner results
       - .pc for quilt
       - .git-ignore directory for various things which
         are not supposed to be in repo, just for convenience
      
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      7a7d2e9e
    • Cyrill Gorcunov's avatar
test/unit: fiber · aa78a941
      Cyrill Gorcunov authored
      
With an increased optimization level we found that the memset
calls might be optimized out; moreover, there was no check that
the calls themselves would not overrun the stack.

Let's rework the test to count the number of calls instead.
Strictly speaking the test is still a bit fragile, and precise
testing would rather require asm-level code.
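The counting approach can be sketched like this (a minimal illustration, not the actual test code; all names are made up):

```c
#include <string.h>

/*
 * Instead of inspecting memset()'d bytes, which the optimizer is
 * free to elide as dead stores, count how many times the probe ran
 * via a volatile counter that cannot be optimized away.
 */
static volatile int probe_calls;

static void
stack_probe(void)
{
	char buf[64];
	memset(buf, 0, sizeof(buf)); /* may be elided as a dead store... */
	probe_calls++;               /* ...but this side effect survives */
}
```

The volatile access is an observable side effect, so the call count stays reliable at any optimization level.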
      
Reported-by: "Alexander V. Tikhonov" <avtikhon@tarantool.org>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      aa78a941
    • Vladislav Shpilevoy's avatar
      raft: fix crash when leader resigned from its role · f85e886e
      Vladislav Shpilevoy authored
Nodes with disabled Raft keep listening for Raft events and
persist them, so that they can quickly enroll into the process
if they are reconfigured to be candidates.

The same applies to voter nodes: they can't become a leader, but
they watch and persist everything that is happening.

However, when a leader resigned from its role, the voter and
disabled nodes tried to start a new election round, even though
they were not supposed to. That led to a crash, which is fixed
in this patch.
      
      Closes #5426
      f85e886e
    • Serge Petrenko's avatar
      test: merge possible failed test outputs for election_qsync_stress · eb989bc7
      Serge Petrenko authored
Instead of having all possible 2^10 diffs in case select{} fails,
log the select{} output in case the space contains fewer than 10
elements.
      Requested by @avtikhon for easier flaky test handling.
      
      Related to #5395
      eb989bc7
    • Alexander V. Tikhonov's avatar
      gitlab-ci: add out-of-source build · 6f588675
      Alexander V. Tikhonov authored
Added out-of-source build make targets and a test job to gitlab-ci.
      
      Closes #4874
      6f588675
    • Alexander V. Tikhonov's avatar
      build: enable cmake in curl build · 2b076019
      Alexander V. Tikhonov authored
Initially tried to switch the curl build from autoconf tools to
cmake and found the following build issue on:
      
        CentOS 6
        CentOS 7
        Debian 8
        Ubuntu 14.04
      
      Issue found:
      
        CMake Error at CMakeLists.txt:41 (cmake_minimum_required):
          CMake 3.0 or higher is required.  You are running version 2.8.12.2
      
To fix the issue, the version check was removed from the curl sources
in Tarantool's third party '<root Tarantool sources>/third_party/curl':
      
        12af024bc85606b14ffc415413a7e86e6bbee7eb ('Enable curl build with old cmake')
      
After this fix, the curl build was completely switched from autoconf
to cmake. The autoconf part was removed and the code was cleaned up
for cmake. All autoconf options were ported to the cmake
configuration call; see the full list of changes in [1].

The following issues were also resolved:
      
      1. Found that CURL cmake configuration file:
      
           third_party/curl/lib/CMakeLists.txt
      
         has installation part for built libcurl.a library:
      
           install(TARGETS ${LIB_NAME}
             EXPORT ${TARGETS_EXPORT_NAME}
             ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
             LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
      
   where CMAKE_INSTALL_LIBDIR is changed to a name with an
   architecture-specific suffix, like:
      
           lib
           lib64
           x86_64
      
         Found that find_library routine from the file:
      
           cmake/FindLibCURL.cmake
      
   returns only the 'lib' value, which breaks the build of dependent
   binaries. To avoid this, the CMAKE_INSTALL_LIBDIR option was
   passed to the cmake call:
      
           -DCMAKE_INSTALL_LIBDIR=lib
      
      2. Found issue with building on CentOS 6:
      
           Linking C executable curl
           build/ares/dest/lib/libcares.a(ares__timeval.c.o): In function `ares__tvnow':
           ares__timeval.c:(.text+0x15): undefined reference to `clock_gettime'
           collect2: error: ld returned 1 exit status
      
   It was fixed by adding the "-lrt" flag to the CMAKE_C_FLAGS and
   CMAKE_CXX_FLAGS build flags when the cmake version is lower
   than 3.0 and the rt library has the needed function.
      
3. Found issues with building Tarantool statically on Debian 9 and
   with its package build on CentOS 8.

   The static build hit openssl linking issues, like [2]:
      
           static-build/openssl-prefix/lib/libcrypto.a(threads_pthread.o): In function `CRYPTO_THREAD_lock_new':
           threads_pthread.c:(.text+0x45): undefined reference to `pthread_rwlock_init'
      
   It happened because openssl radically changed how threads-related
   things are handled. Before version 1.1.0 it required the
   application to provide the locking functionality in the form of
   callbacks. Since 1.1.0, these matters are entirely internal, so
   libcrypto requires the presence of a platform-specific thread
   implementation of our choosing, which is pthread on every
   supported platform.
      
   After '-pthread' was added to the C compile flags, the package
   build on CentOS 8 failed with [3]:
      
           /build/usr/src/debug/tarantool-2.6.0.141/third_party/curl/lib/warnless.c:101:4: error: #error "SIZEOF_CURL_OFF_T not defined"
            #  error "SIZEOF_CURL_OFF_T not defined"
               ^~~~~
           /build/usr/src/debug/tarantool-2.6.0.141/third_party/curl/lib/warnless.c: In function 'curlx_uztoso':
           /build/usr/src/debug/tarantool-2.6.0.141/third_party/curl/lib/warnless.c:192:40: error: 'CURL_MASK_SCOFFT' undeclared (first use in this function); did you mean 'CURL_MASK_SINT'?
            return (curl_off_t)(uznum & (size_t) CURL_MASK_SCOFFT);
                                                 ^~~~~~~~~~~~~~~~
                                                 CURL_MASK_SINT
      
   To avoid the issue, it was decided to use the '-pthread' flag for
   openssl only when the openssl compilation did not already have it.
      
4. Found an issue with the static build on CentOS 7, where the SSL
   cmake rule failed. Building the image produced:
      
            [  1%] Performing configure step for 'bundled-libcurl-project'
            CMake Warning at CMakeLists.txt:50 (message):
              the curl cmake build system is poorly maintained.  Be aware
      
            -- curl version=[7.66.0-DEV]
            -- Found c-ares: /tarantool/build/ares/dest/lib/libcares.a
            Found *nroff option: -- -man
            CMake Error at /usr/share/cmake/Modules/FindOpenSSL.cmake:278 (list):
              list GET given empty list
            Call Stack (most recent call first):
              CMakeLists.txt:347 (find_package)
      
   Root cause of the issue is that the Dockerfile uses a globally
   installed OpenSSL with:
      
           cmake ... -DOPENSSL_ROOT_DIR=/usr/local ...
      
         Its cmake build file:
      
           /usr/share/cmake/Modules/FindOpenSSL.cmake
      
   fails to parse the SSL version:
      
         it has:
                REGEX "^#define[\t ]+OPENSSL_VERSION_NUMBER[\t ]+0x([0-9a-fA-F])+.*")
      
   but it should use:
                REGEX "^#[\t ]*define[\t ]+OPENSSL_VERSION_NUMBER[\t ]+0x([0-9a-fA-F])+.*")
      
   Anyway, we want libcurl to use the same OpenSSL library as is used
   for Tarantool itself, so the path to it is set for the curl build:
      
           list(APPEND LIBCURL_CMAKE_FLAGS "-DCMAKE_MODULE_PATH=${PROJECT_SOURCE_DIR}/cmake")
      
5. In the cmake build the CMAKE_USE_LIBSSH2 flag is enabled by
   default, while in autoconf --with-libssh2 was disabled by default.
   We switch the CMAKE_USE_LIBSSH2 flag off with:
      
           list(APPEND LIBCURL_CMAKE_FLAGS "-DCMAKE_USE_LIBSSH2=OFF")
      
   to avoid linking issues, like:
      
           ld: libssh2.c:(.text+0x4d8): undefined reference to `libssh2_*...
      
   This issue is tracked in the curl issue tracker [4].
      
   Furthermore, the following defaults are also disabled to keep the
   configuration consistent with the autoconf one:
      
           list(APPEND LIBCURL_CMAKE_FLAGS "-DPICKY_COMPILER=OFF")
           list(APPEND LIBCURL_CMAKE_FLAGS "-DBUILD_CURL_EXE=OFF")
      
      Closes #4968
      Closes #5019
      Closes #5396
      
      [1] - https://github.com/tarantool/tarantool/issues/4968#issue-617183031
      [2] - https://gitlab.com/tarantool/tarantool/-/jobs/779176133#L6021
      [3] - https://gitlab.com/tarantool/tarantool/-/jobs/778309145#L3060
      [4] - https://github.com/curl/curl/issues/1146
      2b076019
    • Alexander V. Tikhonov's avatar
      build: generate bootstrap.h in CMAKE_BINARY_DIR · cca763ad
      Alexander V. Tikhonov authored
Prior to this change, bootstrap.h was generated right in the source
directory even for an out-of-source build. Firstly, such an approach
doesn't respect the idea of building outside the source tree.
Furthermore, it leads to build failures when the source directory is
located on a read-only file system.

As a result of the patch, bootstrap.h is generated within the build
tree and the include directories are adjusted correspondingly.

Also, the destination build directory was changed from the source to
the binary path.
      
      Part of #4968
      cca763ad
    • Alexander V. Tikhonov's avatar
      curl: enable curl build with old cmake · ead9c748
      Alexander V. Tikhonov authored
Disabled the cmake version check (3.2...3.16 is required) when the
current cmake is older, to be able to build curl on old OSes:
      
        - CentOS 6
        - CentOS 7
        - Debian 8
        - Ubuntu 14.04
      
      Used in 'third_party/curl' repository branch:
      
        curl-7_71_1-tarantool
      
      Needed for #4968
      ead9c748
    • Timur Safin's avatar
      module api: luaT_toibuf instead of luaL_checkibuf · 3dc6a76c
      Timur Safin authored
* Made `luaL_checkibuf` public and renamed it to `luaT_toibuf`
  to follow the naming convention (it does not raise an error,
  it simply returns the ibuf structure).
      
      Closes #5384
      3dc6a76c
    • Timur Safin's avatar
      module api: export box_key_def_dup · f56d31dd
      Timur Safin authored
Exported `box_key_def_dup` as an accessor to the internal `key_def_dup`.
      
      Part of #5384
      f56d31dd
    • Timur Safin's avatar
      module api: export box_tuple_validate · e7307b42
      Timur Safin authored
For an external merger we need a means to validate tuple data,
thus we export `box_tuple_validate`, which is a wrapper around
`tuple_validate` that does not reveal access to tuple internals.
      
      Part of #5384
      e7307b42
    • Timur Safin's avatar
      module api: box_ibuf_* wrappers · feb1fe53
      Timur Safin authored
Introduced the bare minimum of ibuf accessors, which makes an
external merger possible:
      - box_ibuf_reserve
      - box_ibuf_read_range
      - box_ibuf_write_range
      
      Part of #5384
      feb1fe53
    • Alexander Turenko's avatar
      module api: add box_key_def_validate_full_key() · a312e27e
      Alexander Turenko authored
box_key_def_validate_key() verifies a possibly partial key, while the
new function applies an extra restriction on the key part count: it
must be the same as in the given key definition.
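The distinction can be sketched as follows (an illustrative helper, not the actual tarantool code):

```c
#include <stdint.h>

/*
 * Sketch of the two checks: a partial key may have fewer parts than
 * the key definition, while a full key must have exactly the
 * definition's part count. Returns 1 when valid, 0 otherwise.
 */
static int
key_part_count_is_valid(uint32_t key_part_count, uint32_t def_part_count,
			int require_full_key)
{
	if (require_full_key)
		return key_part_count == def_part_count;
	return key_part_count <= def_part_count;
}
```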
      
      Fixes #5273
      a312e27e
    • Alexander Turenko's avatar
      module api: add box_key_def_validate_key() · 4a12985f
      Alexander Turenko authored
The function allows verifying a key against a key definition. It
accepts a partial key.
      
      Part of #5273
      4a12985f
    • Alexander Turenko's avatar
      module api: expose box_key_def_extract_key() · 97ef8598
      Alexander Turenko authored
Unlike box_tuple_extract_key() it accepts a key_def structure, not a
(space_id, index_id) pair.

Another difference from box_tuple_extract_key() is that this function
allows passing a so-called multikey index. See commit 2.2.0-259-gf1d9f2575
('box: introduce multikey indexes in memtx') for details.
      
      Note: The <multikey_idx> parameter is ignored on the backported version
      of the patch on 1.10.
      
      Part of #5273
      97ef8598
    • Alexander Turenko's avatar
      module api: expose box_key_def_merge() · 1327db7f
      Alexander Turenko authored
      Part of #5273
      1327db7f
    • Alexander Turenko's avatar
      module api: expose box_key_def_validate_tuple() · 4ce916e5
      Alexander Turenko authored
      Part of #5273
      4ce916e5
    • Alexander Turenko's avatar
      module api: add box_key_def_dump_parts() · 62a83e2a
      Alexander Turenko authored
      The function dumps an opaque <box_key_def_t> structure into a non-opaque
      array of <box_key_part_def_t> structures in order to allow an external
      module to obtain information about the key definition.
      
      Part of #5273
      62a83e2a
    • Alexander Turenko's avatar
      module api: add box_key_def_new_v2() · 741b1174
      Alexander Turenko authored
Unlike box_key_def_new() it allows setting nullability, collation
and a JSON path.
      
      Note: JSON paths are not supported in the backported version of the
      patch for 1.10.
      
Provided a public non-opaque key part definition structure to create
a key definition. The next commit will also use this structure to
dump a key definition.
      
      There are several technical points around the box_key_part_def_t
      structure. They are mainly about providing stable ABI.
      
- Two uint32_t fields are placed first for better alignment of the
  following fields (pointers, which are usually 64 bits wide).

- Padding is used to guarantee that the structure size will remain
  the same across tarantool versions. It allows allocating an array
  of such structures.

- The padding array is not a field of the structure itself, but is
  added as a union variant (see the code). It allows getting rid of
  the manual calculation of the cumulative field size, which is hard
  to do in a platform-independent way.
      
      - A minimal size of the structure is guaranteed by the union with
        padding, but a static assert is required to guarantee that the size
        will not overrun the predefined value.
      
      - PACKED is added as an extra remedy to make the structure layout
        predictable (on given target architecture).
      
- A bit flag is used for is_nullable. bool is considered too
  expensive (it requires 8 bits), and bitfields (int:1 and so on) do
  not guarantee a certain data layout (it is a compiler
  implementation detail), while a module is compiled outside of the
  tarantool build and may use a different toolchain. A bit flag is
  the only choice.
      
- A collation is identified by a string. Different tarantool
  instances may use different IDs for the same collation. The only
  'real' identifier is the collation name, so using it as the
  identifier in the API should be more convenient and less
  error-prone.

- A field type is also identified by a string instead of a number.
  We have <enum field_type> in the module API, but it cannot be used
  here, because the IDs change across tarantool versions. Aside from
  this, the size of an enum is compiler-defined. Anyway, we could
  expose field types as numbers and implement number-to-name and
  name-to-number mapping functions, but IMHO it would just add extra
  complexity.
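The layout points above can be sketched in C like this (all names and sizes here are illustrative assumptions, not the actual box_key_part_def_t ABI from module.h):

```c
#include <stdint.h>

#define SKETCH_PART_DEF_SIZE 64
#define SKETCH_PART_FLAG_IS_NULLABLE 0x1u

typedef union {
	struct {
		/* Two uint32_t fields first align the pointers below. */
		uint32_t fieldno;
		uint32_t flags;          /* bit flags, not bool/bitfields */
		const char *field_type;  /* type is named, not numbered   */
		const char *collation;   /* collation is named, not an ID */
		const char *path;        /* JSON path (NULL if unused)    */
	};
	/*
	 * The padding is a union variant rather than a trailing field,
	 * so the total size is pinned without summing fields by hand.
	 */
	char padding[SKETCH_PART_DEF_SIZE];
} sketch_key_part_def_t;

/* A static assert guards against the fields overrunning the padding. */
_Static_assert(sizeof(sketch_key_part_def_t) == SKETCH_PART_DEF_SIZE,
	       "size must stay stable across versions");
```

With the size pinned this way, an external module can safely allocate arrays of such structures even when built by a different toolchain.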
      
The dependency on the fiber compilation unit is added for key_def
purely to obtain the box region in the module API functions. The
key_def compilation unit does not use anything fiber-related in fact.
      
      Part of #5273
      741b1174
    • Alexander Turenko's avatar
      module api: add API_EXPORT to key_def functions · 8e955847
      Alexander Turenko authored
      It is the rule of thumb to use API_EXPORT with module API functions.
      
To be honest, I don't see a strict reason to use this macro except
for unification with other module API functions.

It adds 'extern', which is the default for functions.

It adds __attribute__((visibility("default"))), but the symbols to
be exported are listed in extra/exports or src/export.h (depending
on the tarantool version, see [1]).

It adds __attribute__((nothrow)) (see [2]), which maybe allows a
compiler to produce more optimized code and also to catch an obvious
problem and emit a warning. I don't know.
      
      So the reason for me is unification.
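What the macro adds up to can be sketched like this (an assumption-based sketch built from the description above; the actual definition lives in tarantool's headers and may differ):

```c
/*
 * Sketch of an export macro combining the three pieces discussed:
 * extern, default visibility, and nothrow.
 */
#if defined(__GNUC__) || defined(__clang__)
#define API_EXPORT extern __attribute__((nothrow, visibility("default")))
#else
#define API_EXPORT extern
#endif

/* A hypothetical module API function declared with the macro. */
API_EXPORT int
sketch_api_function(int value);

int
sketch_api_function(int value)
{
	return value + 1;
}
```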
      
      Part of #5273
      
      [1]: 2.5.0-42-g03790ac55 ('cmake: remove dynamic-list linker option')
      [2]: 1.6.8-71-g55605c5c9 ('Add __attribute__((nothrow)) to API_EXPORT macro')
      8e955847
    • Alexander Turenko's avatar
      module api/lua: add API_EXPORT to tuple functions · ddceb183
      Alexander Turenko authored
      The reason is unification of declarations. It is the rule of thumb to
      use API_EXPORT with module API functions.
      
      Part of #5273
      ddceb183
    • Alexander Turenko's avatar
      module api/lua: expose luaT_tuple_new() · 224988f6
      Alexander Turenko authored
It is a convenient wrapper around box_tuple_new() to create a tuple
from a Lua table (or from another tuple).
      
      Part of #5273
      224988f6
    • Alexander Turenko's avatar
      module api/lua: add luaT_tuple_encode() · ab95ddaa
      Alexander Turenko authored
      It is the same as luaT_tuple_new(), but returns raw MsgPack data (not
      <box_tuple_t>) allocated on the box region.
      
The reason to expose this function is to provide the ability to use
the box_tuple_compare_with_key() function from an external module
for a key passed as a Lua table. The compare function has the
following signature:
      
       | API_EXPORT int
       | box_tuple_compare_with_key(box_tuple_t *tuple_a, const char *key_b,
       |                            box_key_def_t *key_def);
      
The second parameter is a key encoded as a MsgPack array, not a tuple
structure. So luaT_tuple_new() is not applicable here (it is not
worthwhile to create a tuple structure if we need just the MsgPack
data).
      
Some complexity was introduced to support encoding both on the Lua
shared buffer and on the box region. The motivation is the following:

- luaT_tuple_encode() is exposed with encoding to the box region,
  because it is more usual for the module API. In particular, a user
  of the API is able to control when the tuple data should be
  released.
- Encoding to the Lua shared buffer is kept internally, because there
  is no strong reason to change it to the box region for
  box.tuple.new().
      
      Part of #5273
      ab95ddaa
    • Alexander Turenko's avatar
      lua: don't raise a Lua error from luaT_tuple_new() · ec9a7fa7
      Alexander Turenko authored
This change fixes incorrect behaviour on a tuple serialization error
in several places: <space_object>:frommap(),
<key_def_object>:compare(), <merge_source>:select(). See more in #5382.
      
      Disallow creating a tuple from objects on the Lua stack (idx == 0) in
      luaT_tuple_new() for simplicity. There are no such usages in tarantool.
      The function is not exposed yet to the module API. This is only
      necessary in box.tuple.new(), which anyway raises Lua errors by its
      contract.
      
A better way to implement it would be to rewrite the serialization
from Lua to msgpack without raising Lua errors, but that is laborious
work. Some day, I hope, I'll return here.
      
      Part of #5273
      Fixes #5382
      ec9a7fa7
    • Alexander Turenko's avatar
      lua: factor out tuple encoding from luaT_tuple_new · 59149d6a
      Alexander Turenko authored
It simplifies further changes around encoding a tuple: the next
commits will get rid of throwing serialization errors and will expose
to the module API a function to encode a table (or a tuple) from the
Lua stack on the box region.
      
      Part of #5273
      Part of #5382
      59149d6a
    • Alexander Turenko's avatar
      module api/lua: add luaL_iscdata() function · 6a63b7c0
      Alexander Turenko authored
It is useful for providing a module-specific error when cdata is
expected, but a value of another type is passed.

An alternative would be using lua_type() to check against LUA_TCDATA,
but this constant is not exposed to modules. See more in the
luaL_iscdata() API comment.
      
      Part of #5273
      6a63b7c0
    • Alexander Turenko's avatar
      module api: expose box region · c6117909
      Alexander Turenko authored
It is a better alternative to linking the small library directly to
a module. Why not just use the small library in a module?

Functions from an executable are preferred over ones shipped in a
dynamic library (on Linux; Mac OS differs), while a module may use
the small library directly. It may cause a problem when some
functions from the library are inlined, but some are not, and those
different versions of the library offer structures with different
layouts. Small library symbols may be exported by the tarantool
executable after the change of default symbol visibility (see [1]).
See more details and examples in [2].

So it is better to expose the so-called box region and get rid of
all those compatibility problems.
      
      [1]: 2.5.0-42-g03790ac55 ('cmake: remove dynamic-list linker option')
      [2]: https://lists.tarantool.org/pipermail/tarantool-discussions/2020-September/000095.html
      
      Part of #5273
      c6117909
    • Alexander Turenko's avatar
      module api: get rid of typedef redefinitions · a25a3e0d
      Alexander Turenko authored
Technically, C99 forbids it. Clang complains about typedef
redefinitions when C99 is used (it is the default prior to clang 3.6):
      
       | error: redefinition of typedef 'box_tuple_t' is a C11 feature
       |        [-Werror,-Wtypedef-redefinition]
       | error: redefinition of typedef 'box_key_def_t' is a C11 feature
       |        [-Werror,-Wtypedef-redefinition]
      
The generated module.h file should define a type using typedef once.
This patch moves the extra definitions out of the public parts of the
header files and reorders the API headers to place usages of such
types after their definitions.

Set C99 for the module API test in order to catch problems of this
kind in the future. Fixed 'unused value' warnings, which appeared
after the change (it is strange that -Wall was not passed here
before).
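The single-typedef rule can be illustrated like this (names are made up; this is not the generated module.h):

```c
/*
 * In C99 a typedef for a given type may appear only once, so the
 * generated header carries the single public typedef and other
 * declarations refer to the struct tag instead of repeating it.
 */
struct sketch_tuple;
typedef struct sketch_tuple sketch_tuple_t; /* the one public typedef */

/* Later declarations use the tag, not a second typedef. */
static int
sketch_tuple_is_null(const struct sketch_tuple *tuple)
{
	return tuple == 0;
}
```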
      
      Part of #5273
      Fixes #5313
      a25a3e0d
  2. Oct 15, 2020
  3. Oct 14, 2020
    • Alexander V. Tikhonov's avatar
      gitlab-ci: remove tag from pushed branch commit · 0f564f34
      Alexander V. Tikhonov authored
      
Drop a tag that points to the current commit (if any) on a job
triggered by pushing to a branch (as opposed to pushing a tag).
Otherwise we may get two jobs for the same x.y.z-0-gxxxxxxxxx build:
one run by pushing a branch and another by pushing a tag. The idea is
to hide the new tag from the branch job as if the tag were pushed
strictly after all branch jobs for the same commit.
      
      Closes #3745
      
Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
      0f564f34
    • Nikita Pettik's avatar
      vinyl: remove squash procedures from source code · 375f2d17
      Nikita Pettik authored
After the previous commit, there's no need for these functions.
However, someday we may want to reconsider the squash optimization
and make it work without breaking the upsert associativity rule. So
as not to lose the initial squash implementation, let's put its
removal in a separate patch.
      
      Follow-up #5107
      375f2d17
    • Nikita Pettik's avatar
      vinyl: rework upsert operation · 5a61c471
      Nikita Pettik authored
The previous upsert implementation had a few drawbacks which led to
a number of bugs and issues.
      
      Issue #5092 (redundant update operations execution)
      
      In a nutshell, application of upsert(s) (on top of another upsert)
      consists of two actions (see vy_apply_upsert()): execute and squash.
      Consider example:
      
      insert({1, 1})  -- terminal statement, stored on disk
      upsert({1}, {{'-', 2, 20}}) -- old ups1
      upsert({1}, {{'+', 2, 10}}) -- new ups2
      
The 'execute' step takes update operations from the new upsert and
combines them with the key of the old upsert. {1} + {'+', 2, 10}
can't be evaluated since the key consists of only one field. Note
that in case an upsert doesn't fold into an insert, the upsert's
tuple and the tuple stored in the index can be different. In our
particular case, the tuple stored on disk has two fields ({1, 1}),
so the upsert's update operation can be applied to it:
{1, 1} + {'+', 2, 10} --> {1, 11}. If the upsert's operation can't
be executed using the key of the old upsert, we simply continue with
the squash step.
In turn, 'squash' is a combination of update operations: arithmetic
operations are combined so that we don't have to store actions over
the same field; the rest of the operations are merged into a single
array. As a result, we get one upsert with squashed operations:
upsert({1}, {{'+', 2, -10}}).
      Now imagine that we have following upserts:
      
      upsert({1, 1}, {{'-', 2, 20}}) -- old ups1
      upsert({1}, {{'+', 2, 10}}) -- new ups2
      
In this case execution successfully finishes and modifies the old
upsert's tuple: {1, 1} + {'+', 2, 10} --> {1, 11}.
However, we still have to squash/accumulate the update operations
since they may be applied to the tuple stored on disk later. After
all, we have the following upsert: upsert({1, 11}, {{'+', 2, -10}}).
Then it is applied on top of insert({1, 1}) and we get the same
result as in the first case - {1, -9}. The only difference is that
the upsert's tuple was modified.
As one can see, the execution of update operations applied to the
upsert's tuple is redundant in case the index already contains a
tuple with the same key (i.e. when the upsert turns into an update).
Instead, we are able to accumulate/squash update operations only.
When the last upsert is being applied, we can either execute all
update operations on the tuple fetched from the index (i.e. the
upsert is an update) OR on the tuple specified in the first upsert
(i.e. the first upsert is an insert).
      
      Issue #5105 (upsert doesn't follow associative property)
      
Secondly, the current approach breaks the associativity property:
after the upserts' update operations are merged into one array, part
of them (related to one upsert) can be skipped while the rest are
applied. For instance:
      
      -- Index is over second field.
      i = s:create_index('pk', {parts={2, 'uint'}})
      s:replace{1, 2, 3, 'default'}
      s:upsert({2, 2, 2}, {{'=', 4, 'upserted'}})
      -- First update operation modifies primary key, so upsert must be ignored.
      s:upsert({2, 2, 2}, {{'#', 1, 1}, {'!', 3, 1}})
      
      After merging two upserts we get the next one:
      upsert({2, 2, 2}, {{'=', 4, 'upserted'}, {'#', 1, 1}, {'!', 3, 1}}
      
While executing update operations, we don't distinguish operations
from different upserts. Thus, if one operation fails, the rest are
ignored as well. As a result, the first upsert (in the general case -
all preceding squashed upserts) won't be applied, even though it is
absolutely correct. What is more, the user gets no error/warning
about this fact.
      
      Issue #1622 (no upsert result validation)
      
After upsert application, there's no check verifying that the result
satisfies the space's format: the number of fields, their types,
overflows etc. Due to this, tuples violating the format may appear
in the space, which in turn may lead to unpredictable consequences.
      
To resolve these issues, let's group the update operations of each
upsert into a separate array, so that operations related to a
particular upsert are stored in a single array. In terms of the
previous example we get:
upsert({2, 2, 2}, {{{'=', 4, 'upserted'}}, {{'#', 1, 1}, {'!', 3, 1}}}
      
Also note that we no longer have to apply update operations to the
tuple in vy_apply_upsert() when it comes to two upserts: it can be
done once we face a terminal statement; or, if there's no underlying
statement (i.e. it is a delete statement or no statement at all), we
apply all update arrays except the first one to the upsert's tuple.
In case one of the operations from an array fails, we skip the rest
of the operations from this array and proceed to the next array.
After successful application of the update operations of each array,
we check that the resulting tuple fits the space format. If it
doesn't, we roll back the applied operations, log an error and move
to the next group of operations.
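The per-array application rule can be sketched like this (an illustrative model, not the vinyl code; the single int field stands in for a tuple):

```c
/*
 * Each upsert's operations form their own group. A failing operation
 * rolls back and skips only its own group; application then proceeds
 * with the next group. Returns the number of groups applied.
 */
typedef int (*sketch_op_f)(int *field);

static int
apply_op_groups(int *field, sketch_op_f **groups, const int *group_len,
		int group_count)
{
	int applied = 0;
	for (int g = 0; g < group_count; g++) {
		int saved = *field;
		int ok = 1;
		for (int i = 0; i < group_len[g]; i++) {
			if (groups[g][i](field) != 0) {
				*field = saved; /* roll back this group only */
				ok = 0;
				break;
			}
		}
		if (ok)
			applied++;
	}
	return applied;
}

/* Two toy operations for the sketch. */
static int
op_add10(int *field) { *field += 10; return 0; }

static int
op_fail(int *field) { (void)field; return -1; }
```

A failure in the second group leaves the result of the first group intact, which is exactly the associativity fix described above.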
      
Finally, arithmetic operations can no longer be combined: this is a
requirement which arises from issue #5105. Otherwise, the result of
combining upserts might turn out to be inapplicable to the tuple
stored on disk (e.g. the result applied to the tuple leads to an
integer overflow - in this case only the last upsert, the one
leading to the overflow, must be ignored).
      
      Closes #1622
      Closes #5105
      Closes #5092
      Part of #5107
      5a61c471
    • Artem Starshov's avatar
      luacheck: fix warning in tarantoolctl.in · 33870e37
      Artem Starshov authored
      
Changed to pass the global variable arg to find_instance_name(arg)
instead of passing arg[0] and arg[2] separately, and removed the
exception in .luacheckrc for the file extra/dist/tarantoolctl.in.

This change only resolves a linter warning, nothing else.

Fixes #4929.
      
Reviewed-by: Leonid Vasiliev <lvasiliev@tarantool.org>
Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
      33870e37
    • Vladislav Shpilevoy's avatar
      qsync: reset confirmed lsn in limbo on owner change · 41ba1479
      Vladislav Shpilevoy authored
LSNs from different instances can come in any order. It means that
if the synchronous transaction limbo changed its owner, the old
value of the maximal confirmed LSN does not make sense. The new
owner means new LSNs, possibly even less than the previously
confirmed LSN of the old owner.
      
The patch resets the confirmed LSN when the owner is changed. There
is no specific test for that, as this is a hotfix - tests started
failing on every run after a new election test was added in the
previous commit. A test for this bug will be added later.
      
      Part of #5395
      41ba1479
    • Vladislav Shpilevoy's avatar
      raft: auto-commit transactions of the old leader · 4da30149
      Vladislav Shpilevoy authored
      According to Raft, when a new leader is elected, it should finish
      transactions of the old leader. In Raft this is done via adding a
      new transaction originated from the new leader.
      
      In case of Tarantool this can be done without a new transaction
      due to WAL format specifics, and the function doing it is called
      box_clear_synchro_queue().
      
Before the patch, when a node was elected as a leader, it didn't
finish the pending transactions. The queue clearance was expected to
be done by a user. There was no issue with that, just technical
debt. The patch fixes it.
      
      Now when a node becomes a leader, it finishes synchronous
      transactions of the old leader. This is done a bit differently
      than in the public box.ctl.clear_synchro_queue().
      
The public box.ctl.clear_synchro_queue() tries to wait for CONFIRM
messages, which may be late, for replication_synchro_timeout * 2
time.
      
      But when a new leader is elected, the leader will ignore all rows
      from all the other nodes, as it thinks it is the only source of
      truth. Therefore it makes no sense to wait for CONFIRMs here, and
      the waiting is omitted.
      
      Closes #5339
      4da30149
    • Vladislav Shpilevoy's avatar
      raft: introduce on_update trigger · 43d42969
      Vladislav Shpilevoy authored
The Raft state machine now has a trigger invoked each time any of
the visible Raft attributes changes: state, term, vote.
      
      The trigger is needed to commit synchronous transactions of an old
      leader, when a new leader is elected. This is done via a trigger
      so as not to depend on box in raft code too much. That would make
      it harder to extract it into a new module later.
      
      The trigger is executed in the Raft worker fiber, so as not to
      stop the state machine transitions anywhere, which currently don't
      contain a single yield. And the synchronous transaction queue
      clearance requires a yield, to write CONFIRM and ROLLBACK records
      to WAL.
      
      Part of #5339
      43d42969