  1. Jan 15, 2021
  2. Jan 14, 2021
  3. Dec 30, 2020
  4. Dec 29, 2020
    • txm: change tuple ownership strategy · 88b76800
      Aleksandr Lyapunov authored
      Since a space holds pointers to tuples, it must increment the
      reference counters of its tuples and decrement the counters of
      tuples that are deleted from the space.

      The memtx TX manager also holds references to the tuples it is
      processing, in the same way.

      Before this patch the logic was: while a tuple is dirty, it belongs
      to the TX manager and does not belong to the space. Only when a
      tuple becomes clean while still being in the space is it referenced
      by the space.

      That logic leads to crashes in some DDL requests, since they work
      with indexes directly. For example, deleting an index dereferences
      all of its tuples, even dirty ones.

      This patch changes the logic. Now every tuple that is physically in
      the primary index of the space is referenced. Once removed from the
      primary index, the tuple is dereferenced. The TX manager references
      tuples as before: every tuple it holds is referenced.
      
      Part of #5628
      88b76800
    • txm: fix another simple bug in tx manager · de6a4849
      Aleksandr Lyapunov authored
      There was a typo in the collection of the read set: a dirty tuple
      was added instead of a clean one.
      
      Closes #5559
      de6a4849
    • txm: fix a simple bug in tx manager · 122dc47f
      Aleksandr Lyapunov authored
      The problem happened when a tuple story was deleted by two
      statements, one committed and one not committed.
      
      Part of #5628
      122dc47f
    • memtx: change small allocator behavior · 4d175bff
      mechanik20051988 authored
      Previously, the small allocator created memory pools on demand,
      which in the case of a small slab_alloc_factor led to pools with
      incorrect sizes. This patch changes the small allocator behavior:
      now pools are allocated when the allocator is created. We also use
      a special function to find the appropriate pool, which is faster
      than the previous rbtree-based lookup.
      This change fixes #5216.

      Also moved the check that slab_alloc_factor is in the range
      (1.0, 2.0] from the small allocator to memtx_engine. If the factor
      is out of range, it is changed to 1.0001 or 2.0 respectively.
      
      Closes #5216
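      A minimal configuration sketch of the clamping described above (the
      boundary values are from this message; the concrete factor below is
      illustrative):

      	-- a factor outside (1.0, 2.0] is adjusted by memtx_engine:
      	-- below the range it becomes 1.0001, above the range 2.0
      	box.cfg{slab_alloc_factor = 3.0}
      	-- memtx then behaves as if slab_alloc_factor were 2.0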
      4d175bff
  5. Dec 28, 2020
  6. Dec 27, 2020
    • lua: fix running init lua script · a8f3a6cb
      Artem Starshov authored
      When tarantool is launched with the -e flag and the script that
      follows contains an error, the program hangs. This happens because
      the sched fiber launches a separate fiber for the init user script
      and starts an auxiliary event loop. That fiber is supposed to stop
      the loop, but when the script fails with an error, the fiber tries
      to stop the loop before it has even started.

      Added a flag that tracks whether the loop has started, so that when
      the fiber calls `ev_break()` we can be sure the loop is already
      running.
      
      Fixes #4983
      a8f3a6cb
  7. Dec 25, 2020
  8. Dec 24, 2020
    • crash: report crash data to the feedback server · f132aa9b
      Cyrill Gorcunov authored
      
      We have a feedback server which gathers information about a running
      instance. While general info is enough for now, we may lose precious
      information about crashes (such as the call backtrace which caused
      the issue, the type of build, etc).

      This commit adds support for sending this kind of information to the
      feedback server. Internally we gather the reason for the failure,
      pack it into base64 form and then run another Tarantool instance
      which sends it out.
      
      A typical report might look like
      
       | {
       |   "crashdump": {
       |     "version": "1",
       |     "data": {
       |       "uname": {
       |         "sysname": "Linux",
       |         "release": "5.9.14-100.fc32.x86_64",
       |         "version": "#1 SMP Fri Dec 11 14:30:38 UTC 2020",
       |         "machine": "x86_64"
       |       },
       |       "build": {
       |         "version": "2.7.0-115-g360565efb",
       |         "cmake_type": "Linux-x86_64-Debug"
       |       },
       |       "signal": {
       |         "signo": 11,
       |         "si_code": 0,
       |         "si_addr": "0x3e800004838",
       |         "backtrace": "#0  0x630724 in crash_collect+bf\n...",
       |         "timestamp": "2020-12-23 14:42:10 MSK"
       |       }
       |     }
       |   }
       | }
      
      There is no simple way to test this so I did it manually:
      1) Run instance with
      
      	box.cfg{log_level = 8, feedback_host="127.0.0.1:1500"}
      
      2) Run listener shell as
      
      	while true ; do nc -l -p 1500 -c 'echo -e "HTTP/1.1 200 OK\n\n $(date)"'; done
      
      3) Send SIGSEGV
      
      	kill -11 `pidof tarantool`
      
      Once SIGSEGV is delivered, the crashinfo data is generated and sent
      out. For debugging purposes this data is also printed to the
      terminal at debug log level.
      
      Closes #5261
      
      Co-developed-by: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      
      @TarantoolBot document
      Title: Configuration update, allow to disable sending crash information
      
      For better analysis of program crashes, the information associated
      with the crash, such as:
      
       - utsname (similar to `uname -a` output except the network name)
       - build information
       - reason for a crash
       - call backtrace
      
      is sent to the feedback server. To disable it, set
      `feedback_crashinfo` to `false`.
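      A minimal configuration sketch (`feedback_crashinfo` comes from this
      request; `feedback_enabled` is the existing switch for the feedback
      daemon):

      	-- keep the feedback daemon running, but do not send crash reports
      	box.cfg{
      	    feedback_enabled = true,
      	    feedback_crashinfo = false,
      	}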
      f132aa9b
    • lua/key_def: fix compare_with_key() part count check · 37b15af9
      Sergey Nikiforov authored
      Added a corresponding test.
      
      Fixes: #5307
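      A hedged sketch of the kind of call whose key part count is checked
      (the field layout and values are made up for illustration):

      	key_def = require('key_def')
      	kd = key_def.new({{fieldno = 1, type = 'unsigned'}})
      	-- the key passed here must not contain more parts than the
      	-- key_def defines; the fixed check validates that count
      	kd:compare_with_key(box.tuple.new({1, 'a'}), {1})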
      37b15af9
    • sql: do not reset region on select · d0d668fa
      Mergen Imeev authored
      Prior to this patch, the region on the fiber was reset during
      select(), get(), count(), max(), or min(). This would result in an
      error if one of these operations was used in a user-defined function
      in SQL. After this patch, these functions truncate the region
      instead of resetting it.
      
      Closes #5427
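      A minimal sketch of the affected pattern (the function, space name
      and creation options are illustrative assumptions, not part of the
      patch):

      	-- a persistent Lua function exported to SQL that calls count()
      	box.schema.func.create('COUNT_T', {
      	    language = 'LUA',
      	    returns = 'integer',
      	    param_list = {},
      	    exports = {'LUA', 'SQL'},
      	    body = 'function() return box.space.T:count() end',
      	})
      	-- before the patch, count() reset the region that the SQL
      	-- statement was still using, so this could fail
      	box.execute('SELECT COUNT_T();')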
      d0d668fa
  9. Dec 23, 2020
    • sql: fix return value type of ifnull built-in · 2a3a0d1a
      Nikita Pettik authored
      Accidentally, the built-in declaration list specified that ifnull()
      can return only integer values, whereas it should return SCALAR:
      ifnull() returns its first non-null argument, so the type of the
      return value depends on the types of the arguments. Let's fix this
      and set the return type of ifnull() to SCALAR.
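      A short illustration of why the declared type matters (the literal
      values are arbitrary):

      	-- the first non-null argument is a string here, so the result
      	-- cannot be described as INTEGER; the declared return type is
      	-- now SCALAR
      	box.execute([[SELECT IFNULL(NULL, 'fallback');]])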
      2a3a0d1a
    • box: remove unnecessary rights from persistent functions · 4a50e1c4
      Mergen Imeev authored
      After this patch, the persistent functions "box.schema.user.info" and
      "LUA" will have the same rights as the user who executed them.
      
      The problem was that setuid was unnecessarily set. Because of this,
      these functions had the same rights as the user who created them.
      However, they must have the same rights as the user who used them.
      
      Fixes tarantool/security#1
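      A hedged way to observe the flag in question (assuming box.func
      exposes a setuid field for these entries; this is an illustration,
      not part of the patch):

      	-- after the patch the built-in persistent functions are created
      	-- without setuid, so they run with the caller's rights
      	box.func['box.schema.user.info'].setuid  -- expected: false
      	box.func['LUA'].setuid                   -- expected: false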
      4a50e1c4
    • lua: avoid panic if HOOK_GC is not an active hook · 95aa7d20
      Sergey Kaplun authored
      
      A platform panic occurs when fiber.yield() is used within any active
      (i.e. currently executing) hook.

      It is a regression caused by 96dbc49d
      ('lua: prohibit fiber yield when GC hook is active').

      This patch fixes the false positive panic in cases when the VM is
      not running a GC hook.
      
      Relates to #4518
      Closes #5649
      
      Reported-by: Michael Filonenko <filonenko.mikhail@gmail.com>
      95aa7d20
    • test: filter replication/skip_conflict_row output · 2828f912
      Alexander V. Tikhonov authored
      Found that the test replication/skip_conflict_row.test.lua fails
      with the following output in the results file:
      
        [035] @@ -139,7 +139,19 @@
        [035]  -- applier is not in follow state
        [035]  test_run:wait_upstream(1, {status = 'stopped', message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
        [035]  ---
        [035] -- true
        [035] +- false
        [035] +- id: 1
        [035] +  uuid: f2084d3c-93f2-4267-925f-015df034d0a5
        [035] +  lsn: 553
        [035] +  upstream:
        [035] +    status: follow
        [035] +    idle: 0.0024020448327065
        [035] +    peer: unix/:/builds/4BUsapPU/0/tarantool/tarantool/test/var/035_replication/master.socket-iproto
        [035] +    lag: 0.0046234130859375
        [035] +  downstream:
        [035] +    status: follow
        [035] +    idle: 0.086121961474419
        [035] +    vclock: {2: 3, 1: 553}
        [035]  ...
        [035]  --
        [035]  -- gh-3977: check that NOP is written instead of conflicting row.
      
      The test could not be restarted with a checksum because values like
      the UUID change on each failure. This happens because test-run uses
      an internal chain of functions, wait_upstream() ->
      gen_box_info_replication_cond(), which returns instance information
      when it fails. To avoid this, the output was redirected to the log
      file instead of the results file.
      2828f912
  10. Dec 22, 2020
    • Change the behavior of the option 'force_recovery' · 0be1243e
      mechanik20051988 authored
      There is an option 'force_recovery' that makes tarantool ignore
      some problems during xlog recovery. This patch changes the option's
      behavior and makes tarantool ignore some errors during snapshot
      recovery as well, just like during xlog recovery.
      Error types that can be ignored:
       - the snapshot is truncated somewhere after the necessary system
         spaces
       - the snapshot has some garbage after its declared length
       - a single tuple within the snapshot has a broken checksum and may
         be skipped without consequences (in this case the whole row with
         this tuple is ignored)
      
      @TarantoolBot document
      Title: Change 'force_recovery' option behavior
      The 'force_recovery' option behavior is changed to allow tarantool
      to load from a broken snapshot.
      
      Closes #5422
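      A minimal configuration sketch of the option described above (the
      call itself is illustrative):

      	-- with force_recovery enabled, recovery skips the broken parts
      	-- of a snapshot listed above instead of refusing to start
      	box.cfg{force_recovery = true}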
      0be1243e
    • test: add test filter for vinyl tests · 566b1af7
      Alexander V. Tikhonov authored
      Added a test-run filter on the box.snapshot error message:

        'Invalid VYLOG file: Slice [0-9]+ deleted but not registered'

      to avoid printing changing data in the results file, so that its
      checksums can be used in the fragile list of test-run to rerun the
      test as a flaky issue.
      
      Found issues:
      
       1) vinyl/deferred_delete.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/913623306#L4552
      
        [036] 2020-12-15 19:10:01.996 [16602] coio vy_log.c:2202 E> failed to process vylog record: delete_slice{slice_id=744, }
        [036] 2020-12-15 19:10:01.996 [16602] main/103/vinyl vy_log.c:2068 E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 744 deleted but not registered
      
       2) vinyl/gh-4864-stmt-alloc-fail-compact.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/913810422#L4835
      
        [052] @@ -56,9 +56,11 @@
        [052]  --
        [052]  dump(true)
        [052]   | ---
        [052] - | ...
        [052] -dump()
        [052] - | ---
        [052] + | - error: 'Invalid VYLOG file: Slice 253 deleted but not registered'
        [052] + | ...
      
       3) vinyl/misc.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/913727925#L5284
      
        [014] @@ -62,14 +62,14 @@
        [014]  ...
        [014]  box.snapshot()
        [014]  ---
        [014] -- ok
        [014] +- error: 'Invalid VYLOG file: Slice 1141 deleted but not registered'
        [014]  ...
      
       4) vinyl/quota.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/914016074#L4595
      
        [025] 2020-12-15 22:56:50.192 [25576] coio vy_log.c:2202 E> failed to process vylog record: delete_slice{slice_id=522, }
        [025] 2020-12-15 22:56:50.193 [25576] main/103/vinyl vy_log.c:2068 E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 522 deleted but not registered
      
       5) vinyl/update_optimize.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/913728098#L2512
      
        [051] 2020-12-15 20:18:43.365 [17147] coio vy_log.c:2202 E> failed to process vylog record: delete_slice{slice_id=350, }
        [051] 2020-12-15 20:18:43.365 [17147] main/103/vinyl vy_log.c:2068 E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 350 deleted but not registered
      
       6) vinyl/upsert.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/913623510#L6132
      
        [008] @@ -441,7 +441,7 @@
        [008]  -- Mem has DELETE
        [008]  box.snapshot()
        [008]  ---
        [008] -- ok
        [008] +- error: 'Invalid VYLOG file: Slice 1411 deleted but not registered'
        [008]  ...
      
       7) vinyl/replica_quota.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/914272656#L5739
      
        [023] @@ -41,7 +41,7 @@
        [023]  ...
        [023]  box.snapshot()
        [023]  ---
        [023] -- ok
        [023] +- error: 'Invalid VYLOG file: Slice 232 deleted but not registered'
        [023]  ...
      
       8) vinyl/ddl.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/914309343#L4538
      
        [039] @@ -81,7 +81,7 @@
        [039]  ...
        [039]  box.snapshot()
        [039]  ---
        [039] -- ok
        [039] +- error: 'Invalid VYLOG file: Slice 206 deleted but not registered'
        [039]  ...
      
       9) vinyl/write_iterator.test.lua
          https://gitlab.com/tarantool/tarantool/-/jobs/920646297#L4694
      
        [059] @@ -80,7 +80,7 @@
        [059]  ...
        [059]  box.snapshot()
        [059]  ---
        [059] -- ok
        [059] +- error: 'Invalid VYLOG file: Slice 351 deleted but not registered'
        [059]  ...
        [059]  --
        [059]  -- Create a couple of tiny runs on disk, to increate the "number of runs"
      
       10) vinyl/gc.test.lua
           https://gitlab.com/tarantool/tarantool/-/jobs/920441445#L4691
      
        [050] @@ -59,6 +59,7 @@
        [050]  ...
        [050]  gc()
        [050]  ---
        [050] +- error: 'Invalid VYLOG file: Run 1176 deleted but not registered'
        [050]  ...
        [050]  files = ls_data()
        [050]  ---
      
       11) vinyl/gh-3395-read-prepared-uncommitted.test.lua
           https://gitlab.com/tarantool/tarantool/-/jobs/921944705#L4258
      
        [019] @@ -38,7 +38,7 @@
        [019]   | ...
        [019]  box.snapshot()
        [019]   | ---
        [019] - | - ok
        [019] + | - error: 'Invalid VYLOG file: Slice 634 deleted but not registered'
        [019]   | ...
        [019]
        [019]  c = fiber.channel(1)
      566b1af7
  11. Dec 21, 2020
    • raft: fix crash on death timeout decrease · 4042b5c0
      Vladislav Shpilevoy authored
      If the death timeout was decreased, while waiting for leader death
      or discovery, to a new value that made the current death waiting end
      immediately, it could crash in libev.

      That is because the remaining time until leader death became
      negative. The negative timeout was passed to libev without any
      checks, and there is an assertion that a timeout should always
      be >= 0.

      This commit brings raft code coverage to almost 100%, not counting
      one 'unreachable()' place.
      
      Closes #5303
      4042b5c0