  1. Jan 18, 2022
• test: fix flaky engine/errinj_ddl test · 197088c3
      Vladimir Davydov authored
      The commit fixes the following test failure:
      
      ```
      [011] engine/errinj_ddl.test.lua                      memtx           [ fail ]
      [011]
      [011] Test failed! Result content mismatch:
      [011] --- engine/errinj_ddl.result      Tue Jan 18 15:28:21 2022
      [011] +++ var/rejects/engine/errinj_ddl.reject  Tue Jan 18 15:28:26 2022
      [011] @@ -343,7 +343,7 @@
      [011]  s:create_index('sk', {parts = {2, 'unsigned'}}) -- must fail
      [011]  ---
      [011]  - error: Duplicate key exists in unique index "sk" in space "test" with old tuple
      [011] -    - [101, 101, "xxxxxxxxxxxxxxxx"] and new tuple - [100, 101]
      [011] +    - [100, 101] and new tuple - [101, 101, "xxxxxxxxxxxxxxxx"]
      [011]  ...
      [011]  ch:get()
      [011]  ---
      ```
      
The test is inherently racy: a conflicting tuple may be inserted into the
new index either by the index build procedure or by the test fiber doing
DML in the background. Depending on which of them gets there first, the
error message disagrees about which tuple is considered old and which one
new. Let's match the error message explicitly instead of comparing the
result verbatim.
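
A minimal sketch of the approach, assuming the check is done from Lua with
pcall and asserts only on the stable prefix of the message (space and index
names as in the diff above):

```lua
-- Don't compare the full error text, which depends on which fiber
-- inserted the conflicting tuple first; match only the stable part.
local ok, err = pcall(function()
    s:create_index('sk', {parts = {2, 'unsigned'}})
end)
assert(not ok)
assert(tostring(err):match('Duplicate key exists in unique index "sk"'))
```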
      
The failure was introduced by d11fb306
("box: change ER_TUPLE_FOUND message"), which extended error messages
with the conflicting tuples.
• core: fix testing tuple field count overflow · 41fdd2a8
      Georgiy Lebedev authored
Testing the handling of tuple field count overflow requires creating a
prohibitively large tuple with box.schema.FIELD_MAX (INT32_MAX) fields:
introduce an error injection for testing this corner case.
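
A hedged sketch of how such an injection is typically toggled in a
Tarantool test; the injection name below is illustrative, not necessarily
the one this commit introduces, and error injections exist only in debug
builds:

```lua
-- Illustrative injection name, assumed for this sketch.
box.error.injection.set('ERRINJ_TUPLE_FIELD_COUNT_MAX', true)
-- Assuming the injection makes the tuple appear to have INT32_MAX
-- fields, the overflow path can be hit without allocating them.
local ok, err = pcall(box.tuple.new, {1})
assert(not ok)
box.error.injection.set('ERRINJ_TUPLE_FIELD_COUNT_MAX', false)
```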
      
      Fixes #6684
2. Jan 14, 2022
• test: fix flaky vinyl/gh-4810-dump-during-index-build test · 5cd399b7
      Vladimir Davydov authored
      The commit fixes the following test failure:
      
      ```
      [082] vinyl/gh-4810-dump-during-index-build.test.lua                  Test timeout of 310 secs reached	[ fail ]
      [082]
      [082] Test failed! Result content mismatch:
      [082] --- vinyl/gh-4810-dump-during-index-build.result	Thu Dec  9 05:31:17 2021
      [082] +++ /build/usr/src/debug/tarantool-2.10.0~beta1.324.dev/test/var/rejects/vinyl/gh-4810-dump-during-index-build.reject	Thu Dec  9 06:51:03 2021
      [082] @@ -117,34 +117,3 @@
      [082]  for i = 1, ch:size() do
      [082]      ch:get()
      [082]  end;
      [082] - | ---
      [082] - | ...
      [082] -
      ...
      ```
      
The test hangs waiting for the test fibers to exit. There are two test
fibers: one builds an index, the other populates the test space. The
latter uses pcall, so it always returns. The one that builds the index,
however, doesn't: the index build may fail, because it builds a unique
index while the fiber populating the space may insert duplicate values.
Fix this by building a non-unique index instead, which should never fail.
To verify that the issue the test covers is fixed, one can build any
index, unique or non-unique, so the change is fine.
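
A minimal sketch of the fixed test flow, assuming the fiber/channel
layout of the original test:

```lua
local fiber = require('fiber')
local ch = fiber.channel(1)
fiber.create(function()
    -- A non-unique index tolerates the duplicates inserted by the
    -- background DML fiber, so the build cannot fail.
    s:create_index('sk', {unique = false, parts = {2, 'unsigned'}})
    ch:put(true)
end)
ch:get() -- now guaranteed to return
```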
      
      Closes #5508
• test: fix flaky vinyl/gh test failure · cc6c328d
      Vladimir Davydov authored
      The commit fixes the following test failure:
      
      ```
      [005] vinyl/gh.test.lua                                               [ fail ]
      [005]
      [005] Test failed! Result content mismatch:
      [005] --- vinyl/gh.result	Mon Dec 13 15:03:45 2021
      [005] +++ /root/actions-runner/_work/tarantool/tarantool/test/var/rejects/vinyl/gh.reject	Fri Dec 17 10:41:24 2021
      [005] @@ -716,7 +716,7 @@
      [005]  ...
      [005]  test_run:wait_cond(function() return finished == 2 end)
      [005]  ---
      [005] -- true
      [005] +- false
      [005]  ...
      [005]  s:drop()
      [005]  ---
      ```
      
The reason for the failure is that the fiber making checkpoints fails,
because a checkpoint may already have been started by the checkpoint
daemon. Invoke box.snapshot() under pcall to make the test more robust.
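
The fix boils down to swallowing the error:

```lua
-- box.snapshot() raises if a checkpoint is already in progress
-- (e.g. one started by the checkpoint daemon), so ignore the error.
pcall(box.snapshot)
```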
      
      Part of #5141
• test: fix flaky vinyl/deferred_delete test · 7f8c549b
      Vladimir Davydov authored
      The commit fixes the following test failure:
      
      ```
      [019] vinyl/deferred_delete.test.lua                                  [ fail ]
      [019]
      [019] Test failed! Result content mismatch:
      [019] --- vinyl/deferred_delete.result	Tue Jan 11 11:10:22 2022
      [019] +++ /build/usr/src/debug/tarantool-2.10.0~beta2.37.dev/test/var/rejects/vinyl/deferred_delete.reject	Fri Jan 14 11:45:26 2022
      [019] @@ -964,7 +964,7 @@
      [019]  ...
      [019]  sk:stat().disk.dump.count -- 1
      [019]  ---
      [019] -- 1
      [019] +- 0
      [019]  ...
      [019]  sk:stat().rows - dummy_rows -- 120 old REPLACEs + 120 new REPLACEs + 120 deferred DELETEs
      [019]  ---
      ```
      
The test checks that compaction of a primary index triggers a dump of
the secondary indexes of the same space, because it generates deferred
DELETE statements. However, there's no guarantee that by the time
compaction completes, the secondary index dump has completed as well,
because compaction may ignore the memory quota (it uses
vy_quota_force_use in vy_deferred_delete_on_replace). Make the check
more robust by using wait_cond.
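
A sketch of the more robust check, assuming the test_run helper used
throughout the test suite:

```lua
local test_run = require('test_run').new()
-- Wait for the secondary index dump triggered by primary index
-- compaction instead of checking the counter immediately.
test_run:wait_cond(function() return sk:stat().disk.dump.count == 1 end)
```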
      
      Follow-up #5089
• test: use wait_cond in vinyl/deferred_delete test · 8c913a10
      Vladimir Davydov authored
It's better than a hand-written busy-wait loop.
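
Roughly, the change swaps a loop like the first line below for the helper
on the last (names are illustrative, not from the actual diff):

```lua
local fiber = require('fiber')
local test_run = require('test_run').new()
-- Before: hand-written busy-wait, no timeout.
while sk:stat().disk.dump.count < 1 do fiber.sleep(0.01) end
-- After: test_run's helper, which polls with a built-in timeout.
test_run:wait_cond(function() return sk:stat().disk.dump.count >= 1 end)
```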
• test: fix flaky vinyl/gc test · cd9fd77e
      Vladimir Davydov authored
      The commit fixes the following test failure:
      
      ```
      [013] vinyl/gc.test.lua                                               [ fail ]
      [013]
      [013] Test failed! Result content mismatch:
      [013] --- vinyl/gc.result	Fri Dec 24 12:27:33 2021
      [013] +++ /build/usr/src/debug/tarantool-2.10.0~beta2.18.dev/test/var/rejects/vinyl/gc.reject	Thu Dec 30 10:29:29 2021
      [013] @@ -102,7 +102,7 @@
      [013]  ...
      [013]  check_files_number(2)
      [013]  ---
      [013] -- true
      [013] +- null
      [013]  ...
      [013]  -- All records should have been purged from the log by now
      [013]  -- so we should only keep the previous log file.
      ```
      
The reason for the failure is that vylog files are deleted asynchronously
(`box.snapshot()` doesn't wait for `unlink` to complete) since commit
8e429f4b ("wal: remove old xlog files
asynchronously"). So to fix the test, we just need to make it wait
for garbage collection to complete.
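
A sketch of the fix, reusing the test's check_files_number helper visible
in the diff above:

```lua
-- Deletion is asynchronous, so poll until the stale vylog files are
-- actually unlinked rather than asserting right after box.snapshot().
test_run:wait_cond(function() return check_files_number(2) end)
```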
      
      Follow-up #5383
• small: fix assertion during slab allocation · 68c29688
      mechanik20051988 authored
When `slab_get` is called from `region_alloc` or `ibuf_alloc`, we first
calculate the order of an appropriately sized slab. But there is no
check that the requested size, including the slab metadata, is
<= UINT32_MAX, which leads to an assertion failure in the `slab_order`
function. There is no need for this assertion; we should return
`cache->order_max + 1` in this case.
      
      Closes #6726
3. Dec 29, 2021
• ci: add integration check for memcached module · e493b777
      Yaroslav Lobankov authored
      This patch extends the 'integration.yml' workflow and adds a new
      workflow call for running tests to verify integration between tarantool
      and the memcached module.
      
      Part of #5265
      Part of #6056
      Closes #6563
• lib: insignificant change to calm down coverity checker · bd675395
      Aleksandr Lyapunov authored
Currently scan.coverity.com reports "Overrunning buffer pointed to by
&map of 4 bytes by passing it to a function which accesses it at
byte offset 7" in the bit_iterator_init call.

Add a unit test that verifies that the bit iterator works correctly
with small bitmaps (like uint32_t).

Change the condition a bit, hoping that it will calm down the checker.
      
      No functional changes.
4. Dec 27, 2021
• recovery: panic in case of recovery and replicaset vclock mismatch · 634f59c7
      Serge Petrenko authored
We assume that no one touches the instance's WALs once it has taken the
wal_dir_lock. This is not the case when upgrading from an old setup
(running Tarantool 1.7.3-6 or older). Such nodes either take a lock on
the snap dir, which may be different from the wal dir, or don't take the
lock at all.
      
So, it's possible that during an upgrade an old node is not stopped
properly before a new node is started in the same data directory.

The old node might even write some extra data to the WAL during the new
node's startup.
      
This is obviously bad and leads to multiple issues. For example, the new
node might start local recovery, scan the WALs, and set
replicaset.vclock to some value, say {1: 5}. While the node recovers the
WALs, the old node appends to them up to vclock {1: 10}. The node then
finishes local recovery with replicaset vclock {1: 5}, but with data
recovered up to vclock {1: 10}.
      
The node will use the now outdated replicaset vclock to subscribe to
remote peers (breaking replication due to duplicate keys) and to
initialize the WAL (producing new xlogs with duplicate LSNs). There
might be a number of other issues we just haven't stumbled upon yet.
      
Let's prevent situations like that and panic as soon as we see that the
initially scanned vclock (the replicaset vclock) differs from the
actually recovered vclock.
      
      Closes #6709