  1. May 18, 2020
  2. May 15, 2020
• test: make app-tap/init_script produce less diff · 7f20272e
      Vladislav Shpilevoy authored
When a new option is added, app-tap/init_script
outputs a big diff, because all options are printed with
ordinal indexes, and the addition of a new option changes
the indexes of all options after it.

The patch removes the indexes from the output, making the
diff smaller when a new option is added.
      7f20272e
• gitlab-ci: disable perf testing in scheduled runs · 413b563e
      Alexander V. Tikhonov authored
Release branches should be regularly run using gitlab-ci pipeline
schedules:
  https://gitlab.com/tarantool/tarantool/pipeline_schedules
This helps to detect flaky issues. But there is no need to rerun
the long-running performance tests; to block them in scheduled runs,
'schedules' was added to the 'except:' field.
      
      Part of #4974
      413b563e
• gitlab-ci: set OSX to full testing · 95364f02
      Alexander V. Tikhonov authored
Enable all test suites in OSX testing.
      
      Close #4818
      95364f02
• replication: remove unnecessary errors on replicating from an anonymous instance · caf73913
      Serge Petrenko authored
Since the anonymous replica implementation it has been forbidden to
replicate (join/subscribe/register) from anonymous instances.
Actually, only join and register should be banned, since an anonymous
replica isn't able to register its peer in _cluster anyway.

Let's allow other anonymous replicas, but not normal ones, to subscribe to
an anonymous replica.
Also remove unnecessary ER_UNSUPPORTED errors from box_process_join()
and box_process_register() for anonymous replicas. These cases are
covered by ER_READONLY checks later on, since anonymous replicas must
be read-only.

Note, this patch doesn't allow normal instances to subscribe to
anonymous ones. Even though it is technically possible, it may bring
more problems than benefit. Let's allow it later if there's an explicit
demand.
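The resulting access rule can be sketched as a small predicate. This is an illustrative model only; the enum and function names are hypothetical and do not appear in Tarantool's code:

```c
#include <stdbool.h>

/* Request types an instance can receive from a peer (illustrative
 * names, not Tarantool's actual enums). */
enum replica_request { REQ_JOIN, REQ_REGISTER, REQ_SUBSCRIBE };

/* Sketch of the rule above: an anonymous instance rejects
 * join/register (it cannot register a peer in _cluster), and
 * accepts subscribe only from other anonymous replicas. */
static bool
anon_instance_allows(enum replica_request req, bool peer_is_anon)
{
	switch (req) {
	case REQ_JOIN:
	case REQ_REGISTER:
		return false;		/* would require a _cluster entry */
	case REQ_SUBSCRIBE:
		return peer_is_anon;	/* normal replicas are still refused */
	}
	return false;
}
```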
      
      Closes #4696
      caf73913
• replication: add box.info.replication_anon · 4e5d60d8
      Serge Petrenko authored
      Closes #4900
      
      @TarantoolBot document
      Title: add new field to box.info: replication_anon
      
It is now possible to list all the anonymous replicas following the
instance with a call to `box.info.replication_anon()`.
The output is similar to the one produced by `box.info.replication`, with
the exception that anonymous replicas are indexed by their uuid strings
rather than server ids, since server ids have no meaning for anonymous
replicas.

Note that when you issue a plain `box.info.replication_anon`, the only
info returned is the number of anonymous replicas following the current
instance. In order to see the full stats, you have to call
`box.info.replication_anon()`. This is done to not overload the `box.info`
output with excess info, since there may be lots of anonymous replicas.
Example:
      
      ```
      tarantool> box.info.replication_anon
      ---
      - count: 2
      ...
      
      tarantool> box.info.replication_anon()
      ---
      - 3a6a2cfb-7e47-42f6-8309-7a25c37feea1:
          id: 0
          uuid: 3a6a2cfb-7e47-42f6-8309-7a25c37feea1
          lsn: 0
          downstream:
            status: follow
            idle: 0.76203499999974
            vclock: {1: 1}
        f58e4cb0-e0a8-42a1-b439-591dd36c8e5e:
          id: 0
          uuid: f58e4cb0-e0a8-42a1-b439-591dd36c8e5e
          lsn: 0
          downstream:
            status: follow
            idle: 0.0041349999992235
            vclock: {1: 1}
      ...
      
      ```
      
Note that anonymous replicas hide their lsn from the others, so an
anonymous replica's lsn will always be reported as zero, even if the
replica performs some local space operations.
To learn an anonymous replica's actual lsn, you have to issue
`box.info.lsn` on it.
      4e5d60d8
  3. May 12, 2020
• vinyl: drop wasted runs in case range recovery fails · 32f59756
      Nikita Pettik authored
      
If the recovery process fails during range restoration, the range itself
is deleted and recovery is considered failed (in the case of normal,
i.e. not forced, recovery). During recovery of a particular range, the
runs to be restored are refed twice: once when they are created in
vy_run_new() and once when they are attached to a slice. This is taken
into account after all ranges are recovered: all runs of the LSM tree
are unrefed, so that the slices own the run resources (as a result, when
a slice is deleted, its runs are unrefed and deleted as well). However,
if range recovery fails, the range is dropped alongside the already
recovered slices. This unrefs the runs, which is not accounted for.
To sum up, below is a brief schema of the recovery process:
      
      foreach range in lsm.ranges {
        vy_lsm_recover_range(range) {
          foreach slice in range.slices {
            // inside recover_slice() each run is refed twice
            if vy_lsm_recover_slice() != 0 {
              // here all already restored slices are deleted and
              // corresponding runs are unrefed, so now they have 1 ref.
              range_delete()
            }
          }
        }
      }
      foreach run in lsm.runs {
        assert(run->refs > 1)
        vy_run_unref(run)
      }
      
In this case, unrefing such runs one more time would lead to their
destruction. Since unrefing may delete a run, iteration over the runs
may turn out to be unsafe, so we should use rlist_foreach_entry_safe().
Moreover, we should explicitly clean up these runs by calling
vy_lsm_remove_run().
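The safe-iteration pattern can be sketched with a generic refcounted list; all names here are illustrative, and in Tarantool the next-pointer saving is what rlist_foreach_entry_safe() provides:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal analogue of the fix: while walking a list of refcounted
 * runs, an unref may free the current node, so the next pointer
 * must be fetched before the unref. */
struct run {
	int refs;
	struct run *next;
};

static int runs_freed;	/* observable effect for the sketch */

static void
run_unref(struct run *r)
{
	if (--r->refs == 0) {
		runs_freed++;
		free(r);
	}
}

static void
drop_all_runs(struct run *head)
{
	for (struct run *r = head, *next; r != NULL; r = next) {
		next = r->next;	/* saved before unref may free r */
		run_unref(r);
	}
}
```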
      
Reviewed-by: Vladislav Shpilevoy <vshpilevoi@mail.ru>
      
      Closes #4805
      32f59756
• errinj: introduce delayed injection · 9d4ac029
      Nikita Pettik authored
      
With the new macro ERROR_INJECT_COUNTDOWN it is possible to delay an
error injection by the iparam value: the injection fires only after
the path has been executed iparam times. For instance:
      
      void
      foo(int i)
      {
      	/* 2 is delay counter. */
      	ERROR_INJECT_COUNTDOWN(ERRINJ_FOO, {
      		 printf("Error injection on %d cycle!\n", i);
      		});
      }
      
      void
      boo(void)
      {
      	for (int i = 0; i < 10; ++i)
      		foo(i);
      }
      
      box.error.injection.set('ERRINJ_FOO', 2)
      
The result is "Error injection on 2 cycle!". This type of error
injection can turn out to be useful for setting an injection in the
middle of query processing. Imagine the following scenario:
      
      void
      foo(void)
      {
      	int *fds[10];
      	for (int i = 0; i < 10; ++i) {
      		fds[i] = malloc(sizeof(int));
      		if (fds[i] == NULL)
      			goto cleanup;
      	}
      cleanup:
      	free(fds[0]);
      }
      
The "cleanup" section obviously contains an error and leads to a memory
leak. But with plain error injection (without a delay) such a situation
can't be detected: OOM can be triggered only on the first loop
iteration, and in that particular case no leak takes place.
      
Reviewed-by: Vladislav Shpilevoy <vshpilevoi@mail.ru>
      9d4ac029
• vinyl: add test on failed write iterator during compaction · cb062017
      Nikita Pettik authored
vy_task_write_run() is executed in an auxiliary thread (dump or
compaction). The write iterator is created and used inside this
function. Meanwhile, creating/destroying tuples in these threads does
not change the reference counters of the corresponding tuple formats
(see vy_tuple_delete() and vy_stmt_alloc()). Without cleaning up the
write iterator right in write_iterator_start() after a failure, this
procedure takes place in vy_task_compaction_abort() or
vy_task_dump_abort(). These *_abort() functions in turn are executed in
the main thread. Consequently, a tuple might be allocated in the aux
thread and deleted in the main thread. As a result, the format
reference counter might decrease, whereas it shouldn't change
(otherwise the tuple format will be destroyed before all tuples of this
format are gone).

Fortunately, the clean-up of the write iterator in another thread was
found only on the 1.10 branch; the master branch already contains the
fix but lacks a test (2f17c929). So let's introduce a test with the
following scenario:
      
      1. run compaction process;
      2. add one or more slice sources in vy_write_iterator_start():
      corresponding slice_stream structures obtain newly created tuples
      in vy_slice_stream_next();
      3. the next call of vy_write_iterator_add_src() fails due to OOM,
      invalid run file or whatever;
      4. if write_iterator_start() didn't provide clean-up of sources, it
      would take place in vy_task_dump_abort() which would be executed in
      the main thread;
      5. now format reference counter would be less than it was before
      compaction.
      
      Closes #4864
      cb062017
• vinyl: clean-up unprocessed read views in *_build_read_views() · 521a6fbd
      Nikita Pettik authored
vy_write_iterator->read_views[i].history objects are allocated on the
region (see vy_write_iterator_push_rv()) while building the history of
a given key. However, if vy_write_iterator_build_history() fails, the
region is truncated but the pointers to the vy_write_history objects
are not nullified. As a result, they may be accessed (for instance
while finalizing the write_iterator object in vy_write_iterator_stop()),
which in turn may lead to a crash, segfault or disk formatting. The
same may happen if vy_read_view_merge() fails during processing of the
read view array. Let's clean up those objects when an error takes place.
      
      Part of #4864
      521a6fbd
  4. May 08, 2020
• static build: dockerfile entrypoint set to exec form · d3a7dd17
      HustonMmmavr authored
According to the dockerfile reference, there are two forms of specifying
an entrypoint: exec and shell. The exec form is preferable and allows
using this image in scripts.
      
      Close #4960
      d3a7dd17
• gitlab-ci: add Catalina OSX 10.15 · 76157ef6
      Alexander V. Tikhonov authored
Added Catalina OSX 10.15 to gitlab-ci testing and removed OSX 10.13,
since it was decided to support only the two latest major releases,
which for now are OSX 10.14 and 10.15. Also changed the commit job for
branches from the 10.14 to the 10.15 OSX version.

Additional cleanup for 'box_return_mp' and 'box_session_push':
added API_EXPORT, which defines nothrow, so the compiler warns or
errors depending on the build options.
      
      Part of #4885
      Close #4873
      76157ef6
• test: mark tests as fragile in a test's configs · faf7e482
      Alexander V. Tikhonov authored
Marked flaky tests from parallel runs as fragile to avoid
flaky fails in regular testing:
      
        box-py/snapshot.test.py                ; gh-4514
        replication/misc.test.lua              ; gh-4940
        replication/skip_conflict_row.test.lua ; gh-4958
        replication-py/init_storage.test.py    ; gh-4949
        vinyl/stat.test.lua                    ; gh-4951
        xlog/checkpoint_daemon.test.lua        ; gh-4952
      
      Part of #4953
      faf7e482
• gitlab-ci: keep perf results as gitlab-ci artifacts · eeb501ec
      Oleg Piskunov authored
The gitlab-ci pipeline was modified in order to keep
performance results as gitlab-ci artifacts.
      
      Closes #4920
      eeb501ec
• wal: simplify rollback · a4f4adeb
      Georgy Kirichenko authored
      Here is a summary on how and when rollback works in WAL.
      
A disk write failure can cause a rollback. In that case the failed
transaction and all the next transactions sent to WAL should be
rolled back together. The following transactions should be rolled
back too, because they could have based their statements on what
they saw in the failed transaction. Also, rollback of the failed
transaction without rollback of the next ones could actually
rewrite what they committed.
      
So when a rollback is started, *all* pending transactions should be
rolled back. However, if they kept coming, the rollback would be
infinite. This means that to complete a rollback it is necessary to
stop sending new transactions to WAL, then roll back all the already
sent ones, and in the end allow new transactions again.
      
      Step-by-step:
      
1) stop accepting all new transactions in the WAL thread, where the
rollback is started. New transactions don't even try to go to disk;
they are added to the rollback queue immediately after arriving at
the WAL thread.

2) tell the TX thread to stop sending new transactions to WAL, so
that the rollback queue stops growing.
      
      3) rollback all transactions in reverse order.
      
      4) allow transactions again in WAL thread and TX thread.
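The four steps above can be sketched as a toy model; names like `wal_submit` and the fixed-size queue are illustrative and do not correspond to Tarantool's actual WAL code:

```c
#include <assert.h>

#define MAX_TX 16

struct wal {
	int is_in_rollback;		/* the "simple flag" from the patch */
	int queue[MAX_TX];		/* rollback queue of tx ids */
	int queued;
	int rolled_back[MAX_TX];	/* order in which txs were undone */
	int rolled_back_count;
};

/* Step 1: while in rollback, new txs go straight to the queue
 * instead of being written to disk. */
static void
wal_submit(struct wal *w, int tx_id)
{
	if (w->is_in_rollback)
		w->queue[w->queued++] = tx_id;
	/* else: would be written to disk */
}

/* Steps 3 and 4: undo queued txs in reverse order, then accept
 * transactions again. */
static void
wal_complete_rollback(struct wal *w)
{
	for (int i = w->queued - 1; i >= 0; i--)
		w->rolled_back[w->rolled_back_count++] = w->queue[i];
	w->queued = 0;
	w->is_in_rollback = 0;
}
```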
      
The algorithm is long, but simple and understandable. However, the
implementation wasn't so easy. It was done using a 4-hop cbus route,
2 hops of which were supposed to clear the cbus channel of all other
cbus messages. The next two hops implemented steps 3 and 4. The
rollback state of the WAL was signaled by checking the internals of a
preallocated cbus message.
      
The patch makes it simpler and more straightforward. The rollback
state is now signaled by a simple flag; there is no hack about
clearing the cbus channel, no touching the attributes of a cbus
message. The moment when all transactions are stopped and the last
one has returned from WAL is visible explicitly, because the last
journal entry sent to WAL is saved.

Also there is now a single route for commit and rollback cbus
messages, called tx_complete_batch(). This change will come in handy
in the scope of synchronous replication, when a WAL write won't be
enough for commit. Therefore 'commit' as a concept should gradually
be washed away from WAL's code and migrate solely to the txn module.
      a4f4adeb
• console: check on_shutdown() before exit · c7341a3d
      Roman Khabibov authored
Add a check that on_shutdown() triggers were called before exit,
because in the case of EOF or Ctrl+D (no signals) they were ignored.
      
      Closes #4703
      c7341a3d
  5. May 07, 2020
• vinyl: init all vars before cleanup in vy_lsm_split_range() · 4dcba1b5
      Nikita Pettik authored
If vy_key_from_msgpack() fails in vy_lsm_split_range(), the clean-up
procedure is called. However, at this moment struct vy_range *parts[2]
is not initialized, ergo it contains garbage, and access to this
structure may result in a crash, segfault or disk formatting. Let's
move the initialization of the mentioned variables to the beginning
of vy_lsm_split_range().
      
      Part of #4864
      4dcba1b5
  6. May 01, 2020
  7. Apr 30, 2020
• travis-ci/gitlab-ci: add Ubuntu Focal 20.04 · 765f338e
      Alexander V. Tikhonov authored
      Closes #4863
      765f338e
• Code cleanup: sync declarations and definitions · b136a61e
      Sergey Ostanevich authored
API_EXPORT defines nothrow, so the compiler warns or errors depending
on the build options.
      
      Closes #4885
      b136a61e
• test: fix flaky replication/skip_conflict_row test · f81dae2d
Alexander V. Tikhonov authored
Fixed flaky upstream checks in the replication/skip_conflict_row test;
also added a check that the lsn is set in the test-run wait condition
routine.
      
      Errors fixed:
      
      [024] @@ -66,11 +66,11 @@
      [024]  ...
      [024]  box.info.replication[1].upstream.message
      [024]  ---
      [024] -- null
      [024] +- timed out
      [024]  ...
      [024]  box.info.replication[1].upstream.status
      [024]  ---
      [024] -- follow
      [024] +- disconnected
      [024]  ...
      [024]  box.space.test:select()
      [024]  ---
      [024]
      
      [004] @@ -125,11 +125,11 @@
      [004]  ...
      [004]  box.info.replication[1].upstream.message
      [004]  ---
      [004] -- Duplicate key exists in unique index 'primary' in space 'test'
      [004] -...
      [004] -box.info.replication[1].upstream.status
      [004] ----
      [004] -- stopped
      [004] +- null
      [004] +...
      [004] +box.info.replication[1].upstream.status
      [004] +---
      [004] +- follow
      [004]  ...
      [004]  test_run:cmd("switch default")
      [004]  ---
      [004]
      
      [038] @@ -174,7 +174,7 @@
      [038]  ...
      [038]  box.info.replication[1].upstream.status
      [038]  ---
      [038] -- follow
      [038] +- disconnected
      [038]  ...
      [038]  -- write some conflicting records on slave
      [038]  for i = 1, 10 do box.space.test:insert({i, 'r'}) end
      Line 201 (often):
      
      [039] @@ -201,7 +201,7 @@
      [039]  -- lsn should be incremented
      [039]  v1 == box.info.vclock[1] - 10
      [039]  ---
      [039] -- true
      [039] +- false
      [039]  ...
      [039]  -- and state is follow
      [039]  box.info.replication[1].upstream.status
      [039]
      
      [030] @@ -201,12 +201,12 @@
      [030]  -- lsn should be incremented
      [030]  v1 == box.info.vclock[1] - 10
      [030]  ---
      [030] -- true
      [030] +- false
      [030]  ...
      [030]  -- and state is follow
      [030]  box.info.replication[1].upstream.status
      [030]  ---
      [030] -- follow
      [030] +- disconnected
      [030]  ...
      [030]  -- restart server and check replication continues from nop-ed vclock
      [030]  test_run:cmd("switch default")
      Line 230 (OSX):
      
      [022] --- replication/skip_conflict_row.result	Thu Apr 16 21:54:28 2020
      [022] +++ replication/skip_conflict_row.reject	Mon Apr 27 00:52:56 2020
      [022] @@ -230,7 +230,7 @@
      [022]  ...
      [022]  box.info.replication[1].upstream.status
      [022]  ---
      [022] -- follow
      [022] +- disconnected
      [022]  ...
      [022]  box.space.test:select({11}, {iterator = "GE"})
      [022]  ---
      [022]
      
      Close #4457
      f81dae2d
  8. Apr 29, 2020
• travis-ci: don't deploy 2.5+ pkgs to packagecloud · ba206b48
      Alexander Turenko authored
Now we have an S3 based infrastructure for RPM / Deb packages and
GitLab CI pipelines which deploy packages to it.

We don't plan to add 2.5+ repositories on packagecloud.io, so instead
of the usual change of the target bucket from 2_N to 2_(N+1), the
deploy stage is removed.

Since all distro specific jobs are duplicated in GitLab CI pipelines
and those Travis-CI jobs are needed just for deployment, it is worth
removing them too.
      
      Follows up #3380.
      Part of #4947.
      ba206b48
  9. Apr 28, 2020
• schema: fix internal symbols dangling in _G · b56484d6
      Vladislav Shpilevoy authored
      A couple of functions were mistakenly declared as 'function'
      instead of 'local function' in schema.lua. That led to their
      presence in the global namespace.
      
      Closes #4812
      b56484d6
• schema: fix index promotion to functional index · fcce05a4
      Vladislav Shpilevoy authored
When index:alter() was called on a non-functional index with 'func'
specified, it led to accessing an undeclared variable in schema.lua.
      fcce05a4
• box: replace port_tuple with port_c everywhere · 4d82478f
      Vladislav Shpilevoy authored
Port_tuple is exactly the same as port_c, but is not able to store
raw MessagePack. In theory it sounds like port_tuple should be a
bit simpler and therefore faster, but in fact it is not:
microbenchmarks didn't reveal any difference. So port_tuple is no
longer needed; all its functionality is covered by port_c.
      
      Follow up #4641
      4d82478f
• box: introduce box_return_mp() public C function · dd36c610
      Vladislav Shpilevoy authored
      Closes #4641
      
      @TarantoolBot document
      Title: box_return_mp() public C function
      
      Stored C functions could return a result only via
      `box_return_tuple()` function. That made users create a tuple
      every time they wanted to return something from a C function.
      
Now the public C API offers another way to return a result:
`box_return_mp()`. It allows returning arbitrary MessagePack, not
wrapped into a tuple object. This is simpler to use for small results
like a number, boolean, or a short string. Besides, `box_return_mp()`
is much faster than `box_return_tuple()`, especially for small
MessagePack.
      
      Note, that it is faster only if an alternative is to create a
      tuple by yourself. If an already existing tuple was obtained from
      an iterator, and you want to return it, then of course it is
      faster to return via `box_return_tuple()`, than via extraction of
      tuple data, and calling `box_return_mp()`.
      
      Here is the function declaration from module.h:
      ```C
      /**
       * Return MessagePack from a stored C procedure. The MessagePack
       * is copied, so it is safe to free/reuse the passed arguments
       * after the call.
       * MessagePack is not validated, for the sake of speed. It is
       * expected to be a single encoded object. An attempt to encode
       * and return multiple objects without wrapping them into an
       * MP_ARRAY or MP_MAP is undefined behaviour.
       *
       * \param ctx An opaque structure passed to the stored C procedure
       *        by Tarantool.
       * \param mp Begin of MessagePack.
       * \param mp_end End of MessagePack.
       * \retval -1 Error.
       * \retval 0 Success.
       */
      API_EXPORT int
      box_return_mp(box_function_ctx_t *ctx, const char *mp, const char *mp_end);
      ```
      dd36c610
• box: introduce port_c · 4c3c9bda
      Vladislav Shpilevoy authored
Port_c is a new descendant of struct port. It is now used by public
C functions to store their result. Currently they can return only a
tuple, but that will change soon: they will be able to return
arbitrary MessagePack.

Port_tuple is not removed, because it is still used for box_select(),
for functional indexes, and in SQL as a base for port_sql, although
that may be changed later. Functional indexes really need only a
single MessagePack object from their function, while box_select()
working via port_tuple or port_c didn't show any significant
difference in microbenchmarks.
      
      Part of #4641
      4c3c9bda
  10. Apr 27, 2020
• applier: follow vclock to the last tx row · 0edb4d97
      Serge Petrenko authored
      
Since the introduction of transaction boundaries in the replication
protocol, appliers advance replicaset.applier.vclock to the lsn of the
first row in an arrived batch. This is enough and doesn't lead to
errors when replicating from other instances respecting transaction
boundaries (instances with version 2.1.2 and up).

However, if there is a 1.10 instance in a 2.1.2+ cluster, it sends
every single tx row as a separate transaction, breaking the comparison
with replicaset.applier.vclock and making the applier re-apply part of
the changes it has already applied when processing the full transaction
coming from another 2.x instance. Such behaviour leads to
ER_TUPLE_FOUND errors in the scenario described above.

In order to guard against such cases, advance replicaset.applier.vclock
to the lsn of the last row in the tx.
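The change can be sketched as a minimal model; `tx_already_applied` is a hypothetical helper, not the actual applier code:

```c
#include <assert.h>

/* Sketch of the fix: when deciding whether a replicated tx batch was
 * already applied, compare the known vclock component against the lsn
 * of the *last* row of the tx, not the first one. With a first-row
 * comparison, a tx partially applied row-by-row (as a 1.10 master
 * sends it) could pass the check again and be applied twice. */
static int
tx_already_applied(long long known_lsn, const long long *row_lsns,
		   int n_rows)
{
	return row_lsns[n_rows - 1] <= known_lsn;
}
```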
      
      Closes #4924
      
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
      0edb4d97
• sql: fix sorting rules for values of SCALAR type · 72ce442c
      Roman Khabibov authored
The function implementing comparison during the VDBE sorting routine
(sqlVdbeCompareMsgpack) did not account for values of boolean type in
some cases. Let's fix it so that booleans always precede numbers when
sorted in ascending order.
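The ordering rule can be sketched with a tiny tagged value and comparator; this is a hypothetical model, not the real sqlVdbeCompareMsgpack():

```c
#include <assert.h>

/* Type tags ordered so that booleans sort before numbers. */
enum scalar_type { SCALAR_BOOL = 0, SCALAR_NUMBER = 1 };

struct scalar {
	enum scalar_type type;
	double num;	/* the value; 0/1 for booleans */
};

static int
scalar_cmp(const struct scalar *a, const struct scalar *b)
{
	if (a->type != b->type)
		return a->type < b->type ? -1 : 1;	/* bool < number */
	return (a->num > b->num) - (a->num < b->num);
}
```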
      
      Closes #4697
      72ce442c
  11. Apr 24, 2020
• cbus: fix inconsistency in endpoint creation · d6d69c9f
      Cyrill Gorcunov authored
      
The notification of a wait variable must be done with the bound
mutex locked. Otherwise the results are not guaranteed (see the
pthread manuals).

Thus when we create a new endpoint via cbus_endpoint_create
and there is another thread sleeping inside cpipe_create,
we should notify the sleeper under cbus.mutex.
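The rule the patch enforces can be sketched with plain pthreads; this is generic condition-variable code, not the cbus implementation itself:

```c
#include <pthread.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int endpoint_ready;

/* A thread sleeping until the endpoint appears (analogue of the
 * sleeper inside cpipe_create). */
static void *
waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mtx);
	while (!endpoint_ready)		/* guards against spurious wakeups */
		pthread_cond_wait(&cond, &mtx);
	pthread_mutex_unlock(&mtx);
	return NULL;
}

/* The creator's side: set the predicate and signal *while holding
 * the bound mutex*, so a concurrent waiter cannot miss the wakeup. */
static void
announce_endpoint(void)
{
	pthread_mutex_lock(&mtx);
	endpoint_ready = 1;
	pthread_cond_broadcast(&cond);	/* under the locked mutex */
	pthread_mutex_unlock(&mtx);
}
```

Signaling without the mutex held would allow the waiter to check the predicate, get descheduled, and then miss the broadcast forever.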
      
      Fixes #4806
      
Reported-by: Alexander Turenko <alexander.turenko@tarantool.org>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      d6d69c9f
• build: fix compilation on Alpine 3.5 · d7fa6d34
      Leonid Vasiliev authored
The cbus hang test uses glibc pthread mutex implementation details.
The reason why the mutex implementation details are used:
"For the bug to be reproduced, the canceled thread must be canceled
during processing of cpipe_flush_cb. We need to synchronize
the main thread and the canceled worker thread for that.
So, thread synchronization has been realized by means of the
endpoint mutex's internal field (__data.__lock)."
Therefore, the test should not be compiled when another C library
is used.
      d7fa6d34
  12. Apr 21, 2020
  13. Apr 20, 2020
• Dummy commit · ad13b6d5
      Kirill Yukhin authored
      ad13b6d5
• box/error: ref error.prev while accessing it · fef6505c
      Nikita Pettik authored
If accessing the previous error doesn't come along with incrementing
its reference counter, it may lead to a use-after-free bug.
Consider the following scenario:
      
      _, err = foo() -- foo() returns diagnostic error stack
      preve = err.prev -- err.prev ref. counter == 1
      err:set_prev(nil) -- err.prev ref. counter == 0 so err.prev is destroyed
      preve -- accessing already freed memory
      
To avoid that, let's increment the reference counter of the .prev
member when error.prev is accessed, and set a corresponding gc
finalizer (error_unref()).
      
      Closes #4887
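The fix can be modeled with a minimal refcounting sketch; this is illustrative code, not Tarantool's struct error:

```c
#include <assert.h>
#include <stdlib.h>

struct error {
	int refs;
	struct error *prev;
};

static int errors_freed;	/* observable effect for the sketch */

static void
error_unref(struct error *e)
{
	if (--e->refs == 0) {
		errors_freed++;
		free(e);
	}
}

/* What the error.prev accessor now does: take a reference on behalf
 * of the caller before returning the object. */
static struct error *
error_get_prev(struct error *e)
{
	if (e->prev != NULL)
		e->prev->refs++;
	return e->prev;
}

/* error:set_prev(nil): drop the link's own reference. The caller's
 * reference (from error_get_prev) keeps the object alive. */
static void
error_set_prev(struct error *e, struct error *prev)
{
	if (e->prev != NULL)
		error_unref(e->prev);
	e->prev = prev;
}
```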