  1. Oct 28, 2019
    • replication: auto reconnect if password is invalid · aa2e2c56
      Vladislav Shpilevoy authored
      Before the patch there was a race in replication
      password configuration. It was possible that a replica
      connected to a master with a custom password before
      that password was actually set. The replica treated the
      error as critical and exited.
      
      But in fact it is not critical. A replica can even
      withstand the absence of a user and keeps reconnecting.
      The wrong-password situation arises from the same problem
      of non-atomic configuration and is fixed the same way:
      keep reconnecting if the password was wrong.
      
      Closes #4550
    • key_def: key_def.new() accepts both 'field' and 'fieldno' · 39918baf
      Vladislav Shpilevoy authored
      Closes #4519
      
      @TarantoolBot document
      Title: key_def.new() accepts both 'field' and 'fieldno'
      
      Before the patch key_def.new() took an index part array
      in the form returned by <index_object>.parts: each part
      had to include 'type', 'fieldno', and whatever else a
      .parts element contains.
      
      But it was not possible to create a key_def from an index
      definition - the array passed in the 'parts' argument of
      <space_object>.create_index() - because key_def.new()
      didn't recognize the 'field' option. That might be useful
      when a key_def is needed on a remote client, where a space
      object and its indexes do not exist. It would be strange
      to force a user to create them just to be able to access
      
          <net_box connection>.space.<space_name>.
              index.<index_name>.parts
      
      and it would be clumsy to make a user manually replace
      'field' with 'fieldno' in an index definition just to
      create a key_def.
      
      Additionally, an ability to pass an index definition
      to a key_def constructor makes the API more symmetric.
      
      Note, that it still is not 100% symmetric, because a
      user can't pass field names to the key_def
      constructor. A space is needed for that anyway.
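      
      A minimal console sketch of the change (the part definitions
      are illustrative):
      
      ```lua
      key_def = require('key_def')
      -- The index_object.parts form, with 'fieldno', worked before:
      kd1 = key_def.new({{type = 'unsigned', fieldno = 1}})
      -- The create_index()-style form, with 'field', is accepted now too:
      kd2 = key_def.new({{type = 'unsigned', field = 1}})
      -- Both definitions extract the same key from a tuple:
      kd1:extract_key(box.tuple.new({1, 'a'}))
      kd2:extract_key(box.tuple.new({1, 'a'}))
      ```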
    • box: raise an error on nil replicaset and instance uuid · a8ebd334
      Vladislav Shpilevoy authored
      Before the patch the nil UUID was ignored and a new random one
      was generated. This was because internally box treats nil UUID
      as its absence.
      
      Now a user will see an explicit message that nil UUID is a
      reserved value.
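      
      A sketch of the new behaviour (the exact error text is an
      assumption):
      
      ```lua
      -- Before: a nil UUID was silently replaced with a random one.
      -- Now box.cfg() rejects it explicitly:
      box.cfg{instance_uuid = '00000000-0000-0000-0000-000000000000'}
      -- raises an error saying the nil UUID is a reserved value
      ```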
      
      Closes #4282
  2. Oct 21, 2019
  3. Oct 20, 2019
    • sql: remove expmask from prepared statement · 8aa685fb
      Nikita Pettik authored
      expmask indicated necessity to recompile statement after parameter was
      bound: it might turn out that parameter can affect query plan. However,
      part of this mechanism has been removed long ago as a SQLite's legacy.
      In its current state expmask is likely to be useless and assertions
      involving it are obviously unsuitable. This patch completely removes
      expmask and related routines.
      
      Closes #4566
  4. Oct 17, 2019
    • sql: check returned value type for UDF · 439a48f2
      Kirill Shcherbatov authored
      All user-defined functions have a returned value type (if it
      is not specified during function creation, it is assumed to
      be ANY), but previously it was ignored. This patch introduces
      a check which verifies that the returned value has the type
      specified in the function's definition. There is no attempt
      at implicit conversion to the specified (target) type - the
      returned value must be literally of the specified type.
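      
      A sketch using the persistent function API (the function name
      and body are illustrative; the exact error text may differ):
      
      ```lua
      box.schema.func.create('RETURNS_INT', {
          language = 'LUA',
          returns = 'integer',
          exports = {'LUA', 'SQL'},
          body = [[function() return 'not an integer' end]],
      })
      -- The declared type is enforced literally, with no implicit cast:
      box.execute([[SELECT RETURNS_INT()]])
      -- raises a type mismatch error instead of returning the string
      ```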
      
      Closes #4387
    • sql: better LUA arguments conversion for UDFs · 0a6daf7a
      Kirill Shcherbatov authored
      Start using the comprehensive serializer luaL_tofield() to
      prepare Lua arguments for UDFs. This allows supporting cdata
      types returned from a Lua function.
      
      Needed for #4387
      
      @TarantoolBot document
      Title: UDF returning nil or box.NULL in SQL
      
      The values nil and box.NULL returned by a UDF in SQL are
      both transformed to SQL NULL and are equal.
      
      Example:
      tarantool> box.execute("SELECT LUA('return box.NULL') is NULL
                                     and LUA('return nil') is NULL")
      ---
      - metadata:
        - name: LUA('return box.NULL') is NULL and LUA('return nil') is NULL
          type: boolean
        rows:
        - [true]
      ...
    • sql: errors for UDFs returning too many values · 50efe95e
      Kirill Shcherbatov authored
      This patch handles the situation when a UDF returns too many
      values. Previously Tarantool silently used the first returned
      value. Now an error is raised.
      
      Moreover, test coverage is also improved for the situation
      when no value is returned.
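      
      A sketch (the function definition is illustrative; the exact
      error text may differ):
      
      ```lua
      box.schema.func.create('TWO_VALUES', {
          language = 'LUA',
          returns = 'integer',
          exports = {'LUA', 'SQL'},
          body = [[function() return 1, 2 end]],
      })
      box.execute([[SELECT TWO_VALUES()]])
      -- Before: the first value (1) was silently used.
      -- Now: an error about multiple returned values is raised.
      ```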
      
      Needed for #4387
    • wal: drop rows_per_wal option · c6012920
      Vladislav Shpilevoy authored
      The rows_per_wal option was deprecated because it can be
      covered by wal_max_size. In order not to complicate the WAL
      code with that option's support, this commit drops it
      completely.
      
      In some tests the option was used to create several small
      xlog files. Now the same is done via wal_max_size. Where
      needed, the number of rows per WAL is estimated as
      wal_max_size / 50, because struct xrow_header size is ~50
      bytes, not counting padding and body.
      
      Note, the file box/configuration.result was deleted here,
      because it is a stray result file, and it mentioned the
      rows_per_wal option. Its test was dropped much earlier in
      fdc3d1dd.
      
      Closes #3762
    • test: replication/misc cleanup box.cfg.replication · 399899a1
      Vladislav Shpilevoy authored
      In patch c6bea65f I
      introduced a bug: replication/misc left a bad value in
      box.cfg.replication. Before that patch the test reset
      this to an empty replication config. In my patch I
      forgot about that and left there the value
      
          {box.cfg.listen, "12345"}
      
      This patch cleans it up.
      
      Follow up #3760
  5. Oct 16, 2019
    • sql: use name instead of function pointer for UDF · de9a7b1a
      Kirill Shcherbatov authored
      This patch changes the OP_Function parameter convention: a
      function's name is now passed instead of a pointer to the
      function object. This allows handling normally the situation
      when a UDF has been deleted by the moment of the VDBE code
      execution. In particular, this may happen with CK constraints
      that refer to a deleted persistent function.
      
      Closes #4176
    • sql: add an ability to disable CK constraints · c4781f93
      Kirill Shcherbatov authored
      Closes #4244
      
      @TarantoolBot document
      Title: an ability to disable CK constraints
      
      Now it is possible to disable and enable CK constraints.
      All CK constraints are enabled by default when Tarantool is
      configured. CK constraint checks are not performed during
      standard recovery, but are performed during force_recovery -
      all tuples conflicting with a CK constraint are skipped.
      
      To change a CK constraint's "is_enabled" state, call
      -- in Lua
      ck_obj:enable(new_state in {true, false})
      -- in SQL
      ALTER TABLE {TABLE_NAME} {EN, DIS}ABLE CHECK CONSTRAINT {CK_NAME};
      
      Example:
      box.space.T6.ck_constraint.ck_unnamed_T6_1:enable(false)
      box.space.T6.ck_constraint.ck_unnamed_T6_1
      - space_id: 512
        is_enabled: false
        name: ck_unnamed_T6_1
        expr: a < 10
      box.space.T6:insert({11})
      -- passed
      box.execute("ALTER TABLE t6 ENABLE CHECK CONSTRAINT \"ck_unnamed_T6_1\"")
      box.space.T6:insert({12})
      - error: 'Check constraint failed ''ck_unnamed_T6_1'': a < 10'
    • box: add an ability to disable CK constraints · 9a058bb2
      Kirill Shcherbatov authored
      Now it is possible to disable and enable CK constraints in
      Lua. This option is persistent. All CK constraints are
      constructed in the enabled state when Tarantool is
      configured. This ability may be useful when the processed
      data is already verified and constraint validation is not
      required. For instance, during a casual recovery process
      there is no need to run any checks, since the data is
      assumed to be consistent.
      
      Persisting the is_enabled flag is an important feature.
      If the option is not stored, the following scenario is
      possible:
      - the option is turned off
      - data is changed so that a constraint is violated
      - the system is restarted while the option state is lost
      - there is no way (even in theory) to discover it and find that
        data is incorrect.
      
      Part of #4244
  6. Oct 14, 2019
    • sql: AUTOINCREMENT for multipart PK · 08c7d7c1
      Mergen Imeev authored
      Prior to this patch, the auto-increment feature could only be set
      in an INTEGER field of PRIMARY KEY if the PRIMARY KEY consisted of
      a single field. It was not possible to use this feature if the
      PRIMARY KEY consisted of more than one field. This patch defines
      two ways to set AUTOINCREMENT for any INTEGER or UNSIGNED field of
      PRIMARY KEY.
      
      Closes #4217
      
      @TarantoolBot document
      Title: The auto-increment feature for multipart PK
      The auto-increment feature can be set to any INTEGER or UNSIGNED
      field of PRIMARY KEY using one of two ways:
      1) AUTOINCREMENT in column definition:
      CREATE TABLE t (i INT, a INT AUTOINCREMENT, PRIMARY KEY (i, a));
      CREATE TABLE t (i INT AUTOINCREMENT, a INT, PRIMARY KEY (i, a));
      2) AUTOINCREMENT in PRIMARY KEY definition:
      CREATE TABLE t (i INT, a INT, PRIMARY KEY (i, a AUTOINCREMENT));
      CREATE TABLE t (i INT, a INT, PRIMARY KEY (i AUTOINCREMENT, a));
  7. Oct 12, 2019
    • replication: recfg with 0 quorum returns immediately · c6bea65f
      Vladislav Shpilevoy authored
      Replication quorum 0 not only affects orphan status, but also,
      according to documentation, makes box.cfg() return immediately
      regardless of whether connections to upstreams are established.
      
      It was not so before the patch. What is worse, even with a
      non-zero quorum the instance was blocked on reconfiguration
      for the connect timeout seconds if at least one node was not
      connected.
      
      Now the quorum is respected on reconfiguration. On bootstrap
      it is still impossible to return earlier than
      replication_connect_timeout, because the nodes need to agree
      on some cluster settings. Starting too early would make that
      impossible - the cluster's participants would just start and
      choose different cluster UUIDs.
      
      Closes #3760
  8. Oct 09, 2019
    • replication: add is_orphan field to ballot · dc1e4009
      Serge Petrenko authored
      A successfully fetched remote instance ballot isn't updated during
      bootstrap procedure. This leads to a case when different instances
      choose different masters as their bootstrap leaders.
      
      Imagine such a situation.
      You start instance A without replication set up. Instance A successfully
      bootstraps.
      You also have instances B and C, both with replication set up
      to {A, B, C} and replication_connect_quorum set to 3.
      You first start instance B. It doesn't proceed to choosing a
      leader until one of two events happens: either
      replication_connect_timeout runs out, or instance C comes up
      and starts listening on its port.
      B has established connection to A and fetched its ballot, with some
      vclock, say, {1: 1}.
      B retries connection to C every replication_timeout seconds.
      Then you start instance C. Instance C succeeds in connecting to A and B
      right away and bootstraps from instance A. Instance A registers C in its
      _cluster table. This registration is replicated to instance C.
      Meanwhile, instance C is trying to sync with quorum instances (which is
      3), and stays in orphan mode.
      Now replication_timeout on instance B finally runs out. It retries a
      previously unsuccessful connection to C and succeeds. C sends its ballot
      to B with vclock = {1: 2, 2:0} (in our example), since it has already
      incremented it after _cluster registration.
      B sees that C has a greater vclock than A, and chooses to bootstrap from
      C instead of A. C is orphan and rejects B's attempt to join. B dies.
      
      To fix such ungentlemanlike behaviour of C, we should at
      least include the loading status in the ballot and prefer
      fully bootstrapped instances to the ones still syncing with
      other replicas.
      We also need to use a separate flag instead of the ballot's
      already existing is_ro, since we still want to prefer loading
      instances over the ones explicitly configured to be read-only.
      
      Closes #4527
  9. Oct 01, 2019
    • Fix 53d43160 · 0b9de586
      Roman Khabibov authored
    • json: clarify bad syntax error messages · 53d43160
      Roman Khabibov authored
      Count lines in the JSON parsing structure. This is needed to
      print the line and column number where a mistake was made.
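      
      A sketch (the exact message format is an assumption):
      
      ```lua
      json = require('json')
      -- The value after "b" is missing on the second line:
      json.decode('{"a": 1,\n"b":}')
      -- the error message now points at line 2 and the offending column
      ```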
      
      Closes #3316
      
      (cherry picked from commit 9f9bd3eb2d064129ff6b1a764140ebef242d7ff7)
    • app: exit gracefully when a main script throws an error · 157a2d88
      Vladislav Shpilevoy authored
      The code that runs the main script (passed via command line
      args, or the interactive console) has a footer where it
      notifies systemd, logs any error, and panics.
      
      Before the patch that code was unreachable in case of an
      exception in the main script, because the panic happened
      earlier. Now an exception is correctly carried to the footer
      with proper error processing.
      
      The first and obvious solution was to replace all panics with
      diag_set and use fiber_join on the script runner fiber. But
      it appeared that the fiber running the main script can't be
      joined. This is because it normally exits via os.exit(),
      which never returns, so its caller never dies and can't be
      joined.
      
      The patch solves this problem by passing the main fiber's
      diag to the script runner by pointer, eliminating the need
      for fiber_join.
      
      Closes #4382
  10. Sep 27, 2019
    • test: fix replica expectance in broken lsn test · cd01573c
      Ilya Kosarev authored
      xlog/panic_on_broken_lsn waits for a replica and then uses
      box.info.vclock. Sometimes box.info.vclock gave wrong values.
      Now the test is stable thanks to an improved criterion of
      waiting for the replica.
      
      Closes #4508
  11. Sep 25, 2019
    • app: raise an error on too nested tables serialization · d7a8942a
      Vladislav Shpilevoy authored
      Closes #4434
      Follow-up #4366
      
      @TarantoolBot document
      Title: json/msgpack.cfg.encode_deep_as_nil option
      
      Tarantool has several so called serializers to convert data
      between Lua and another format: YAML, JSON, msgpack.
      
      YAML is a crazy serializer without depth restrictions. But
      for JSON, msgpack, and msgpackffi a user could set the
      encode_max_depth option. That option led to cropping a table
      when it had too many nesting levels. Sometimes such behaviour
      is undesirable.
      
      Now an error is raised instead of data corruption:
      
          t = nil
          for i = 1, 100 do t = {t} end
          msgpack.encode(t) -- Here an exception is thrown.
      
      To disable it and get the old behaviour back, there is a new
      option:
      
          <serializer>.cfg({encode_deep_as_nil = true})
      
      The encode_deep_as_nil option works for the JSON, msgpack,
      and msgpackffi modules, and is false by default. This means
      that if some existing users relied on cropping, even
      intentionally, they will now get the exception.
    • tuple: use global msgpack serializer in Lua tuple · 676369b1
      Vladislav Shpilevoy authored
      Tuple is a C library exposed to Lua. In Lua, the
      luaL_serializer structure is used to translate Lua objects
      into tuples and back.
      
      In Tarantool we have several global serializers, one of which
      is for msgpack. Tuples store data in msgpack, and in theory
      should have used that global msgpack serializer. But in fact
      the tuple module had its own private serializer because of
      tuple encoding specifics, such as never encoding sparse
      arrays as maps.
      
      This patch makes the tuple Lua module always use the global
      msgpack serializer. But how does tuple handle sparse arrays
      now? In fact, the tuple module still has its own serializer,
      but it is updated each time the msgpack serializer is
      changed.
      
      Part of #4434
    • msgpack: make msgpackffi use encode_max_depth option · 4bb253f7
      Vladislav Shpilevoy authored
      The msgpack Lua module is not a simple set of functions. It
      is a global serializer object used by plenty of other Lua and
      C modules. Msgpack as a serializer can be configured, and in
      theory its configuration updates should affect all other
      modules. For example, a user could change encode_max_depth:
      
          require('msgpack').cfg({encode_max_depth = <new_value>})
      
      That would make tuple:update() accept tables of <new_value>
      depth without cropping.
      
      But in fact the msgpack configuration didn't affect some
      places, such as this one, nor anything else that uses
      msgpackffi.
      
      This patch fixes that for the encode_max_depth option. Other
      options are still ignored.
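      
      A sketch of the now-shared limit (the values are
      illustrative):
      
      ```lua
      msgpack = require('msgpack')
      msgpackffi = require('msgpackffi')
      -- Build a table nested deeper than the default limit:
      t = nil
      for i = 1, 200 do t = {t} end
      -- Raising the limit on the msgpack serializer now also
      -- affects msgpackffi users:
      msgpack.cfg({encode_max_depth = 256})
      msgpackffi.encode(t)
      ```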
      
      Part of #4434
    • app: serializer updates are now reflected in Lua · fe4a8047
      Vladislav Shpilevoy authored
      There are some objects called serializers - msgpack, cjson,
      yaml, maybe more. They are global objects affecting both Lua
      and C modules.
      
      A serializer has settings which can be updated. But before
      the patch an update changed only the C structure of the
      serializer, which made it impossible to read the serializers'
      settings from Lua.
      
      Now any update of any serializer is reflected in both its C
      and Lua structures.
      
      Part of #4434
  12. Sep 18, 2019
    • sql: set type flag after varbinary to number cast · 5ba5ed37
      Roman Khabibov authored
      The MEM_Real flag was not set for VDBE memory containing the
      result of a varbinary-to-number cast. This patch fixes that
      and sets the corresponding flag when the cast takes place.
      
      Closes #4356
    • sql: do not print binary data in diag_set() · 9535f09f
      Roman Khabibov authored
      Print the data type instead of the data itself in diag_set()
      in the case of binary data. The reason for this patch is that
      LibYAML converts the whole error message to base64 when it
      contains non-printable symbols.
      
      Part of #4356
    • tuple: rework update error reporting · bae634bb
      Vladislav Shpilevoy authored
      - Unify error code names, they all should start with ER_UPDATE_*;
      
      - Use string field identifiers in error messages, because soon
        they will support both field numbers and JSON paths;
      
      - Report all field numbers 1-based. This simplifies error
        analysis because there is no need to think where an update
        has come from - Lua or C/iproto. It also allows dropping the
        index_base argument from many functions;
      
      - Introduce helper functions to set commonly appearing errors.
        Currently it is not a problem, but the next patches will add
        several new files, in all of which the same errors can happen;
      
      - Deletion checks that the number of fields to delete is > 0
        right after reading the argument, not while applying the
        deletion. This allows making the check in only one place even
        when more delete implementations appear;
      
      - make_arith_operation now takes update_op as an argument
        explicitly, not torn into separate arguments. This allows
        using the error helpers. Also dead code with incorrect usage
        of some errors is dropped from this function;
      
      Part of #1261
  13. Sep 17, 2019
    • sql: make valueToText() operate on MAP/ARRAY values · 736cdd81
      Mergen Imeev authored
      Since ARRAY and MAP cannot be converted to the SCALAR type,
      this operation should throw an error. But when the error was
      raised in SQL, it was displayed in an unreadable form. The
      reason is that the given array or map was not correctly
      converted to a string. This patch fixes the problem by
      converting an ARRAY or MAP to its string representation.
      For example:
      
      box.execute('CREATE TABLE t1(i INT PRIMARY KEY, a SCALAR);')
      format = {}
      format[1] = {type = 'integer', name = 'I'}
      format[2] = {type = 'array', name = 'A'}
      s = box.schema.space.create('T2', {format=format})
      i = s:create_index('ii')
      s:insert({1, {1,2,3}})
      box.execute('INSERT INTO t1 SELECT * FROM t2;')
      
      Should return:
      - error: 'Type mismatch: can not convert [1, 2, 3] to scalar'
      
      Follow-up #4189
    • sql: add ARRAY, MAP and ANY types to mem_apply_type() · de79b714
      Mergen Imeev authored
      Function mem_apply_type() implements implicit type
      conversion. As a rule, a tuple to be inserted into a space
      undergoes this conversion, which is invoked during execution
      of the OP_MakeRecord opcode (which in turn forms the tuple).
      This function was not adjusted to operate on the ARRAY, MAP
      and ANY field types, since they are poorly supported in the
      current SQL implementation. Hence, when a tuple to be
      inserted into a space with the mentioned field types reached
      this function, it resulted in an error. Note that we can't
      set the ARRAY or MAP types in SQL, but such a situation may
      appear during an UPDATE operation on a space created via the
      Lua interface. This problem is solved by extending the
      implicit type conversions with obvious casts: an array field
      can be cast to array, map to map and any to any.
      
      Closes #4189
    • Proper error handling for fio.mktree · 8ccfc691
      Maria authored
      The fio.mktree method used to create the given path
      unconditionally, without checking whether it was a directory
      or something else. This led to inappropriate error messages
      or even inconsistent behavior. Now the type of the given path
      is checked.
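      
      A sketch (the paths are illustrative; the exact error text
      may differ):
      
      ```lua
      fio = require('fio')
      fio.mktree('/tmp/a/b/c')  -- creates the whole chain
      -- Create a regular file and try to mktree through it:
      fio.open('/tmp/f', {'O_CREAT', 'O_RDWR'}, tonumber('644', 8)):close()
      fio.mktree('/tmp/f/sub')
      -- now fails with a clear error instead of behaving inconsistently
      ```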
      
      Closes #4439
  14. Sep 16, 2019
    • sql: allow to create view as <WITH> clause · 38bb4caa
      Roman Khabibov authored
      Allow views to use CTEs, which can appear in any (nested)
      select after <AS>. Before this patch, during view creation
      all referenced spaces were fetched by name from the SELECT
      and their reference counters were incremented to avoid
      dangling references. It occurred in update_view_references().
      Obviously, CTE tables aren't held in the space cache, ergo
      the error "space doesn't exist" was raised. Add a check
      whether a space from FROM is a CTE. If it is, don't increment
      its reference counter and don't raise the error.
      
      Closes #4149
    • sql: swap FK masks during altering of space · 33236ecc
      Nikita Pettik authored
      Swapping the old and new masks (holding fields involved in a
      foreign key relation) was forgotten during space alteration
      (the lists of objects representing FK metadata are swapped
      successfully). Since the mask is vital, and depending on its
      value different byte-code implementing an SQL query can be
      produced, this mistake resulted in an assertion fault in
      debug builds and a wrong constraint check in release builds.
      Let's fix this bug and swap the masks as well as the foreign
      key lists.
      
      Closes #4495
    • sql: remove ENGINE from list of reserved keywords · 07618dad
      Nikita Pettik authored
      ENGINE became a reserved keyword in 1013a744. There's no
      actual reason why ENGINE should be a reserved keyword. What
      is more, we are going to use this word as the name of some
      fields of the tables forming the informational schema. Hence,
      before it is too late (it is not documented yet), let's
      remove ENGINE from the list of reserved keywords and allow
      identifiers to be that word.
  15. Sep 13, 2019
    • relay: join new replicas off read view · 6332aca6
      Vladimir Davydov authored
      Historically, we join a new replica off the last checkpoint. As a
      result, we must always keep the last memtx snapshot and all vinyl data
      files corresponding to it. Actually, there's no need to use the last
      checkpoint for joining a replica. Instead we can use the current read
      view as both memtx and vinyl support it. This should speed up the
      process of joining a new replica, because we don't need to replay all
      xlogs written after the last checkpoint, only those that are accumulated
      while we are relaying the current read view. This should also allow us
      to avoid creating a snapshot file on bootstrap, because the only reason
      why we need it is allowing joining replicas. Besides, this is a step
      towards decoupling the vinyl metadata log from checkpointing in
      particular and from xlogs in general.
      
      Closes #1271
  16. Sep 12, 2019
    • sql: test suite for BOOLEAN · 7cf84a54
      Mergen Imeev authored
      This patch provides a test suite that allows us to make sure that
      the SQL BOOLEAN type works as intended.
      
      Part of #4228
    • replication: disallow bootstrap of read-only masters · 037bd58c
      Serge Petrenko authored
      In a configuration with several read-only and read-write instances, if
      replication_connect_quorum is not greater than the amount of read-only
      instances and replication_connect_timeout happens to be small enough
      for some read-only instances to form a quorum and exceed the
      timeout before any of the read-write instances start, all
      these read-only instances will choose a read-only bootstrap
      leader from among themselves.
      This 'leader' will successfully bootstrap itself, but will
      fail to register any of the other instances in the _cluster
      table, since it isn't writeable. As a result, some of the
      read-only instances will just die, unable to bootstrap from a
      read-only bootstrap leader, and when the read-write instances
      are finally up, they'll see a single read-only instance which
      managed to bootstrap itself and now gets a
      REPLICASET_UUID_MISMATCH error, since no read-write instance
      will choose it as bootstrap leader and will rather bootstrap
      from one of its read-write mates.
      
      The described situation is clearly not what the user hoped
      for, so throw an error when a read-only instance tries to
      initiate the bootstrap. The error gives the user a cue to
      increase replication_connect_timeout.
      
      Closes #4321
      
      @TarantoolBot document
      Title: replication: forbid to bootstrap read-only masters.
      
      It is no longer possible to bootstrap a read-only instance in
      an empty data directory as a master. You will see the
      following error when trying to do so:
      ```
      ER_BOOTSTRAP_READONLY: Trying to bootstrap a local read-only instance as master
      ```
      Now if you have a fresh instance, which has
      `read_only=true` in an initial `box.cfg` call, you need to set up
      replication from an instance which is either read-write, or has your
      local instance's uuid in its `_cluster` table.
      
      In case you have multiple read-only and read-write instances
      with replication set up, and you still see the aforementioned
      error message, it means that none of your read-write
      instances managed to start listening on their port before the
      read-only instances exceeded the
      `replication_connect_timeout`. In this case you should raise
      `replication_connect_timeout` to a greater value.
  17. Sep 10, 2019
    • test: move luajit-tap suite to luajit repo · 43575303
      Igor Munkin authored
      * All test chunks related to luajit were moved from the
      tarantool source tree to the luajit repo
      * Adjusted CMakeLists by creating a symlink to the luajit
      test directory to fix out-of-source tests
      
      Closes #4478
  18. Sep 09, 2019
    • lua_cjson: fix segfault on recursive table encoding · 664788a3
      Kirill Shcherbatov authored
      The json.encode() used to cause a segfault in the case of a
      recursive table:
        tbl = {}
        tbl[1] = tbl
        json.encode(tbl)
      
      The library doesn't test whether a given object on the Lua
      stack was parsed earlier, because it performs a lightweight
      depth-first traversal of the Lua stack. However, it must stop
      when encode_max_depth is reached (by design).
      
      Tarantool's lua_cjson implementation has a bug introduced
      while porting the original library: it didn't handle a corner
      case: entering a map correctly increased the current depth,
      while entering an array didn't. This patch adopts the
      author's approach to checking the encode_max_depth limit.
      Thanks to handling this constraint correctly, the segfault no
      longer occurs.
      
      Closes #4366