  1. Oct 30, 2019
    • Vladislav Shpilevoy's avatar
      access: update credentials without reconnect · b53bd593
      Vladislav Shpilevoy authored
      Credentials are a cache of a user's universal privileges. That
      cache can become outdated if the user's privileges were changed
      after the cache was created.
      
      The patch makes the user update all of its credentials caches
      with the new privileges, via a list of all creds.
      
      That solves a couple of real-life problems:
      
      - If a user managed to connect after box.cfg started listening
      on a port, but before access was granted, a reconnect was
      required;
      
      - Even if access was granted, a user may connect after box.cfg
      listen, but before access *is recovered* from the _priv space.
      That was impossible to fix without a reconnect, and this problem
      affected replication.
      
      Closes #2763
      Part of #4535
      Part of #4536
      
      @TarantoolBot document
      Title: User privileges update affects existing sessions and objects
      Previously if user privileges were updated (via
      `box.schema.user.grant/revoke`), it was not reflected in already
      existing sessions and objects like functions. Now it is.
      
      For example:
      ```
              box.cfg{listen = 3313}
              box.schema.user.create('test_user', {password = '1'})
              function test1() return 'success' end
      
              c = require('net.box').connect(box.cfg.listen, {
                      user = 'test_user', password = '1'
              })
              -- Error, no access for this connection.
              c:call('test1')
      
              box.schema.user.grant('test_user', 'execute', 'universe')
              -- Now works, even though access was granted after
              -- connection.
              c:call('test1')
      ```
      
      A similar thing now happens with `box.session.su` and functions
      created via `box.schema.func.create` with the `setuid` flag.
      
      In other words, a user privileges update is now reflected
      everywhere immediately.
      
      (cherry picked from commit 06dbcec597f14fae6b3a7fa2361f2ac513099662)
      (cherry picked from commit 2b599c0efa9ae265fb7464af6abae3f6a192e30e)
      b53bd593
  2. Oct 21, 2019
    • Ilya Kosarev's avatar
      recovery: build secondary index in hot standby mode · a67aa14c
      Ilya Kosarev authored
      End recovery (which means building secondary indexes) right after
      the last known log file has been read. This allows a fast switch
      to the hot standby instance without any delay for secondary index
      building. Because of the engine_end_recovery carryover,
      xdir_collect_inprogress, previously called from it, is now moved
      to the garbage collector.
      
      Closes #4135
      
      (cherry picked from commit 5aa243de)
      a67aa14c
  3. Oct 17, 2019
    • Vladislav Shpilevoy's avatar
      wal: drop rows_per_wal option · b2b6eb54
      Vladislav Shpilevoy authored
      The rows_per_wal option was deprecated because it can be covered
      by wal_max_size. In order not to complicate the WAL code with
      that option's support, this commit drops it completely.
      
      In some tests the option was used to create several small xlog
      files. Now the same is done via wal_max_size. Where it was
      needed, the number of rows per WAL is estimated as
      wal_max_size / 50, because the struct xrow_header size is ~50
      bytes, not counting padding and the body.
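      
      For example, a test that previously relied on rows_per_wal = 50
      to produce small xlog files can size the WAL by bytes instead (a
      minimal sketch, using the ~50-byte row estimate above):
      
          box.cfg{wal_max_size = 50 * 50} -- roughly 50 rows per xlog file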
      
      Note, the file box/configuration.result was deleted here, because
      it is a stray result file, and it mentioned the rows_per_wal
      option. Its test was dropped much earlier in
      fdc3d1dd.
      
      Closes #3762
      
      (cherry picked from commit c6012920)
      b2b6eb54
  4. Sep 25, 2019
    • Vladislav Shpilevoy's avatar
      app: raise an error on too nested tables serialization · d8fe9316
      Vladislav Shpilevoy authored
      Closes #4434
      Follow-up #4366
      
      @TarantoolBot document
      Title: json/msgpack.cfg.encode_deep_as_nil option
      
      Tarantool has several so-called serializers to convert data
      between Lua and other formats: YAML, JSON, msgpack.
      
      YAML is a crazy serializer without depth restrictions. But for
      JSON, msgpack, and msgpackffi a user could set the
      encode_max_depth option. That option led to cropping a table
      when it had too many nested levels. Sometimes such behaviour is
      undesirable.
      
      Now an error is raised instead of data corruption:
      
          t = nil
          for i = 1, 100 do t = {t} end
          msgpack.encode(t) -- Here an exception is thrown.
      
      To disable it and return the old behaviour back here is a new
      option:
      
          <serializer>.cfg({encode_deep_as_nil = true})
      
      The encode_deep_as_nil option works for the JSON, msgpack, and
      msgpackffi modules, and is false by default. It means that if
      some existing code relied on cropping, even intentionally, it
      will now get the exception.
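      
      A minimal sketch of restoring the old behaviour for the msgpack
      module (the depth of 100 comes from the example above):
      
          msgpack = require('msgpack')
          msgpack.cfg({encode_deep_as_nil = true})
          t = nil
          for i = 1, 100 do t = {t} end
          msgpack.encode(t) -- no error: too deep levels are encoded as nil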
      
      (cherry picked from commit d7a8942a)
      d8fe9316
    • Vladislav Shpilevoy's avatar
      tuple: use global msgpack serializer in Lua tuple · 503dcd14
      Vladislav Shpilevoy authored
      Tuple is a C library exposed to Lua. In Lua, the luaL_serializer
      structure is used to translate Lua objects into tuples and back.
      
      In Tarantool we have several global serializers, one of which is
      for msgpack. Tuples store data in msgpack and in theory should
      have used that global msgpack serializer. But in fact the tuple
      module had its own private serializer because of tuple encoding
      specifics, such as never encoding sparse arrays as maps.
      
      This patch makes the tuple Lua module always use the global
      msgpack serializer. But how does tuple handle sparse arrays now?
      In fact, the tuple module still has its own serializer, but it is
      updated each time the msgpack serializer changes.
      
      Part of #4434
      
      (cherry picked from commit 676369b1)
      503dcd14
  5. Sep 12, 2019
    • Serge Petrenko's avatar
      replication: disallow bootstrap of read-only masters · 60354954
      Serge Petrenko authored
      In a configuration with several read-only and read-write instances, if
      replication_connect_quorum is not greater than the number of read-only
      instances and replication_connect_timeout happens to be small enough
      for some read-only instances to form a quorum and exceed the timeout
      before any of the read-write instances start, all these read-only
      instances will choose themselves a read-only bootstrap leader.
      This 'leader' will successfully bootstrap itself, but will fail to
      register any of the other instances in the _cluster table, since it
      isn't writable. As a result, some of the read-only instances will just
      die, unable to bootstrap from a read-only bootstrap leader, and when
      the read-write instances are finally up, they'll see a single
      read-only instance which managed to bootstrap itself and now gets a
      REPLICASET_UUID_MISMATCH error, since no read-write instance will
      choose it as the bootstrap leader and will rather bootstrap from one
      of its read-write mates.
      
      The described situation is clearly not what the user hoped for, so
      throw an error when a read-only instance tries to initiate the
      bootstrap. The error gives the user a cue to increase
      replication_connect_timeout.
      
      Closes #4321
      
      @TarantoolBot document
      Title: replication: forbid bootstrapping read-only masters.
      
      It is no longer possible to bootstrap a read-only instance in an empty
      data directory as a master. You will see the following error when
      trying to do so:
      ```
      ER_BOOTSTRAP_READONLY: Trying to bootstrap a local read-only instance as master
      ```
      Now if you have a fresh instance which has
      `read_only=true` in its initial `box.cfg` call, you need to set up
      replication from an instance which is either read-write or has your
      local instance's uuid in its `_cluster` table.
      
      In case you have multiple read-only and read-write instances with
      replication set up, and you still see the aforementioned error message,
      it means that none of your read-write instances managed to start
      listening on their port before the read-only instances exceeded the
      `replication_connect_timeout`. In this case you should raise
      `replication_connect_timeout` to a greater value.
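      
      A minimal sketch of such a setup (URIs and credentials are
      illustrative): the read-only instance points its replication at a
      read-write peer and uses a generous connect timeout.
      ```
      box.cfg{
          listen = 3302,
          read_only = true,
          replication = {'replicator:secret@rw-host:3301'},
          replication_connect_timeout = 30,
      }
      ```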
      
      (cherry picked from commit 037bd58c)
      60354954
  6. Aug 29, 2019
    • Alexander V. Tikhonov's avatar
      Set fragile option to flaky tests · 7fb559de
      Alexander V. Tikhonov authored
      Added "fragile" option to the flaky tests that are
      not intended to be run in parallel with others.
      Option set at the suite.ini file at the appropriate
      suites with comments including the issue that stores
      the fail.
      
      (cherry picked from commit 165f8ee6)
      7fb559de
  7. Aug 28, 2019
  8. Jul 31, 2019
    • Vladimir Davydov's avatar
      txn: fix rollback in case DDL and DML are used in the same transaction · 35a48688
      Vladimir Davydov authored
      A txn_stmt keeps a reference to the space it modifies. Memtx uses this
      space reference to revert the statement on error or voluntary rollback,
      so the space must stay valid throughout the whole transaction.
      
      The problem is that a DML statement may be followed by a DDL statement
      that modifies the target space in the same transaction. If we try to
      roll it back before running the rollback triggers installed by the DDL
      statement, it will access an invalid space object (e.g. one missing an
      index), which will result in a crash.
      
      To fix this problem, let's run the triggers installed by a statement
      right after rolling back that statement.
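      
      A minimal sketch of the problematic pattern (names are illustrative):
      
          s = box.schema.space.create('test')
          s:create_index('pk')
          s:create_index('sk', {parts = {1, 'unsigned'}})
          box.begin()
          s:insert{1}       -- DML statement, reverted on rollback
          s.index.sk:drop() -- DDL leaving the space without the 'sk' index
          box.rollback()    -- used to touch the invalid space object and crash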
      
      Closes #4368
      35a48688
    • Alexander Turenko's avatar
      net.box: fix schema fetching from 1.10/2.1 servers · aa0964ae
      Alexander Turenko authored
      After 2.2.0-390-ga7c855e5b ("net.box: fetch '_vcollation' sysview into
      the module") net.box fetches the _vcollation view unconditionally, while
      the view was added in 2.2.0-389-g3e3ef182f and, say, tarantool-1.10 and
      tarantool-2.1 do not have it. This leads to a runtime error "Space '277'
      does not exist" on a newer client that connects to an older server.
      
      Now the view is fetched conditionally, depending on the server version:
      if it is above 2.2.1, then net.box will fetch it. Note: at the moment
      there is no release with a number above 2.2.1.
      
      When the _vcollation view is available, a collation in an index part
      will be shown by its name (in the 'collation' field); otherwise it will
      be shown by its ID (in the 'collation_id' field). For example:
      
      Connect to tarantool 1.10:
      
       | tarantool> connection = require('net.box').connect('localhost:3301')
       | ---
       | ...
       |
       | tarantool> connection.space.s.index.sk.parts
       | ---
       | - - type: string
       |     is_nullable: false
       |     collation_id: 2
       |     fieldno: 2
       | ...
      
      Connect to tarantool 2.2.1 (when it is released):
      
       | tarantool> connection = require('net.box').connect('localhost:3301')
       | ---
       | ...
       |
       | tarantool> connection.space.s.index.sk.parts
       | ---
       | - - type: string
       |     is_nullable: false
       |     collation: unicode_ci
       |     fieldno: 2
       | ...
      
      Fixes #4307.
      aa0964ae
  9. Jul 30, 2019
    • Vladimir Davydov's avatar
      txn: undo commit/rollback triggers when reverting to savepoint · 7ad71695
      Vladimir Davydov authored
      When reverting to a savepoint inside a DDL transaction, apart from
      undoing changes done by the DDL statements to the system spaces, we also
      have to
      
       - Run rollback triggers installed after the savepoint was set, because
         otherwise changes done to the schema by DDL won't be undone.
       - Remove commit triggers installed after the savepoint, because they
         are not relevant anymore, apparently.
      
      To achieve that let's append DDL triggers right to txn statements.
      This allows us to easily discard commit triggers and run rollback
      triggers when a statement is rolled back.
      
      Note, txn commit/rollback triggers are not removed, because they are
      still used by applier and Lua box.on_commit/on_rollback functions.
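      
      A minimal sketch of the scenario (space names are illustrative):
      
          box.begin()
          box.schema.space.create('a')
          sv = box.savepoint()
          box.schema.space.create('b')
          box.rollback_to_savepoint(sv) -- must also undo 'b' in the schema cache
          box.commit()                  -- commits only 'a'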
      
      Closes #4364
      Closes #4365
      7ad71695
  10. Jul 26, 2019
    • Kirill Shcherbatov's avatar
      box: introduce functional indexes in memtx · 4177fe17
      Kirill Shcherbatov authored
      Closes #1260
      
      @TarantoolBot document
      Title: introduce func indexes in memtx
      Now you can define a func_index using a registered persistent
      function.
      
      There are restrictions on the function and key definition of
      a functional index:
       - the referenced function must be persistent, deterministic,
         and must return a scalar type or an array;
       - you must define key parts which describe the function return value;
       - the function must return data whose types match the
         defined key parts;
       - the function may return multiple keys; this would be a multikey
         functional index; each key entry is indexed separately;
       - for multikey functional indexes, the key definition should
         start with part 1 and cover all returned key parts;
       - key parts can't use JSON paths;
       - the function used for the functional index cannot access tuple
         fields by name, only by index.
      
      A functional index can't be primary.
      It is not possible to change the used function while a functional
      index is defined on it. The index must be dropped first.
      
      Each key returned by a functional index function (even when it is a
      single scalar) must be returned as a table, i.e. {1}, and must
      match the key definition.
      
      To define a multikey functional index, create a function with
      opts = {is_multikey = true} and return a table of keys.
      
      Example:
      s = box.schema.space.create('withdata')
      s:format({{name = 'name', type = 'string'},
                {name = 'address', type = 'string'}})
      pk = s:create_index('name', {parts = {1, 'string'}})
      lua_code = [[function(tuple)
                      local address = string.split(tuple[2])
                      local ret = {}
                      for _, v in pairs(address) do
                          table.insert(ret, {utf8.upper(v)})
                      end
                      return ret
                   end]]
      box.schema.func.create('address', {body = lua_code,
                             is_deterministic = true, is_sandboxed = true,
                             opts = {is_multikey = true}})
      idx = s:create_index('addr', {unique = false,
                           func = 'address',
                           parts = {{1, 'string', collation = 'unicode_ci'}}})
      s:insert({"James", "SIS Building Lambeth London UK"})
      s:insert({"Sherlock", "221B Baker St Marylebone London NW1 6XE UK"})
      idx:select('Uk')
      ---
      - - ['James', 'SIS Building Lambeth London UK']
        - ['Sherlock', '221B Baker St Marylebone London NW1 6XE UK']
      ...
      4177fe17
    • Kirill Shcherbatov's avatar
      box: introduce opts.is_multikey function option · c014e8f2
      Kirill Shcherbatov authored
      Needed for #1260
      
      @TarantoolBot document
      Title: A new option is_multikey for function definition
      
      A new option is_multikey allows specifying whether the new function
      returns multiple values packed in a table object. This is a
      native way to define a multikey func_index.
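      
      A minimal sketch (the function name and body are illustrative):
      
          box.schema.func.create('keys_from_tuple', {
              body = [[function(tuple) return {{tuple[2]}, {tuple[3]}} end]],
              is_deterministic = true,
              is_sandboxed = true,
              opts = {is_multikey = true},
          })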
      c014e8f2
  11. Jul 24, 2019
    • Cyrill Gorcunov's avatar
      box/lua/console: Add support for lua output format · 42725501
      Cyrill Gorcunov authored
      @TarantoolBot document
      Title: document \set output lua
      
      Historically we use the YAML format to print results of operations to
      a console. Moreover, our test engine targets YAML as the primary format
      to compare results of test runs. Still, we need an ability to print
      results in a different fashion; in particular, one may need to use
      the console in a REPL way so that the results can be copied and
      pasted back for further processing.
      
      For this sake we introduce the "output" command, which allows
      specifying exactly which output format to use. Currently only the
      yaml and lua formats are supported.
      
      To switch to the lua output format, type
      
       | tarantool> \set output lua
      
      in the console. The lua mode supports line-oriented output (default) or
      block mode.
      
      For example
      
       | tarantool> a={1,2,3}
       | tarantool> a
       | ---
       | - - 1
       |   - 2
       |   - 3
       | ...
       | tarantool> \set output lua
       | true
       | tarantool> a
       | {1, 2, 3}
       | tarantool> \set output lua,block
       | true
       | tarantool> a
       | {
       |   1,
       |   2,
       |   3
       | }
      
      By default the YAML output format is kept for now, simply not to
      break the test engine. The output format is bound to a session, thus
      every new session should set up its own format if needed.
      
      Since serializing lua data is not a trivial task, we use the
      third-party "serpent" module to convert data.
      
      Part-of #3834
      42725501
    • Maria Khaydich's avatar
      Initial box.cfg call logs changes now · 0c772aae
      Maria Khaydich authored
      In contrast to subsequent calls, the initial call to box.cfg didn't log
      configuration changes relative to the default state. As a result, by
      looking at a log file we couldn't tell which configuration was being
      used.
      
      Closes #4236
      0c772aae
  12. Jul 19, 2019
    • Serge Petrenko's avatar
      test: fix another net.box failure · 1a2addb8
      Serge Petrenko authored
      This last error
      ```
      [035]  ...
      [035]  disconnected_cnt
      [035]  ---
      [035] -- 1
      [035] +- 2
      [035]  ...
      [035]  conn:close()
      [035]  ---
      [035]  ...
      [035]  disconnected_cnt
      [035]  ---
      [035] -- 2
      [035] +- 3
      [035]  ...
      [035]  test_run:cmd('stop server connecter')
      [035]  ---
      [035]
      ```
      happens because net.box is able to connect to tarantool before it has
      finished bootstrap. When connecting, net.box tries to fetch the schema
      by executing a couple of selects, but fails to pass the access check
      since the grants aren't applied yet. This is described in detail in
      https://github.com/tarantool/tarantool/issues/2763#issuecomment-499046998
      So, alter the test so that it tolerates multiple connection failures.
      
      Closes #4273
      1a2addb8
  13. Jul 18, 2019
    • Mergen Imeev's avatar
      box: increase connection timeout in "net.box.test.lua" · 79f876ad
      Mergen Imeev authored
      The "box/net.box.test.lua" test contains a check that the error
      received contains a 'timed out'. But in cases when testing was
      conducted on a slow computer or in the case of a very large load,
      it is possible that the connection time-out will be reached
      earlier than the mentioned error. In this case, the error "Invalid
      argument" will be returned. To prevent this from happening,
      this patch will increase the connection timeout.
      
      Closes #4341
      79f876ad
  14. Jul 15, 2019
    • Vladimir Davydov's avatar
      ddl: allow to execute non-yielding DDL statements in transactions · f266559b
      Vladimir Davydov authored
      The patch is pretty straightforward - all it does is move the checks for
      single-statement transactions from alter.cc to txn_enable_yield_for_ddl,
      so that now any DDL request may be executed in a transaction unless it
      builds an index or checks the format of a non-empty space (those are the
      only two operations that may yield).
      
      There are two things that must be noted explicitly. The first is the
      removal of an assertion from priv_grant. The assertion ensured that a
      revoked privilege was in the cache. The problem is that the cache is
      built from the contents of the space, see user_reload_privs. On
      rollback, we first revert the content of the space to the original
      state, and only then start invoking rollback triggers, which call
      priv_grant. As a result, we will revert the cache to the original state
      right after the first trigger is invoked and the following triggers
      will have no effect on it. Thus we have to remove this assertion.
      
      The second subtlety lies in vinyl_index_commit_modify. Before this
      commit we assumed that if the statement lsn is <= vy_lsm::commit_lsn,
      then it must be local recovery from WAL. Now it's not true, because
      there may be several operations for the same index in a transaction,
      and they all will receive the same signature in the on_commit trigger.
      We could, of course, try to assign different signatures to them, but
      that would look cumbersome - better to simply allow
      lsn <= vy_lsm::commit_lsn after local recovery; there's actually
      nothing wrong with that.
      
      Closes #4083
      
      @TarantoolBot document
      Title: Transactional DDL
      
      Now it's possible to group non-yielding DDL statements into
      transactions, e.g.
      
      ```Lua
      box.begin()
      box.schema.space.create('my_space')
      box.space.my_space:create_index('primary')
      box.commit() -- or box.rollback()
      ```
      
      Most DDL statements don't yield and hence can be run from transactions.
      There are just two exceptions: creation of a new index and changing the
      format of a non-empty space. Those are long operations that may yield
      so as not to block the event loop for too long. Those statements can't
      be executed from transactions (to be more exact, such a statement must
      go first in any transaction).
      
      Also, just like in the case of DML transactions in memtx, it's forbidden
      to explicitly yield in a DDL transaction by calling fiber.sleep or any
      other yielding function. If this happens, the transaction will be
      aborted and an attempt to commit it will fail.
      f266559b
    • Vladimir Davydov's avatar
      memtx: fix txn_on_yield for DDL transactions · 0ae5a2d7
      Vladimir Davydov authored
      The memtx engine doesn't allow yielding inside a transaction. To achieve
      that, it installs a fiber->on_yield trigger that aborts the current
      transaction (rolls it back, but leaves it be so that the commit fails).
      
      There's an exception though - DDL statements are allowed to yield.
      This is required so as not to block the event loop while a new index
      is built or a space format is checked. Currently, we handle this
      exception by checking the space id and omitting installation of the
      trigger for system spaces. This isn't entirely correct, because we
      may yield after a DDL statement is complete, in which case the
      transaction won't be aborted though it should be:
      
        box.begin()
        box.space.my_space:create_index('my_index')
        fiber.sleep(0) -- doesn't abort the transaction!
      
      This patch fixes the problem by making the memtx engine install the
      on_yield trigger unconditionally, for all kinds of transactions, and
      instead explicitly disabling the trigger for yielding DDL operations.
      
      In order not to spread the yield-in-transaction logic between memtx
      and txn code, let's move all fiber_on_yield related stuff to txn,
      export a method to disable yields, and use the method in memtx.
      0ae5a2d7
  15. Jul 12, 2019
    • Kirill Shcherbatov's avatar
      box: introduce Lua persistent functions · 200a492a
      Kirill Shcherbatov authored
      Closes #4182
      Closes #4219
      Needed for #1260
      
      @TarantoolBot document
      Title: Persistent Lua functions
      
      Now Tarantool supports 'persistent' Lua functions.
      Such functions are stored in snapshot and are available after
      restart.
      To create a persistent Lua function, specify a function body
      in box.schema.func.create call:
      e.g. body = "function(a, b) return a + b end"
      
      A persistent Lua function may be 'sandboxed'. A 'sandboxed'
      function is executed in an isolated environment:
        a. only a limited set of Lua functions and modules is available:
          -assert -error -pairs -ipairs -next -pcall -xpcall -type
          -print -select -string -tonumber -tostring -unpack -math -utf8;
        b. global variables are forbidden.
      
      Finally, the new 'is_deterministic' flag allows marking a
      registered function as deterministic, i.e. a function that
      can produce only one result for a given list of parameters.
      
      The new box.schema.func.create interface is:
      box.schema.func.create('funcname', <setuid = true|FALSE>,
      	<if_not_exists = true|FALSE>, <language = LUA|c>,
      	<body = string ('')>, <is_deterministic = true|FALSE>,
      	<is_sandboxed = true|FALSE>, <comment = string ('')>)
      
      This schema change also reserves names for SQL builtin
      functions:
          TRIM, TYPEOF, PRINTF, UNICODE, CHAR, HEX, VERSION,
          QUOTE, REPLACE, SUBSTR, GROUP_CONCAT, JULIANDAY, DATE,
          TIME, DATETIME, STRFTIME, CURRENT_TIME, CURRENT_TIMESTAMP,
          CURRENT_DATE, LENGTH, POSITION, ROUND, UPPER, LOWER,
          IFNULL, RANDOM, CEIL, CEILING, CHARACTER_LENGTH,
          CHAR_LENGTH, FLOOR, MOD, OCTET_LENGTH, ROW_COUNT, COUNT,
          LIKE, ABS, EXP, LN, POWER, SQRT, SUM, TOTAL, AVG,
          RANDOMBLOB, NULLIF, ZEROBLOB, MIN, MAX, COALESCE, EVERY,
          EXISTS, EXTRACT, SOME, GREATER, LESSER, SOUNDEX,
          LIKELIHOOD, LIKELY, UNLIKELY,
          _sql_stat_get, _sql_stat_push, _sql_stat_init, LUA
      
      A new persistent Lua function LUA is introduced to evaluate
      Lua strings from SQL in the future.
      
      These names cannot be used for user-defined functions.
      
      Example:
      lua_code = [[function(a, b) return a + b end]]
      box.schema.func.create('summarize', {body = lua_code,
      		is_deterministic = true, is_sandboxed = true})
      box.func.summarize
      ---
      - aggregate: none
        returns: any
        exports:
          lua: true
          sql: false
        id: 60
        is_sandboxed: true
        setuid: false
        is_deterministic: true
        body: function(a, b) return a + b end
        name: summarize
        language: LUA
      ...
      box.func.summarize:call({1, 3})
      ---
      - 4
      ...
      
      @kostja: fix style, remove unnecessary module dependencies,
      add comments
      200a492a
    • Mergen Imeev's avatar
      box: do not check state in case of reconnect · 77051a11
      Mergen Imeev authored
      The test box/net.box.test.lua checks the state of the connection in
      case of an error. It should be 'error_reconnect'. But when testing is
      performed on a slow computer or under a very heavy load, it is
      possible that the connection status changes from the 'error_reconnect'
      state to another one. This led to a failure of the test. Since this
      check is not the main purpose of the test, it is better to simply
      delete the check.
      
      Closes #4335
      77051a11
  16. Jul 11, 2019
    • Cyrill Gorcunov's avatar
      box/memtx: Skip tuple memory from coredump by default · 9d077bb4
      Cyrill Gorcunov authored
      Quoting feature request
      
       | Tarantool is Database and Application Server in one box.
       |
       | Appserver development process contains a lot of
       | lua/luajit-ffi/lua-c-extension code.
       |
       | Coredump is very useful in case when some part of appserver crashed.
       | If the reason is input - data from database is not necessary. If the reason
       | is output - data from database is already in snap/xlog files.
       |
       | Therefore consider core dumps without data enabled by default.
      
      For info: the strip_core feature has been introduced in
      549140b3
      
      Closes #4337
      
      @TarantoolBot document
      Title: Document box.cfg.strip_core
      
      When Tarantool runs under a heavy load, the memory allocated
      for tuples may be huge in size, and to keep this memory
      out of the core dump file the `box.cfg.strip_core`
      parameter should be set to `true`.
      
      The default value is `true`.
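      
      A minimal sketch: tuple memory is already stripped from core dumps
      by default; pass the option explicitly only if full dumps are needed.
      
          box.cfg{strip_core = false}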
      9d077bb4
    • avtikhon's avatar
      test: box/net.box test flaky fails on grep_log (#4330) · 9bde3406
      avtikhon authored
      The box/net.box test sporadically failed on grepping the log file
      for the 'ER_NO_SUCH_PROC' pattern on heavily loaded hosts. It was
      found that the issue can be resolved by changing grep_log to the
      wait_log function, which is able to wait for the needed message
      for some time.
      
      [008] Test failed! Result content mismatch:
      [008] --- box/net.box.result	Tue Jul  9 17:00:24 2019
      [008] +++ box/net.box.reject	Tue Jul  9 17:03:34 2019
      [008] @@ -1376,7 +1376,7 @@
      [008]  ...
      [008]  test_run:grep_log("default", "ER_NO_SUCH_PROC")
      [008]  ---
      [008] -- ER_NO_SUCH_PROC
      [008] +- null
      [008]  ...
      [008]  box.schema.user.revoke('guest', 'execute', 'universe')
      [008]  ---
      
      Closes #4329
      9bde3406
    • Denis Ignatenko's avatar
      Add distribution info to box.info · 366466eb
      Denis Ignatenko authored
      There is a compile-time option PACKAGE in CMake to define the
      current build distribution info. For the community edition
      it is Tarantool by default. For enterprise it is
      Tarantool Enterprise.
      
      There was no option to check the distribution name at runtime.
      This change adds box.info.package output for CE and TE.
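      
      A minimal sketch (the value depends on the build):
      
       | tarantool> box.info.package
       | ---
       | - Tarantool
       | ...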
      366466eb
  17. Jul 09, 2019
    • Serge Petrenko's avatar
      test: fix net.box occasional failure. Again · eb0cc50c
      Serge Petrenko authored
      The test regarding logging corrupted rows failed occasionally with
      ```
      [016]  test_run:grep_log('default', 'Got a corrupted row.*')
      [016]  ---
      [016] -- 'Got a corrupted row:'
      [016] +- null
      [016]  ...
      ```
      The logs then had
      ```
      [010] 2019-07-06 19:36:16.857 [13046] iproto sio.c:261 !> SystemError writev(1),
      called on fd 23, aka unix/:(socket), peer of unix/:(socket): Broken pipe
      ```
      instead of the expected message.
      
      This happened because we closed the socket before tarantool could write
      a greeting to the client; the connection was then closed, and execution
      never got to processing the malformed request and thus never printed
      the desired message to the log.
      
      To fix this, actually read the greeting prior to writing new data and
      closing the socket.
      
      Follow-up #4273
      eb0cc50c
    • Vladimir Davydov's avatar
      txn: run on_rollback triggers on txn_abort · 6ac597db
      Vladimir Davydov authored
      When a memtx transaction is aborted on yield, it isn't enough to
      roll back individual statements - we must also run on_rollback triggers,
      otherwise changes done to the schema by an aborted DDL transaction will
      be visible to other fibers until an attempt to commit it is made.
      6ac597db
    • Alexander V. Tikhonov's avatar
      test: net.box: fix case re invalid msgpack warning · 0f9fdd72
      Alexander V. Tikhonov authored
      The test case has two problems that appear from time to time and lead to
      flaky fails. Those failures look as shown below in a test-run output.
      
       | Test failed! Result content mismatch:
       | --- box/net.box.result	Mon Jun 24 17:23:49 2019
       | +++ box/net.box.reject	Mon Jun 24 17:51:52 2019
       | @@ -1404,7 +1404,7 @@
       |  ...
       |  test_run:grep_log('default', 'ER_INVALID_MSGPACK.*')
       | ---
       | -- 'ER_INVALID_MSGPACK: Invalid MsgPack - packet body'
       | +- 'ER_INVALID_MSGPACK: Invalid MsgPack - packet length'
       | ...
       | -- gh-983 selecting a lot of data crashes the server or hangs the
       | -- connection
      
      The 'ER_INVALID_MSGPACK.*' regexp should match the 'ER_INVALID_MSGPACK:
      Invalid MsgPack - packet body' log message, but if it is not in the log
      file at the time of the grep_log() call (just not flushed to the file
      yet), a message produced by another test case can be matched
      ('ER_INVALID_MSGPACK: Invalid MsgPack - packet length'). The fix here is
      to match the entire message and check for the message periodically
      during several seconds (use wait_log() instead of grep_log()).
      
      Another problem is the race between writing a response to an iproto
      socket on the server side and closing the socket on the client end. If
      tarantool is unable to write a response, it does not produce the warning
      re invalid msgpack, but shows a 'broken pipe' message instead. We need
      to first grep for the message in the logs and only then close the socket
      on the client. A similar problem (with another test case) is described
      in [1].
      
      [1]: https://github.com/tarantool/tarantool/issues/4273#issuecomment-508939695
      
      Closes: #4311
      0f9fdd72
  18. Jul 05, 2019
    • Vladislav Shpilevoy's avatar
      test: redo some swim tests using error injections · a0d6ac29
      Vladislav Shpilevoy authored
      There were tests relying on certain content of SWIM messages.
      After the next patches these conditions won't work without
      explicit intervention via error injections.
      
      The patchset moves these tests to separate release-disabled
      files.
      
      Part of #4253
      a0d6ac29
    • Serge Petrenko's avatar
      lua/trigger: cleanup lua stack after trigger run · febacc4b
      Serge Petrenko authored
      This patch adds a stack cleanup after a trigger is run and its return
      values, if any, have been read.
      
      The problem was found in a case when an on_schema_init trigger set an
      on_replace trigger on a space, and the latter trigger ran during
      recovery. This led to Lua stack overflows for the aforementioned
      reasons.
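      
      A minimal sketch of the failing scenario (the trigger body is
      illustrative): during recovery the trigger fires for every replayed
      row, and before the fix each run left values on the Lua stack.
      
          box.ctl.on_schema_init(function()
              box.space._space:on_replace(function(old, new)
                  -- react to every space definition replayed during recovery
              end)
          end)
          box.cfg{} -- recovery fires the trigger many times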
      
      Closes #4275
      febacc4b
    • Vladimir Davydov's avatar
      Replace schema lock with fine-grained locking · e5c4ce75
      Vladimir Davydov authored
      Now, as we don't need to take the schema lock for checkpointing, it is
      only used to synchronize concurrent space modifications (drop, truncate,
      alter). Actually, a global lock is way too heavy a means to achieve this
      goal, because we only care about forbidding concurrent modifications of
      the same space, while concurrent modifications of different spaces
      should work just fine. So this patch replaces the global schema lock
      with per-space locking.
      
      A space lock is held while alter_space_do() is in progress so as to make
      sure that while AlterSpaceOp::prepare() is performing a potentially
      yielding operation, such as building a new index, the space struct
      doesn't get freed from under our feet. Note, the lock is released right
      after index build is complete, before the transaction is committed to
      WAL, so if the transaction is non-yielding it can modify the space again
      in the next statement (this is impossible now, but will be done in the
      scope of the transactional DDL feature).
      
      If alter_space_do() sees that the space is already locked, it bails out
      and throws an error. This should be fine, because long-lasting
      operations involving schema changes, such as building an index, are
      rare and only performed under the supervision of the user, so throwing
      an error rather than waiting seems adequate.
      
      Removal of the schema lock allows us to remove latch_steal() helper and
      on_begin_stmt txn trigger altogether, as they were introduced solely to
      support locking.
      
      This is a prerequisite for transactional DDL, because it's unclear how
      to preserve the global schema lock while allowing to combine several DDL
      statements in the same transaction.
      e5c4ce75
  19. Jul 04, 2019
    • Alexander V. Tikhonov's avatar
      Enable GitLab CI testing · ce623a23
      Alexander V. Tikhonov authored
      Implemented a GitLab CI testing process in addition to the existing
      Travis CI. The new testing process is added to run tests faster. It
      requires controlling the load on machines to avoid flaky failures on
      timeouts. GitLab CI allows us to run testing on our own machines.
      
      Created 2 stages for testing and deploying packages.
      
      The testing stage contains the following jobs that are run for all
      branches:
      
      * Debian 9 (Stretch): release/debug gcc.
      * Debian 10 (Buster): release clang8 + lto.
      * OSX 14 (Mojave): release.
      * FreeBSD 12: release gcc.
      
      And the following jobs that are run only on long-term branches (release
      branches: for now these are 1.10, 2.1 and master):
      
      * OSX 13 (Sierra): release clang.
      * OSX 14 (Mojave): release clang + lto.
      
      The deployment stage contains the same jobs as we have in Travis CI.
      However, they just build tarballs and packages and don't push them to
      S3 and packagecloud.
      
      In order to run full testing on a short-term branch, one can name it
      with a '-full-ci' suffix.
      
      Additional manual work is needed when dependencies are changed in the
      .travis.mk file ('deps_debian' or 'deps_buster_clang_8' goals):
      
       | make GITLAB_USER=foo -f .gitlab.mk docker_bootstrap
      
      This command pushes docker images into GitLab Registry and then they are
      used in testing. Pre-built images speed up testing.
      
      Fixes #4156
      ce623a23
    • Vladimir Davydov's avatar
      Replace ERRINJ_SNAP_WRITE_ROW_TIMEOUT with ERRINJ_SNAP_WRITE_DELAY · 3d5da41c
      Vladimir Davydov authored
      Timeout injections are unstable and difficult to use. Injecting a delay
      is much more convenient.
      3d5da41c
    • Vladimir Davydov's avatar
      ddl: restore sequence value if drop is rolled back · f4306238
      Vladimir Davydov authored
      A sequence isn't supposed to roll back to the old value if the
      transaction it was used in is aborted for some reason. However,
      if a sequence is dropped, we do want to restore the original
      value on rollback so that we don't lose it on an unsuccessful
      attempt to drop the sequence.
      f4306238
    • Vladimir Davydov's avatar
      ddl: fix _space_sequence rollback · 644c20b2
      Vladimir Davydov authored
      _space_sequence changes are not rolled back properly. Fix it keeping in
      mind that our ultimate goal is to implement transactional DDL, which
      implies that all changes to the schema should be done synchronously,
      i.e. on_replace, not on_commit.
      644c20b2
    • Vladimir Davydov's avatar
      ddl: synchronize sequence cache with actual data state · 4be73c92
      Vladimir Davydov authored
      To implement transactional DDL, we must make sure that in-memory schema
      is updated synchronously with system space updates, i.e. on_replace, not
      on_commit.
      
      Note, to do this in case of the sequence cache, we have to rework the
      way sequences are exported to Lua - make on_alter_sequence similar to
      how on_alter_space and on_alter_func triggers are implemented.
      4be73c92
    • Vladimir Davydov's avatar
      ddl: synchronize func cache with actual data state · 01972ca1
      Vladimir Davydov authored
      To implement transactional DDL, we must make sure that in-memory schema
      is updated synchronously with system space updates, i.e. on_replace, not
      on_commit.
      01972ca1
    • Serge Petrenko's avatar
      test: fix box/on_shutdown flakiness · 5046069b
      Serge Petrenko authored
      Replace prints that indicate on_shutdown trigger execution with
      log.warn, which is more reliable. This eliminates occasional test
      failures. Also instead of waiting for the server to start and executing
      grep_log, wait for the desired log entries to appear with wait_log.
      
      Closes #4134
      5046069b
  20. Jul 03, 2019
    • Kirill Shcherbatov's avatar
      box: introduce VARBINARY field type · 59de57d2
      Kirill Shcherbatov authored
      A new VARBINARY field type would be useful for the SQL type system.
      
      Closes #4201
      Needed for #4206
      
      @TarantoolBot document
      Title: new varbinary field type
      
      Introduced a new field type varbinary to represent mp_bin values.
      The new varbinary type may be used in a format or index definition.
      
      Example:
      s = box.schema.space.create('withdata')
      s:format({{"b", "varbinary"}})
      pk = s:create_index('pk', {parts = {1, "varbinary"}})
      59de57d2
  21. Jul 01, 2019
  22. Jun 28, 2019
    • Mergen Imeev's avatar
      sql: allow to use vectors as left value of IN operator · 7418c373
      Mergen Imeev authored
      In SQL, it is allowed to use vector expressions, that is, operations
      that use vectors as operands. For instance, a vector comparison:
      SELECT (1,2,3) < (1,2,4);
      
      Unfortunately, the routines handling the IN operator contained a bug:
      in cases where we used a vector as the left value of the IN operator,
      we got an assertion failure in a debug build or a segmentation fault
      in a release build. This was due to some legacy code in which it was
      assumed that the left value of the IN operator, in case it is a
      vector, can have only one column. Let's fix this by allowing vectors
      of other sizes as the left value of the IN operator and providing a
      check which verifies that both sides of the IN operator have the
      same dimension.
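      
      For example (a sketch; t1 is a hypothetical table with two columns
      a and b):
      
      SELECT (1, 2) IN (SELECT a, b FROM t1);    -- dimensions match
      SELECT (1, 2, 3) IN (SELECT a, b FROM t1); -- error: dimension mismatch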
      
      Closes #4204
      7418c373