  1. Feb 19, 2020
  2. Feb 18, 2020
    • Alexander V. Tikhonov's avatar
      gitlab-ci: enable performance testing · 87c68344
      Alexander V. Tikhonov authored
      Enabled Tarantool performance testing on Gitlab-CI for release/master
      branches and "*-perf" named branches. For this purpose 'perf' and
      'cleanup' stages were added into Gitlab-CI pipeline.
      
      Performance testing supports the following benchmarks:
      
      - cbench
      - linkbench
      - nosqlbench (hash and tree Tarantool run modes)
      - sysbench
      - tpcc
      - ycsb (hash and tree Tarantool run modes)
      
      Benchmarks use scripts from repository:
      http://github.com/tarantool/bench-run
      
      Performance testing uses docker images, built with docker files from
      bench-run repository:
      
      - perf/ubuntu-bionic:perf_master           -- parent image with
                                                    benchmarks only
      - perf_tmp/ubuntu-bionic:perf_<commit_SHA> -- child images used for
                                                    testing Tarantool sources
      
      @Totktonada: Harness and workloads are to be reviewed.
      87c68344
    • Oleg Babin's avatar
      lua: handle uri.format empty input properly · 57f6fc93
      Oleg Babin authored
      After 7fd6c809
      (buffer: port static allocator to Lua) uri started to use the
      static_allocator - a cyclic buffer that is also used in
      several modules.
      
      However, the situation when the uri.format output is a zero-length
      string was not handled properly, and ffi.string could
      return data previously written to the static buffer,
      because it uses the first zero byte as the string terminator.
      
      To prevent such situation let's pass result length explicitly.
      
      Closes #4779
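A minimal Python stand-in for the bug (using ctypes in place of LuaJIT's ffi; this is an illustration, not Tarantool code): reading a reused buffer with C-string semantics stops at the first zero byte and can leak stale bytes, while passing the result length explicitly, as the patch does, cannot.

```python
import ctypes

# A reused buffer standing in for the cyclic static allocator.
buf = ctypes.create_string_buffer(16)

# A previous user of the buffer wrote some data into it.
buf.value = b"stale-data"

# A later uri.format-like call produces a zero-length result: it writes
# nothing, so the buffer still holds the old bytes.
result_len = 0

# C-string semantics (like ffi.string with no length): read until the
# first zero byte -> the stale bytes leak out.
leaked = ctypes.string_at(buf)

# Passing the length explicitly returns the correct empty string.
correct = ctypes.string_at(buf, result_len)

print(leaked, correct)
```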
      57f6fc93
  3. Feb 17, 2020
    • Oleg Babin's avatar
      lua: implement is_decimal check · 36ba19f2
      Oleg Babin authored
      Some of our users want to have a native method
      to check whether a given value is a 'decimal' or not.
      
      This patch introduces the 'is_decimal' check in the 'decimal' module.
      
      Closes #4623
      
      @TarantoolBot document
      Title: decimal.is_decimal
      
      The is_decimal() check function returns "true"
      if the specified value is a decimal and "false" otherwise.
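For comparison, the Python standard library expresses the same kind of check with isinstance; a rough Python analogue of the documented behaviour (not Tarantool's implementation):

```python
from decimal import Decimal

# Rough analogue of decimal.is_decimal: true only for decimal values,
# false for anything else (plain numbers, strings, etc.).
def is_decimal(value):
    return isinstance(value, Decimal)

print(is_decimal(Decimal("1.5")))  # True
print(is_decimal(1.5))             # False
print(is_decimal("1.5"))           # False
```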
      36ba19f2
  4. Feb 15, 2020
    • Olga Arkhangelskaia's avatar
      json: don't spoil instance with per-call options · f54f4dc0
      Olga Arkhangelskaia authored
      When json.decode is used with 2 arguments, the 2nd argument leaks into
      the json configuration of the instance. Moreover, due to the current
      serializer.cfg implementation it remains invisible when checking the
      settings via the json.cfg table.
      
      This fixes commit 6508ddb7 ('json: fix
      stack-use-after-scope in json_decode()').
      
      Closes #4761
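The shape of the fix can be sketched in Python (a stand-in decoder, not Tarantool's serializer code): per-call options are merged into a copy of the instance configuration, so nothing seeps into the instance.

```python
import json

# Instance-wide configuration (a stand-in for the json.cfg table).
instance_cfg = {"parse_int": int}

def decode(data, options=None):
    # Merge per-call options into a copy; the instance config is untouched.
    cfg = dict(instance_cfg)
    if options:
        cfg.update(options)
    return json.loads(data, **cfg)

decode('{"a": 1}', options={"parse_int": float})
print(instance_cfg)  # per-call option did not leak into the instance
```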
      f54f4dc0
    • Vladislav Shpilevoy's avatar
      box: remove dead code from box_process_call/eval() · f5d51448
      Vladislav Shpilevoy authored
      box_process_call/eval() in the end check if there is an
      active transaction. If there is, it is rolled back, and
      an error is set.
      
      But the rollback is not needed anymore, because at the
      end of the request the fiber is stopped anyway, and its
      unfinished transaction is rolled back. Setting the error
      alone is enough.
      
      Follow-up #4662
      f5d51448
    • Vladislav Shpilevoy's avatar
      fiber: destroy fiber.storage created by iproto · 7692e08f
      Vladislav Shpilevoy authored
      Fiber.storage was not deleted when created in a fiber started from
      the thread pool used by IProto requests. The problem was that
      fiber.storage was created and deleted in Lua land only, assuming
      that only Lua-born fibers could have it. But in fact any fiber can
      create a Lua storage. Including the ones used to serve IProto
      requests.
      
      Not deleting the storage opened the possibility of meeting a
      non-empty fiber.storage at the beginning of an iproto request,
      and of keeping the memory held by the storage until its
      explicit nullification.
      
      Now the storage destructor works for any fiber, which managed to
      create the storage. The destructor unrefs and nullifies the
      storage.
      
      For destructor purposes the fiber.on_stop triggers were reworked.
      Now they can be called multiple times during a fiber's lifetime:
      after every request served by that fiber.
      
      Closes #4662
      Closes #3462
      
      @TarantoolBot document
      Title: Clarify fiber.storage lifetime
      
      Fiber.storage is a Lua table created when it is first accessed. The
      documentation says it is deleted when the fiber is canceled via
      fiber:cancel(). But that is not the full truth.
      
      Fiber.storage is destroyed when the fiber is finished, regardless
      of how it finished: via :cancel(), or by the fiber's function
      returning, it does not matter. Moreover, from that moment the
      storage is cleaned up even for pooled fibers used to serve IProto
      requests. Pooled fibers never really die, but nonetheless their
      storage is cleaned up after each request. That makes it possible to
      use fiber.storage as a full-featured request-local storage.
      
      Fiber.storage may be created for a fiber no matter how the fiber
      itself was created - from C, from Lua. For example, a fiber could
      be created in C using fiber_new(), then it could insert into a
      space, which had Lua on_replace triggers, and one of the triggers
      could create fiber.storage. That storage will be deleted when the
      fiber is stopped.
      
      Another place where fiber.storage may be created is the replication
      applier fiber. The applier has a fiber from which it applies
      transactions from a remote instance. In case the applier fiber
      somehow creates a fiber.storage (for example, from a space trigger
      again), the storage won't be deleted until the applier fiber is
      stopped.
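The lifecycle described above can be sketched in Python terms (a hypothetical pooled worker, not Tarantool internals): the worker itself survives, but its request-local storage is destroyed after every request it serves.

```python
# Hypothetical pooled worker mirroring a pooled iproto fiber: it never
# dies, but its request-local storage is wiped after each request.
class PooledWorker:
    def __init__(self):
        self.storage = None            # created lazily on first access

    def get_storage(self):
        if self.storage is None:
            self.storage = {}
        return self.storage

    def serve(self, request):
        result = request(self)         # the request may populate storage
        self.storage = None            # "destructor" runs after every request
        return result

w = PooledWorker()
w.serve(lambda worker: worker.get_storage().setdefault("key", 100))
print(w.storage)  # nothing leaks into the next request
```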
      7692e08f
  5. Feb 14, 2020
    • Vladislav Shpilevoy's avatar
      fiber: unref fiber.storage via global Lua state · 5b3e8a72
      Vladislav Shpilevoy authored
      Fiber.storage is a table, available from anywhere in the fiber. It
      is destroyed after fiber function is finished. That provides a
      reliable fiber-local storage, similar to thread-local in C/C++.
      
      But there is a problem that the storage may be created via one
      struct lua_State, and destroyed via another. Here is an example:
      
          function test_storage()
              fiber.self().storage.key = 100
          end
          box.schema.func.create('test_storage')
          _ = fiber.create(function()
              box.func.test_storage:call()
          end)
      
      There are 3 struct lua_State:
          tarantool_L - global always alive state;
          L1 - Lua coroutine of the fiber, created by fiber.create();
          L2 - Lua coroutine created by that fiber to execute
               test_storage().
      
      Fiber.storage is created on stack of L2 and referenced by global
      LUA_REGISTRYINDEX. Then it is unreferenced from L1 when the fiber
      is being destroyed.
      
      That is generally ok as long as the storage object is always in
      LUA_REGISTRYINDEX, which is shared by all Lua states.
      
      But soon during destruction of the fiber.storage there will be
      only tarantool_L and the original L2. Original L2 may be already
      deleted by the time the storage is being destroyed. So this patch
      makes unref of the storage via reliable tarantool_L.
      
      Needed for #4662
      5b3e8a72
    • Cyrill Gorcunov's avatar
      test: box/errinj -- sort errors · 95b9a48d
      Cyrill Gorcunov authored
      
      Every new error introduced into the error engine causes a massive
      update in the test even if only one key is introduced.
      
      To minimize the diff output, better to print them in sorted order.
      
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Reviewed-by: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
      Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
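The idea in miniature (a Python stand-in; the error-injection names are illustrative): printing keys in sorted order keeps the diff local when a single new entry is added.

```python
# Illustrative error-injection table; printing it in sorted order means
# adding one key changes only one line of the expected test output.
errors = {
    "ERRINJ_WAL_IO": False,
    "ERRINJ_INDEX_ALLOC": False,
    "ERRINJ_TUPLE_ALLOC": False,
}

for name in sorted(errors):
    print(name, errors[name])
```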
      95b9a48d
    • Sergey Kaplun's avatar
      refactoring: drop excess 16Kb bss buffer · 163b8b86
      Sergey Kaplun authored
      We already have a 12Kb thread-safe static buffer
      in `lib/small/small/static.h` that can be used instead of the 16Kb
      bss buffer in `src/lib/core/backtrace.cc` for the backtrace payload.
      
      Closes #4650
      163b8b86
  6. Feb 12, 2020
    • Nikita Pettik's avatar
      sql: do not force FP representation for NUMBER field · 0dd8be76
      Nikita Pettik authored
      During decoding of a value fetched from a space's field, FP
      representation was forced in case the field type was NUMBER. It was so
      since NUMBER used to substitute the DOUBLE field type (in fact NUMBER
      mimicked DOUBLE). Since DOUBLE is now a separate field type, there's no
      such necessity. Hence from now on integers from a NUMBER field are
      treated as integers.
      
      Implemented by Mergen Imeev <imeevma@gmail.com>
      
      Closes #4233
      
      @TarantoolBot document
      Title: NUMBER column type changes
      
      From now on, NUMBER behaves in the same way as in NoSQL Tarantool.
      Previously, NUMBER was rather a synonym of what DOUBLE now means: it
      used to force floating point representation of values, even if they
      were integers. A few examples:
      
      1) CAST operation:
      
      Obsolete behaviour:
      SELECT CAST(922337206854774800 AS NUMBER), CAST(5 AS NUMBER) / 10;
      ---
       rows:
      - [922337206854774784, 0.5]
      
      New behaviour:
      SELECT CAST(922337206854774800 AS NUMBER), CAST(5 AS NUMBER) / 10;
      ---
       rows:
      - [922337206854774800, 0]
      
      Obsolete behaviour:
      SELECT CAST(true AS NUMBER);
      ---
      - null
      - 'Type mismatch: can not convert TRUE to number'
      ...
      
      New behaviour:
      SELECT CAST(true AS NUMBER);
      ---
       rows:
      - [1]
      ...
      
      CAST boolean to NUMBER is allowed since it is allowed to convert
      booleans to integers; in turn NUMBER comprises integer type.
      
      2) Preserving integer representation:
      
      Obsolete behaviour:
      CREATE TABLE t (n NUMBER PRIMARY KEY);
      INSERT INTO t VALUES (3), (-4), (5.0);
      SELECT n, n/10 FROM t;
      ---
       rows:
      - [-4, -0.4]
      - [3, 0.3]
      - [5, 0.5]
      
      New behaviour:
      SELECT n, n/10 FROM t;
      ---
       rows:
      - [-4, 0]
      - [3, 0]
      - [5, 0.5]
      0dd8be76
    • Nikita Pettik's avatar
      sql: fix CAST AS NUMBER operator · 7a1e01f3
      Nikita Pettik authored
      The NUMBER type is supposed to include values of both integer and FP
      types. Hence, if a numeric value is cast to NUMBER, it remains
      unchanged. Before this patch a cast to NUMBER always resulted in
      forcing floating point representation. Furthermore, a CAST of blob
      values to NUMBER always led to a floating point result, even if the
      blob value had a precise integer representation. Since NUMBER no
      longer implies only FP values, let's fix this and use
      vdbe_mem_numerify(), which provides a unified way of casting to the
      NUMBER type.
      
      Part of #4233
      Closes #4463
      7a1e01f3
    • Nikita Pettik's avatar
      sql: rework sqlVdbeMemNumerify() · 0564520b
      Nikita Pettik authored
      Fix codestyle and a comment; allow conversion from boolean to number
      (since it is legal to convert boolean to integer, and in turn the
      number type completely includes the integer type). Note that currently
      sqlVdbeMemNumerify() is never called, so changes applied to it can't
      be tested. It is going to be used in further patches.
      
      Part of #4233
      0564520b
    • Nikita Pettik's avatar
      sql: remove cast to INT during FP arithmetic ops · 2c16661b
      Nikita Pettik authored
      Arithmetic operations are implemented by the OP_Add, OP_Subtract etc.
      VDBE opcodes, which consist of almost the same internal logic:
      depending on the type of operands (integer or FP) the execution flow
      jumps to one of two branches. At this point the branch responsible for
      floating point operations finishes with the following code:
      
        1668			if (((type1|type2)&MEM_Real)==0 && !bIntint) {
        1669				mem_apply_integer_type(pOut);
        1670			}
      
      At least one of type1 and type2 is supposed to be MEM_Real.
      Otherwise, the execution flow either hits the branch processing
      integer arithmetic operations, or VDBE execution is aborted with
      ER_SQL_TYPE_MISMATCH. Thus, the condition under the 'if' clause is
      always evaluated to 'false', ergo mem_apply_integer_type() is never
      called. Let's remove this dead code.
      
      Implemented by Mergen Imeev <imeevma@tarantool.org>
      2c16661b
  7. Feb 06, 2020
    • Chris Sosnin's avatar
      sql: fix segfault in pragma table_info · e9aa3784
      Chris Sosnin authored
      We should first check that the primary key is not NULL.
      
      Closes #4745
      e9aa3784
    • Nikita Pettik's avatar
      sql: fix off-by-one error while setting bind names · ef5ba746
      Nikita Pettik authored
      Names of bindings are stored in an array indexed from 1 (see struct
      Vdbe->pVList). So to get the name of the i-th value to be bound, one
      should call sqlVListNumToName(list, i+1), not sqlVListNumToName(list, i).
      For this reason, names of binding parameters returned in the
      meta-information in response to a :prepare() call were shifted by one.
      Let's fix it and calculate the position of a binding parameter taking
      1-based indexing into consideration.
      
      Closes #4760
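The off-by-one can be shown with a tiny Python model of a 1-based name list (illustrative, not the actual VList code):

```python
# Slot 0 is unused, as in a 1-based VList; names start at index 1.
names = [None, "a", "b", "c"]

def name_of(i):
    """Name of the parameter at 0-based position i."""
    return names[i + 1]    # the fix: shift into 1-based indexing

print(name_of(0), name_of(2))
```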
      ef5ba746
  8. Feb 05, 2020
    • Leonid Vasiliev's avatar
      box: rewrite rollback to savepoint to Lua/C · 34234427
      Leonid Vasiliev authored
      
      LuaJIT records traces while interpreting Lua bytecode (considering it's
      hot enough) in order to compile the corresponding execution flow to a
      machine code. A Lua/C call aborts trace recording, but an FFI call does
      not abort it per se. If code inside an FFI call yields to another fiber
      while recording a trace and the new current fiber interpreting a Lua
      bytecode too, then unrelated instructions will be recorded to the
      current trace.
      
      In short, we should not yield a current fiber inside an FFI call.
      
      There is another problem. Machine code of a compiled trace may sink a
      value from a Lua state down to a host register, change it and write back
      only at trace exit. So the interpreter state may be outdated during the
      compiled trace execution. A Lua/C call aborts a trace and so the code
      inside a callee always see an actual interpreter state. An FFI call
      however can be turned into a single machine's CALL instruction in the
      compiled code and if the callee accesses a Lua state, then it may see an
      irrelevant value.
      
      In short, we should not access a Lua state directly or reenter to the
      interpreter from an FFI call.
      
      The box.rollback_to_savepoint() function may yield and another fiber
      will be scheduled for execution. If this fiber touches a Lua state, then
      it may see an inconsistent state and the behaviour will be undefined.
      
      Note that <struct txn>.id starts from 1, because we lean on this fact
      to use luaL_toint64(), which does not distinguish an unexpected Lua
      type from cdata<int64_t> with a zero value. It seems that this
      assumption already exists: the code that prepares arguments for
      'on_commit' triggers uses luaL_toint64() too (see lbox_txn_pairs()).
      
      Fixes #4427
      
      Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
      Reviewed-by: Igor Munkin <imun@tarantool.org>
      34234427
  9. Feb 04, 2020
    • Alexander V. Tikhonov's avatar
      gitlab-ci: push Deb/RPM packages to S3 based repos · 05d3ed4b
      Alexander V. Tikhonov authored
      We're going to use S3 compatible storage for Deb and RPM repositories
      instead of packagecloud.io service. The main reason is that
      packagecloud.io provides a limited amount of storage, which is not
      enough for keeping all packages (w/o regular pruning of old versions).
      
      Note: At the moment packages are still pushed to packagecloud.io from
      Travis-CI. Disabling this is out of scope of this patch.
      
      This patch implements saving of packages on an S3 compatible storage and
      regeneration of a repository metadata.
      
      The layout is a bit different from the one we have on packagecloud.io.
      
      packagecloud.io:
      
       | - 1.10
       | - 2.1
       | - 2.2
       | - ...
      
      S3 compatible storage:
      
       | - live
       |   - 1.10
       |   - 2.1
       |   - 2.2
       |   - ...
       | - release
       |   - 1.10
       |   - 2.1
       |   - 2.2
       |   - ...
      
      Both 'live' and 'release' repositories track release branches (named as
      <major>.<minor>) and master branch. The difference is that 'live' is
      updated on every push, but 'release' is only for tagged versions
      (<major>.<minor>.<patch>.0).
      
      Packages are also built on '*-full-ci' branches, but only for testing
      purposes: they aren't pushed anywhere.
      
      The core logic is in the tools/update_repo.sh script, which implements
      the following flow:
      
      - create metadata for new packages
      - fetch relevant metadata from the S3 storage
      - push new packages to the S3 storage
      - merge and push the updated metadata to the S3 storage
      
      The script uses 'createrepo' for RPM repositories and 'reprepro' for Deb
      repositories.
      
      Closes #3380
      05d3ed4b
  10. Jan 29, 2020
    • Mergen Imeev's avatar
      sql: fix INSTEAD OF DELETE trigger for VIEW · 6ddccda4
      Mergen Imeev authored
      This patch makes the INSTEAD OF DELETE trigger work for every row
      in VIEW. Prior to this patch, it worked only once for each group
      of non-unique rows.
      
      Also, this patch adds tests to check that the INSTEAD OF UPDATE
      trigger works for every row in a VIEW.
      
      Closes #4740
      6ddccda4
    • Kirill Yukhin's avatar
      small: bump new version · 8e2dcbe0
      Kirill Yukhin authored
      Revert "Free all slabs on region reset" commit.
      
      Closes #4736
      8e2dcbe0
  11. Jan 24, 2020
  12. Jan 21, 2020
  13. Jan 20, 2020
  14. Jan 17, 2020
  15. Jan 16, 2020
    • Oleg Babin's avatar
      error: add __concat method to error object · 935db173
      Oleg Babin authored
      Usually functions return the pair `nil, err`, and it is expected that
      err is a string. Let's make the behaviour of the error object closer
      to a string and define the __concat metamethod.
      
      The case of error "error_mt.__concat(): neither of args is an error"
      is not covered by tests because of #4723
      
      Closes #4489
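A Python analogue of the metamethod (Python uses __add__/__radd__ where Lua uses __concat; the Error class here is hypothetical): the error object can be glued to strings on either side.

```python
# Hypothetical error object that concatenates with strings on both sides,
# like `"context: " .. err` does in Lua once __concat is defined.
class Error:
    def __init__(self, msg):
        self.msg = msg

    def __add__(self, other):    # err + "suffix"
        return self.msg + str(other)

    def __radd__(self, other):   # "prefix" + err
        return str(other) + self.msg

err = Error("timeout")
print("request failed: " + err)
print(err + "!")
```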
      935db173
  16. Jan 15, 2020
    • Nikita Pettik's avatar
      sql: account prepared stmt cache size right after entry removal · 1f8bd87a
      Nikita Pettik authored
      The SQL prepared statement cache is implemented as two data
      structures: a hash table <stmt_id : pointer-to-metadata> and a GC
      queue. The latter is required to avoid workload spikes on a session's
      disconnect: instead of cleaning up memory for all session-local
      prepared statements at once, prepared statements to be deleted are
      moved to the GC queue. When the memory limit for PS is reached, all
      elements from the queue are removed at once. If a statement gets into
      the GC queue, it is assumed to be already dead. However, the size
      occupied by the PS cache was updated only after a GC queue clean-up,
      so the correct size of the PS cache was displayed only after GC
      cycles. Let's fix this and account the PS cache size change right
      after entry removal (i.e. at the moment the PS gets into the GC
      queue).
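A sketch of the accounting change in Python (a hypothetical structure, not the actual C code): the size counter is decremented the moment a statement moves to the GC queue, not when the queue is flushed.

```python
class StmtCache:
    """Hypothetical PS cache: a live map plus a GC queue of dead entries."""
    def __init__(self):
        self.stmts = {}        # stmt_id -> size of the prepared statement
        self.gc_queue = []
        self.size = 0          # accounted size, as shown in statistics

    def put(self, stmt_id, size):
        self.stmts[stmt_id] = size
        self.size += size

    def discard(self, stmt_id):
        size = self.stmts.pop(stmt_id)
        self.gc_queue.append((stmt_id, size))
        self.size -= size      # the fix: account right here, not on flush

    def gc(self):
        self.gc_queue.clear()  # memory is actually released later, in bulk

cache = StmtCache()
cache.put(1, 100)
cache.discard(1)
print(cache.size)  # correct size is visible before any GC cycle
```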
      1f8bd87a
  17. Jan 14, 2020
    • Maria's avatar
      Fix use-after-free in memtx_tuple_delete() · c08b94ed
      Maria authored
      
      A struct of type tuple_format is passed as an argument to
      tuple_format_unref(), where it might be freed. On such an occasion no
      further references to format fields should take place.
      
      Acked-by: Cyrill Gorcunov <gorcunov@gmail.com>
      
      Closes #4658
      c08b94ed
    • Chris Sosnin's avatar
      box: frommap() bug fix · f89b5ab0
      Chris Sosnin authored
      - If an optional argument is provided for
        space_object:frommap() (which is {table = true|false}),
        the type match for the first argument is omitted, which is
        incorrect. We should return the result only after making
        sure it is possible to build a tuple.
      
      - If there is a type mismatch, however, frommap() does not
        return nil, err as mentioned in the description, so we
        change it to behave this way.
      
      Closes #4262
      f89b5ab0
  18. Jan 13, 2020
    • HustonMmmavr's avatar
      fio: fix race condition in mktree · 21ae2899
      HustonMmmavr authored
      Despite the lack of documentation, fio.mktree() was designed to work
      similarly to mkdir -p: it creates the directory along with its parents
      and doesn't complain about existing ones.
      
      But this function was subject to a race if two different processes were
      trying to create the same directory at the same time. It was caused by
      the fact that directory existence check and its creation aren't atomic.
      
      This patch fixes the race by improving error handling: it's not an
      error if the directory exists, even if it was created by someone else
      and mktree failed.
      
      Related to https://github.com/tarantool/doc/issues/1063
      Closes #4660
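The race-free pattern can be sketched in Python (os.makedirs raises FileExistsError much like mkdir() returning EEXIST): existence is not checked up front, and a concurrent creation is simply treated as success.

```python
import os
import tempfile

def mktree(path):
    """Create path with all parents; an existing directory is not an error."""
    try:
        os.makedirs(path)
    except FileExistsError:
        pass    # created concurrently by another process: that's fine

base = tempfile.mkdtemp()
target = os.path.join(base, "a", "b", "c")
mktree(target)
mktree(target)    # a second (or concurrent) creation must not raise
print(os.path.isdir(target))
```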
      21ae2899
    • Alexander Turenko's avatar
      test: drop dead code from app-tap/msgpackffi test · ec324247
      Alexander Turenko authored
      It appeared due to improper conflict resolution after pushing the
      following commits in the reverse order:
      
      * 2b9ef8d1 lua: don't modify pointer type in msgpack.decode*
      * 84bcba52 lua: keeping the pointer type in msgpackffi.decode()
      
      Originally 84bcba52 (which should land first) fixes the msgpackffi
      module and introduces the test_decode_buffer() function locally for the
      msgpackffi test. Then 2b9ef8d1 fixes the msgpack module in the same
      way, expands and moves the test_decode_buffer() function to
      serializer_test.lua (to use in msgpack and msgpackffi tests both).
      
      After the changes made to push the commits in the reverse order,
      those commits do something weird around the tests. However, the
      resulting state differs from the right one only in the dead function
      in msgpackffi.test.lua.
      
      Follows up #3926.
      ec324247
    • Chris Sosnin's avatar
      tuple: add argument length check for update() · b73fb421
      Chris Sosnin authored
      Currently tuple_object:update() does not check the length of the
      operation string and just takes the first character after decoding.
      This patch fixes the problem.
      
      Follow-up #3884
      b73fb421
    • Chris Sosnin's avatar
      tuple: fix non-informative update() error message · d4fcec0c
      Chris Sosnin authored
      Calling tuple_object:update() with an invalid argument number yields
      an 'Unknown UPDATE operation' error. We replace this error with an
      explicit "wrong argument number" one, mentioning which operation
      failed, or pointing out an invalid operation code.
      
      Fixes #3884
      d4fcec0c
    • Mergen Imeev's avatar
      sql: fix typeof() for double values · 2bc4fe69
      Mergen Imeev authored
      This patch corrects the result of typeof() for double values.
      Previously, it gave the type "number" in the case of a
      floating-point number. Now it gives "double".
      
      Follow-up #3812
      2bc4fe69
  19. Jan 10, 2020
  20. Dec 31, 2019
    • Ilya Kosarev's avatar
      test: fix flaky socket test · 4137134c
      Ilya Kosarev authored
      socket.test had a number of flaky problems:
      - socket readiness expectation & read timeouts
      - race conditions on socket shutdown in emulation test cases
      - UDP datagram losses on macOS
      - excessive random port searches
      Now they are solved. 127.0.0.1 is now used instead of 0.0.0.0 or
      localhost to prevent wrong connections where appropriate. The socket
      test is not fragile anymore.
      
      Closes #4426
      Closes #4451
      Closes #4469
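Two of the anti-flakiness ideas can be sketched with Python's socket module: bind explicitly to 127.0.0.1 rather than 0.0.0.0 or "localhost", and ask the kernel for a free port (port 0) instead of probing random ones.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Loopback only: no wrong connections via other interfaces.
# Port 0: the kernel assigns a free port, so no random-port searching.
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
print(host, port > 0)
sock.close()
```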
      4137134c
    • Nikita Pettik's avatar
      sql: add cache statistics to box.info · 5a1a220e
      Nikita Pettik authored
      To track the current memory occupied by prepared statements and
      their number, let's extend the box.info submodule with .sql
      statistics: it now contains the current total size of prepared
      statements and their count.
      
      @TarantoolBot document
      Title: Prepared statements in SQL
      
      Now it is possible to prepare (i.e. compile into byte-code and save
      to the cache) a statement and execute it several times. The mechanism
      is similar to the ones in other DBs. A prepared statement is
      identified by a numeric ID, which is returned alongside the prepared
      statement handle. Note that IDs are not sequential; an ID is the
      value of a hash function applied to the string containing the
      original SQL request.
      The prepared statement holder is shared among all sessions. However,
      a session has access only to statements which have been prepared in
      its scope. There's no eviction policy like in a usual cache; to
      remove a statement from the holder an explicit unprepare request is
      required. Alternatively, a session's disconnect also removes its
      statements from the holder.
      Several sessions can share one prepared statement, which will be
      destroyed when all related sessions are disconnected or have sent an
      unprepare request. The memory limit for prepared statements is
      adjusted by the box.cfg{sql_cache_size} handle (can be set
      dynamically).
      
      Any DDL operation leads to expiration of all prepared statements: they
      should be manually removed or re-prepared.
      Prepared statements are available in local mode (i.e. via the
      box.prepare() function) and are supported in the IProto protocol. In
      the latter case the following IProto keys are used to make up/receive
      requests/responses:
      IPROTO_PREPARE - new IProto command; the key is 0x13. It can be sent
      with one of two mandatory keys: IPROTO_SQL_TEXT (0x40, assumes a
      string value) or IPROTO_STMT_ID (0x43, assumes an integer value).
      Depending on the body it means to prepare or unprepare an SQL
      statement: IPROTO_SQL_TEXT implies a prepare request, meanwhile
      IPROTO_STMT_ID - unprepare;
      IPROTO_BIND_METADATA (0x33, contains parameters metadata of type map)
      and IPROTO_BIND_COUNT (0x34, the count of parameters to be bound) are
      response keys. They are mandatory members of the result of
      IPROTO_PREPARE execution.
      
      To track statistics of used memory and number of currently prepared
      statements, box.info is extended with SQL statistics:
      
      box.info:sql().cache.stmt_count - number of prepared statements;
      box.info:sql().cache.size - size of memory occupied by prepared
      statements.
      
      The typical workflow with prepared statements is the following:
      
      s = box.prepare("SELECT * FROM t WHERE id = ?;")
      s:execute({1}) or box.execute(s.sql_str, {1})
      s:execute({2}) or box.execute(s.sql_str, {2})
      s:unprepare() or box.unprepare(s.query_id)
      
      The structure of the object is the following (member : type):
      
      - stmt_id: integer
        execute: function
        params: map [name : string, type : integer]
        unprepare: function
        metadata: map [name : string, type : integer]
        param_count: integer
      ...
      
      In terms of remote connection:
      
      cn = netbox:connect(addr)
      s = cn:prepare("SELECT * FROM t WHERE id = ?;")
      cn:execute(s.sql_str, {1})
      cn:unprepare(s.query_id)
      
      Closes #2592
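The hash-based ID scheme mentioned above can be illustrated in Python (the concrete hash function here is an assumption for illustration; Tarantool's actual function may differ): the same SQL text always maps to the same non-sequential ID.

```python
import zlib

def stmt_id(sql_text):
    # Illustrative only: an ID derived by hashing the original SQL text,
    # so the same text yields the same ID in any session.
    return zlib.crc32(sql_text.encode())

a = stmt_id("SELECT * FROM t WHERE id = ?;")
b = stmt_id("SELECT * FROM t WHERE id = ?;")
print(a == b)
```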