  1. Dec 30, 2019
    • lua: don't modify pointer type in msgpack.decode* · 2b9ef8d1
      Alexander Turenko authored
      msgpackffi.decode_unchecked([const] char *) returns two values: a
      decoded result and a new pointer within the passed buffer. After
      #3926 the cdata type of the returned pointer follows the type of the
      passed buffer.

      This commit modifies the behaviour of the msgpack module in the same
      way. The following functions now return cdata<char *> or
      cdata<const char *> depending on the type of their argument (see the
      sketch after this list):
      
      * msgpack.decode(cdata<[const] char *>, number)
      * msgpack.decode_unchecked(cdata<[const] char *>)
      * msgpack.decode_array_header(cdata<[const] char *>, number)
      * msgpack.decode_map_header(cdata<[const] char *>, number)
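
      A minimal Lua sketch of the new behaviour (not part of the commit;
      it assumes the standard buffer and ffi modules):

          msgpack = require('msgpack')
          buffer = require('buffer')
          ffi = require('ffi')

          ibuf = buffer.ibuf()
          msgpack.encode({1, 2, 3}, ibuf)
          -- ibuf.rpos is cdata<char *>, so the returned position is
          -- expected to be cdata<char *> as well.
          res, pos = msgpack.decode(ibuf.rpos, ibuf:size())
          print(ffi.typeof(pos))
          -- With a const pointer the returned position should be
          -- cdata<const char *>.
          cpos = ffi.cast('const char *', ibuf.rpos)
          res, pos = msgpack.decode(cpos, ibuf:size())
          print(ffi.typeof(pos))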
      
      Follows up #3926.
    • tuple: JSON path update intersection at maps · 6e97d6a9
      Vladislav Shpilevoy authored
      Previous commits introduced isolated JSON updates and then allowed
      intersection at arrays. This one completes the puzzle by adding
      intersection at maps, so now both of these samples work:
      
      Allowed in the previous commit:
      
          [1][2][3].a.b.c = 20
          [1][2][4].e.f.g = 30
                 ^
      
          First difference is [3] vs [4] - intersection by an array.
      
      Allowed in this commit:
      
          [1][2][3].a.b.c = 20
          [1][2][3].a.e.f = 30
                      ^
      
          First difference is 'b' vs 'e' - intersection by a map.
      
      Now JSON updates are fully available.
      
      Closes #1261
      
      @TarantoolBot document
      Title: JSON updates
      Tuple/space/index:update/upsert now support JSON paths, the same
      paths as are allowed in tuple["..."].
      
      Example:
      box.cfg{}
      format = {}
      format[1] = {'field1', 'unsigned'}
      format[2] = {'field2', 'map'}
      format[3] = {'field3', 'array'}
      format[4] = {'field4', 'string', is_nullable = true}
      s = box.schema.create_space('test', {format = format})
      _ = s:create_index('pk')
      t = {
          1,
          {
              key1 = 'value',
              key2 = 10
          },
          {
              2,
              3,
              {key3 = 20}
          }
      }
      t = s:replace(t)
      
      tarantool> t:update({{'=', 'field2.key1', 'new_value'}})
      ---
      - [1, {'key1': 'new_value', 'key2': 10}, [2, 3, {'key3': 20}]]
      ...
      
      tarantool> t:update({{'+', 'field3[2]', 1}})
      ---
      - [1, {'key1': 'value', 'key2': 10}, [2, 4, {'key3': 20}]]
      ...
      
      tarantool> s:update({1}, {{'!', 'field4', 'inserted value'}})
      ---
      - [1, {'key1': 'value', 'key2': 10}, [2, 3, {'key3': 20}], 'inserted value']
      ...
      
      tarantool> s:update({1}, {{'#', '[2].key2', 1}, {'=', '[3][3].key4', 'value4'}})
      ---
      - [1, {'key1': 'value'}, [2, 3, {'key3': 20, 'key4': 'value4'}], 'inserted value']
      ...
      
      tarantool> s:upsert({1, {k = 'v'}, {}}, {{'#', '[2].key1', 1}})
      ---
      ...
      
      tarantool> s:select{}
      ---
      - - [1, {}, [2, 3, {'key3': 20, 'key4': 'value4'}], 'inserted value']
      ...
      
      Note that the same rule as in tuple field access by JSON applies
      to field names that look like JSON paths: first the whole path is
      interpreted as a field name; if such a name does not exist, it is
      treated as a path. For example, if there is a field named
      'field.name.like.json', then this update
      
          <obj>:update({..., 'field.name.like.json', ...})
      
      will update this field instead of the keys 'field' -> 'name' ->
      'like' -> 'json'. If such a name is needed as part of a bigger
      path, then it should be wrapped in quotes and []:
      
          <obj>:update({..., '["field.name.like.json"].next.fields', ...})
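
      For instance, a runnable sketch of this rule (the space and field
      names below are illustrative, not from the patch):

          f = box.schema.create_space('quoting_demo', {format = {
              {'id', 'unsigned'},
              {'field.name.like.json', 'map'},
          }})
          _ = f:create_index('pk')
          d = f:replace{1, {name = {like = {json = 'old'}}}}
          -- Replaces the whole second field, because the operation path
          -- matches a field name exactly:
          d:update({{'=', 'field.name.like.json', {whole = 'new'}}})
          -- Updates the nested 'json' key instead, thanks to ["..."]:
          d:update({{'=', '["field.name.like.json"].name.like.json', 'new'}})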
      
      There are some new rules for JSON updates (a couple of them are
      illustrated in the sketch right after this list):
      
      - Operation '!' can't be used to create all intermediate nodes of
        a path. For example, {'!', 'field1[1].field3', ...} can't
        create fields 'field1' and '[1]'; they must already exist.
      
      - Operation '#', when applied to maps, can't delete more than one
        key at once. That is, its argument must always be 1 for maps.

            {'#', 'field1.field2', 1} - this is allowed;
            {'#', 'field1.field2', 10} - this is not.

        This limitation stems from the fact that keys in a map are not
        ordered in any way, so '#' with more than one key would lead to
        undefined behaviour.
      
      - Operation '!' on maps can't create a key if it already exists.
      
      - If a map contains non-string keys (booleans, numbers, maps,
        arrays - anything), then these keys can't be updated via JSON
        paths. But it is still allowed to update string keys in such a
        map.
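
      For example (reusing the tuple 't' from the example above; exact
      error messages are omitted here):

          -- '#' on a map: deleting exactly one key is allowed...
          t:update({{'#', 'field2.key2', 1}})
          -- ...but asking to delete more than one key is rejected:
          ok = pcall(t.update, t, {{'#', 'field2.key2', 10}})
          -- ok == false
          -- '!' can't create a map key that already exists:
          ok = pcall(t.update, t, {{'!', 'field2.key1', 'dup'}})
          -- ok == false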
      
      Why are JSON updates good, and why should they be preferred when
      only a part of a tuple needs to be updated?
      
      - They consume less space in WAL, because for an update only its
        keys, operations, and arguments are stored. It is cheaper to
        store an update of one deep field than the whole tuple.
      
      - They are faster. Firstly, this is because they are implemented
        in C, and have no problems with Lua GC and dynamic typing.
        Secondly, some cases of JSON paths are highly optimized. For
        example, an update with a single JSON path costs O(1) memory
        regardless of how deep that path goes (not counting update
        arguments).
      
      - They are available to remote clients, just like any other DML.
        Before JSON updates, changing one deep part of a tuple required
        downloading that tuple, updating it in memory, and sending it
        back: 2 network hops. With JSON paths it can be done in 1, when
        the update can be described in paths.
    • tuple: JSON path update intersection at arrays · 8cad025a
      Vladislav Shpilevoy authored
      Before this patch only isolated JSON updates were supported, as
      they were the simplest and fastest to implement. This patch allows
      update operations with paths having the same prefix, but the
      difference between the paths must start at an array index.
      
      For example, this is allowed:
      
          [1][2][3].a.b.c = 20
          [1][2][4].e.f.g = 30
      
          First difference is [3] vs [4] - intersection by an array, ok.
      
      This is not allowed yet:
      
          [1][2][3].a.b.c = 20
          [1][2][3].a.e.f = 30
      
          First difference is 'b' vs 'e' - intersection by a map,
          not ok.
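
      A small Lua sketch of the now-allowed case (it reuses the space
      format from the JSON updates example documented above; the values
      are illustrative):

          -- Both operations share the prefix 'field3' and differ at an
          -- array index, so they can go into a single update:
          t:update({
              {'=', 'field3[1]', 10},
              {'=', 'field3[3].key3', 30},
          })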
      
      To support this, a new update tree node type is added:
      XUPDATE_ROUTE. When several update operations have the same prefix,
      this prefix becomes an XUPDATE_ROUTE tree field. It stores the
      prefix and a subtree with these operations.

      Bar and route update nodes can branch and produce more bars and
      routes when new operations come.
      
      Part of #1261
    • tuple: make update operation tokens consumable · 35084f66
      Vladislav Shpilevoy authored
      There is a case: [1][2][3][4] = 100. It is not a problem when it
      is a single operation, not intersecting with anything. It is an
      isolated update then, and works ok. But the next patch allows
      several update operations to have the same prefix, and the path
      [1][2][3][4] can become a tree of updated arrays. For example, a
      trivial tree like this:
      
          root: [ [1] ]
                   |
                   [ [1] [2] ]
                          |
                          [ [1] [2] [3] ]
                                     |
                                     [ [1] [2] [3] [4] ]
                                                   =100
      
      When the update is applied to the root, the JSON path [1][2][3][4]
      is decoded part by part, and the operation goes down the tree until
      it reaches the leaf, where [4] = 100 is applied. Each time the
      update goes one level down, somebody should update
      xrow_update_op.field_no so that on the first level it is 1, then 2,
      3, 4.
      
      Does it mean that each level of the update [1][2][3][4] should
      prepare field_no for the next child? No, because then it would need
      to check the type of the child (whether it is an array, a map, or
      whatever else expects a valid field_no/key in xrow_update_op) and
      ensure that a map child gets a key and an array child gets a
      field_no. That would complicate the code to a totally unreadable
      state and would break encapsulation between
      xrow_update_array/map/bar... . Each array update operation would
      check a child for all existing types to ensure that the next token
      matches it. The same would happen to map updates.
      
      This patch goes another way: each level of the update checks
      whether its field_no/key is already prepared by the caller, and if
      not, extracts the next token from the operation path. So a map
      update ensures that it has a valid key, and an array update ensures
      that it has a valid field_no.
      
      Part of #1261
    • replication: introduce anonymous replica. · e17beed8
      Serge Petrenko authored
      This commit introduces anonymous replicas. Such replicas do not
      pollute the _cluster table (in return, they can only be read-only
      and have a zero id). An anonymous replica can be promoted to a
      normal one if needed.
      
      Closes #3186
      
      @TarantoolBot document
      Title: Document anonymous replica
      
      There is a new type of replica in Tarantool: an anonymous one. An
      anonymous replica is read-only (but you can still write to temporary
      and replica-local spaces), and it is not present in the _cluster
      table.

      Since an anonymous replica is not registered in the _cluster table,
      there is no limit on the number of anonymous replicas in a replica
      set. You can have as many of them as you want.
      
      In order to make a replica anonymous, you have to pass an option
      `replication_anon=true` to `box.cfg`. You also have to set 'read_only'
      to true.
      
      Let's go through anonymous replica bootstrap.
      Suppose we have a master configured with
      ```
      box.cfg{listen=3301}
      ```
      And created a local space called "loc"
      ```
      box.schema.space.create('loc', {is_local=true})
      box.space.loc:create_index("pk")
      ```
      Now, to configure an anonymous replica, we have to issue `box.cfg`,
      as usual.
      ```
      box.cfg{replication_anon=true, read_only=true, replication=3301}
      ```
      As mentioned above, `replication_anon` may be set to true only
      together with `read_only`.
      The instance will fetch the master's snapshot and proceed to follow
      its changes. It will not receive an id, so its id will remain zero.
      ```
      tarantool> box.info.id
      ---
      - 0
      ...
      ```
      ```
      tarantool> box.info.replication
      ---
      - 1:
          id: 1
          uuid: 3c84f8d9-e34d-4651-969c-3d0ed214c60f
          lsn: 4
          upstream:
            status: follow
            idle: 0.6912029999985
            peer:
            lag: 0.00014615058898926
      ...
      ```
      Now we can use the replica.
      For example, we may do inserts into the local space:
      ```
      tarantool> for i = 1,10 do
               > box.space.loc:insert{i}
               > end
      ---
      ...
      ```
      Note that while the instance is anonymous, it will increase the
      0th component of its vclock:
      ```
      tarantool> box.info.vclock
      ---
      - {0: 10, 1: 4}
      ...
      ```
      Let's now promote the replica to a normal one:
      ```
      tarantool> box.cfg{replication_anon=false}
      2019-12-13 20:34:37.423 [71329] main I> assigned id 2 to replica 6a9c2ed2-b9e1-4c57-a0e8-51a46def7661
      2019-12-13 20:34:37.424 [71329] main/102/interactive I> set 'replication_anon' configuration option to false
      ---
      ...
      
      tarantool> 2019-12-13 20:34:37.424 [71329] main/117/applier/ I> subscribed
      2019-12-13 20:34:37.424 [71329] main/117/applier/ I> remote vclock {1: 5} local vclock {0: 10, 1: 5}
      2019-12-13 20:34:37.425 [71329] main/118/applierw/ C> leaving orphan mode
      ```
      The replica just received id 2. We can make it read-write now.
      ```
      box.cfg{read_only=false}
      2019-12-13 20:35:46.392 [71329] main/102/interactive I> set 'read_only' configuration option to false
      ---
      ...
      
      tarantool> box.schema.space.create('test')
      ---
      - engine: memtx
        before_replace: 'function: 0x01109f9dc8'
        on_replace: 'function: 0x01109f9d90'
        ck_constraint: []
        field_count: 0
        temporary: false
        index: []
        is_local: false
        enabled: false
        name: test
        id: 513
      - created
      ...
      
      tarantool> box.info.vclock
      ---
      - {0: 10, 1: 5, 2: 2}
      ...
      ```
      Now the replica tracks its changes in the 2nd vclock component, as
      expected. It can also become a replication master from now on.
      
      Side notes:
        * You cannot replicate from an anonymous instance.
        * To promote an anonymous instance to a regular one,
          you first have to start it as anonymous, and only
          then issue `box.cfg{replication_anon=false}`
        * In order for the deanonymization to succeed, the
          instance must replicate from some read-write instance,
          otherwise no one will be able to add it to the _cluster table.
    • vclock: ignore 0th component in comparisons · 1a2037b1
      sergepetrenko authored
      The 0th vclock component will be used to count replica-local rows
      of an anonymous replica. These rows won't be replicated, and
      different instances will have different values in vclock[0].

      Add a function, vclock_compare_ignore0, which doesn't take the 0th
      component into account when ordering vclocks, and use it where
      appropriate.
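
      A rough Lua illustration of the intended semantics (the real
      implementation is in C; this is just a model of "compare ignoring
      component 0" for vclocks given as tables):

          local function compare_ignore0(a, b)
              local le, ge = true, true
              local ids = {}
              for id in pairs(a) do ids[id] = true end
              for id in pairs(b) do ids[id] = true end
              for id in pairs(ids) do
                  if id ~= 0 then
                      local av, bv = a[id] or 0, b[id] or 0
                      if av < bv then ge = false end
                      if av > bv then le = false end
                  end
              end
              if le and ge then return 0 end
              if le then return -1 end
              if ge then return 1 end
              return nil -- incomparable
          end
          -- {[0] = 10, [1] = 4} and {[0] = 3, [1] = 4} compare as equal.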
      
      Part of #3186
    • applier: split join processing into two stages · 5962ddb0
      Serge Petrenko authored
      We already have 'initial join' and 'final join' stages in the
      applier logic. The first actually means fetching the master's
      snapshot, and the second one means receiving the rows which should
      contain the replica's registration in _cluster.
      These stages will be used separately once anonymous replicas are
      implemented, so split them as a preparation.
      
      Prerequisite #3186
    • replication: do not decode replicaset uuid when processing a subscribe · 269295cc
      Serge Petrenko authored
      After moving the cluster id check to the replica (7f8cbde3),
      we no longer check it on the master side, so there is no need to
      decode it.
      
      Prerequisite #3186
    • box: update comment describing join protocol · d3031e47
      Serge Petrenko authored
      The comment states that the relay sends the latest snapshot to the
      replica during the initial join; however, this was changed in commit
      6332aca6 (relay: join new replicas off
      read view).
      Now the relay sends rows from the read view created at the moment
      of the join. Update the comment to match.
      
      Follow-up #1271
    • sql: move sql_stmt_busy() declaration to box/execute.h · dff6f5dd
      Nikita Pettik authored
      We are going to use it in box/execute.c and in the SQL prepared
      statement cache implementation. So, to avoid including the whole
      sqlInt.h, let's move it to the relatively small execute.h header.
      Let's also fix the code style of this function.
      
      Needed for #2592
    • sql: introduce sql_stmt_query_str() method · a14d1df3
      Nikita Pettik authored
      It is a getter that fetches the SQL query string from a prepared
      statement.
      
      Needed for #2592
    • box: increment schema_version on ddl operations · 68094e8b
      Nikita Pettik authored
      Some DDL operations, such as SQL trigger alter and check or foreign
      key constraint alter, don't result in a schema version change. On
      the other hand, we are going to rely on the schema version to detect
      expired prepared statements: for instance, if an FK constraint has
      been created after a DML statement was prepared, the latter may
      ignore the FK constraint (instead of raising a proper "statement has
      expired" error). Let's fix this and bump the schema version on each
      DDL operation.
      
      Needed for #2592
    • sql: introduce sql_stmt_est_size() function · fb745eb2
      Nikita Pettik authored
      To implement a memory quota for the prepared statement cache, we
      have to estimate the size of a prepared statement. This function
      does exactly that.

      Needed for #2592
    • sql: add sql_stmt_schema_version() · 26f3fbd3
      Nikita Pettik authored
      Let's introduce an interface function to get the schema version of
      a prepared statement. It is required since struct sql_stmt (i.e. a
      prepared statement) is an opaque object and in fact an alias to
      struct Vdbe. Statements whose schema version differs from the
      current one are considered expired and should be re-compiled.
      
      Needed for #2592
    • sql: resurrect sql_bind_parameter_name() · f1c0dcdf
      Nikita Pettik authored
      We may need to get the name of a parameter to be bound by its index
      position. So let's resurrect sql_bind_parameter_name(): put its
      prototype into the sql/sqlInt.h header and update the code style.
      
      Needed for #2592
    • sql: resurrect sql_bind_parameter_count() function · 99707feb
      Nikita Pettik authored
      This function is present in the sql/vdbeapi.c source file, but its
      prototype is missing from any header file, which makes it impossible
      to use. Let's add the prototype declaration to sql/sqlInt.h (next to
      the other parameter setters/getters) and refactor it a bit in
      accordance with our code style.
      
      Needed for #2592
    • port: add result set format and request type to port_sql · e3362690
      Nikita Pettik authored
      The result set serialization formats of DQL and DML queries are
      different: the latter contains the number of affected rows and
      optionally a list of autoincremented ids; the former comprises all
      meta-information, including the column names of the result set and
      their types. What is more, the serialization format is going to be
      different for execute and prepare requests. So let's introduce a
      separate member in struct port_sql responsible for the serialization
      format to be used.

      Note that the C standard specifies that enums are integers, but it
      does not specify their size. Hence, let's use a simple uint8: the
      mentioned enum is small enough to fit into it.

      What is more, prepared statement finalization is required only for
      PREPARE-AND-EXECUTE requests. So let's also keep a flag indicating
      whether finalization is required.
      
      Needed for #2592
    • port: increase padding of struct port · 8bb97807
      Nikita Pettik authored
      We are going to extend the context of struct port_sql. It already
      inherits struct port_tuple, which makes its size barely fit into the
      48 bytes of padding of the basic structure (struct port). Hence,
      let's increase the padding a bit to be able to add at least one more
      member to struct port_sql.
      
      Needed for #2592
    • sql: move sql_stmt_finalize() to execute.h · 1c3a01b6
      Nikita Pettik authored
      We are going to introduce a prepared statement cache. On a
      statement's deallocation we should release all its resources, which
      is done by sql_stmt_finalize(). Currently it is declared in the
      sql/sqlInt.h header, which accumulates almost all SQL-related
      functions. To avoid including such a huge header just to use a
      single function, let's move its signature to box/execute.h.
      
      Needed for #2592
    • 022db041
    • sql: rename sql_finalize() to sql_stmt_finalize() · 79363de7
      Nikita Pettik authored
      Let's follow unified naming rules for the SQL high-level API which
      manipulates statement objects. To be more precise, let's use the
      'sql_stmt_' prefix for interface functions operating on statement
      handles.
    • sql: rename sqlPrepare() to sql_stmt_compile() · 0827aaf1
      Nikita Pettik authored
      sql_prepare() is going not only to compile a statement, but also to
      save it to the prepared statement cache. So we'd better rename
      sqlPrepare(), which is a static wrapper around sql_prepare(), and
      make it non-static. Where possible, let's use sql_stmt_compile()
      instead of sql_prepare().
      
      Needed for #2592
    • sql: move sql_prepare() declaration to box/execute.h · 7ad005e7
      Nikita Pettik authored
      We are going to split sql_prepare_and_execute() into several explicit
      and logically separated steps:
      
      1. sql_prepare() -- compile VDBE byte-code
      2. sql_bind() -- bind variables (if there are any)
      3. sql_execute() -- query (byte-code) execution in virtual machine
      
      For instance, for a dry run we are interested only in query
      preparation. Conversely, if we had a prepared statement cache, we
      could skip query preparation and handle only the bind and execute
      steps.
      
      To avoid inclusion of the sql/sqlInt.h header (which gathers almost
      all SQL-specific functions and constants), let's move sql_prepare()
      to the box/execute.h header (which already holds
      sql_prepare_and_execute()).
      
      Needed for #3292
    • sql: refactor sql_prepare() and sqlPrepare() · a46fdc69
      Nikita Pettik authored
      - Removed the saveSqlFlag argument from sqlPrepare(). It was used
        to indicate whether its caller is sql_prepare_v2() or
        sql_prepare(). Since the previous commit left only one version of
        this function, let's remove this flag altogether.

      - Removed struct db from the list of sql_prepare() arguments. There
        is one global database handle, and it can be obtained via a
        sql_get() call. Hence, it makes no sense to pass this argument
        around.
      
      Needed for #3292
    • sql: remove sql_prepare_v2() · d520ffc7
      Nikita Pettik authored
      There are two versions of the same function (sql_prepare()) which
      are almost identical. Let's keep the more relevant version,
      sql_prepare_v2(), but rename it to sql_prepare() in order to avoid
      any mess.
      
      Needed for #3292
  2. Dec 29, 2019
    • sql: extend result set with span · f89d6565
      Nikita Pettik authored
      In full metadata mode, each column of a result set features its
      span, i.e. its original name. For instance:

      SELECT x + 1 AS add FROM ...;

      In this case the real name (span) of the result set column is
      "x + 1", while "add" is its alias. This patch extends the metadata
      with a member which corresponds to the column's original expression.
      It is worth mentioning that in most cases the span coincides with
      the name, so to avoid overhead and sending the same string twice, we
      follow the rule that if the span is encoded as MP_NIL, then its
      value is the same as the name. Also note that the span is always
      present in full metadata mode.
      
      Closes #4407
      
      @TarantoolBot document
      Title: extended SQL metadata
      
      Before this patch the metadata for SQL DQL contained only two
      fields: the name and type of each column of the result set. Now it
      may contain the following properties:
       - collation (in case type of resulting set column is string and
                    collation is different from default "none");
         is encoded with IPROTO_FIELD_COLL (0x2) key in IPROTO_METADATA map;
         in msgpack is encoded as string and held with MP_STR type;
       - is_nullable (in case column of result set corresponds to space's
                      field; for expressions like x+1 for the sake of
                      simplicity nullability is omitted);
         is encoded with IPROTO_FIELD_IS_NULLABLE key (0x3) in IPROTO_METADATA;
         in msgpack is encoded as boolean and held with MP_BOOL type;
         note that absence of this field implies that nullability is unknown;
       - is_autoincrement (is set only for autoincrement column in result
                           set);
         is encoded with IPROTO_FIELD_IS_AUTOINCREMENT (0x4) key in IPROTO_METADATA;
         in msgpack is encoded as boolean and held with MP_BOOL type;
       - span (is always set in full metadata mode; it is an original
         expression forming result set column. For instance:
         SELECT a + 1 AS x; -- x is a name, meanwhile a + 1 is a span);
         is encoded with IPROTO_FIELD_SPAN (0x5) key in IPROTO_METADATA map;
         in msgpack is encoded as string and held with MP_STR type OR
         as NIL with MP_NIL type. The latter case indicates that the span
         coincides with the name. This simple optimization allows us to
         avoid sending the same string twice.
      
      This extended metadata is sent only when PRAGMA full_metadata is
      enabled. Otherwise, only the basic (name and type) metadata is
      processed.
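
      A hypothetical Lua session showing how this could be observed (the
      table name, the exact pragma value syntax, and the Lua key under
      which the span is exposed are assumptions here):

          box.execute([[CREATE TABLE m (a INTEGER PRIMARY KEY)]])
          box.execute([[PRAGMA full_metadata = true]])
          res = box.execute([[SELECT a + 1 AS x FROM m]])
          -- res.metadata[1].name is expected to be 'x' (the alias),
          -- while the span property should carry the original
          -- expression 'a + 1'.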
    • test: trim field name update from serializer test · 408b9ad4
      Mergen Imeev authored
      
      The test for re-encoding of the -2^63 Lua number value used an
      update by a field name, which is not supported in the 1.10 and 2.2
      branches. Field name updates are orthogonal to Lua number
      serialization, and we don't intend to test them here. So it is safe
      and logical to get rid of them in the test.

      This change allows the test to pass on the 1.10 and 2.2 branches.
      
      Follows up #4672.
      
      Reviewed-by: Alexander Tikhonov <avtikhon@tarantool.org>
      Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
  3. Dec 28, 2019
    • sql: display line and position in syntax errors · ea958f41
      Nikita Pettik authored
      When it comes to huge queries, it may be useful to see the exact
      position of an error. Hence, let's now display the line, and the
      position within that line, near which a syntax error takes place.
      Note that this can be done only during the parsing process (since
      the AST can be analysed only after its construction is completed),
      so most semantic errors still don't contain a position. A few errors
      have been reworked to match the new formatting patterns.
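
      For example, a multi-line query with a typo (a sketch; the exact
      error wording is not reproduced here):

          box.execute([[SELECT 1,
                        2,,
                        3;]])
          -- The reported syntax error is expected to mention the line
          -- and the position within it where parsing failed.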
      
      The first iteration of this patch was implemented by @romanhabibov.
      
      Closes #2611
  4. Dec 27, 2019
    • sql: introduce DOUBLE type · 64745b10
      Mergen Imeev authored
      This patch introduces type DOUBLE in SQL.
      
      Closes #3812
      Needed for #4233
      
      @TarantoolBot document
      Title: Tarantool DOUBLE field type and DOUBLE type in SQL
      The DOUBLE field type was added to Tarantool mainly for adding the
      DOUBLE type to SQL. Values of this type are stored as MP_DOUBLE in
      msgpack. The size of the encoded value is always 9 bytes.
      
      In Lua, only non-integer numbers and CDATA of type DOUBLE can be
      inserted into this field. You cannot insert integers of Lua type
      NUMBER or CDATA of type int64 or uint64 into this field. The same
      rules apply to keys in the get(), select(), update() and upsert()
      methods. It was done this way to avoid unwanted implicit casts
      that could affect performance.
      
      It is important to note that you can use the ffi.cast() function
      to cast numbers to CDATA of type DOUBLE. An example of this can be
      seen below.
      
      Another very important point is that CDATA of type DOUBLE in Lua
      can be used in arithmetic, but the arithmetic does not work
      correctly for such values. This comes from LuaJIT and most likely
      will not be fixed.
      
      Example of usage in Lua:
      s = box.schema.space.create('s', {format = {{'d', 'double'}}})
      _ = s:create_index('ii')
      s:insert({1.1})
      ffi = require('ffi')
      s:insert({ffi.cast('double', 1)})
      s:insert({ffi.cast('double', tonumber('123'))})
      s:select(1.1)
      s:select({ffi.cast('double', 1)})
      
      In SQL, the DOUBLE type behaves differently due to implicit
      casting. A number of any supported type can be inserted into a
      column of type DOUBLE. However, the stored value may differ from
      the inserted one because of the rules for casting to DOUBLE. In
      addition, after this patch all floating point literals are
      recognized as DOUBLE; prior to that, they were considered NUMBER.
      
      Example of usage in SQL:
      box.execute('CREATE TABLE t (d DOUBLE PRIMARY KEY);')
      box.execute('INSERT INTO t VALUES (10), (-2.0), (3.3);')
      box.execute('SELECT * FROM t;')
      box.execute('SELECT d / 100 FROM t;')
      box.execute('SELECT * from t WHERE d < 15;')
      box.execute('SELECT * from t WHERE d = 3.3;')
    • box: introduce DOUBLE field type · d8193eb1
      Mergen Imeev authored
      This patch creates the DOUBLE field type in Tarantool. The main
      purpose of this field type is to add the DOUBLE type to SQL.
      
      Part of #3812
    • sql: fix index consideration with INDEXED BY clause · 49fedfe3
      Nikita Pettik authored
      The number of indexes to be considered during query planning in the
      presence of INDEXED BY is accidentally calculated wrong. Instead of
      the single index named in the INDEXED BY clause (INDEXED BY is not a
      hint but a requirement), all space indexes take part in query
      planning. There are not many tests checking this feature, so
      unfortunately this bug was hidden. Let's fix it and force only one
      index to be used in query planning when an INDEXED BY clause is
      present.
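
      A minimal illustration of the clause in question (the schema below
      is made up for this example):

          box.execute([[CREATE TABLE books (id INTEGER PRIMARY KEY,
                                            author STRING)]])
          box.execute([[CREATE INDEX by_author ON books (author)]])
          -- With the fix, only the 'by_author' index is considered by
          -- the planner for this query:
          box.execute([[SELECT * FROM books INDEXED BY by_author
                        WHERE author = 'Tolstoy']])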
  5. Dec 25, 2019
    • sql: extend result set with autoincrement · fd271dc7
      Nikita Pettik authored
      If the result set contains a column with an attached sequence
      (AUTOINCREMENT in terms of SQL), then the meta-information in the
      response will contain the corresponding field
      ('is_autoincrement': boolean).
      
      Part of #4407
    • sql: extend result set with nullability · 8553115e
      Nikita Pettik authored
      If a member of the result set is (solely) a column identifier, then
      the metadata will contain the corresponding field's nullability as a
      boolean property. Note that indicating nullability for other
      expressions (like x + 1) may make sense, but it requires a derived
      nullability calculation, which seems to be overkill (at least in the
      scope of the current patch).
      
      Part of #4407
    • sql: extend result set with collation · 808cd12e
      Nikita Pettik authored
      If a result set column is of STRING type and has a collation (no
      matter whether explicit or implicit) different from "none", then the
      metadata will contain its name.

      This patch also introduces a new pragma: full_metadata. By default
      it is not set. If it is turned on, then optional metadata (like
      collation) is pushed onto the Lua stack. Note that full metadata is
      always sent via the IProto protocol, but its decoding depends on the
      session SQL settings.
      
      Part of #4407
    • sql: refactor resulting set metadata · c017c5fa
      Nikita Pettik authored
      Move the names and types of the result set to a separate structure.
      Simplify their storage by introducing separate members for the name
      and the type (previously names and types were stored in one char *
      array). This will allow us to add new metadata properties with ease.
      
      Needed for #4407
  6. Dec 24, 2019
    • sql: allow nil to be returned from UDF · 1b39cbcf
      Nikita Pettik authored
      Any user-defined function has an assumed type of its return value
      (if it is not set explicitly during UDF creation, it is ANY). After
      the function's invocation in SQL, the type of the returned value is
      checked to be compatible with the return type specified in the
      function's definition. This is done by means of
      field_mp_plain_type_is_compatible(). This function accepts an
      'is_nullable' argument which indicates whether the value can be
      nullable or not. For some reason 'is_nullable' is set to 'false' in
      our particular case. Hence, nils can't be returned from UDFs for
      SCALAR types.

      Since there's no reason why nils can't be returned from UDFs,
      let's fix this unfortunate bug.
    • sql: drop only generated sequence in DROP TABLE · a1155c8b
      Chris Sosnin authored
      It is possible to create a sequence manually, and give it to a newly
      created index as a source of unique identifiers. Such sequences are not
      owned by a space, and therefore shouldn't be deleted when the space is
      dropped. They are not dropped when space:drop() in Lua is called, but
      were dropped in SQL 'DROP TABLE' before this patch. Now Lua and SQL are
      consistent in that case.
    • sql: fix empty-body warning · 1d95fc04
      Nikita Pettik authored
      GCC features a warning diagnostic which detects a stray ';' right
      after an 'if' statement:
      
      if (X == Y); {
          ...
      }
      
      In this case, despite the evaluation of the 'if' condition
      expression, the statements after it will always be executed.
      
      According to our code style, we omit braces around the 'if' body
      when it consists of a single statement and fits into one line. On
      the other hand, in SQL, debug macros like VdbeComment() are defined
      as empty, so statements like:
      
      if (X)
          VdbeComment();
      
      turn into
      
      if (X) ;
      
      in release builds. As a result, we get a false warning (which is a
      compilation error in -Werror mode). To fix it, let's make the
      VdbeComment() macros non-empty in release mode and expand them into
      (void) 0.
    • sql: remove grants associated with the table · d2ea6e41
      Chris Sosnin authored
      Dropping a table with SQL removes everything associated with it
      except grants, which is inconsistent. Generating code for grant
      removal fixes this bug.
      
      Closes #4546