- Dec 29, 2019
-
-
Nikita Pettik authored
Each column of a result set has, in full metadata mode, its span in addition to its name. For instance: SELECT x + 1 AS add FROM ...; Here the original expression (span) of the result set column is "x + 1", while "add" is its alias. This patch extends the metadata with a member which corresponds to the column's original expression. It is worth mentioning that in most cases the span coincides with the name, so to avoid overhead and sending the same string twice we follow the rule that if the span is encoded as MP_NIL, then its value is the same as the name. Also note that the span is always present in full metadata mode. Closes #4407
@TarantoolBot document
Title: extended SQL metadata
Before this patch the metadata for SQL DQL contained only two fields: the name and the type of each column of the result set. Now it may contain the following properties:
- collation (in case the type of the result set column is string and the collation differs from the default "none"); encoded with the IPROTO_FIELD_COLL (0x2) key in the IPROTO_METADATA map; in msgpack it is encoded as a string and held with the MP_STR type;
- is_nullable (in case the column of the result set corresponds to a space's field; for expressions like x + 1, nullability is omitted for the sake of simplicity); encoded with the IPROTO_FIELD_IS_NULLABLE (0x3) key in IPROTO_METADATA; in msgpack it is encoded as a boolean and held with the MP_BOOL type; note that the absence of this field implies that nullability is unknown;
- is_autoincrement (set only for autoincrement columns in the result set); encoded with the IPROTO_FIELD_IS_AUTOINCREMENT (0x4) key in IPROTO_METADATA; in msgpack it is encoded as a boolean and held with the MP_BOOL type;
- span (always set in full metadata mode; it is the original expression forming the result set column. For instance: SELECT a + 1 AS x; -- x is the name, while a + 1 is the span); encoded with the IPROTO_FIELD_SPAN (0x5) key in the IPROTO_METADATA map; in msgpack it is encoded as a string and held with the MP_STR type OR as NIL with the MP_NIL type. The latter case indicates that the span coincides with the name. This simple optimization avoids sending the same string twice.
This extended metadata is sent only when PRAGMA full_metadata is enabled. Otherwise, only basic (name and type) metadata is processed.
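A hedged Lua-side illustration of the above (the table name, the pragma invocation form and the Lua metadata key names are assumptions for the sake of the example, not guaranteed API):
```
-- Sketch only: assumes the pragma syntax and Lua-side metadata keys shown below.
box.execute([[CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, s STRING COLLATE "unicode");]])
box.execute([[PRAGMA full_metadata = true;]])
local res = box.execute([[SELECT id, s, id + 1 AS next_id FROM t;]])
-- With full metadata each res.metadata entry is expected to carry, besides
-- name and type, the optional properties described above: collation,
-- is_nullable, is_autoincrement and span (the span of "next_id" is "id + 1").
for _, col in ipairs(res.metadata) do
    print(col.name, col.type, col.collation, col.is_nullable, col.is_autoincrement, col.span)
end
```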
-
Mergen Imeev authored
The test for encoding of the -2^63 Lua number value used an update by field name, which is not supported in the 1.10 and 2.2 branches. Field name updates are orthogonal to Lua number serialization, and we don't intend to test them here, so it is safe and logical to drop them from the test. This change allows the test to pass on the 1.10 and 2.2 branches. Follows up #4672.
Reviewed-by: Alexander Tikhonov <avtikhon@tarantool.org>
Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
-
- Dec 28, 2019
-
-
Nikita Pettik authored
For huge queries it may turn out to be useful to see the exact position of an error. Hence, let's now display the line and the position within that line near which a syntax error takes place. Note that this can only be done during the parsing process (since the AST can be analysed only after its construction is completed), so most semantic errors still don't contain a position. A few errors have been reworked to match the new formatting patterns. The first iteration of this patch was implemented by @romanhabibov Closes #2611
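A hedged sketch of the new behaviour (the statement is illustrative and the exact error wording may differ):
```
-- A deliberately broken multi-line statement; the reported syntax error is
-- expected to mention the line and the in-line position of the bad token.
box.execute('SELECT id\nFROM t\nWHER id > 1;')
-- e.g. a syntax error near 'WHER' on line 3 (approximate wording).
```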
-
- Dec 27, 2019
-
-
Mergen Imeev authored
This patch introduces type DOUBLE in SQL. Closes #3812 Needed for #4233
@TarantoolBot document
Title: Tarantool DOUBLE field type and DOUBLE type in SQL
The DOUBLE field type was added to Tarantool mainly for adding the DOUBLE type to SQL. Values of this type are stored as MP_DOUBLE in msgpack, and the size of an encoded value is always 9 bytes. In Lua, only non-integer numbers and CDATA of type DOUBLE can be inserted into this field. You cannot insert integers of Lua type NUMBER or CDATA of type int64 or uint64 into this field. The same rules apply to keys in the get(), select(), update() and upsert() methods. It was done this way to avoid unwanted implicit casts that could affect performance. It is important to note that you can use the ffi.cast() function to cast numbers to CDATA of type DOUBLE; an example can be seen below. Another very important point is that CDATA of type DOUBLE in Lua can be used in arithmetic, but the arithmetic does not work correctly for it. This comes from LuaJIT and most likely will not be fixed.
Example of usage in Lua:
s = box.schema.space.create('s', {format = {{'d', 'double'}}})
_ = s:create_index('ii')
s:insert({1.1})
ffi = require('ffi')
s:insert({ffi.cast('double', 1)})
s:insert({ffi.cast('double', tonumber('123'))})
s:select(1.1)
s:select({ffi.cast('double', 1)})
In SQL, the behaviour of the DOUBLE type is different due to implicit casting. A number of any supported type can be inserted into a column of type DOUBLE. However, the stored value may differ from the inserted one due to the rules of casting to DOUBLE. In addition, after this patch all floating point literals are recognized as DOUBLE; prior to that they were considered NUMBER.
Example of usage in SQL:
box.execute('CREATE TABLE t (d DOUBLE PRIMARY KEY);')
box.execute('INSERT INTO t VALUES (10), (-2.0), (3.3);')
box.execute('SELECT * FROM t;')
box.execute('SELECT d / 100 FROM t;')
box.execute('SELECT * from t WHERE d < 15;')
box.execute('SELECT * from t WHERE d = 3.3;')
-
Mergen Imeev authored
This patch creates DOUBLE field type in Tarantool. The main purpose of this field type is to add DOUBLE type to SQL. Part of #3812
-
Nikita Pettik authored
The number of indexes to be considered during query planning in the presence of INDEXED BY was accidentally calculated wrong. Instead of the single index named in the INDEXED BY clause (INDEXED BY is not a hint but a requirement), all space indexes took part in query planning. There are not many tests checking this feature, so unfortunately this bug remained hidden. Let's fix it and force only one index to be used by the query planner when an INDEXED BY clause is present.
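A minimal sketch of the expected behaviour (the table and index names are made up for the example):
```
box.execute([[CREATE TABLE t (id INTEGER PRIMARY KEY, a INTEGER, b INTEGER);]])
box.execute([[CREATE INDEX i_a ON t (a);]])
box.execute([[CREATE INDEX i_b ON t (b);]])
-- INDEXED BY is a requirement, not a hint: only i_a may be considered here.
box.execute([[SELECT * FROM t INDEXED BY i_a WHERE a > 0 AND b > 0;]])
```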
-
- Dec 25, 2019
-
-
Nikita Pettik authored
If the result set contains a column backed by an attached sequence (AUTOINCREMENT in terms of SQL), then the meta-information in the response will contain the corresponding field ('is_autoincrement': boolean). Part of #4407
-
Nikita Pettik authored
If a member of the result set is (solely) a column identifier, then the metadata will contain the nullability of the corresponding field as a boolean property. Note that indicating nullability for other expressions (like x + 1) may make sense, but it requires derived nullability calculation, which seems to be overkill (at least in the scope of the current patch). Part of #4407
-
Nikita Pettik authored
If a result set column is of STRING type and has a collation (explicit or implicit) different from "none", then the metadata will contain the collation's name. This patch also introduces a new pragma: full_metadata. By default it is not set. If it is turned on, then optional metadata (like collation) is pushed onto the Lua stack. Note that over the IProto protocol full metadata is always sent, but its decoding depends on the session's SQL settings. Part of #4407
-
- Dec 24, 2019
-
-
Nikita Pettik authored
Any user-defined function has an assumed type of its returned value (if it is not set explicitly during UDF creation, it is ANY). After a function's invocation in SQL, the type of the returned value is checked for compatibility with the return type specified in the function's definition. It is done by means of field_mp_plain_type_is_compatible(). This function accepts an 'is_nullable' argument which indicates whether the value can be nullable or not. For some reason 'is_nullable' was set to 'false' in our particular case, hence nils couldn't be returned from a UDF for SCALAR types. Since there's no reason why nils can't be returned from UDFs, let's fix this unfortunate bug.
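A hedged sketch of a SCALAR UDF returning nil (the function name and the exact creation options are assumptions for the example):
```
-- Assumes box.schema.func.create() options as shown; names are illustrative.
box.schema.func.create('RETURN_NIL', {
    language = 'Lua',
    body = 'function(x) return nil end',
    returns = 'scalar',
    param_list = {'integer'},
    exports = {'LUA', 'SQL'},
})
-- Before the fix this failed the return type check; now NULL is accepted.
box.execute([[SELECT RETURN_NIL(1);]])
```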
-
Chris Sosnin authored
It is possible to create a sequence manually, and give it to a newly created index as a source of unique identifiers. Such sequences are not owned by a space, and therefore shouldn't be deleted when the space is dropped. They are not dropped when space:drop() in Lua is called, but were dropped in SQL 'DROP TABLE' before this patch. Now Lua and SQL are consistent in that case.
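A hedged illustration (names are made up): a sequence created by hand and attached to an index is not owned by the space, so it should survive both space:drop() and DROP TABLE:
```
box.schema.sequence.create('seq')
local s = box.schema.space.create('t', {format = {{'id', 'unsigned'}}})
s:create_index('pk', {sequence = 'seq'})
box.execute([[DROP TABLE "t";]])
assert(box.sequence.seq ~= nil)  -- the manually created sequence is kept
```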
-
Chris Sosnin authored
Dropping a table with SQL removes everything associated with it except grants, which is inconsistent. Generating code for them as well fixes this bug. Closes #4546
-
- Dec 19, 2019
-
-
Mergen Imeev authored
This patch fixes a bug that appeared after commit 3a13be1d ('lua: fix lua_tofield() for 2**64 value'). Due to this bug, -2^63 was serialized as a double, although it should be serialized as an integer. Closes #4672
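A hedged way to observe the serialization from Lua (assuming the default msgpack serializer reflects lua_tofield() behaviour):
```
local msgpack = require('msgpack')
local data = msgpack.encode(-2^63)
-- 0xd3 is the MsgPack int64 marker; a double would start with 0xcb.
print(string.format('0x%02x', data:byte(1)))
```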
-
Vladislav Shpilevoy authored
An isolated tuple update is an update by a JSON path which has no common prefix with any other JSON update operation in the same set. For example, these JSON update operations are isolated: {'=', '[1][2][3]', 100}, {'+', '[2].b.c', 200}. Their JSON paths have no common prefix. But these operations are not isolated: {'=', '[1][2][3]', 100}, {'+', '[1].b.c', 200}. They have a common prefix '[1]'. Isolated updates are the first part of fully functional JSON updates. Their advantage is that their implementation is relatively simple and lightweight - an isolated JSON update does not store each part of the JSON path as a separate object; it stores just the string with the JSON path and a pointer to the MessagePack object to update. Such isolated updates are called 'bar updates'. They are a basic building block of more complex JSON updates. Part of #1261
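A hedged Lua illustration of isolated (bar) updates; the tuple contents are made up:
```
local t = box.tuple.new({ { {1, 2, 3}, {4, 5, 6} }, {b = {c = 100}} })
-- The two paths below share no common prefix, so each one is an isolated
-- 'bar' update: just a JSON path string plus a pointer into the MessagePack.
t = t:update({ {'=', '[1][2][3]', 100}, {'+', '[2].b.c', 200} })
```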
-
- Dec 17, 2019
-
-
Nikita Pettik authored
Some built-in functions can accept a varying number of arguments. The argument count check for such functions takes place right before execution. So it is possible that the expression list representing the arguments of a built-in function is NULL. On the other hand, in sql_expr_coll() (which returns the collation of an expression) it is assumed that if a function has the SQL_FUNC_DERIVEDCOLL flag (implying that the resulting collation depends on the collation of the arguments), then it has at least one argument. The latter assumption is wrong considering, for example, the SUBSTR() function: it may have 1 or 2 arguments, so the argument count check doesn't occur during compilation. Hence, if the collation of a SUBSTR() call has to be calculated and there are no arguments, Tarantool crashes due to a null dereference. This patch fixes the bug with one additional check in sql_expr_coll().
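A hedged illustration (the exact statement that used to hit the crash may differ): the point is that an argument-less SUBSTR() whose collation has to be derived now ends up as a regular error instead of a null dereference.
```
-- A comparison forces collation derivation for both operands; SUBSTR() has
-- no arguments here, the shape of expression that used to reach
-- sql_expr_coll() with an empty argument list.
box.execute([[SELECT SUBSTR() = 'x' COLLATE "unicode";]])
```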
-
- Dec 11, 2019
-
-
Vladislav Shpilevoy authored
The test started failing after my commit: ca07088c (func: fix not unloading of unused modules), because I forgot to update the result file. Follow up #4648
-
Mergen Imeev authored
This patch fixes a bug that prevented the conversion of real values that are greater than INT64_MAX and less than UINT64_MAX to INTEGER and UNSIGNED. Closes #4526
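A hedged example (the literal is arbitrary): a value between 2^63 and 2^64 can now be cast to UNSIGNED:
```
-- 1.5e19 lies between INT64_MAX (~9.22e18) and UINT64_MAX (~1.84e19).
box.execute([[SELECT CAST(1.5e19 AS UNSIGNED);]])
```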
-
- Dec 10, 2019
-
-
Vladislav Shpilevoy authored
C functions are loaded from .so/.dylib dynamic libraries. A library is loaded when any function from it is called for the first time, and it was supposed to be unloaded when all its functions are dropped from the schema (box.schema.func.drop()) and none of them is still in a call. But the unloading part was broken. In fact, box.schema.func.drop() never unloaded anything. Moreover, when functions from the module were added again without a restart, it led to a second mmap of the same module, and so on - the same library could be loaded any number of times. The problem was a useless flag in struct module preventing its unloading even when it was totally unused. The flag is dropped. Closes #4648
-
Vladislav Shpilevoy authored
Error injections are used to simulate an error. They are represented as a flag or a number and are used in Lua tests. But they don't provide any feedback, which makes it impossible to use the injections to check that something has happened - something that really needs to be checked and is impossible to check in a different way. More concretely, the patch is motivated by the necessity to count loaded dynamic libraries to ensure that they are loaded and unloaded when expected. This is impossible to do in a platform-independent way, but an error injection used as a debug-only counter solves the problem. Needed for #4648
-
Vladislav Shpilevoy authored
There is a bug in XCode 11 which makes some standard C headers not self-sufficient when compiled with gcc. At least <stdlib.h> and <algorithm> are affected. When they are included first, compilation fails with creepy errors like this:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/wait.h:110,
                 from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdlib.h:66,
                 from tarantool/third_party/zstd/lib/common/zstd_common.c:16:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/resource.h: In function 'getiopolicy_np':
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/resource.h:447:34: error: expected declaration specifiers before '__OSX_AVAILABLE_STARTING'
  447 | int getiopolicy_np(int, int) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_2_0);
The patch works around the bug by deleting the buggy header includes where possible, and by changing the include order in other cases. There was also a second compilation problem, about different definitions of the same standard functions: via extern "C" and without. It looked like this:
In file included from tarantool/src/trivia/util.h:36,
                 from tarantool/src/tt_pthread.h:35,
                 from tarantool/src/lib/core/fiber.h:38,
                 from tarantool/src/lib/core/coio.h:33,
                 from tarantool/src/lib/core/coio.cc:31:
/usr/local/Cellar/gcc/9.2.0_1/lib/gcc/9/gcc/x86_64-apple-darwin18/9.2.0/include-fixed/stdio.h:222:7: error: conflicting declaration of 'char* ctermid(char*)' with 'C' linkage
  222 | char *ctermid(char *);
      |       ^~~~~~~
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/unistd.h:525,
                 from tarantool/src/lib/core/fiber.h:37,
                 from tarantool/src/lib/core/coio.h:33,
                 from tarantool/src/lib/core/coio.cc:31:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_ctermid.h:26:10: note: previous declaration with 'C++' linkage
   26 | char *ctermid(char *);
      |       ^~~~~~~
This bug is worked around by deleting the conflicting includes, because they turned out not to be needed anyway. Closes #4580 Conflicts: third_party/decNumber
-
Chris Sosnin authored
The unicode_ci collation breaks the general rule for object naming, so we remove it in version 2.3.1. Closes #4561
-
- Dec 05, 2019
-
-
Vladislav Shpilevoy authored
The problem was that the test used the global trigger box.session.on_disconnect() to set a global variable from one connection. But test-run can do multiple connects/reconnects to the same instance. That led to multiple invocations of box.session.on_disconnect(), which could override the global variable in unexpected ways and at unexpected moments. The patch makes only one session execute that trigger. Probably related to https://github.com/tarantool/test-run/issues/46 Follow up #4627
Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
-
- Dec 02, 2019
-
-
Ilya Kosarev authored
There were some pass conditions in the quorum test which could take some time to be satisfied. Now they are wrapped using test_run:wait_cond to make the test stable. Closes #4586
-
Ilya Kosarev authored
In replicaset_follow we iterate over the anon replicas list: the list of replicas that haven't received a UUID. In case of a successful connect the replica link is removed from the anon list. If this happens immediately, without a yield in the applier, the iteration breaks. It is now fixed by using rlist_foreach_entry_safe instead of the common rlist_foreach_entry. A relevant test case is added. Part of #4586 Closes #4576 Closes #4440
-
- Nov 26, 2019
-
-
Vladislav Shpilevoy authored
A yield in the binary session disconnect trigger could lead to use-after-free of the session object. That happened because the iproto thread sent two requests to the TX thread at disconnect:
- close the session and run its on disconnect triggers;
- if all requests are handled, destroy the session.
When a connection is idle, all requests are handled, so both these requests are sent. If the first one yielded in the TX thread, the second one arrived and destroyed the session right under the feet of the first one. This can be solved in two ways - in the TX thread, or in the iproto thread. Iproto thread solution (chosen in the patch): just don't send the destroy request until disconnect returns back to the iproto thread. TX thread solution (alternative): add a flag which says whether disconnect has been processed by TX. When the destroy request arrives, it checks the flag; if disconnect is not done, the destroy request waits on a condition variable until it is. The iproto solution is a bit trickier to implement, but it looks more correct. Closes #4627
-
- Nov 21, 2019
-
-
Vladislav Shpilevoy authored
Replication's applier encoded an auth request with exactly the same parameters as extracted by the URI parser, i.e. when no password was specified, the parser returned it as NULL and it was not encoded. The relay, having received such an auth request, complained that the IPROTO_TUPLE field (the password) is not specified. Such an error is confusing - the user didn't do anything illegal, they just used a URI like 'login@host:port' without a password after the login. The patch makes the applier use an empty string as the default password. An alternative was to force users to always set a password, even an empty one, like this: 'login:@host:port', and if a password was not found in an auth request, reject it with a password mismatch error. But in that case a URI of the kind 'login@host:port' becomes useless - it could never pass. In addition, netbox already uses an empty string as the default password. So the only way to make it consistent and not break anything is to repeat the netbox logic for replication URIs. Closes #4605 Conflicts: test/replication/suite.cfg
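A hedged configuration sketch (the host, port and user name are made up): a replication URI without a password now implies an empty password, matching net.box behaviour.
```
-- Assumes the master has a user 'replicator' with an empty password and the
-- 'replication' role granted; this call will try to connect to that master.
box.cfg{
    replication = {'replicator@192.168.0.1:3301'},
}
```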
-
Vladislav Shpilevoy authored
Box.info.replication shows the applier/relay's latest error message. But it didn't include the errno description for system errors, even though it was included in the logs. Now box.info shows the errno description as well, when possible. Closes #4402 Conflicts: test/replication/suite.cfg
-
Vladislav Shpilevoy authored
The only error type having an errno as a part of it was SystemError (and its descendants SocketError, TimedOut, OOM, ...). That was used in the logs (the SystemError::log() method) and exposed to Lua (if the type was SystemError, the error object had an 'errno' field). But actually errno may be useful not only there. For example, box.info.replication exposes the latest error message of the applier/relay as the 'message' field of the 'upstream'/'downstream' fields, lacking the errno description. Before the patch it was cumbersome to obtain an errno code from C: it was necessary to check whether an error has SystemError type, cast it to the SystemError class and call the SystemError::get_errno() method. Now errno is available as a part of the struct error object (accessible from C) and is not 0 for system errors. Part of #4402
-
Vladislav Shpilevoy authored
Box.session.su() raised 'SystemError' when a user was not found due to a too long user name. That was obviously wrong, because SystemError is always something related to libraries (standard, curl, etc), and it has an errno code. Now a ClientError is raised.
-
Serge Petrenko authored
fiber.top() fills in statistics every event loop iteration, so if it has just been enabled, fiber.top() returns zeros in fiber cpu usage statistics, because the total time consumed by the main thread has not yet been accounted for. The same holds for viewing top() results for a freshly created fiber: its metrics will be zero since it hasn't lived a full ev loop iteration yet. Fix this by delaying the test until top() results are meaningful, and add minor refactoring. Follow-up #2694
-
- Nov 15, 2019
-
-
Alexander Turenko authored
The problem appeared after 6c627af3 ('test: tarantoolctl: verify delayed box.cfg()'), where the test case was changed so that it no longer assumes an error at instance start. So we need to stop the instance to prevent a situation when instances stay around after `make test`. Fixes #4600.
Reviewed-by: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
-
- Nov 14, 2019
-
-
Alexander Turenko authored
Before commit 03f85d4c ('app: fix boolean handling in argparse module') the module did not expect a value after a 'boolean' argument. However, there was a problem: a 'boolean' argument could be passed only at the end of an argument list, otherwise it wrongly consumed the next argument and gave a confusing error message. The mentioned commit fixed this behaviour in the following way: it still allows passing a 'boolean' argument at the end of the list without a value, but requires a value ('true', 'false', '1', '0') to be provided using the {'--foo=true'} or {'--foo', 'true'} syntax if a 'boolean' argument is not at the end. Here this behaviour is changed: a 'boolean' argument does not assume an explicitly passed value regardless of its position in the argument list. If a 'boolean' argument appears in the list, then argparse.parse() returns `true` for its value (a list of `true` values in case of a 'boolean+' argument); otherwise it will not be added to the result. This change also makes the behaviour of long (--foo) and short (-f) 'boolean' options consistent. The motivation for the change is simple: it is easier and more natural to type, say, `tarantoolctl cat --show-system 00000000000000000000.snap` than `tarantoolctl cat --show-system true 00000000000000000000.snap`. This commit adds several new test cases, but that does not mean we guarantee that the module behaviour will not change around some corner cases, say, the handling of 'boolean+' arguments: this is an internal module. Follows up #4076.
Reviewed-by: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
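A hedged sketch of the resulting behaviour of the internal module (the require path and the option-table format shown here are assumptions):
```
local argparse = require('internal.argparse')
-- '--verbose' is declared 'boolean' and no longer consumes the following
-- positional argument, regardless of its position in the list.
local res = argparse.parse({'--verbose', '00000000000000000000.snap'},
                           {{'verbose', 'boolean'}})
-- Expected under these assumptions: res.verbose == true and the snap file
-- name remains a positional argument.
```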
-
- Nov 12, 2019
-
-
Vladislav Shpilevoy authored
Bootstrap and recovery work on behalf of admin. Without universe access they are not able to even fill system spaces with data. It is better to forbid this ability before someone makes their cluster unrecoverable.
-
Vladislav Shpilevoy authored
The admin user has universal privileges before bootstrap or recovery are done. That allows, for example, bootstrapping from a remote master, because to do that admin should be able to insert into system spaces, such as _priv. But after the patch on online credentials update was implemented (#2763, 48d00b0e), admin could lose its universal access if, for example, a role was granted to it before universal access was recovered. That happened for two reasons:
- any change in access rights, even in granted roles, led to a rebuild of universal access;
- any change in access rights updated the universal access in all existing sessions, thanks to #2763.
What happened: two tarantools were started. One of them, the master, granted the 'replication' role to admin. The second node, the slave, tried to bootstrap from the master. The slave created an admin session and started loading data. After it loaded the 'grant replication role to admin' command, admin's universal access was nullified everywhere, including in this session. The subsequent rows could not be applied. Closes #4606
-
- Nov 11, 2019
-
-
Alexander V. Tikhonov authored
After issue #4537 (the data segment size limit) was fixed, the tests temporarily blocked because of it are unblocked. Part of #4271
-
- Nov 09, 2019
-
-
Serge Petrenko authored
Implement a new function in the Lua fiber library: top(). It returns a table containing fiber cpu usage stats. The table has two entries: "cpu_misses" and "cpu". "cpu" itself is a table listing all the alive fibers and their cpu consumption. The patch relies on the CPU timestamp counter to measure each fiber's time share. Closes #2694
@TarantoolBot document
Title: fiber: new function `fiber.top()`
`fiber.top()` returns a table of all alive fibers and lists their cpu consumption. Let's take a look at the example:
```
tarantool> fiber.top()
---
- cpu:
    107/lua:
      instant: 30.967324490456
      time: 0.351821993
      average: 25.582738345233
    104/lua:
      instant: 9.6473633128437
      time: 0.110869897
      average: 7.9693406131877
    101/on_shutdown:
      instant: 0
      time: 0
      average: 0
    103/lua:
      instant: 9.8026528631511
      time: 0.112641118
      average: 18.138387232255
    106/lua:
      instant: 20.071174377224
      time: 0.226901357
      average: 17.077908441831
    102/interactive:
      instant: 0
      time: 9.6858e-05
      average: 0
    105/lua:
      instant: 9.2461986412164
      time: 0.10657528
      average: 7.7068458630827
    1/sched:
      instant: 20.265286315108
      time: 0.237095335
      average: 23.141537169257
  cpu_misses: 0
...
```
The two entries in the table returned by `fiber.top()` are `cpu_misses` and `cpu`. `cpu` itself is a table whose keys are strings containing fiber ids and names. The three metrics available for each fiber are:
1) instant (percent), which indicates the share of time the fiber was executing during the previous event loop iteration;
2) average (percent), which is calculated as an exponential moving average of `instant` values over all previous event loop iterations;
3) time (seconds), which estimates how much cpu time the fiber spent processing during its lifetime.
More info on the `cpu_misses` field returned by `fiber.top()`: `cpu_misses` indicates the number of times the tx thread detected it was rescheduled to a different cpu core during the last event loop iteration. fiber.top() uses the cpu timestamp counter to measure each fiber's execution time. However, each cpu core may have its own counter value (you can only rely on counter deltas if both measurements were taken on the same core, otherwise the delta may even get negative). When the tx thread is rescheduled to a different cpu core, tarantool just assumes the cpu delta was zero for the latest measurement. This lowers the precision of the computations, so the bigger the `cpu_misses` value, the lower the precision of fiber.top() results.
fiber.top() doesn't work on the arm architecture at the moment. Please note that enabling fiber.top() slows down fiber switching by about 15 percent, so it is disabled by default. To enable it you need to issue `fiber.top_enable()`. You can disable it back after you have finished debugging using `fiber.top_disable()`.
A "time" entry is also added to each fiber's output in fiber.info() (it duplicates the "time" entry from fiber.top().cpu per fiber). Note that "time" is only counted while fiber.top is enabled.
-
Vladislav Shpilevoy authored
Before the patch, update was implemented as a set of operations applicable to arrays only. It was fine until the appearance of field names and JSON paths, because a tuple is an array at the top level. But now there are four reasons to allow more complex updates of tuple field internals by JSON paths:
- tuple field access by JSON path is allowed, so for consistency JSON paths should be allowed in updates as well;
- JSON indexes are supported. A JSON update should be able to change an indexed field without rewriting half of a tuple or replacing a whole tuple field;
- Tarantool is going to support documents in storage, so JSON path updates are one more step forward;
- JSON updates are going to be faster and more compact in WAL than get + in-memory Lua/connector update + replace (or an update of a whole tuple field).
The patch reworks the current update code in such a way that an update is no longer just an array of operations applied to a tuple's top-level fields. Now it is a tree, just like tuples are. The concept is to build a tree of xrow_update_field objects, each of which updates a part of a tuple. Leaves in the tree contain the update operations specified by a user as xrow_update_op objects. To make the code easier to support and understand, the patch splits the update implementation into several independent files-modules, one for each type of updated field. One file describes how to update an array field, another - how to update a map field, etc. This commit introduces only arrays, simply because they were already supported before the patch. The next commits will introduce more types one by one.
Besides, the patch makes some minor changes, not separable from this commit:
- the big comment about xrow updates in xrow_update.c is updated; now it describes the tree idea presented above;
- comments were properly aligned to 66 symbols in all the moved or changed code; unaffected code is kept as is so as not to increase the diff even more;
- missing comments were added to moved or changed structures and their attributes, such as struct xrow_update, struct xrow_update_op_meta, struct xrow_update_op;
- struct xrow_update_field was significantly reworked. Now it is not just a couple of pointers into the tuple's top-level array. From now on it stores the type of the updated field, the range of its source data in the original tuple, and a subtree of other update fields applied to the original data;
- missing comments were added to some functions which were moved and deemed worth commenting alongside, such as xrow_update_op_adjust_field_no() and xrow_update_alloc();
- the functions xrow_update_op_do_f, xrow_update_op_read_arg_f, xrow_update_op_store_f are separated from struct xrow_update, so that they can be called on any updated field in the tree. From this moment they are methods of struct xrow_update_op: they take an op as the first argument (like 'this' in C++) and are applied to a given struct xrow_update_field.
Another notable, but not separable, change is a new naming schema for the methods of struct xrow_update_field and struct xrow_update_op. It is motivated by the fact that struct xrow_update_field now has a type and might not be a terminal. There are now two groups of functions. Generic functions working with a struct xrow_update_field of any type:
    xrow_update_field_sizeof
    xrow_update_field_store
    xrow_update_op_do_field_<operation>
And typed functions:
    xrow_update_<type>_sizeof
    xrow_update_<type>_store
    xrow_update_op_do_<type>_<operation>
where operation = insert/delete/set/arith ... and type = array/map/bar/scalar ...
Generic functions are used when the type of the field to update is not known in advance. For example, when an operation is applied to one of the fields of an array, it is not known what type this field has: another array, a scalar, a not changed field, a map, etc. Generic functions do nothing more than a switch on the field type to choose a more specific function. Typed functions work with a specific type. They may change the given field (add a new array element, replace it with a new value, ...), or may forward an operation deeper in case they see that its JSON path has not been fully traversed yet. Part of #1261
-
Vladislav Shpilevoy authored
This patch finishes the transformation of the tuple_update public API into xrow_update. Part of #1261
-
Vladislav Shpilevoy authored
Tuple_update is too general a name for the updates implemented in these files. Indeed, a tuple can be updated from Lua, from SQL, or via the update micro-language. Xrow_update is a more specific name, which is already widely used in tuple_update.c. Part of #1261
-
- Nov 08, 2019
-
-
Cyrill Gorcunov authored
When an invalid command is passed, we should send an error message to the client. Instead, a nil dereference occurred, causing an abnormal exit of the console. This is a regression from 96dbc49d ('box/console: Refactor command handling').
Reported-by: Mergen Imeev <imeevma@tarantool.org>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Reviewed-by: Alexander Turenko <alexander.turenko@tarantool.org>
-