- May 28, 2019
-
-
Kirill Yukhin authored
-
Nikita Pettik authored
It also fixes misbehaviour during insertion of boolean values into an integer field: the explicit CAST operator converts a boolean to an integer, whereas the implicit cast doesn't.
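A minimal sketch of the two forms contrasted above; the box.execute() SQL entry point and the table layout are assumptions for illustration only:

box.execute([[CREATE TABLE t (i INTEGER PRIMARY KEY);]])
box.execute([[INSERT INTO t VALUES (CAST(true AS INTEGER));]]) -- explicit CAST of a boolean
box.execute([[INSERT INTO t VALUES (true);]])                  -- implicit cast on insertion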
-
Nikita Pettik authored
As the previous commit says, we've decided to allow all meaningful explicit casts. One of them is conversion of a string containing a quoted float literal to an integer. Before this patch, that operation was not allowed:

SELECT CAST('1.123' AS INTEGER);
---
- error: 'Type mismatch: can not convert 1.123 to integer'
...

However, it could always be done in two steps:

SELECT CAST(CAST('1.123' AS REAL) AS INTEGER);

Now this cast can be done in a single CAST operation.

Closes #4229
-
Nikita Pettik authored
It was decided that all explicit casts for which we can come up with meaningful semantics should work. If a user requests an explicit cast, they most probably know what they are doing. CAST from REAL to BOOLEAN is disallowed by the ANSI rules. However, we allow CAST from INT to BOOLEAN, which is also prohibited by ANSI. So it is already possible to convert REAL to BOOLEAN in two steps:

SELECT CAST(CAST(1.123 AS INT) AS BOOLEAN);

For the reason mentioned above, we now allow a straight CAST from REAL to BOOLEAN. Anything different from 0.0 evaluates to TRUE.

Part of #4229
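A small usage sketch of the behaviour described above; the box.execute() wrapper is an assumption, the CAST semantics follow the message:

box.execute([[SELECT CAST(1.123 AS BOOLEAN);]]) -- TRUE: anything different from 0.0
box.execute([[SELECT CAST(0.0 AS BOOLEAN);]])   -- FALSE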
-
Nikita Pettik authored
OP_AddImm adds the constant defined by the P2 argument to memory cell P1. Before the addition, the content of the memory cell is converted to MEM_Int. However, judging by the usages of this opcode in the source code, the memory cell always initially contains an integer value. Hence, the conversion to integer can be replaced with a simple assertion.
-
- May 27, 2019
-
-
Georgy Kirichenko authored
Encode all statements to be written out to WAL onto a transaction memory region. This relaxes the coupling between a transaction and the fiber state and is required for the autonomous transaction feature.

Prerequisites: #1254
-
Konstantin Osipov authored
Issue pending, gh-4254.
-
Vladimir Davydov authored
Even if a statement isn't marked as VY_STMT_DEFERRED_DELETE, e.g. it's a REPLACE produced by an UPDATE request, it may overwrite a statement in the transaction write set that is marked so, for instance:

s = box.schema.space.create('test', {engine = 'vinyl'})
pk = s:create_index('pk')
sk = s:create_index('sk', {parts = {2, 'unsigned'}})
s:insert{1, 1}
box.begin()
s:replace{1, 2}
s:update(1, {{'=', 2, 3}})
box.commit()

If we don't mark REPLACE{3,1} produced by the update operation with the VY_STMT_DEFERRED_DELETE flag, we will never generate a DELETE statement for INSERT{1,1}. That is, we must inherit the flag from the overwritten statement when inserting a new one into a write set.

Closes #4248
-
Vladimir Davydov authored
Consider the following example:

s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('primary')
s:create_index('secondary', {parts = {2, 'unsigned'}})
s:insert{1, 1, 1}
s:replace{1, 1, 2}

When REPLACE{1,1} is committed to the secondary index, the overwritten tuple, i.e. INSERT{1,1}, is found in the primary index memory, so a deferred DELETE{1,1} is generated right away and committed along with REPLACE{1,1}. However, there's no need to commit anything to the secondary index in this case, because its key isn't updated. Apart from eating memory and loading the disk, this also breaks index stats, as the vy_tx implementation doesn't expect two statements committed for the same key in a single transaction. Fix this by checking whether there's a statement in the log for the deleted key and, if there is, skipping them both, as we do in the regular case; see the comment in vy_tx_set.

Closes #3693
-
Vladimir Davydov authored
If an UPDATE request doesn't touch key parts of a secondary index, we don't need to re-index it in the in-memory secondary index, as this would only increase the IO load. Historically, we use the column mask set by the UPDATE operation to skip secondary indexes that are not affected by the operation on commit. However, there's a problem here: the column mask isn't precise - it may have a bit set even if the corresponding column value isn't changed by the update operation, e.g. consider {'+', 2, 0}. Not taking this into account may result in the appearance of phantom tuples on disk, as the write iterator assumes that statements that have no effect aren't written to secondary indexes (this is needed to apply the INSERT+DELETE "annihilation" optimization). We fixed that by clearing column mask bits in vy_tx_set in case we detect that the key isn't changed; for more details see #3607 and commit e72867cb ("vinyl: fix appearance of phantom tuple in secondary index after update"). It was rather an ugly hack, but it worked. However, it turned out that apart from looking hackish this code has a nasty bug that may lead to tuples missing from secondary indexes. Consider the following example:

s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk')
s:create_index('sk', {parts = {2, 'unsigned'}})
s:insert{1, 1, 1}
box.begin()
s:update(1, {{'=', 2, 2}})
s:update(1, {{'=', 3, 2}})
box.commit()

The first update operation writes DELETE{1,1} and REPLACE{2,1} to the secondary index write set. The second update replaces REPLACE{2,1} with DELETE{2,1} and then with REPLACE{2,1}. When replacing DELETE{2,1} with REPLACE{2,1} in the write set, we assume that the update doesn't modify secondary index key parts and clear the column mask so as not to commit a pointless request, see vy_tx_set. As a result, we skip the first update too and get key {2,1} missing from the secondary index. Actually, it was a dumb idea to use the column mask to skip statements in the first place, as there's a much easier way to filter out statements that have no effect on secondary indexes. The thing is, every DELETE statement inserted into a secondary index write set acts as a "single DELETE", i.e. there's exactly one older statement it is supposed to purge. This is because, in contrast to the primary index, we don't write DELETE statements blindly - we always look up the overwritten tuple in the primary index first. This means that REPLACE+DELETE for the same key is basically a no-op and can be safely skipped. Moreover, DELETE+REPLACE can be treated as a no-op too, because secondary indexes don't store full tuples, hence all REPLACE statements for the same key are equivalent. By marking both statements as no-ops in vy_tx_set, we guarantee that no-op statements don't make it to the secondary index memory or disk levels.

Closes #4242
-
- May 23, 2019
-
-
Cyrill Gorcunov authored
Backport of openresty/luajit2-test-suite commit ce2c916d5582914edeb9499f487d9fa812632c5c to test the hash chain bug.

Part-of #4171
-
Kirill Yukhin authored
-
Kirill Shcherbatov authored
The test for multikey index prefix compatibility was insufficient because a JSON path is relative to some fieldno; those root field identifiers must coincide as well.

Follow up #1257
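A hypothetical illustration of the stricter check (space and field layout are made up): two multikey parts whose JSON paths coincide textually but are rooted at different fields must not be considered compatible.

s = box.schema.space.create('test')
s:create_index('pk')
-- both parts use the path '[*].a', but one is rooted at field 2 and the
-- other at field 3, so the multikey prefixes are not actually compatible
s:create_index('mk', {parts = {
    {field = 2, type = 'unsigned', path = '[*].a'},
    {field = 3, type = 'unsigned', path = '[*].a'},
}})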
-
- May 22, 2019
-
-
Vladislav Shpilevoy authored
Another problem discovered with the UDP broadcast test is that it can affect other tests, even after termination. When doing swim:broadcast() in one test, a programmer can't be sure who will hear it, answer, and break the test scenario. This commit reduces the probability of such a problem by

* allowing a codec to be set before swim:cfg(). It protects SWIM nodes of different tests from each other - they will not understand messages from other tests. By the way, the same problem can appear in real applications too;

* not binding again a URI passed by test-run into the test and closed there. If a test closes a URI given to it, it can't be sure that the next bind() will be successful - test-run could have already reused it.

Follow up #3234
-
Vladislav Shpilevoy authored
First of all, the problem in a nutshell was that an ev_timer with a non-zero 'repeat' field is in fact an ev_periodic. It is restarted *automatically*, even if the user calls neither ev_timer_again() nor ev_timer_start(). This led to a situation where a round message send is scheduled, and the next round step timer alarm happens before the message is actually sent. That, in turn, led to an assertion failure on an attempt to schedule a task twice. This patch fixes the SWIM test harness to behave like an ev_timer with 'repeat' > 0, and on the first idle round step stops the timer - it will be restarted once the currently hanging task is finally sent.

Follow up #3234
-
Vladislav Shpilevoy authored
They are caused by

* a too slow network, when SWIM tests are run under high load;

* late arrival or loss of UDP packets.

Follow up #3234
-
Vladislav Shpilevoy authored
Follow up #3234
-
Vladislav Shpilevoy authored
Follow up #3234
-
Vladimir Davydov authored
It's too early to assert that the msgpack type is an array when a multikey field is encountered - we haven't checked the field type yet, so it might just as well be a map, in which case we will raise an error just a few lines below. Remove the assertion and add a test case.
-
Vladimir Davydov authored
If an indexed field expects an array/map, it shouldn't be allowed to insert null instead, because this might break the expectations of field accessors. For unikey indexes inserting null instead of an array/map works, though in a somewhat confusing way: for a non-nullable field you get a wrong error message ("field is missing" instead of "array/map expected, got nil"); for a nullable field this silently works, it just looks weird as there's a clear type mismatch here. However, for a multikey field you get a crash, as tuple_multikey_count() doesn't expect to see null where an array should be according to the format:

tuple_raw_multikey_count: Assertion `mp_typeof(*array_raw) == MP_ARRAY' failed.

This issue exists because we assume all fields are nullable by default for some reason. Fix that and add some tests. Note, you can still omit nullable fields, e.g. if field "[2].a[1]" is nullable you may insert tuple [1, {a = {}}] or [1, {b = 1}] or even [1], you just can't pass box.NULL instead of an array/map.
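A minimal sketch of the new restriction, assuming a multikey index over field 2 (names are illustrative):

s = box.schema.space.create('test')
s:create_index('pk')
s:create_index('mk', {parts = {{field = 2, type = 'unsigned', path = '[*]'}}})
s:insert{1, {10, 20}}  -- an array, as the format expects: fine
s:insert{2, box.NULL}  -- null where an array is expected: now a type error
                       -- instead of the tuple_multikey_count() assertion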
-
Kirill Shcherbatov authored
Tarantool used to assume that an offset_slot has an extension iff field_map_get_offset is called with multikey_idx >= 0. In fact, when some part of the index contains a multikey index placeholder, the tuple_compare_* routines pass a tuple_hint in the meaning of a multikey index to each tuple_field_raw_by_part call, even for a regular key_part that doesn't have an array index placeholder (and, correspondingly, a field_map extension). Thus this assumption is invalid. This patch uses the fact that field_map slots that have an extension store a negative offset to distinguish multikey and normal usage of the field_map_get_offset routine.

Closes #4234
-
- May 21, 2019
-
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
An empty string as a no-payload flag was not a good idea, because then a user can't write something like:

if not member:payload() then ...

Follow up #3234
-
Vladislav Shpilevoy authored
Encryption with an arbitrary algorithm, any mode, and a configurable private key.

Closes #3234
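A usage sketch; the set_codec() option names (algo, mode, key, key_size) are given from memory and may differ in detail from the actual Lua API:

local swim = require('swim')
local s = swim.new()
-- all instances of a cluster must use the same algorithm, mode and key;
-- here the codec is set before swim:cfg()
s:set_codec({algo = 'aes128', mode = 'cbc',
             key = '1234567812345678', key_size = 16})
s:cfg({uuid = '11111111-1111-1111-1111-111111111111', uri = 3301})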
-
Vladislav Shpilevoy authored
SWIM is going to be used in and between datacenters, which means that its packets will go through public networks. Therefore raw SWIM packets are vulnerable to attacks. An attacker can do any and all of the following things:

1) Extract secret information from member payloads, like credentials to Tarantool binary ports;

2) Change UUIDs and addresses in the packets and break the topology;

3) Catch the packets and pretend to be a Tarantool instance, which could lead to undefined behaviour depending on the application logic.

SWIM packets need a protection layer, and this commit introduces it. The SWIM transport level allows choosing an encryption algorithm and a private key, and encrypts each packet with that key. Besides, each packet is encrypted using a random public key prepended to the packet. SWIM now provides a public API to choose an encryption algorithm and a private key.

Part of #3234
-
Vladislav Shpilevoy authored
At this moment swim_scheduler_on_output() is a relatively simple function. It takes a task, builds its meta and flushes the result into the network. But soon SWIM will be able to encrypt messages. It means that in addition to regular preprocessing, like building meta headers, a new phase will appear - encryption. What is more - conditional encryption, because a user may choose not to encrypt messages. The same applies to swim_scheduler_on_input() - if a SWIM instance uses encryption, it should decrypt incoming messages before forwarding them into the SWIM core logic. The chosen strategy is to reuse the on_output/on_input virtuality and create two versions of the on_input/on_output functions:

swim_on_plain_input()  | swim_on_encrypted_input()
swim_on_plain_output() | swim_on_encrypted_output()

One of these pairs is chosen depending on whether the instance uses encryption. To make these 4 functions as simple and short as possible, this commit creates two sets of functions doing all the logic except encryption:

swim_begin_send()
swim_do_send()
swim_complete_send()

swim_begin_recv()
swim_do_recv()
swim_complete_recv()

These functions will be used by the on_input/on_output functions with different arguments.

Part of #3234
-
Vladislav Shpilevoy authored
Each time a member was returned from a SWIM instance object, it was wrapped into a table with a special metatable and a cached payload. But the next lookup of the same member returned a new table. This

- created garbage, as a new member wrapper was built each time;
- lost the cached decoded payload.

This commit caches all wrapped members in a private table and returns the existing wrapper on the next lookup. A microbenchmark showed that retrieving a cached result is 10 times faster than creating a new table each time. The cache table keeps weak references - it means that when a member object loses all its references in a user's application, it is automatically dropped from the table.

Part of #3234
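A minimal sketch of the caching scheme described above (not the module's actual code; member_mt stands for the wrapper metatable):

-- wrappers are stored by key; __mode = 'v' makes the values weak, so a wrapper
-- is dropped from the cache once the application holds no references to it
local cache = setmetatable({}, {__mode = 'v'})

local function wrap_member(key, raw_member)
    local wrapper = cache[key]
    if wrapper == nil then
        wrapper = setmetatable({ptr = raw_member}, member_mt)
        cache[key] = wrapper
    end
    return wrapper
end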
-
Vladislav Shpilevoy authored
Users of the Lua SWIM module will likely use Lua objects as a payload. Lua objects are serialized into MessagePack automatically, and deserialized back on other instances. But deserialization of a 1.2 KB payload on each member:payload() invocation is quite a heavy operation. This commit caches decoded payloads and returns them again until they change. A microbenchmark showed that a cached payload is returned ~100 times faster than decoding it each time, even though the tested payload was quite small and simple:

s:set_payload({a = 100, b = 200})

Even this payload is returned 100 times faster, and does not affect GC.

Part of #3234
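A usage sketch of the effect, assuming s is a configured swim instance and that s:self() returns the wrapper of the instance's own member:

s:set_payload({a = 100, b = 200})
local m = s:self()
local p1 = m:payload() -- decoded from MessagePack once and cached
local p2 = m:payload() -- the cached table is returned: no decoding, no garbage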
-
Vladislav Shpilevoy authored
Sometimes, especially in tests, it is useful to do something like this:

s:add_member({uuid = member:uuid(), uri = member:uri()})

But member:uuid() is a cdata struct tt_uuid. This commit allows that.

Part of #3234
-
Vladislav Shpilevoy authored
Expose an iterators API to be able to iterate over a member table in a 'for' loop as if it were just a Lua table.

Part of #3234
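A usage sketch, assuming s is a configured swim instance and that the iterator yields (uuid, member) pairs:

for uuid, member in s:pairs() do
    print(uuid, member:status(), member:uri())
end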
-
Vladislav Shpilevoy authored
Expose an API to search members by UUID, read their attributes, and set the payload.

Part of #3234
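A short sketch of the exposed calls (the getter names are listed from memory; s is a configured swim instance, the UUID is a placeholder):

local m = s:member_by_uuid('11111111-1111-1111-1111-111111111111')
if m ~= nil then
    print(m:status(), m:uri(), m:incarnation(), m:payload())
end
s:set_payload({version = 1})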
-
Vladislav Shpilevoy authored
Expose methods to add, remove, and probe members by URI and UUID. Expose a broadcast method to probe multiple members by port.

Part of #3234
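A usage sketch (URIs and UUIDs are placeholders, s is a configured swim instance):

s:add_member({uri = '127.0.0.1:3302',
              uuid = '22222222-2222-2222-2222-222222222222'})
s:probe_member('127.0.0.1:3303') -- add a member after a successful ping
s:broadcast(3302)                -- ping everyone listening on port 3302
s:remove_member('22222222-2222-2222-2222-222222222222')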
-
Vladislav Shpilevoy authored
SWIM as a library can be useful not only for server internals, but for users as well. This commit exposes Lua bindings to the SWIM C API. Only basic bindings are introduced here: to create, delete, quit, and check a SWIM instance. With sanity tests.

Part of #3234
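A sanity-test style sketch of the basic bindings (option names may differ in detail from the actual module):

local swim = require('swim')
local s = swim.new()
s:cfg({uuid = '11111111-1111-1111-1111-111111111111',
       uri = 3301, heartbeat_rate = 0.1})
assert(s:is_configured())
s:quit()   -- gracefully notify the cluster and destroy the instance
-- s:delete() would destroy it without notifying anyone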
-
Vladislav Shpilevoy authored
Similar methods validate their arguments: add_member, remove_member. Validate here as well for consistency. Part of #3234
-
Vladislav Shpilevoy authored
At first I thought there was an error - swim_begin_step() does not reschedule the round timer when new_round() fails. But it turned out new_round() never fails. This commit makes it void to eliminate the confusion. Probably it is a legacy from the time when the shuffled members array was allocated and freed in new_round().

Part of #3234
-
Vladislav Shpilevoy authored
It appeared that libev does not allow changing ev_timer values in flight. A timer reset via ev_timer_set() should be restarted, because the function changes 'ev_timer.at', which in turn is used internally by the timer routines.

Part of #3234
-
Vladislav Shpilevoy authored
Lua suffers from the lack of an ability to pass values by pointer into FFI functions, and it has no address operator '&' to take the address of an integer, char, or anything else. Because of that a user needs to either use ffi.new('<type>[1]') or a static buffer, but for such small allocations both are too expensive and aggravate the GC problem. Now the buffer module provides preallocated basic types to use in FFI functions. The commit is motivated by yet another place where ffi.new('int[1]') appeared - in the SWIM module, to obtain the payload size as an out parameter of a C function.
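For context, the out-parameter pattern this change targets (the C call and member_ptr below are hypothetical, used only for illustration):

local ffi = require('ffi')
-- a fresh GC-tracked cdata allocation on every call, just to receive one int
local size_buf = ffi.new('int[1]')
lib.swim_member_payload_size_to(member_ptr, size_buf) -- hypothetical FFI call
local size = size_buf[0]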
-
Vladislav Shpilevoy authored
The static allocator gives memory blocks from a cyclic BSS memory block of 3 pages of 4096 bytes each. It is much faster than malloc when a temporary buffer is needed. Moreover, it does not complicate the GC's job. Despite being faster than malloc, it is still slower than ffi.new() of a size <= 128 known in advance (according to microbenchmarks). ffi.new(size <= 128) works either much faster or at the same speed as static_alloc, because internally the FFI allocator library caches small blocks and can return them without malloc(). A simple microbenchmark showed that ffi.new() vs buffer.static_alloc() is ~100 times slower on allocations of size > 128, and on size <= 128 when the size is not inlined. To better understand what is meant by 'inlined': this

ffi.new('char[?]', <constant <= 128>)

works ~100 times faster than this:

local size = <value <= 128>
ffi.new('char[?]', size)

ffi.new() with an inlined size <= 128 works faster than light, and even the static allocator can't beat it.
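A usage sketch; buffer.static_alloc('char', n) as the Lua-visible name of the allocation call is an assumption:

local buffer = require('buffer')
-- a block carved from the cyclic static arena: cheap and invisible to the GC,
-- but only valid until the arena wraps around, so use it for short-lived data
local buf = buffer.static_alloc('char', 256)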
-
Vladimir Davydov authored
Certain kinds of DML requests don't update secondary indexes, e.g. an UPDATE that doesn't touch secondary index parts or a DELETE for which generation of secondary index statements is deferred. For such a request vy_is_committed(env, space) may return false on recovery even if it has actually been dumped: since such a statement is not dumped for secondary indexes, a secondary index's vy_lsm::dump_lsn may be less than the statement's signature, which makes vy_is_committed() assume that the statement hasn't been dumped. Further in the code we have checks ensuring that if we execute a request on recovery, it must not have been dumped for the primary index (as the primary index is always dumped after secondary indexes for the sake of recovery), and these checks fire in this case. To fix that, let's refactor the code based on the following two facts:

- The primary index is always updated by a DML request.
- The primary index may only be dumped after secondary indexes.

Closes #4222
-
Alexander Turenko authored
Yet another fix for building the small library as part of tarantool. Before this commit the slab_arena test fails:

| [019] Test failed! Result content mismatch:
| [019] --- small/slab_arena.result Mon May 20 21:37:46 2019
| [019] +++ small/slab_arena.reject Mon May 20 21:47:01 2019
| [019] @@ -23,3 +23,4 @@
| [019]  arena->maxalloc = 2000896
| [019]  arena->used = 0
| [019]  arena->slab_size = 65536
| [019] +ERROR: Expected dd flag on VMA address 0x7f3ec2080000

See the corresponding commit in the small submodule for more info.
-