- Sep 28, 2022
-
-
Georgiy Lebedev authored
If a statement becomes prepared, the story it adds must be 'sunk' to the level of prepared stories: refactor this loop into a separate function. Needed for #7343 NO_CHANGELOG=refactoring NO_DOC=refactoring NO_TEST=refactoring
-
Serge Petrenko authored
Change the error from "ER_QUORUM_WAIT: fiber is cancelled" to FiberIsCancelled for consistency with other places that check for fiber cancellation. Same for "ER_QUORUM_WAIT: timed out": change it to TimedOut. While I'm at it, make box_wait_quorum tolerate spurious wake-ups. NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
Linearizability is a property of operations under which an operation performed on any node sees all the operations performed earlier on any other node of the cluster. More strictly speaking, it demands that if a response to some write request arrived earlier than some read request was made, this read request must see the results of that (and any earlier) write request. This patch introduces a new transaction isolation level: 'linearizable'. When the option is set, box.begin() stalls until the node receives the latest data from at least one member of the quorum. This is needed to make sure that the node sees all the writes committed on a quorum. The transaction is served only after the node sees the relevant data, thus implementing linearizable semantics. The node working on a linearizable request uses its relays' vclock sync mechanism in order to learn the fresh vclocks of remote nodes. Closes #6707 @TarantoolBot document Title: New transaction isolation level - linearizable There is a new transaction isolation level - linearizable. You may call `box.begin` with `txn_isolation = 'linearizable'`, but you can't set the default transaction isolation level to 'linearizable'. Linearizable transactions may only perform requests to synchronous, local or temporary memtx spaces (vinyl engine support will be added later). Starting a linearizable transaction requires `box.cfg.memtx_use_mvcc_engine` to be on. Note: starting a linearizable transaction requires that the node be the replication **source** for at least N - Q + 1 remote replicas. Here `N` is the count of registered nodes in the cluster and `Q` is the `replication_synchro_quorum` value (the same as `box.info.synchro.quorum`). This is an implementation limitation. For example, you may start linearizable transactions on any node of a cluster in full-mesh topology, but you can't perform linearizable transactions on anonymous replicas, because no one replicates **from** them. When a transaction is linearizable, it sees the latest changes performed on a quorum of nodes in the cluster. For example, if you use linearizable transactions to read data on a replica, such a transaction will never read stale data: all the committed writes performed on the master will be seen by the transaction. Making a transaction linearizable requires some waiting until the node receives all the committed data. If the node can't contact enough remote peers to determine which data is committed, an error is returned. Waiting for committed data may time out: if the data isn't received during the timeout specified by the `timeout` option of `box.begin()`, an error is returned. When called with `{txn_isolation = 'linearizable'}`, `box.begin()` yields until the instance receives enough data from remote peers to be sure that the transaction is linearizable.
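A minimal usage sketch based on the description above; the space name and timeout value are illustrative, the space is assumed to be a synchronous memtx space, and replication is assumed to be configured:
```lua
-- MVCC is required for linearizable transactions (startup option).
box.cfg{memtx_use_mvcc_engine = true}

-- box.begin() yields until the node has confirmed it holds the latest
-- data from a quorum; an error is raised if that takes longer than 10s.
box.begin{txn_isolation = 'linearizable', timeout = 10}
local row = box.space.accounts:get{1}  -- never reads stale data
box.commit()
```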
-
Serge Petrenko authored
Currently replication acks deliver to the master the vclock the replica has reached. But there is no guarantee on how fresh that vclock is: by the time the ack arrives, the real replica vclock might be much greater. In some applications, relay can't rely on an arbitrarily old ack vclock. It needs an upper bound on the vclock which could have been reached by the replica at some specific point in time. In order to achieve this, introduce vclock syncs. A vclock sync is a monotonically growing heartbeat identifier, unique for a live master-replica connection. This identifier may now be appended to heartbeats coming from the master. Upon receiving a heartbeat with some non-zero vclock sync value, the replica starts responding with the same vclock sync value in all its acks. Here's how it can be used: for example, in a multi-master configuration, one master wants to wait until it receives all the data that was present on a remote master at some point in time. In order to do so, the first master issues a vclock sync request. Once it receives an ack with the same vclock sync it has sent, it knows for sure that the remote master's vclock was not greater than the one received in the ack. This may be used to implement linearizable reads. Prerequisite #6707 NO_TEST=tested in next commit NO_CHANGELOG=internal change @TarantoolBot document Title: Binary protocol: new key - IPROTO_VCLOCK_SYNC The binary protocol receives a new key: IPROTO_VCLOCK_SYNC = 0x5a. This key holds an MP_UINT value and is used only by replication heartbeats. A master sends the monotonically growing vclock sync together with some of its heartbeats, and the replica replies with the greatest vclock sync it has seen so far in its ACKs.
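A schematic model of the exchange described above, written as plain Lua; it illustrates the idea only and is not the actual relay/applier code:
```lua
local master = {sync = 0}
local replica = {last_seen_sync = 0}

local function master_heartbeat()
    master.sync = master.sync + 1
    -- the heartbeat carries the new IPROTO_VCLOCK_SYNC field
    return {vclock_sync = master.sync}
end

local function replica_on_heartbeat(hb)
    if hb.vclock_sync and hb.vclock_sync ~= 0 then
        replica.last_seen_sync = hb.vclock_sync
    end
end

local function replica_ack()
    -- a real ack also carries the replica's current vclock
    return {vclock_sync = replica.last_seen_sync}
end

local hb = master_heartbeat()
replica_on_heartbeat(hb)
local ack = replica_ack()
-- Seeing its own sync value echoed back tells the master that this ack
-- (and the vclock reported in it) was produced after the heartbeat, so it
-- bounds how far the replica had progressed at that point in time.
assert(ack.vclock_sync == hb.vclock_sync)
```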
-
Serge Petrenko authored
Reuse the check_param_table() function in box.begin() instead of some custom parameter checking code. As one of the side effects, unknown options are now rejected by `box.begin{}`, which becomes more and more important with the introduction of new options. In-scope-of #6707 NO_DOC=refactoring NO_CHANGELOG=refactoring
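A small sketch of the stricter argument checking; the option name is illustrative and the exact error message may vary:
```lua
local ok = pcall(box.begin, {no_such_option = true})
assert(not ok)            -- unknown options are now rejected

box.begin{timeout = 5}    -- known options keep working as before
box.commit()
```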
-
Serge Petrenko authored
To match space_is_temporary. In-scope-of #6707 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
GROUP_LOCAL and GROUP_DEFAULT were defined in vclock.h, which isn't the right place for them, so move them somewhere more appropriate. In-scope-of #6707 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
Make the function arguments const, and, while I'm at it, fix function formatting. In-scope-of #6707 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
relay->tx.is_raft_enabled flag is used to track when relay is ready to accept raft pushes, that is, when the tx and relay thread endpoints are paired. Actually, there is a designated callback which is run right at the moment of pairing: pair_cb of cbus_pair(). Let's use it and its counterpart, unpair_cb of cbus_unpair(), to notify tx when relay is ready to accept raft pushes. This approach is notably simpler and allows reusing the notification for other systems which might want to access the relay pipe. In-scope of #6707 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
As a counterpart to vclock_min_ignore0 NO_DOC=internal change NO_CHANGELOG=internal change
-
Serge Petrenko authored
All of the current cbus_call() invocations pass TIMEOUT_INFINITY as a timeout and don't provide any free_cb. Since commit bd6fb06a ("core: allow spurious wakeups in cbus_call") cbus_call() with TIMEOUT_INFINITY can't be interrupted at all, so there's no point in passing a free_cb for it. Let's simplify cbus_call() usage by removing the last two arguments, free_cb and timeout. These arguments will be present in a new function, cbus_call_timeout(). Simplify all current cbus_call() usages which have TIMEOUT_INFINITY: remove free_cb, where present, and stop allocating call messages dynamically, since static allocation works just fine now. NO_DOC=refactoring NO_CHANGELOG=refactoring
-
Serge Petrenko authored
The commit bd6fb06a ("core: allow spurious wakeups in cbus_call") mistakenly removed the msg->caller cleanup after a spurious wakeup or a timeout. Fix that. NO_DOC=internal change NO_CHANGELOG=internal change
-
- Sep 27, 2022
-
-
Nikolay Shirokovskiy authored
The feedback daemon is reconfigured as many times as the number of options changed during a single box.cfg call, which is suboptimal. Add proper support for modules which configure all their options in a single call. Part of https://github.com/tarantool/tarantool-ee/issues/200 NO_DOC=refactoring NO_TEST=refactoring NO_CHANGELOG=refactoring
-
- Sep 26, 2022
-
-
Georgiy Lebedev authored
Some luatest framework tests use Lua `assert`s, which are incomprehensible when they fail (the only information provided is 'assertion failed!'), making debugging difficult: replace them with luatest `assert`s and their context-specific varieties. NO_CHANGELOG=<code health> NO_DOC=<code health>
-
Vladislav Shpilevoy authored
If an update operation tried to insert a new key into a map or an array which was created by a previous update operation, then the process would fail an assertion. That was because the first operation was stored as a bar update. The second operation tried to branch it assuming that the entire bar update's JSON path must exist, but it wasn't so for the newly created part of the path. The solution is to fall back to branching earlier than the point where the entire bar path ends, if we can see that the next part of the path can't be found. Closes #7705 NO_DOC=bugfix
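A hedged sketch of the pattern that used to hit the assertion, assuming the JSON-path update syntax; the field numbers, paths and values are illustrative and not taken from the patch:
```lua
local t = box.tuple.new({1, {x = 1}})
-- The first operation creates a new map at path [2].a; the second one
-- inserts a key into that freshly created part of the path within the
-- same update.
t = t:update({{'=', '[2].a', {y = 2}}, {'=', '[2].a.z', 3}})
```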
-
- Sep 23, 2022
-
-
Georgiy Lebedev authored
The TREE (and HASH) index implements the `random` method: if the space is empty from the transaction's perspective, which means we have to return nothing, add gap tracking of the whole range (full scan tracking), since this result is equivalent to `index:select{}`; otherwise, repeatedly call `random` and clarify the result until we get a non-empty one. We do not care about performance here, since all operations in the context of transaction management currently have O(number of dirty tuples) complexity. Closes #7670 NO_DOC=bugfix
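A small usage sketch of the case covered by the fix; the space and index names are illustrative, and `box.cfg{memtx_use_mvcc_engine = true}` is assumed:
```lua
local s = box.schema.space.create('rand_test', {if_not_exists = true})
s:create_index('pk', {if_not_exists = true})

box.begin()
-- With the space empty from this transaction's perspective the call returns
-- nil, and the read is now tracked the same way a full index:select{} is.
local t = s.index.pk:random(1)
box.commit()
```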
-
Georgiy Lebedev authored
Since `key_def_merge` sets the merged key definition's unique part count equal to the new part count, the extra assignment in case the index is not unique is redundant: remove it. NO_CHANGELOG=<refactoring> NO_DOC=<refactoring> NO_TEST=<refactoring>
-
Georgiy Lebedev authored
If the TREE index `get` result is empty, the key part count is incorrectly compared to the tree's `cmp_def->part_count`, though it should be compared with `cmp_def->unique_part_count`. But we can actually assume that by the time we get to the index's `get` method the part count is equal to the unique part count (partial keys are rejected and `get` is not supported for non-unique indexes): change the check to the corresponding assertion. Closes #7685 NO_DOC=<bugfix>
-
- Sep 21, 2022
-
-
Boris Stepanenko authored
Replaced the assertions that no one started new elections or promoted while acquiring the limbo with checks that the raft term and limbo term didn't change. In case they did, don't write DEMOTE/PROMOTE and just release the limbo, because it is already owned, or soon will be, by someone else. Closes #7086 NO_DOC=Bugfix
-
- Sep 16, 2022
-
-
Sergey Bronnikov authored
@TarantoolBot document Title: Document encoding parameters in http client
The new option "params" passed to an HTTP request allows a user to add query parameters to the URI. When the "params" option contains a Lua table with key-value pairs, these parameters are encoded into a string and appended to the URI for the GET/HEAD/DELETE methods, or sent as the HTTP body for the POST method. In the latter case an error is raised if the body is not empty.
```
> uri = require("uri")
> httpc = require("http.client")
> params = { key1 = 'value1', key2 = uri.values('value1', 'value2') }
> r = httpc.get("http://httpbin.org/get", { params = params })
> r.url
---
- http://httpbin.org/get?key1=value1&key2=value1&key2=value2
...
```
Keys and values can be a Lua number, string or boolean, or anything that has a `__serialize` or `__tostring` metamethod. It is possible to pass datetime, decimal and number64 values. Limitations:
- the order of keys and values in the resulting string is not deterministic
- percent encoding is not supported at the moment
Closes #6832
-
Sergey Bronnikov authored
@TarantoolBot document Title: Document encoding HTTP parameters to a query string
The new method uri.values() allows a user to represent multivalue parameters. Setting a multivalue parameter with `uri.format()` and `uri.parse()`:
```
> params = {q1 = uri.values("v1", "v2")}
> uri.format({host = 'brnkv.ru', params = params})
---
- brnkv.ru?q1=v1&q1=v2
...
> uri.parse({"/tmp/unix.sock", params = params})
---
- host: unix/
  service: /tmp/unix.sock
  unix: /tmp/unix.sock
  params:
    q1:
    - v1
    - v2
...
```
Keys and values can be a Lua number, string or boolean, or anything that has a `__serialize` or `__tostring` metamethod. It is possible to pass `datetime`, `decimal` and `number64` values too. NOTE: the order of keys and values in the resulting string is not deterministic. Needed for #6832
-
Sergey Bronnikov authored
Patch introduces two internal functions: `uri.params()` and `uri.encode_kv()`. NO_CHANGELOG=internal NO_DOC=internal Needed for #6832
-
Sergey Bronnikov authored
NO_CHANGELOG=internal NO_DOC=internal NO_TEST=internal
-
Sergey Bronnikov authored
NO_CHANGELOG=internal NO_DOC=internal NO_TEST=internal
-
Ilya Verbin authored
Currently, it is possible to create a constraint with a name that does not match the rules for identifiers. Fix this by validating constraint names with identifier_check(). Closes #7201 NO_DOC=bugfix NO_CHANGELOG=minor bug
-
- Sep 15, 2022
-
-
Ilya Verbin authored
Introduce the cmake option ENABLE_HARDENING, which is TRUE by default for non-debug regular and static builds, excluding AArch64 and FreeBSD. It passes compiler flags that harden Tarantool (including the bundled libraries) against memory corruption attacks. The following flags are passed:
* -Wformat - Check calls to printf, scanf, etc., to make sure that the arguments supplied have types appropriate to the format string specified.
* -Wformat-security -Werror=format-security - Warn about uses of format functions that represent possible security problems, and turn the warning into an error.
* -fstack-protector-strong - Emit extra code to check for buffer overflows, such as stack smashing attacks.
* -fPIC -pie - Generate position-independent code (PIC). It allows taking advantage of Address Space Layout Randomization (ASLR).
* -z relro -z now - Resolve all dynamically linked functions at the beginning of the execution, and then make the GOT read-only.
Also do not disable hardening for Debian and RPM-based Linux distros. Closes #5372 Closes #7536 NO_DOC=build NO_TEST=build
-
Yaroslav Lobankov authored
Bump test-run to new version with the following improvements: - Improve getting iproto port for tarantool < 2.4.1 [1] [1] https://github.com/tarantool/test-run/pull/349 NO_DOC=testing stuff NO_TEST=testing stuff NO_CHANGELOG=testing stuff
-
Sergey Bronnikov authored
TAP tests can be run by test-run.py, but it is often convenient to run them with tarantool alone: `tarantool test/app-tap/yaml.test.lua`. Most TAP tests return a non-zero exit code when TAP asserts fail, but the exit code does not change when the TAP plan is bad (the number of planned test cases is not equal to the number executed). The proposed patch adds test:check() to TAP tests so that tests return a non-zero exit code when the plan is bad. NO_CHANGELOG=it is not a user-visible change NO_DOC=tests
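A typical pattern after this change; the test name and the checked case are illustrative:
```lua
local tap = require('tap')
local test = tap.test('example')
test:plan(1)
test:ok(1 + 1 == 2, 'arithmetic works')
-- test:check() returns true only if the plan matches the number of executed
-- cases and every assertion passed, so a bad plan now yields a non-zero code.
os.exit(test:check() and 0 or 1)
```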
-
Georgiy Lebedev authored
`directly_replaced` stories can potentially get garbage collected in `memtx_tx_handle_gap_write`, which is unexpected and leads to 'use after free': in order to fix this, limit garbage collection points only to external API calls. Wrap all possible garbage collection points with explicit warnings (see c9981a56). Closes #7449 NO_DOC=bugfix
-
- Sep 14, 2022
-
-
Sergey Bronnikov authored
NO_CHANGELOG=internal NO_DOC=internal NO_TEST=internal
-
Sergey Bronnikov authored
Commit 9adedc1f ("test: add new `make` test targets") introduced new targets for running test-run.py with unit tests. However, these targets don't depend on changes in the tested libraries or in the unit tests themselves. The proposed patch introduces a function that creates build targets for unit tests and fixes the described problem with dependencies. Additionally, the patch adds the missing dependencies for popen and popen-child. NO_CHANGELOG=internal NO_DOC=internal
-
Sergey Bronnikov authored
NO_CHANGELOG=internal NO_DOC=internal NO_TEST=refactoring Needed for the next commit
-
Alexander Turenko authored
All merge sources (including the merger itself) share the same `<merge source>:pairs()` implementation, which returns a `gen, param, state` triplet. `gen` is `lbox_merge_source_gen()`, `param` is `nil`, `state` is the merge source. `lbox_merge_source_gen()` returns `source, tuple`. The returned source is supposed to be the same object as the one passed to the function (`gen(param, state)`), so the function assumes the object is alive and doesn't increment the source's refcounter at entering or decrement it at exiting. This logic is perfect, but there was a mistake in the implementation: the function returned a new cdata object (which holds the same pointer to the merge source structure) instead of the same cdata object. The new cdata object neither increases the source's refcounter when pushed to Lua, nor decreases it when collected. As a result, if we lose the original merge source object (and the first `state` that is returned from `:pairs()`), the source structure may be freed, and the pointer in the new cdata object will be invalid. A sketchy code that illustrates the problem:
```lua
gen, param, state0 = source:pairs()
assert(state0 == source)
source = nil
state1, tuple = gen(param, state0)
state0 = nil
-- assert(state1 == source) -- would fail
collectgarbage()
-- The cdata object that is referenced as `source` and as `state`
-- is collected. The GC handler is called and drops the merge
-- source structure refcounter to zero. The structure is freed.
-- The call below will crash.
gen(param, state1)
```
In the fixed code `state1 == source`, so the GC handler is not called prematurely: the merge source object stays alive till the end of the iterator or till the stop of the traversal. Fixes #7657 NO_DOC=a crash is definitely not what we want to document
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-
Timur Safin authored
NO_DOC=internal NO_CHANGELOG=internal
-