- Feb 04, 2022
-
Georgiy Lebedev authored
The _session_settings space is a virtual space which only contains information about the current session, hence it should be accessible to everyone without granting any additional privileges. New users are granted the public role by default, so grant read,write access on the _session_settings space to the public role. Closes #6310 @TarantoolBot document Title: public role rw access on _session_settings space The public role (which is granted to new users by default) now has read, write access on the _session_settings space.
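A hedged Lua sketch of what the grant enables ('alice' is illustrative): any freshly created user receives the 'public' role and can therefore read and write its own session settings without extra privileges.
```lua
-- 'alice' is a hypothetical user created just for this example.
box.schema.user.create('alice', {password = 'secret'})
box.session.su('alice', function()
    -- field 2 of a _session_settings tuple is the setting value
    box.space._session_settings:update('sql_default_engine',
                                       {{'=', 2, 'vinyl'}})
end)
```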
-
Vladimir Davydov authored
According to the readline library documentation, the callback passed to rl_callback_handler_install is supposed to free the string passed to it: > As with readline(), the handler function should free the line when it is finished with it. (See https://tiswww.case.edu/php/chet/readline/readline.html#SEC41) Our callback leaks it. Closes #6817 NO_DOC=bug fix
-
Georgiy Lebedev authored
tarantool/luatest#214 adds copy-directory semantics to luatest_helpers.Server: reuse it in the scope of this test. Follow-up tarantool/luatest#214 NO_DOC=refactoring NO_CHANGELOG=refactoring
-
Georgiy Lebedev authored
Currently, one can only specify a work directory that will be directly passed to box.cfg (via the workdir option passed to luatest.Server, via the TARANTOOL_WORKDIR environment variable, or via explicitly specifying work_dir in box.cfg), which implies that the test server will be run in the specified directory. One might need to start a Tarantool instance from a certain snapshot or set of xlogs, for instance, to test Tarantool over an older schema version. For diff tests the test-run harness provided a 'workdir=' option when calling inspector:cmd('create server ...'), which solved this issue by copying the specified directory's contents into the test server's actual working directory (see tarantool/test-run/blob/d16cbdc2702e7d939d5d34e41730fd83d2c65879/lib/preprocessor.py#L274-L284). To solve this issue with the luatest_helpers harness, we introduce a datadir option for luatest_helpers.Server with the same semantics. Needed for #6571 Closes tarantool/luatest#214 NO_DOC=luatest helpers NO_CHANGELOG=luatest helpers
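A hedged usage sketch of the new option; the module location, alias, and data directory path are assumptions based on the description above.
```lua
local t = require('luatest')
local server = require('test.luatest_helpers.server')
local g = t.group()

g.before_all(function()
    g.server = server:new({
        alias = 'upgrade',
        -- contents of this directory are copied into the server's workdir
        datadir = 'test/box-luatest/upgrade/2.8.2',
    })
    g.server:start()
end)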
-
- Feb 03, 2022
-
Georgiy Lebedev authored
The MVCC transaction manager story garbage collection introduced a subtle code dependency: TREE index iterators can get broken (see the definition of “broken” in struct bps_tree_iterator), because the elements they reference can change during story garbage collection. We coined the notion “MVCC TRANSACTION MANAGER STORY GARBAGE COLLECTION BOUND” to refer to this issue explicitly: iterators and the tree elements they reference must not be used after this point in the code. Closes #6344 NO_DOC=bug fix
-
mechanik20051988 authored
The results of diff tests depend on specific numbers (tuple size, for example). Remove this dependency for some tests, because after tuple compression is implemented, the tuple size will change. Part of #2695 NO_CHANGELOG=test fix NO_DOC=test fix
-
mechanik20051988 authored
Because we used `const char**` to save a pointer to the original msgpack header, we received an incorrect value when restoring the pointer and, as a consequence, an assertion failure later. Correctly saving and restoring the pointer fixes this problem. NO_DOC=bug fix NO_CHANGELOG=bug fix
-
mechanik20051988 authored
There were two problems with resource release in the performance test:
- Because of manual zeroing of `box_tuple_last`, the tuple_format structure was not deleted; `box_tuple_last` should be zeroed in the `tuple_free` function.
- An invalid loop for resource release in one of the test cases.
This patch fixes both problems. NO_CHANGELOG=test fix NO_DOC=test fix
-
Vladimir Davydov authored
A memtx tree iterator remembers the last tuple returned to the user and its hint so that it can restore iteration if the index is changed. To prevent the tuple from being freed, the iterator references it. The problem is that this is not enough for a functional index, because the latter allocates key parts separately from tuples (it stores pointers to them in memtx tree hints). As a result, if a tuple is deleted from the tree, its key parts will be freed immediately, even if the tuple itself is referenced. Since key parts are necessary to restore an iterator, this results in a use-after-free bug. To fix the issue, let's store a copy of the current tuple's key part in the iterator along with the tuple. If a key part is small, the copy is stored in a preallocated fixed-size buffer; otherwise it's allocated with malloc. Closes #6786
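A hedged repro sketch of the scenario described above (space and function names are illustrative): a functional index stores key parts separately from tuples, and deleting tuples while iterating forces the iterator to restore itself from its saved position.
```lua
box.schema.func.create('extract_key', {
    is_deterministic = true,
    is_sandboxed = true,
    body = 'function(tuple) return {tuple[2]} end',
})
local s = box.schema.space.create('test')
s:create_index('pk')
s:create_index('fk', {func = 'extract_key',
                      parts = {{field = 1, type = 'unsigned'}}})
for i = 1, 100 do s:insert{i, i * 10} end
for _, tuple in s.index.fk:pairs() do
    s:delete(tuple[1])  -- changes the index mid-iteration
end
```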
-
Vladimir Davydov authored
So as not to set tree_iterator::current directly. Currently, we only need to ref/unref tuple when we update current, but in the future we will also alloc/free func index key part, a pointer to which is stored in a hint. Needed for #6786
-
Vladimir Davydov authored
These functions are used only by memtx functional indexes and they depend on neither tuple_format nor tuple. No need to keep them in tuple_format_vtab. Needed for #6786
-
- Feb 02, 2022
-
Igor Munkin authored
* memprof: report JIT-side allocations as internal Closes #5679
-
Vladimir Davydov authored
This patch drops IPROTO_SHUTDOWN and IPROTO_FEATURE_GRACEFUL_SHUTDOWN (the feature hasn't been officially released yet, so it's okay). Instead, the server now generates IPROTO_EVENT{key='box.shutdown', value=true} before running on_shutdown triggers. The on_shutdown trigger installed by IPROTO has been greatly simplified: now it just stops listening for new connections. A new on_shutdown trigger is installed by the session infrastructure code; the trigger waits for all sessions that subscribed to the 'box.shutdown' event to close. Net.box uses conn:watch to subscribe internally to the new event. As before, it runs user-defined on_shutdown triggers and then waits for active requests to finish before closing the connection. Notes about the tests:
1. We don't need the graceful_shutdown_errinj test, because (a) the feature can be disabled in net.box without errinj by adding an internal _disable_graceful_shutdown connection option, and (b) since there's no new packet type, we don't need to check that IPROTO_ID/AUTH handle it correctly in net.box.
2. The graceful_shutdown.test_shutdown_aborted_on_reconnect_2 test changed: since triggers use the watcher infrastructure, the same trigger can't run in parallel with itself.
The main benefit of reusing IPROTO_EVENT instead of introducing a new request type is that connectors will have to do less work to support the feature (to some extent, it'd be supported even if the connector doesn't do any work and just lets the user subscribe to events). Suggested-by @unera Follow-up a33f3cc7 ("net.box: support graceful shutdown protocol") Follow-up 6f29f9d7 ("iproto: introduce graceful shutdown protocol") Follow-up #5924 NO_CHANGELOG=already added NO_DOC=already added, will be updated manually
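A hedged sketch of how a connector can observe the new event via the regular watcher API (this mirrors what net.box does internally; the URI is a placeholder).
```lua
local conn = require('net.box').connect('localhost:3301')
conn:watch('box.shutdown', function(key, value)
    if value then
        -- the server is shutting down: stop issuing new requests and
        -- close the connection once the pending ones complete
    end
end)
```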
-
Vladimir Davydov authored
Currently, to resubscribe registered watchers, net.box installs an on_connect trigger that writes IPROTO_WATCH requests to the send buffer. The requests will be sent by the net.box state machine once all on_connect triggers return. The problem is that this trigger may run before a user-defined trigger that closes the connection, in which case close() will hang forever: since #6338, the close method blocks until the send buffer is emptied, while the buffer can't be emptied when close is called by the state machine itself (see #5358). Let's fix this issue by making close() not block if called from the state machine fiber. No user is affected, because IPROTO_WATCH hasn't been officially released yet, so neither changelog nor doc is required. Closes #6824 NO_CHANGELOG=bug not released NO_DOC=NO_CHANGELOG
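A hedged sketch of the scenario that used to hang (the URI and key name are illustrative): net.box queues IPROTO_WATCH resubscription requests on connect, and a user-defined on_connect trigger then closes the connection from the state machine fiber.
```lua
local conn = require('net.box').connect('localhost:3301',
                                        {wait_connected = false})
conn:watch('some_key', function() end)  -- resubscribed on every connect
conn:on_connect(function(c)
    c:close()  -- previously could block forever waiting for the send buffer
end)
```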
-
Vladimir Davydov authored
This commit fixes the following assertion failure that occurs on an attempt to register a watcher for a connection to a server that doesn't support watchers (e.g. 2.8): src/box/lua/net_box.c:2283: netbox_transport_dispatch_response: Assertion `transport->inprogress_request_count > 0' failed. The problem is that an IPROTO_WATCH request isn't accounted for in inprogress_request_count, because the server isn't supposed to reply to it. However, if the server doesn't support the request type, it will reply to it with an error, breaking the assumption made by the client. We fix the assertion itself by explicitly excluding requests with sync=0 from accounting on the client side (IPROTO_WATCH has sync=0), because such requests aren't supposed to have a matching reply. Also, to avoid errors on the server side, we make the client code not send IPROTO_WATCH and IPROTO_UNWATCH if the server doesn't set the 'watchers' feature bit in reply to the IPROTO_ID request from the client. No user is affected, because IPROTO_WATCH hasn't been officially released yet, so neither changelog nor doc is required. Closes #6819 NO_CHANGELOG=bug not released NO_DOC=NO_CHANGELOG
-
Vladimir Davydov authored
IPROTO triggers input processing after sending output, provided the output channel isn't clogged, which totally makes sense. There's one nuance here: currently, input is triggered only if something was written to the output channel. Before the IPROTO_WATCH request type was added by commit 4be5de4c ("iproto: add watchers support"), every request had a response so it worked just fine. The new request type is special - the server doesn't reply to it. As a result, if input was blocked upon reaching the box.cfg.net_msg_max limit, it might never get resumed, because input processing might never be triggered. Fix this issue by triggering input processing even if there was no output to flush. No user is affected, because IPROTO_WATCH hasn't been officially released yet so neither changelog nor doc is required. Closes #6818 NO_CHANGELOG=bug not released NO_DOC=NO_CHANGELOG
-
- Feb 01, 2022
-
Yaroslav Lobankov authored
This change splits the debug_coverage.yml workflow into two independent workflows: debug.yml and coverage.yml. The first one will run on push to a branch and on creating/updating external pull requests; in other words, it will always run. The second one will run on push to a branch only if the branch has the `full-ci` suffix (some-branch-full-ci) or if the `full-ci` label is set on a pull request, internal or external - it doesn't matter. Why do we need it?
1. Coverage testing is heavy due to enabling long tests and takes at least 2 times more time than debug build testing. So the testing done at the first stage (lint, release, debug) will pass faster.
2. The coverage report as a pull request comment from coveralls.io can be obtained only if a pull request event happened. It is a limitation of the `coverallsapp/github-action` action [1]. So there is no sense in running the coverage workflow on push events. But the coverage report on push can still be seen on coveralls.io if the branch has the `full-ci` suffix.
3. The logic of the debug_coverage.yml workflow is tricky enough, and no one understands how it works. So we would like to have an absolutely transparent and simple workflow.
[1] https://github.com/coverallsapp/github-action/tree/8cbef1dea373ebce56de0a14c68d6267baa10b44#coveralls-github-action Closes tarantool/tarantool-qa#147 NO_DOC=ci NO_CHANGELOG=ci
-
- Jan 31, 2022
-
Andrei Sidorov authored
Fix the static build for macOS 11.5 or higher. In macOS SDK 11.5 some `*.dylib` files were replaced with `*.tbd`, so we replace `libunwind.dylib` with `libunwind.tbd`. Since macOS 10.15 support has been dropped, a conditional is not needed. Closes #6052
-
Vladimir Davydov authored
Because of the typo, the test fails on my localhost like this: There is no test with name 'test_ignore_with_force_recovery' but hook 'after_test' is defined for it (I use luatest installed with luarocks). Follow-up #6794 Follow-up 8c8b7739 ("recovery: fix recovering from 1.6 xlogs/snaps")
-
Serge Petrenko authored
Tarantool 1.6 didn't rotate xlogs on snapshot creation. This means that a Tarantool 1.6 data dir may contain files like 0...0.xlog and 0...01.snap, where 0...0.xlog covers changes both before and after the snapshot creation. New Tarantool versions fail to recover from such files: they recover the data correctly, but fail to update the vclock accordingly. Let's allow recovering from such files and update the vclock accordingly, but only when force_recovery is set. Patch 634f59c7 ("recovery: panic in case of recovery and replicaset vclock mismatch") added the panic, which helped to discover the issue. Before that patch Tarantool would silently recover with an outdated vclock. Also, while we're at it, refactor the check leading to a panic: it never happens after hot_standby, so move it into the "else" branch of hot_standby. Closes #6794
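A minimal configuration sketch, assuming a data directory left over from Tarantool 1.6 where an xlog spans the latest snapshot (the path is hypothetical).
```lua
box.cfg{
    work_dir = '/var/lib/tarantool/1.6-data',  -- hypothetical 1.6 data dir
    force_recovery = true,  -- required to accept such xlog/snap layouts
}
```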
-
- Jan 30, 2022
-
Georgiy Lebedev authored
Privileges passed to box.schema.user.grant are resolved in quite a naïve manner: we do not check that all of them have been resolved. Also, the privilege_resolve function itself is poorly designed (instead of generic string processing, we have a bunch of if's): extract parsing of the privileges string into privilege_parse and process it through a generic FSA. Closes #6199
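A hedged illustration of the privilege string being parsed ('alice' and the space name are illustrative): every comma-separated item must now resolve to a known privilege, per the description above.
```lua
box.schema.user.grant('alice', 'read,write,execute', 'universe')
-- A misspelled item such as 'read,wirte' now raises an error instead of
-- being silently ignored:
-- box.schema.user.grant('alice', 'read,wirte', 'space', 'test')
```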
-
- Jan 29, 2022
-
Alexander Turenko authored
Well, the diff is not nice, so I'll summarize the changes for humans.
General changes:
* Moved patch submission and so on to the end of the document.
* Reformatted to the usual 80 columns, inlined and actualized links.
* Reformatted lists: start from a capital letter, end with a period.
* Minor wording tweaks to make it sound better.
Badges:
* Travis CI and GitLab CI -> GitHub Actions.
* Removed Slack and Gitter.
* Google Groups -> GitHub Discussions.
* Added Stack Overflow.
Define Tarantool as an in-memory computing platform, as on the website. Added license information.
Appserver:
* Rewrote the sentence regarding compatibility with Lua 5.1 and highlighted key LuaJIT features instead. There are doubts within the team about declaring 100% compatibility with Lua 5.1, so let's stay on safe ground.
* Split the list in a more granular way.
* Added links to the mysql and pg connectors.
* Added queue, vshard, cartridge with links.
* Added a link to the modules page.
Database:
* Replaced MsgPack with MessagePack. It is called so on the official website.
* Replaced 'optional persistence' with 'complete WAL-based persistence' in the memtx description to make it less confusing. There are users who believe that memtx does not offer persistence.
* Moved SQL down.
* Added JSON path indexes.
* Added synchronous quorum-based replication.
* Added RAFT-based automatic leader election.
* Added a link to the connectors list.
Platforms:
* Dropped OpenBSD from the list. It is not verified in CI, so I feel like 'we have no commitments here'.
* Replaced x86 with the more explicit x86_64.
* Moved Mac OS before FreeBSD (we have more users there).
* Added aarch64/M1.
Tarantool is ideal for...
* Kept intact, just reformatted to 80 columns.
Download:
* Added 'or using Docker' in addition to 'as a binary package to your OS'.
Added a link to the 'Awesome Tarantool' list.
Report bugs / send feedback or patch:
* Replaced Google Groups with GitHub Discussions.
* Added Stack Overflow.
* Highlighted that we accept pull requests and linked the 'How to get involved' guide.
Fixes #6579
-
- Jan 28, 2022
-
Igor Munkin authored
* Actually implement maxirconst trace limit. * Fix string.char() recording with no arguments. Closes #6371 Part of #6548
-
Vladimir Davydov authored
This commit adds a new script, tools/check-commits. The script takes git revisions in the same format as git-rev-list (actually, it feeds all its arguments to git-rev-list) and runs some checks on each of the commits. For example, to check the head commit, run: ./tools/check-commits -1 HEAD To check the last five commits, run: ./tools/check-commits HEAD~5..HEAD Currently, there are only two checks, but in the future we may add more, e.g. check diffs for trailing spaces:
- The commit message contains a documentation request. Can be suppressed with NO_DOC=<reason> in the commit message.
- A new changelog entry is added to changelog/unreleased by the commit. Can be suppressed with NO_CHANGELOG=<reason> in the commit message.
This commit also adds a new workflow triggered on pull request, lint/commits. The workflow runs the tools/check-commits script on all the commits added by the pull request. The workflow doesn't run on push, because it's problematic to figure out which commits are new on a branch. Besides, we don't want to run it on push to release branches, because it's a pure dev workflow. Example output:
Checking commit a33f3cc7 PASS
Checking commit 6f29f9d7 FAIL
SHA: 6f29f9d7
SUBJECT: iproto: introduce graceful shutdown protocol
ERROR: Changelog not found in changelog/unreleased. If this commit doesn't require changelog, please add NO_CHANGELOG=<reason> to the commit message.
Checking commit fbc25aae FAIL
SHA: fbc25aae
SUBJECT: Update small submodule
ERROR: Missing documentation request ('@TarantoolBot document' not found in the commit message). If this commit doesn't need to be documented, please add NO_DOC=<reason> to the commit message.
ERROR: Changelog not found in changelog/unreleased. If this commit doesn't require changelog, please add NO_CHANGELOG=<reason> to the commit message.
NO_DOC=ci NO_CHANGELOG=ci
-
- Jan 27, 2022
-
Yaroslav Lobankov authored
Now the tarantool/testing:debian-stretch image from Docker Hub is used. Closes tarantool/tarantool-qa#136
-
- Jan 26, 2022
-
Nick Volynkin authored
-
Vladimir Davydov authored
Closes #5924 @TarantoolBot document Title: Document graceful shutdown of net.box connections
If a Tarantool server supports the IPROTO graceful shutdown protocol (`connection.peer_protocol_features` contains `graceful_shutdown` or `peer_protocol_version` is >= 4), then when the server is asked to exit (`os.exit()` is called on the server or a `SIGTERM` signal is received), it won't terminate net.box connections immediately. Instead, here's what will happen:
1. The server stops accepting new connections and sends a special 'shutdown' packet to all connections that support the graceful shutdown protocol.
2. Upon receiving a 'shutdown' packet, a net.box connection executes shutdown triggers. The triggers are installed by the `on_shutdown()` method of a net.box connection. The method follows the same protocol as `on_connect()`, `on_disconnect()`, and `on_schema_reload()`. Triggers are executed asynchronously in a new fiber. The connection remains active while the triggers are running, so a trigger callback may send new requests over the net.box connection.
3. After the shutdown triggers have returned, the connection is switched to the `graceful_shutdown` state, in which all new requests fail with an error. The connection will remain in this state until all requests have been completed.
4. Once all in-progress requests have completed, the connection is closed and switched to the `error` or `error_reconnect` state, depending on whether the `reconnect_after` option is set.
5. Once all connections that support the graceful shutdown protocol are closed, the server exits.
Note, the graceful shutdown protocol is best-effort: there's no guarantee that the server doesn't exit before all active connections are gracefully closed; the server may still exit on timeout or just be killed. The timeout is configured by `box.ctl.set_on_shutdown_timeout()` on the server.
Please also update the net.box state machine diagram:
```
initial -> auth -> fetch_schema <-> active
fetch_schema, active -> graceful_shutdown
(any state, on error) -> error_reconnect -> auth -> ...
                      \
                       -> error
(any state, but 'error') -> closed
```
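A hedged usage sketch of the `on_shutdown()` trigger described above; the URI is a placeholder and 'flush_queue' is a hypothetical server-side function.
```lua
local conn = require('net.box').connect('localhost:3301')
conn:on_shutdown(function(c)
    -- the connection is still active here, so requests may be sent
    c:call('flush_queue')  -- hypothetical cleanup call
end)
```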
-
Vladimir Davydov authored
This commit adds the graceful shutdown feature to the IPROTO protocol. Net.box is patched by the next commit, which also adds tests. Part of #5924 @TarantoolBot document Title: Document IPROTO graceful shutdown A new IPROTO request type was introduced - `IPROTO_SHUTDOWN`, code 77. When asked to shut down (`os.exit()` is called or `SIGTERM` signal is received), a server stops accepting new connections and sends a packet of this type to each of its clients that support the graceful shutdown feature (see below how a server figures out if a client supports the feature). The server won't exit until all the clients that were sent the packets close connections. If all the clients don't close connections within the shutdown timeout, the server will exit anyway. The default shutdown timeout is 3 seconds, and it can be configured with `box.ctl.set_on_shutdown_timeout()`, which also determines the timeout of `box.ctl.on_shutdown()` triggers. An `IPROTO_SHUTDOWN` packet doesn't have any keys in its headers (not even sync number or schema version) nor a body. A client isn't supposed to reply to an `IPROTO_SHUTDOWN` packet. Instead it's supposed to close its connection as soon as possible. A client may wait for pending requests to complete and even send new requests after receiving an `IPROTO_SHUTDOWN` packet. The server will serve them as usual until it exits on timeout. Clients that support the graceful shutdown feature are supposed to set the `IPROTO_FEATURE_GRACEFUL_SHUTDOWN` feature (bit 4) when sending an `IPROTO_ID` request to a server. Servers that support the feature set the same bit in reply to an `IPROTO_ID` request. Introduction of this feature bumped the IPROTO protocol version up to 4.
-
Vladimir Davydov authored
Needed to fix rlist_foreach_entry ASAN runtime error in case the rlist member is aligned.
-
- Jan 25, 2022
-
Vladislav Shpilevoy authored
Raft needs to know the cluster size in order to detect and handle split vote. The patch uses the registered server count as the cluster size. It is neither documented nor given a changelog file, because this is an optimization. It can't be observed except in the logs or with a watch. Closes #5285
-
Vladislav Shpilevoy authored
Split vote is a situation when, during an election, nobody can win and nobody will win in this term for sure: not a single node could get enough votes. For example, with 4 nodes one node could get 2 votes and another the other 2 votes. Nobody will get the quorum of 3 in this term. The patch makes raft able to notice that situation and speed up the term bump. The term is not bumped immediately though, because the nodes might do that simultaneously and get a split vote again after voting for themselves. Instead, there is a random delay. It is at most 10% of the election timeout, so it should speed up split vote resolution by at least 90%. Part of #5285
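A hedged sketch of the delay described above: after noticing a split vote, a node waits a random fraction of up to 10% of the election timeout before bumping the term, so that nodes don't collide again.
```lua
local election_timeout = box.cfg.election_timeout  -- 5 seconds by default
local split_vote_delay = math.random() * 0.1 * election_timeout
```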
-
Vladislav Shpilevoy authored
To detect a split vote, a node needs to see that the number of free votes is not enough for anyone to win even if somebody gets them all. Hence every node needs to count votes for all other nodes. The patch makes raft store votes in a bit more complicated manner than a simple bitmap for just the current instance. Part of #5285
-
Vladislav Shpilevoy authored
ev_timer.at was used as the timeout. But after ev_timer_start() it turns into the deadline - a totally different value. The patch makes sure ev_timer.at is not used in raft at all. To test that, the fakeev subsystem is patched to start its time not from 0. Otherwise ev_timer.at often really matched the timeout even for an active timer.
-
Vladislav Shpilevoy authored
It used to crash if done during an election on a node that has voted for somebody, is a candidate, doesn't know a leader yet, but has a WAL write in progress. Thus it could only happen if the term was bumped by a message from a non-leader node and wasn't flushed to disk yet. The patch makes the reconfig check whether there is a WAL write in progress and, if so, not do anything. It could also check for the volatile vote instead of the persistent one, but that would create the same problem for the case when the node started writing a vote for self and didn't finish yet: reconfig would crash.
-
artembo authored
Add GitHub Actions workflows for Fedora 35 Closes: #6692
-
- Jan 20, 2022
-
Serge Petrenko authored
The test has a function wait_repl() which tries to wait until the data is replicated to the other node, with a tiny timeout (0.2 seconds). This didn't cause issues in the past, but it became an issue after the applier-in-thread introduction. Let's use the default test_run:wait_cond() approach instead of a custom one. This way we increase the waiting timeout (making the test pass with a higher chance). Follow-up #6329
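A hedged sketch of the wait_cond() approach; the space name and expected_count are illustrative. wait_cond() retries the condition with the default, much larger timeout.
```lua
test_run:wait_cond(function()
    return box.space.test:count() == expected_count
end)
```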
-
Serge Petrenko authored
Follow-up #6329
-
Serge Petrenko authored
It's reported that in a master-slave setup replicas often have higher CPU usage than their replication masters. Moreover, it's easy for a replica to start falling behind the master. The reason for that is the additional workload on the replica's tx thread as compared to the master. While the master typically splits request processing between two threads (iproto, which decodes the requests, and tx, which applies them), the replica performs both tasks in the tx thread. This is due to the replication architecture: the replica handles the master connection with a single fiber in the tx thread. The fiber first decodes the incoming requests and then applies them. Teach replicas to decode the incoming replication stream in a separate thread. This way the tx thread doesn't waste processing time on row decoding. Each applier thread may serve several appliers, and the total number of applier threads is controlled by a new configuration option, `replication_threads`, with default value `1` (meaning a single thread is spawned to handle all the appliers). Closes #6329 @TarantoolBot document Title: New configuration option - `replication_threads` It's now possible to specify how many threads will be spawned to decode the incoming replication stream. The default value is '1', meaning a single thread will handle all incoming replication streams. You may set the value to anything from 1 to 1000. When there are multiple replication threads, connections to serve are evenly distributed between the threads.
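A minimal configuration sketch (the URIs are placeholders): four applier threads decode the incoming replication streams, offloading the tx thread.
```lua
box.cfg{
    replication = {'master1:3301', 'master2:3301'},
    replication_threads = 4,
}
```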
-
- Jan 19, 2022
-
Vladimir Davydov authored
This commit adds an SSL context to iostream_ctx and a new kind of error for SSL connections. The SSL context is created if the target URI has the transport=ssl parameter. If transport=plain or absent, it will be set to NULL and plain streams will be created as before. For other values of the transport parameter, an error is raised. Note, this is just a stub: an attempt to create an SSL context currently always fails with an "SSL not available" error. It is supposed to be implemented in the EE build. The new kind of error is currently only used for decoding errors from MsgPack and is never raised. It will be raised by the actual implementation of the SSL stream.
-
Vladimir Davydov authored
Use table.equals for comparing table-valued configuration parameters instead of ipairs. This is needed to trigger reconfiguration if the new URI differs from the old one only by parameters, e.g.:
box.cfg{listen = {uri = 3301, params = {transport = 'plain'}}}
box.cfg{listen = {uri = 3301, params = {transport = 'ssl'}}}
No test is added, because currently there are no URI parameters handled by box. The next commit adds the 'transport' parameter and a test that wouldn't pass without this commit.
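A hedged illustration of why a deep comparison is needed: the two listen values below differ only in a nested parameter, which an ipairs-based walk over the top level would not notice, while Tarantool's table.equals extension does.
```lua
local old = {uri = 3301, params = {transport = 'plain'}}
local new = {uri = 3301, params = {transport = 'ssl'}}
assert(not table.equals(old, new))  -- so the second box.cfg call reconfigures
```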
-