- Jun 16, 2017
-
-
Roman Tsisyk authored
-
Ilya authored
Inspired by the tarantool/curl module by Vasiliy Soshnikov. Reviewed and refactored by Roman Tsisyk. Closes #2083
-
Roman Tsisyk authored
Rename `struct type` to `struct type_info` and `struct method` to `struct method_info` to fix name clash with curl/curl.h
-
- Jun 15, 2017
-
-
Vladimir Davydov authored
We added the _truncate space in 1.7.5, and we are going to add new system spaces for storing sequences and triggers. Without an upgrade, the corresponding operations won't work. Since 1.7.5 is a minor upgrade, users may not call box.schema.upgrade(), so we need to call it for them automatically. This patch introduces the infrastructure for automatic upgrades and arranges for the upgrade to 1.7.5 to be called automatically. While we are at it, rename schema version 1.7.4 to 1.7.5 (1.7.4 has already been released). Closes #2517
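For reference, the manual path that these automatic upgrades replace looks like this from the console; a minimal sketch:

    box.space._schema:get{'version'}  -- shows the current schema version tuple
    box.schema.upgrade()              -- brings system spaces up to date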
-
Roman Tsisyk authored
Follow up #2496
-
Roman Tsisyk authored
Set checkpoint_count = 2, checkpoint_interval = 3600 by default. vinyl/layout.result is updated because checkpoint_count was changed from 6 to 2. Closes #2496
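The new defaults are equivalent to the following explicit configuration; a sketch:

    box.cfg{
        checkpoint_count = 2,        -- keep at most two checkpoints (was 6)
        checkpoint_interval = 3600,  -- make a checkpoint every hour (seconds)
    }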
-
Georgy Kirichenko authored
If more than one request is rolled back, vclock_follow breaks the commit order.
-
Georgy Kirichenko authored
Compiling on ARM raises a signed/unsigned comparison error.
-
Georgy Kirichenko authored
-
- Jun 14, 2017
-
-
Vladislav Shpilevoy authored
Do not call vy_mem_older_lsn on each UPSERT commit. The older lsn statement is used to squash a long chain of UPSERTs and to turn an UPSERT into a REPLACE if the older statement turns out not to be an UPSERT. But n_upserts can be calculated at the prepare phase almost for free, because the bps has the method bps_insert_get_iterator, which returns an iterator to the inserted statement. We can move this iterator forward to the older lsn without searching the tree and update n_upserts. At the commit phase we can take the n_upserts calculated at the prepare phase and call vy_mem_older_lsn only if it makes sense to optimize the UPSERT. Closes #1988
-
Vladislav Shpilevoy authored
According to the code, a 'replace' tuple can also have the UPSERT type. Let's name it 'repsert' = 'replace' + 'upsert'.
-
- Jun 13, 2017
-
-
Konstantin Osipov authored
Improve the error message when selecting from a HASH index using a partial key. Fixes gh-1463.
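For context, a HASH index supports only full-key lookups; a minimal sketch, with illustrative space and index names:

    s = box.schema.space.create('hash_test')
    s:create_index('pk', {type = 'hash', parts = {1, 'unsigned', 2, 'unsigned'}})
    s:select{1, 2}  -- full key: works
    s:select{1}     -- partial key: rejected, now with a clearer error message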
-
Vladislav Shpilevoy authored
tuple_field_...() is a family of functions for retrieving a field from a tuple while checking the specified type. Implement no-throw versions of these functions so that they can be used from C code. Needed for #944
-
Vladislav Shpilevoy authored
Needed for #944 and #2285
-
- Jun 12, 2017
-
-
Konstantin Osipov authored
Reset error injection before issuing a select, to avoid its effects on select execution.
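Error injection is toggled from Lua in the test suite; a sketch of the pattern (the injection name and space are illustrative, and the API is available in debug builds only):

    box.error.injection.set('ERRINJ_WAL_IO', true)   -- arm the failure
    -- ... exercise the path that should fail ...
    box.error.injection.set('ERRINJ_WAL_IO', false)  -- reset before the select
    box.space.test:select{}                          -- now unaffected by the injection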
-
Vladislav Shpilevoy authored
-
- Jun 10, 2017
-
-
Konstantin Osipov authored
Fix a typo in a status message (gh-2417)
-
Vladislav Shpilevoy authored
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
-
Georgy Kirichenko authored
A space should not be accessible while it is being dropped. See #2075
-
Vladimir Davydov authored
A new record type has been added to vylog by commit 353bcdc5 ("Rework space truncation"), VY_LOG_TRUNCATE_INDEX. Update the test.
-
Vladimir Davydov authored
Since vinyl files (run, index, vylog) don't have a space id, reading them with `tarantoolctl cat` fails with: tarantoolctl:784: attempt to compare nil with number
-
- Jun 09, 2017
-
-
Vladimir Davydov authored
In case of failure, print files that were not deleted and the output of box.internal.gc.info(). Needed for #2486
-
Vladimir Davydov authored
To match box.cfg.vinyl_read_threads introduced by the previous patch.
-
Vladimir Davydov authored
vy_run_iterator_load_page() uses coeio, which is extremely inefficient for our purposes:
- it locks/unlocks mutexes every time a task is queued, scheduled, or finished
- it invokes ev_async_send(), which writes to eventfd and wakes up the TX loop on every task completion
- it blocks tasks until a free worker is available, which leads to unpredictable delays
This patch replaces coeio with cbus, in a similar way to how we do the TX <-> WAL interaction. The number of reader threads is set by a new configuration option, vinyl_read_threads, which is set to 1 by default. Note that this patch doesn't bother adjusting the cbus queue length, i.e. it is left at the default INT_MAX. While this is OK when there are a lot of concurrent read requests, it might be suboptimal for low-bandwidth workloads, resulting in higher latencies. We should probably update the queue length dynamically depending on how many clients are out there. Closes #2493
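The new knob is set like any other box.cfg option at instance startup; a sketch:

    box.cfg{
        vinyl_read_threads = 2,  -- number of vinyl reader threads (default 1)
    }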
-
- Jun 08, 2017
-
-
bigbes authored
-
Vladimir Davydov authored
-
Vladimir Davydov authored
Space truncation that we have now is not atomic: we recreate all indexes of the truncated space one by one. This can result in nasty failures if a tuple insertion races with the space truncation and sees some indexes truncated and others not. This patch redesigns space truncation as follows:
- Truncate is now triggered by bumping a counter in a new system space called _truncate. As before, space truncation is implemented by recreating all of its indexes, but now this is done internally in one go, inside the space alter trigger. This makes the operation atomic.
- New indexes are created with the Handler::createIndex method, old indexes are deleted with Index::~Index. Neither Index::commitCreate nor Index::commitDrop is called in case of truncation, in contrast to space alter. Since memtx needs to release tuples referenced by old indexes, and vinyl needs to log space truncation in the metadata log, new Handler methods are introduced, prepareTruncateSpace and commitTruncateSpace, which are passed the old and new spaces. They are called before and after the truncate record is written to the WAL, respectively.
- Since Handler::commitTruncateSpace must not fail while a vylog write obviously may, we reuse the technique used by the commitCreate and commitDrop methods of VinylIndex, namely leave the record we failed to write in the vylog buffer to be either flushed along with the next write or replayed on WAL recovery. To be able to detect whether truncation was logged while recovering the WAL, we introduce a new vylog record type, VY_LOG_TRUNCATE_INDEX, which takes truncate_count as a key: if on WAL recovery the index truncate_count happens to be <= the space truncate_count, it means that truncation was not logged and we need to log it again.
Closes #618
Closes #2060
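From Lua the entry point is unchanged; a minimal sketch, assuming a test space, of how truncation now flows through the _truncate system space:

    s = box.schema.space.create('test')
    s:create_index('pk')
    s:replace{1, 'a'}
    s:truncate()   -- bumps the space's counter in box.space._truncate;
                   -- all indexes are recreated atomically in the alter trigger
    s:len()        -- 0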
-
Vladimir Davydov authored
The space truncate rework done by the next patch requires the ability to swap data stored on disk between two indexes on recovery, so as not to reload all runs every time a space gets truncated. Since we can't swap the contents of two rb trees (due to rbt_nil), convert vy_index->tree to a pointer.
-
Roman Tsisyk authored
-
Georgy Kirichenko authored
Lock the schema before any changes to the space and index dictionary and unlock it only after commit or rollback. This allows many parallel data definition statements. Issue #2075
-
Georgy Kirichenko authored
We need to lock the box schema while editing a DDL space. This lock should be taken before any changes in a DDL space. A before trigger is a good place to issue a schema lock. See #2075
-
Vladimir Davydov authored
We must store at least one snapshot, otherwise we wouldn't be able to recover after restart, so if checkpoint_count is set to 0, we disable garbage collection. This contravenes the convention followed everywhere else in tarantool: if we want an option value (timeout, checkpoint count, etc.) to be infinite, we set it to a very big number, not to 0. Make checkpoint_count comply.
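Under the new convention, "keep everything" is expressed with a large value instead of 0; a sketch (the exact value is illustrative):

    box.cfg{checkpoint_count = 1000000}  -- effectively infinite retention; 0 is no longer special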
-
Vladimir Davydov authored
The current gc implementation has a number of flaws:
- It tracks checkpoints, not consumers, which makes it impossible to identify the reason why gc isn't invoked. All we can see is the number of users of each particular checkpoint (a reference counter), while it would be good to know what references it (a replica or a backup).
- While tracking checkpoints suits backup and initial join well, it doesn't look good when used for subscribe, because a replica is supposed to track a vclock, not a checkpoint.
- Tracking checkpoints from box/gc also violates encapsulation: checkpoints are, in fact, memtx snapshots, so they should be tracked by the memtx engine, not by gc, as they are now. This results in atrocities, like having two snap xdirs - one in memtx, another in gc.
- Garbage collection is invoked by a special internal function, box.internal.gc.run(), which is passed the signature of the oldest checkpoint to save. This function is then used by the snapshot daemon to maintain the configured number of checkpoints. This brings unjustified complexity to the snapshot daemon implementation: instead of just calling box.snapshot() periodically, it has to take on the responsibility of invoking the garbage collector with the right signature. This also means that garbage collection is disabled unless the snapshot daemon is configured to be running, which is confusing, as the snapshot daemon is disabled by default.
So this patch reworks box/gc as follows:
- Checkpoints are now tracked by the memtx engine and can be accessed via a new module box/src/checkpoint.[hc], which provides simple wrappers around the corresponding MemtxEngine methods.
- box/gc.[hc] now tracks not checkpoints, but individual consumers that can be registered, unregistered, and advanced. Each consumer has a human-readable name displayed by box.internal.gc.info():

    tarantool> box.internal.gc.info()
    ---
    - consumers:
      - name: backup
        signature: 8
      - name: replica 885a81a9-a286-4f06-9cb1-ed665d7f5566
        signature: 12
      - name: replica 5d3e314f-bc03-49bf-a12b-5ce709540c87
        signature: 12
      checkpoints:
      - signature: 8
      - signature: 11
      - signature: 12
    ...

- box.internal.gc.run() is removed. Garbage collection is now invoked automatically by box.snapshot() and doesn't require the snapshot daemon to be up and running.
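With this rework, maintaining checkpoints needs nothing beyond periodic snapshots; a minimal sketch of the new operator workflow:

    box.cfg{checkpoint_count = 2}
    box.snapshot()           -- makes a checkpoint and garbage-collects stale ones
    box.internal.gc.info()   -- lists remaining checkpoints and registered consumers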
-
Konstantin Osipov authored
Fix spelling and rephrase a few comments.
-
Vladislav Shpilevoy authored
If an update operation changes a field with number >= 64, the column mask of the update op is set to UINT64_MAX. Let's use the last bit of the column mask as a flag meaning that any field with number >= 63 could be changed. Then, as long as the indexed positions are less than 64, the column mask will always work. Closes #1716
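The mask rule is easy to model; a hypothetical Lua sketch (not the actual C implementation), assuming LuaJIT 2.1's 64-bit bit operations on cdata:

    local bit = require('bit')

    -- Hypothetical model of the rule described above: low-numbered fields
    -- get their own bit, any higher field sets the last bit as a catch-all flag.
    local function column_mask_bit(fieldno)
        if fieldno < 64 then
            return bit.lshift(1ULL, fieldno - 1)
        end
        return bit.lshift(1ULL, 63)  -- "some high field changed" flag
    end

    local mask = bit.bor(column_mask_bit(2), column_mask_bit(70))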
-
Vladislav Shpilevoy authored
Remove waiting for the end of the dump of the secondary indexes. According to commit 0d99714f, the primary index is always dumped after the secondary ones, so we can wait for the primary alone instead of all indexes.
-
- Jun 07, 2017
-
-
Konstantin Osipov authored
Replace coeio_init() with coio_init(). Remove the coeio prefix, and use coio for everything.
-
Konstantin Osipov authored
We use the _init() suffix for library-wide initializers.
-