- Jun 16, 2017
-
-
Vladimir Davydov authored
It's no use having a separate method for every kind of integer we want to append to box info - int64_t should suit everyone.
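A minimal sketch of the idea, assuming a simplified `struct info_handler`; the real handler dispatches to backend callbacks, and the exact signature here is illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical handler; the real struct info_handler holds backend callbacks. */
struct info_handler { FILE *out; };

/* One appender taking int64_t replaces a per-type family of methods:
 * any smaller integer is implicitly widened at the call site. */
static void
info_append_int(struct info_handler *h, const char *key, int64_t val)
{
	fprintf(h->out, "%s: %lld\n", key, (long long) val);
}

int main(void)
{
	struct info_handler h = { stdout };
	uint32_t rows = 42;
	info_append_int(&h, "rows", rows); /* uint32_t widens to int64_t */
	return 0;
}
```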
-
Vladimir Davydov authored
We compute dump bandwidth based on the time it takes a run writing task to complete. While this used to work when we had no data compression and indexes didn't share in-memory tuples, today the logic behind dump bandwidth calculation is completely flawed:

- Due to data compression, the amount of memory we dump may be much greater than the amount of data we write to disk, in which case dump bandwidth will be underestimated.
- If a space has several indexes, dumping it may result in writing more data than is actually stored in memory, because tuples of the same space are shared among its indexes in memory, but stored separately when written to disk. In this case, dump bandwidth will be overestimated.

This results in the quota watermark being set incorrectly and, as a result, either stalling transactions or dumping memory non-stop. Obviously, to resolve both issues, we need to account for memory freed per unit of time instead of data written to disk. So this patch makes vy_scheduler_trigger_dump() remember the time when dump was started and vy_scheduler_complete_dump() update dump bandwidth based on the amount of memory dumped and the time the dump took.
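A minimal sketch of the new accounting, with illustrative field names (not the actual vy_scheduler layout); `clock_monotonic()` is assumed to be a monotonic timer like the one in tarantool's clock.h:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical scheduler fields, for illustration only. */
struct vy_scheduler_sketch {
	double dump_start;     /* time when the current dump began, seconds */
	size_t dump_mem;       /* bytes of in-memory data being dumped */
	size_t dump_bandwidth; /* bytes of memory freed per second */
};

extern double clock_monotonic(void); /* assumed monotonic timer */

static void
vy_scheduler_trigger_dump_sketch(struct vy_scheduler_sketch *s, size_t mem_used)
{
	s->dump_start = clock_monotonic();
	s->dump_mem = mem_used;
}

static void
vy_scheduler_complete_dump_sketch(struct vy_scheduler_sketch *s)
{
	double duration = clock_monotonic() - s->dump_start;
	if (duration > 0)
		/* Memory freed per unit of time, not bytes written to disk. */
		s->dump_bandwidth = (size_t)(s->dump_mem / duration);
}
```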
-
Vladimir Davydov authored
Currently, vy_scheduler_remove_mem() calls vy_scheduler_complete_dump() if vy_scheduler_dump_in_progress() returns false, but that doesn't necessarily mean a dump has just completed. The point is that vy_scheduler_remove_mem() is called not only for a memory tree that has just been dumped to disk, but also for all memory trees of a dropped index, i.e. dropping an index when there is no dump in progress results in a vy_scheduler_complete_dump() invocation. This does no harm now, but looks ugly. Besides, I'm planning to account dump bandwidth in vy_scheduler_complete_dump(), which must only be done on actual dump completion.
-
Vladimir Davydov authored
Following patches will add more logic to them, so it's better to factor them out now to keep the code clean. No functional changes.
-
Vladimir Davydov authored
Currently, to force dumping all in-memory trees, box.snapshot() increments scheduler->generation directly. If a dump is in progress, and there is a space with more than one index whose secondary indexes have all been dumped by the time box.snapshot() is called while its primary index is still being dumped, then incrementing the generation will force the scheduler to start dumping the secondary indexes of this space again (provided, of course, the space has fresh data). Creating a dump task for a secondary index will then attempt to pin the primary index - see vy_task_dump_new() => vy_scheduler_pin_index() - which will crash, because the primary index is being dumped and hence can't be removed from the scheduler by vy_scheduler_pin_index():

Segmentation fault
#0  0x40c3a4 in sig_fatal_cb(int)+214
#1  0x7f6ac7981890 in ?
#2  0x4610bd in vy_scheduler_remove_index+46
#3  0x4610fe in vy_scheduler_pin_index+49
#4  0x45f93e in vy_task_dump_new+1478
#5  0x46137e in vy_scheduler_peek_dump+282
#6  0x461467 in vy_schedule+47
#7  0x461bf8 in vy_scheduler_f+1143

To fix that, let's trigger dump (by bumping the generation) only from the scheduler fiber, in vy_scheduler_peek_dump(). The checkpoint now forces the scheduler to schedule a dump by setting the checkpoint_in_progress flag and checkpoint_generation. Closes #2508
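A sketch of the idea with hypothetical field names: the checkpoint only publishes its target generation, and the scheduler fiber bumps the real generation itself when no dump is in progress:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative fields, not the actual vy_scheduler definition. */
struct vy_scheduler_sketch {
	int64_t generation;            /* current dump generation */
	int64_t checkpoint_generation; /* generation requested by checkpoint */
	bool checkpoint_in_progress;
	bool dump_in_progress;
};

static void
vy_scheduler_peek_dump_sketch(struct vy_scheduler_sketch *s)
{
	if (s->checkpoint_in_progress &&
	    s->generation < s->checkpoint_generation &&
	    !s->dump_in_progress) {
		/* Safe: nothing is being dumped, so no index is pinned. */
		s->generation = s->checkpoint_generation;
	}
	/* ... proceed to schedule dump tasks for the new generation ... */
}
```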
-
Konstantin Osipov authored
-
Vladimir Davydov authored
The replace trigger of the _truncate system space (on_replace_dd_truncate) does nothing on insertion into or deletion from the space - it only updates space truncate_count when a tuple gets updated. As a result, space truncate_count isn't initialized properly after recovering a snapshot. This does no harm to memtx, because it doesn't use space truncate_count at all, but it breaks the assumption made by vinyl that if space truncate_count is less than index truncate_count (which is loaded from vylog), the space will be truncated during WAL recovery and hence there's no point in applying statements to the space (see vy_is_committed_one). As a result, all statements inserted into a vinyl space after a snapshot following truncation of the space are ignored on WAL recovery. To fix that, we must initialize space truncate_count when a tuple is inserted into the _truncate system space. Closes #2521
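A minimal sketch of the fix, assuming simplified accessors (the real trigger works with struct txn_stmt and msgpack field decoding; names below are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

struct space { uint64_t truncate_count; };
struct tuple;

/* Hypothetical stand-ins for real tuple/schema accessors. */
extern uint64_t tuple_field_uint_sketch(struct tuple *t, int fieldno);
extern struct space *space_by_id_sketch(uint32_t id);

static void
on_replace_dd_truncate_sketch(struct tuple *old_tuple, struct tuple *new_tuple)
{
	(void) old_tuple;
	if (new_tuple == NULL)
		return; /* deletion from _truncate: nothing to do */
	/* Handle INSERT (snapshot recovery) as well as UPDATE: both must
	 * set space->truncate_count so vinyl's vy_is_committed_one() check
	 * against index truncate_count stays correct. */
	uint32_t space_id = (uint32_t) tuple_field_uint_sketch(new_tuple, 0);
	uint64_t count = tuple_field_uint_sketch(new_tuple, 1);
	struct space *space = space_by_id_sketch(space_id);
	space->truncate_count = count;
}
```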
-
Roman Tsisyk authored
-
Ilya authored
Inspired by the tarantool/curl module by Vasiliy Soshnikov. Reviewed and refactored by Roman Tsisyk. Closes #2083
-
Roman Tsisyk authored
Rename `struct type` to `struct type_info` and `struct method` to `struct method_info` to fix name clash with curl/curl.h
-
- Jun 15, 2017
-
-
Vladimir Davydov authored
We added _truncate space to 1.7.5 and we are going to add new system spaces for storing sequences and triggers. Without upgrade, the corresponding operations won't work. Since 1.7.5 is a minor upgrade, users may not call box.schema.upgrade(), so we need to call it for them automatically. This patch introduces infrastructure for automatic upgrades and sets upgrade to 1.7.5 to be called automatically. While we are at it, rename schema version 1.7.4 to 1.7.5 (1.7.4 has already been released). Closes #2517
-
Roman Tsisyk authored
Follow up #2496
-
Roman Tsisyk authored
Set checkpoint_count = 2, checkpoint_interval = 3600 by default. vinyl/layout.result is updated because checkpoint_count was changed from 6 to 2. Closes #2496
-
Georgy Kirichenko authored
If more than one request is rolled back, vclock_follow breaks the commit order.
-
Georgy Kirichenko authored
Compiling on ARM raises a signed vs. unsigned comparison error.
-
Georgy Kirichenko authored
-
- Jun 14, 2017
-
-
Vladislav Shpilevoy authored
Do not call vy_mem_older_lsn on each UPSERT commit. The older-lsn statement is used to squash long chains of UPSERTs and to turn an UPSERT into a REPLACE if the older statement turns out not to be an UPSERT. But n_upserts can be calculated at the prepare phase almost for free, because the bps tree has the method bps_insert_get_iterator, which returns an iterator positioned at the inserted statement. We can move this iterator forward to the older lsn without searching the tree and update n_upserts. At the commit phase we can take the n_upserts calculated at the prepare phase and call vy_mem_older_lsn only if it makes sense to optimize the UPSERT. Closes #1988
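A sketch of the optimization with hypothetical names (the iterator helper below stands in for the real bps tree API):

```c
#include <stdint.h>
#include <stddef.h>

enum { STMT_UPSERT = 1 }; /* stand-in for the real statement-type constant */

struct vy_stmt_sketch {
	int type;          /* statement type */
	uint8_t n_upserts; /* UPSERTs stacked on top of the same key */
};

/* The iterator returned by bps_insert_get_iterator() is positioned at
 * the freshly inserted statement; one step forward reaches the older-lsn
 * statement for the same key, so no extra tree search is needed. */
struct tree_iterator_sketch;
extern struct vy_stmt_sketch *
tree_iterator_next_same_key_sketch(struct tree_iterator_sketch *it);

static void
vy_mem_count_upserts_sketch(struct vy_stmt_sketch *stmt,
			    struct tree_iterator_sketch *inserted)
{
	struct vy_stmt_sketch *older =
		tree_iterator_next_same_key_sketch(inserted);
	if (older != NULL && older->type == STMT_UPSERT)
		stmt->n_upserts = older->n_upserts + 1;
	/* At commit, vy_mem_older_lsn() is invoked only when n_upserts
	 * says squashing or UPSERT->REPLACE conversion would pay off. */
}
```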
-
Vladislav Shpilevoy authored
According to the code, a 'replace' tuple can also have the UPSERT type. Let's name it 'repsert' = 'replace' + 'upsert'.
-
- Jun 13, 2017
-
-
Konstantin Osipov authored
Improve the error message when selecting from HASH index using a partial key. Fixes gh-1463.
-
Vladislav Shpilevoy authored
tuple_field_...() is a family of functions for retrieving a field from a tuple while checking it against the specified type. Implement no-throw versions of these functions so that they can be used from C code. Needed for #944
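A minimal sketch of what a no-throw variant looks like; the names and signatures below are illustrative, not the actual tarantool API:

```c
#include <stdint.h>
#include <stddef.h>

struct tuple;

/* Hypothetical stand-ins for the raw field accessor and msgpack checks. */
extern const char *tuple_field_raw_sketch(struct tuple *t, uint32_t fieldno);
extern int mp_is_uint_sketch(const char *data);
extern uint64_t mp_decode_uint_sketch(const char **data);

static int
tuple_field_u64_nothrow_sketch(struct tuple *t, uint32_t fieldno, uint64_t *out)
{
	const char *field = tuple_field_raw_sketch(t, fieldno);
	if (field == NULL || !mp_is_uint_sketch(field))
		return -1; /* no C++ exception: caller checks the return code */
	*out = mp_decode_uint_sketch(&field);
	return 0;
}
```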
-
Vladislav Shpilevoy authored
Needed for #944 and #2285
-
- Jun 12, 2017
-
-
Konstantin Osipov authored
Turn off error injection before issuing a select, to avoid its effects on select execution.
-
Vladislav Shpilevoy authored
-
- Jun 10, 2017
-
-
Konstantin Osipov authored
Fix a typo in a status message (gh-2417)
-
Vladislav Shpilevoy authored
-
Konstantin Osipov authored
-
Vladislav Shpilevoy authored
-
Georgy Kirichenko authored
Space should not be accessible while being dropped. See #2075
-
Vladimir Davydov authored
A new record type, VY_LOG_TRUNCATE_INDEX, has been added to vylog by commit 353bcdc5 ("Rework space truncation"). Update the test.
-
Vladimir Davydov authored
Since vinyl files (run, index, vylog) don't have a space id, reading them with `tarantoolctl cat` fails with: `tarantoolctl:784: attempt to compare nil with number`
-
- Jun 09, 2017
-
-
Vladimir Davydov authored
In case of failure, print files that were not deleted and the output of box.internal.gc.info(). Needed for #2486
-
Vladimir Davydov authored
To match box.cfg.vinyl_read_threads introduced by the previous patch.
-
Vladimir Davydov authored
vy_run_iterator_load_page() uses coeio, which is extremely inefficient for our purposes:

- it locks/unlocks mutexes every time a task is queued, scheduled, or finished
- it invokes ev_async_send(), which writes to an eventfd and wakes up the TX loop on every task completion
- it blocks tasks until a free worker is available, which leads to unpredictable delays

This patch replaces coeio with cbus, in a way similar to the TX <-> WAL interaction. The number of reader threads is set by a new configuration option, vinyl_read_threads, which is set to 1 by default. Note, this patch doesn't bother adjusting the cbus queue length, i.e. it is left at its default of INT_MAX. While this is OK when there are a lot of concurrent read requests, it might be suboptimal for low-bandwidth workloads, resulting in higher latencies. We should probably update the queue length dynamically depending on how many clients are out there. Closes #2493
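Not the actual cbus API - a self-contained pthread sketch of the dedicated-reader-thread idea, where each reader owns its own queue instead of sharing one pool-wide mutex the way coeio does:

```c
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct read_task {
	int page_no;
	struct read_task *next;
};

struct reader {
	pthread_t thread;
	pthread_mutex_t lock; /* per-reader lock: no pool-wide contention */
	pthread_cond_t cond;
	struct read_task *queue;
};

static void *
reader_f(void *arg)
{
	struct reader *r = arg;
	for (;;) {
		pthread_mutex_lock(&r->lock);
		while (r->queue == NULL)
			pthread_cond_wait(&r->cond, &r->lock);
		struct read_task *task = r->queue;
		r->queue = task->next;
		pthread_mutex_unlock(&r->lock);
		printf("reading page %d\n", task->page_no);
		/* ... read and parse the page, then notify TX once,
		 * rather than waking it on every completion ... */
	}
	return NULL;
}

/* vinyl_read_threads readers are started at engine boot (default 1). */
static void
start_readers(struct reader *readers, int vinyl_read_threads)
{
	for (int i = 0; i < vinyl_read_threads; i++) {
		pthread_mutex_init(&readers[i].lock, NULL);
		pthread_cond_init(&readers[i].cond, NULL);
		readers[i].queue = NULL;
		pthread_create(&readers[i].thread, NULL, reader_f,
			       &readers[i]);
	}
}
```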
-
- Jun 08, 2017
-
-
bigbes authored
-
Vladimir Davydov authored
-
Vladimir Davydov authored
Space truncation as we have it now is not atomic: we recreate all indexes of the truncated space one by one. This can result in nasty failures if a tuple insertion races with the space truncation and sees some indexes truncated and others not. This patch redesigns space truncation as follows:

- Truncate is now triggered by bumping a counter in a new system space called _truncate. As before, space truncation is implemented by recreating all of its indexes, but now this is done internally in one go, inside the space alter trigger. This makes the operation atomic.
- New indexes are created with the Handler::createIndex method, old indexes are deleted with Index::~Index. Neither Index::commitCreate nor Index::commitDrop is called in case of truncation, in contrast to space alter. Since memtx needs to release tuples referenced by old indexes, and vinyl needs to log space truncation in the metadata log, new Handler methods are introduced, prepareTruncateSpace and commitTruncateSpace, which are passed the old and new spaces. They are called before and after the truncate record is written to WAL, respectively.
- Since Handler::commitTruncateSpace must not fail while a vylog write obviously may, we reuse the technique used by the commitCreate and commitDrop methods of VinylIndex, namely leave the record we failed to write in the vylog buffer, to be either flushed along with the next write or replayed on WAL recovery. To be able to detect whether truncation was logged while recovering WAL, we introduce a new vylog record type, VY_LOG_TRUNCATE_INDEX, which takes truncate_count as a key: if on WAL recovery index truncate_count happens to be <= space truncate_count, it means that truncation was not logged and we need to log it again.

Closes #618 Closes #2060
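A hypothetical outline of the resulting flow; all names below are illustrative stand-ins for the alter-trigger code described above:

```c
struct space;

/* Stand-ins for the real engine hooks. */
extern struct space *space_new_with_fresh_indexes(struct space *old);
extern void handler_prepare_truncate(struct space *old, struct space *new_space);
extern void handler_commit_truncate(struct space *old, struct space *new_space);
extern int wal_write_truncate_record(struct space *space);

static int
space_truncate_sketch(struct space *old)
{
	/* All indexes are recreated in one go inside the alter trigger,
	 * so concurrent insertions never see a half-truncated space. */
	struct space *new_space = space_new_with_fresh_indexes(old);
	/* memtx: nothing yet; vinyl: stage the vylog truncate record. */
	handler_prepare_truncate(old, new_space);
	if (wal_write_truncate_record(new_space) != 0)
		return -1;
	/* Must not fail: a failed vylog write stays buffered, to be
	 * flushed with the next write or replayed on WAL recovery. */
	handler_commit_truncate(old, new_space);
	return 0;
}
```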
-
Vladimir Davydov authored
The space truncate rework done by the next patch requires the ability to swap data stored on disk between two indexes on recovery, so as not to reload all runs every time a space gets truncated. Since we can't swap the contents of two rb trees (due to rbt_nil), convert vy_index->tree to a pointer.
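A tiny sketch of why a pointer helps, with illustrative field names: two rb trees embedded by value can't exchange contents (each tree's rbt_nil sentinel is tied to its own struct), while two pointers swap trivially:

```c
struct vy_mem_tree;

struct vy_index_sketch {
	struct vy_mem_tree *tree; /* was: struct vy_mem_tree tree; */
};

static void
vy_index_swap_trees_sketch(struct vy_index_sketch *a,
			   struct vy_index_sketch *b)
{
	struct vy_mem_tree *tmp = a->tree;
	a->tree = b->tree;
	b->tree = tmp;
}
```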
-
Roman Tsisyk authored
-
Georgy Kirichenko authored
Lock the schema before any changes to the space and index dictionary and unlock it only after commit or rollback. This allows many parallel data definition statements. Issue #2075
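A sketch of the locking discipline using a latch-style primitive (tarantool has one in latch.h, but the global latch object and helper functions here are hypothetical):

```c
#include <stdbool.h>

struct latch;
extern void latch_lock(struct latch *l);
extern void latch_unlock(struct latch *l);
extern struct latch *schema_lock_sketch; /* hypothetical global latch */

static void
ddl_begin_sketch(void)
{
	/* Taken before any change to the space/index dictionary ... */
	latch_lock(schema_lock_sketch);
}

static void
ddl_end_sketch(bool committed)
{
	/* ... and released only after commit or rollback, so concurrent
	 * DDL statements serialize instead of corrupting the schema. */
	(void) committed;
	latch_unlock(schema_lock_sketch);
}
```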
-
Georgy Kirichenko authored
We need to lock the box schema while editing a DDL space. This lock should be taken before any changes to a DDL space. A before trigger is a good place to take the schema lock. See #2075
-