- Jul 06, 2017
-
-
Konstantin Osipov authored
* update comments
* add a test case for altering a primary key on the fly
* rename AddIndex to CreateIndex
* factor out common code into a function
-
Georgy Kirichenko authored
The MoveIndex operation is used to move an existing index from the old space to the new one; semantically it's a no-op. RebuildIndex is introduced for the case when essential index properties are changed, making it necessary to drop the old index and create a new one in its place in the new space. AlterSpaceOp::prepare() is removed: all checks are moved from it to the on_replace trigger on the _index system space. All checks are done before any alter operation is created. Necessary for gh-2074 and gh-1796.
-
alyapunov authored
Move the check which decides on an alter strategy whenever a row in the _index space is changed from AlterSpaceOp::prepare() to the on_replace trigger on the _index space. The check chooses between two options: a heavy-weight index rebuild, invoked when the index definition, such as its key parts, is changed, vs. a lightweight modify, invoked when the index name or minor options are modified. Before this patch, index alteration created a pair of operations (DropIndex + AddIndex) in all cases, but later replaced the two operations with one at the AlterSpaceOp::prepare() phase. This is bad for several reasons:
- it is done while traversing a linked list of operations, and it changes the list being traversed;
- a particular order in the list of operations is required for this to work: drop must precede add;
- the needless allocation and deallocation of operations makes the logic unnecessarily complex.
Necessary for gh-1796.
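A minimal sketch of the rebuild-vs-modify decision, using simplified stand-ins for index_def and its key parts (the real check compares full index definitions inside the _index on_replace trigger):

```c
#include <stdbool.h>
#include <string.h>

/* Simplified stand-ins for the real index_def / key_def structures. */
struct key_def_stub {
    unsigned part_count;
    unsigned parts[8];  /* field numbers of the key parts */
};

struct index_def_stub {
    char name[32];      /* changing this is a lightweight modify */
    struct key_def_stub key_def;
};

/*
 * Return true if the change touches essential properties, so the
 * index must be dropped and rebuilt; false if a lightweight modify
 * (rename, minor option tweak) is enough.
 */
static bool
index_needs_rebuild(const struct index_def_stub *old_def,
                    const struct index_def_stub *new_def)
{
    if (old_def->key_def.part_count != new_def->key_def.part_count)
        return true;
    return memcmp(old_def->key_def.parts, new_def->key_def.parts,
                  old_def->key_def.part_count *
                  sizeof(old_def->key_def.parts[0])) != 0;
}
```

With the decision made before any operation is created, the trigger emits exactly one operation instead of a DropIndex + AddIndex pair that later has to be folded into one.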
-
Konstantin Osipov authored
Always create the primary key index in a space first. Put the primary key's key def first in the array of key_defs passed into tuple_format_new(). This is necessary for gh-1796.
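A minimal sketch, assuming the alter code gathers key_def pointers into an array before calling tuple_format_new(); put_pk_first() and pk_pos are illustrative names:

```c
struct key_def;

/* Swap the primary key's key_def into slot 0 before handing the
 * array to tuple_format_new(). */
static void
put_pk_first(struct key_def **keys, unsigned key_count, unsigned pk_pos)
{
    if (pk_pos == 0 || pk_pos >= key_count)
        return;
    struct key_def *pk = keys[pk_pos];
    keys[pk_pos] = keys[0];
    keys[0] = pk;
}
```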
-
Konstantin Osipov authored
Assert that we can't create a space with a secondary key but no primary key.
-
alyapunov authored
Currently, the checks for dropping a primary index are made in alter triggers after the new space is created. Such an implementation leads to the temporary creation of a space with an invalid index set. Fix it by checking the index set before the space_new() call.
-
Georgy Kirichenko authored
A non-destructive swap_index_def() function should be used because memtx_tree stores a pointer to the index_def used during tree creation. Fixed #2570
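Why the swap must be non-destructive, in sketch form: the tree keeps the pointer it was created with, so the definition has to be updated in place rather than replaced:

```c
/* Simplified stand-ins for the real structures. */
struct index_def_stub {
    int id;
    /* ... other definition fields ... */
};

struct tree_stub {
    /* memtx_tree-style: holds the pointer passed at creation time. */
    struct index_def_stub *def;
};

static void
swap_index_def_stub(struct index_def_stub *a, struct index_def_stub *b)
{
    /* Swap the contents, not the pointers, so tree_stub->def stays
     * valid and observes the new definition. */
    struct index_def_stub tmp = *a;
    *a = *b;
    *b = tmp;
}
```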
-
- Jul 05, 2017
-
-
Konstantin Osipov authored
-
- Jul 04, 2017
-
-
Vladimir Davydov authored
Commit dbfd515f ("vinyl: fix crash if snapshot is called while dump is in progress") introduced a bug that can result in statements inserted after WAL checkpoint being included in a snapshot. This happens because vy_begin_checkpoint() doesn't force rotation of in-memory trees anymore: it bumps checkpoint_generation, but doesn't touch scheduler->generation, which is used to trigger in-memory tree rotation.

To fix this issue, this patch zaps scheduler->checkpoint_generation and makes vy_begin_checkpoint() bump scheduler->generation directly, as it used to. To guarantee dump consistency (the issue fixed by commit dbfd515f), scheduler->dump_generation is introduced - it defines the generation of the in-memory data that are currently being dumped. The scheduler won't start dumping newer trees until all trees whose generation equals dump_generation have been dumped. The counter is only bumped by the scheduler itself, when all old in-memory trees have been dumped. Together, this guarantees that each dump contains data of the same generation, i.e. is consistent.

While we are at it, let's also remove vy_scheduler->dump_fifo, the list of all in-memory trees sorted in chronological order. The scheduler uses it to keep track of the oldest in-memory tree, which is needed to invoke lsregion_gc(). However, since we no longer remove indexes from the dump_heap, as we used to do not so long ago, we can use the heap for this. The only problem is that indexes which are currently being dumped are moved off the top of the heap, but we can detect this case by maintaining a counter of dump tasks in progress: if dump_task_count is > 0 when a dump task is completed, we must not call lsregion_gc(), irrespective of the generation of the index at the top of the heap.

A good thing about getting rid of vy_scheduler->dump_fifo is that it is a step forward towards making vy_index independent of vy_scheduler, so that it can be moved to a separate source file.

Closes #2541
Needed for #1906
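A minimal sketch of the dump completion logic described above, with illustrative helper names:

```c
#include <stdint.h>

/* Simplified scheduler state; the real fields live in vy_scheduler. */
struct scheduler_stub {
    int64_t generation;      /* current in-memory tree generation */
    int64_t dump_generation; /* generation currently being dumped */
    int dump_task_count;     /* dump tasks in flight */
};

/* Hypothetical helper: generation of the oldest undumped tree,
 * i.e. of the index at the top of the dump heap. */
int64_t oldest_undumped_generation(void);
/* Hypothetical helper: free in-memory data older than gen. */
void lsregion_gc_stub(int64_t gen);

static void
on_dump_task_complete(struct scheduler_stub *s)
{
    s->dump_task_count--;
    /* While dumps are in flight, the top of the heap may not reflect
     * the true oldest tree, so garbage collection must wait. */
    if (s->dump_task_count > 0)
        return;
    int64_t oldest = oldest_undumped_generation();
    if (oldest > s->dump_generation) {
        /* Every tree of dump_generation is on disk: the dump is
         * complete and consistent; move on to newer trees. */
        s->dump_generation = oldest;
        lsregion_gc_stub(s->dump_generation);
    }
}
```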
-
Vladimir Davydov authored
vy_index->generation equals the generation of the oldest in-memory tree, which can be looked up efficiently, because the vy_index->sealed list is sorted by generation. So let's zap it and add a vy_index_generation() function instead.
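A minimal sketch with simplified types; in the real code the sealed list is an intrusive doubly-linked list, so the oldest entry is reachable in O(1), while the singly-linked sketch below walks to it:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins: an index has one active in-memory tree and a
 * list of sealed trees ordered from newest to oldest. */
struct mem_stub {
    int64_t generation;
    struct mem_stub *older;  /* next (older) sealed tree or NULL */
};

struct index_stub {
    struct mem_stub *mem;    /* active tree, never NULL */
    struct mem_stub *sealed; /* newest sealed tree or NULL */
};

/* Generation of the oldest in-memory data of the index: the last
 * sealed tree, or the active tree if nothing is sealed. */
static int64_t
index_generation(const struct index_stub *index)
{
    const struct mem_stub *oldest = index->sealed;
    if (oldest == NULL)
        return index->mem->generation;
    while (oldest->older != NULL)
        oldest = oldest->older;
    return oldest->generation;
}
```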
-
Vladimir Davydov authored
Needed for #1906
-
Vladimir Davydov authored
Including index.h just for the sake of iterator_type, as we do in vy_run.h and vy_mem.h, is a bit of overkill. Let's move its definition to a separate source file, iterator_type.h.
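A sketch of the extracted header; the iterator names match the ones Tarantool exposes, though the list here is abridged:

```c
/* iterator_type.h - extracted definition, so that vy_run.h and
 * vy_mem.h no longer need to pull in index.h. */
#ifndef ITERATOR_TYPE_H
#define ITERATOR_TYPE_H

enum iterator_type {
    ITER_EQ,   /* key == x */
    ITER_REQ,  /* key == x, reverse order */
    ITER_ALL,  /* all tuples */
    ITER_LT,   /* key < x */
    ITER_LE,   /* key <= x */
    ITER_GE,   /* key >= x */
    ITER_GT,   /* key > x */
};

#endif /* ITERATOR_TYPE_H */
```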
-
Vladimir Davydov authored
- Replace vy_range->index with key_def.
- Replace vy_range_iterator->index with vy_range_tree_t.
Needed for #1906
-
Vladimir Davydov authored
The compact_heap, used by the scheduler to schedule range compaction, contains all ranges except those that are currently being compacted. Since the appropriate vy_index object is required to schedule a range compaction, we have to store a pointer to the index a range belongs to in vy_range->index. This makes it impossible to move the vy_range struct and its implementation to a separate source file. To address this, let's rework the scheduler as follows:
- Make compact_heap store indexes, not ranges. An index is prioritized by the greatest compact_priority among its ranges.
- Add a heap of ranges to each index, prioritized by compact_priority. A range is removed from the heap while it's being compacted.
- Do not remove indexes from dump_heap or compact_heap when a task is scheduled (otherwise we could only schedule one compaction per index). Instead, just update the index position in the heaps.
Needed for #1906
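A minimal sketch of how an index's compact priority is derived from its ranges; the linear scan below stands in for reading the top of the per-index range heap, which makes the lookup O(1) in the real patch:

```c
#include <stddef.h>

struct range_stub {
    int compact_priority;  /* number of runs worth compacting */
};

struct index_stub {
    /* Stand-in for the per-index heap of ranges; a range is removed
     * from it while it is being compacted. */
    struct range_stub **ranges;
    size_t range_count;
};

/* Priority of the index in the global compact_heap: the greatest
 * compact_priority among its ranges. */
static int
index_compact_priority(const struct index_stub *index)
{
    int max = 0;
    for (size_t i = 0; i < index->range_count; i++) {
        if (index->ranges[i]->compact_priority > max)
            max = index->ranges[i]->compact_priority;
    }
    return max;
}
```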
-
- Jul 03, 2017
-
-
Vladimir Davydov authored
Advancing replica->gc on every status update is inefficient, as gc can only be invoked when we move on to the next xlog file. Currently it's acceptable, because the status is only updated once per second, but there's no guarantee that it won't be updated, say, every millisecond in the future, in which case advancing replica->gc on every status update may become too costly. So introduce a trigger invoked every time an xlog is closed by recover_remaining_wals() and use it in the relay to send a special gc message.
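A minimal sketch of the trigger shape, with illustrative names (the real code uses Tarantool's trigger lists):

```c
#include <stddef.h>

/* Simplified single-slot trigger; the real code keeps a list. */
struct trigger_stub {
    void (*run)(void *arg);
    void *arg;
};

struct recovery_stub {
    /* Fired every time recover_remaining_wals() closes an xlog. */
    struct trigger_stub *on_close_log;
};

static void
recovery_close_log_stub(struct recovery_stub *r)
{
    /* ... close the current xlog file ... */
    if (r->on_close_log != NULL)
        r->on_close_log->run(r->on_close_log->arg);
}
```

In the relay, the trigger sends a dedicated gc message to tx instead of piggybacking gc on every status update.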
-
Vladimir Davydov authored
ipc_cond_wait() always returns 0, so the body of the loop waiting for the endpoint to be ready for destruction is only invoked once.
-
Vladimir Davydov authored
To make sure there is no status message pending in the tx pipe, relay_cbus_detach() waits on relay->status_cond before proceeding to relay destruction. The status_cond is signaled by the status message completion routine (relay_status_update()), handled by cbus on the relay's side. The problem is that by the time we call relay_cbus_detach(), the cbus loop has been stopped (see relay_subscribe_f()), i.e. there's no one to process the message that is supposed to signal status_cond. That means that if there happens to be a status message en route when the relay is stopped, the relay thread will hang forever.

To fix this issue, let's introduce a new helper function, cbus_flush(), which blocks the caller until all cbus messages queued on a pipe have been processed, and use it in relay_cbus_detach() to wait for in-flight status messages to complete. Apart from the source and destination pipes, this new function takes a callback to be used for processing incoming cbus messages, so it can be used even if the loop that is supposed to invoke cbus_process() has stopped.
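The semantics of the new function can be sketched with a sentinel round trip; this is an illustration of the contract, not the real cbus implementation, and all names are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>

struct msg_stub {
    bool is_sentinel;
    bool done;
};

/* Hypothetical stand-ins for the pipe primitives. */
void peer_push(struct msg_stub *msg);  /* enqueue towards the peer */
struct msg_stub *self_pop(void);       /* dequeue incoming, or NULL */

static void
flush_stub(void (*process_cb)(struct msg_stub *))
{
    /* The peer bounces the sentinel back once every message queued
     * before it has been handled. */
    struct msg_stub sentinel = { .is_sentinel = true, .done = false };
    peer_push(&sentinel);
    while (!sentinel.done) {
        /* The event loop that normally drives message processing may
         * be stopped, so drain incoming messages here with the
         * caller's callback. */
        struct msg_stub *in = self_pop();
        if (in == NULL)
            continue;  /* a real implementation would block here */
        if (in->is_sentinel)
            in->done = true;
        else
            process_cb(in);
    }
}
```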
-
Vladimir Davydov authored
- Fold in the wal dir scan. It's pretty easy to detect if we need to rescan the wal dir - we do iff the current wal is closed (if it isn't, we need to recover it first), so there's no point in keeping it apart.
- Close the last recovered wal on eof. Previously we didn't close it, to avoid rereading it in case recover_remaining_wals() is called again before a new wal is added to the wal dir. We can detect this case instead by checking if the signature of the last wal stored in the wal dir has increased after rescanning the dir.
- Don't abort recovery and print the 'xlog is deleted under our feet' message if the current wal file is removed. This is pointless, really - it's OK to remove an open file in Unix. Besides, the check for a deleted file is only relevant if the wal dir has been rescanned, which is only done when we proceed to the next wal, i.e. it doesn't really detect anything.
A good side effect of this rework is that now we can invoke the garbage collector right from recovery_close_log().
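A rough sketch of the resulting rescan rule, with illustrative names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified recovery state. */
struct recovery_stub {
    bool wal_is_open;       /* current xlog not yet fully recovered */
    int64_t last_signature; /* signature of the last recovered wal */
};

/* Hypothetical helper: greatest wal signature found in the wal dir. */
int64_t wal_dir_greatest_signature(void);

static void
recover_remaining_wals_stub(struct recovery_stub *r)
{
    if (r->wal_is_open) {
        /* ... recover the current wal to eof, then close it ... */
        r->wal_is_open = false;
        return;
    }
    /* The current wal is closed, so rescan the dir. Since the last
     * wal was closed on eof, it is reopened only if the greatest
     * signature has grown, i.e. a new wal has appeared since. */
    if (wal_dir_greatest_signature() > r->last_signature) {
        /* ... open and recover the next wal(s) ... */
    }
}
```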
-
Konstantin Osipov authored
-
- Jun 30, 2017
-
-
Konstantin Osipov authored
* use per-index statistics
* remove step_count as it is no longer maintained
* add statistics for txw, mem, and index overall
-
alyapunov authored
The old iterator has several problems:
- the restoration system is too complex and might cause several reads of the same statements from disk;
- the application of upserts works in a direct way (squash all upserts and apply them to the terminal statement) and the code doesn't leave a chance to optimize it.
Implement an iterator for the full-key EQ case that fixes the problems above.
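The idea behind the full-key EQ case can be sketched as follows, with hypothetical helpers: since the key is fully specified, sources can be scanned from newest to oldest, the scan stops at the first terminal statement, and upserts are collected and applied on the way out:

```c
#include <stddef.h>

enum stmt_type { STMT_UPSERT, STMT_REPLACE, STMT_DELETE };

struct stmt_stub {
    enum stmt_type type;
    /* ... statement data ... */
};

/* Hypothetical helpers: sources are ordered from newest to oldest;
 * upsert_apply() must accept a NULL base. */
struct stmt_stub *source_get(int src, const char *key);
struct stmt_stub *upsert_apply(struct stmt_stub *up, struct stmt_stub *base);

static struct stmt_stub *
eq_lookup(int src_count, const char *key)
{
    struct stmt_stub *upserts[16]; /* fixed cap keeps the sketch short */
    size_t n_upserts = 0;
    struct stmt_stub *result = NULL;
    for (int i = 0; i < src_count; i++) {
        struct stmt_stub *stmt = source_get(i, key);
        if (stmt == NULL)
            continue;
        if (stmt->type != STMT_UPSERT) {
            /* Terminal statement: everything older is shadowed. */
            result = stmt->type == STMT_REPLACE ? stmt : NULL;
            break;
        }
        if (n_upserts < 16)
            upserts[n_upserts++] = stmt;
    }
    /* Apply the collected upserts one by one, oldest first. */
    while (n_upserts > 0)
        result = upsert_apply(upserts[--n_upserts], result);
    return result;
}
```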
-
alyapunov authored
There is a version member in vy_index that is incremented on each modification of the mem list and the range tree. Split it into two members that correspond to the mem list and the range tree, respectively. This is needed for more precise tracking of changes in iterators.
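A minimal sketch of the split and how an iterator uses it; the field names are illustrative:

```c
#include <stdint.h>

/* Before: a single version bumped on any structural change. After:
 * two counters, so an iterator can tell which structure changed. */
struct index_versions {
    uint32_t mem_list_version;   /* bumped when the mem list changes */
    uint32_t range_tree_version; /* bumped when ranges split/merge */
};

/* An iterator snapshots both counters and later revalidates only the
 * structures it actually depends on. Returns 0 if still valid. */
static int
iterator_check(const struct index_versions *now,
               const struct index_versions *seen, int uses_ranges)
{
    if (now->mem_list_version != seen->mem_list_version)
        return -1;  /* mem list changed: restore mem iterators */
    if (uses_ranges &&
        now->range_tree_version != seen->range_tree_version)
        return -1;  /* range tree changed: reposition range iterator */
    return 0;
}
```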
-
- Jun 29, 2017
-
-
Konstantin Osipov authored
-
Vladimir Davydov authored
If a vylog record doesn't get flushed to disk due to an error, the objects it refers to (index->key_def, range->begin, and range->end) may get destroyed, resulting in a crash. To avoid that, we must copy those objects to the vylog buffer. Closes #2532
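The fix in spirit, as a minimal sketch with illustrative names: a queued record takes ownership of copies of the keys it references, so flushing can be retried safely even if the originals are gone:

```c
#include <stdlib.h>
#include <string.h>

struct vylog_record_stub {
    char *range_begin;
    char *range_end;
};

/* Deep-copy the referenced keys into the record instead of storing
 * pointers into objects whose lifetime the log doesn't control. */
static int
record_copy_keys(struct vylog_record_stub *rec,
                 const char *begin, size_t begin_size,
                 const char *end, size_t end_size)
{
    rec->range_begin = malloc(begin_size);
    rec->range_end = malloc(end_size);
    if (rec->range_begin == NULL || rec->range_end == NULL) {
        free(rec->range_begin);
        free(rec->range_end);
        return -1;
    }
    memcpy(rec->range_begin, begin, begin_size);
    memcpy(rec->range_end, end, end_size);
    return 0;
}
```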
-
Konstantin Osipov authored
-
Vladimir Davydov authored
- Remove tx and cursor latencies, as they are useless - they actually account for how long a tx/cursor was open, not latencies.
- Remove vy_stat->get_latency, as it doesn't account for the latency of select; besides, we now have per-index read latency.
- Remove vy_stat->dumped_statements and dump_total, as these statistics are reported per index as well.
- VY_STAT_TX_OPS is currently unused (always 0). Let's use it for accounting the total number of statements committed in a tx instead of vy_stat->write_count.
-
Vladimir Davydov authored
index.info() is supposed to show index stats, not options. box.space.<space_name>.index.<index_name>.options looks like a better place for reporting index options. Needed for #1662
-
Vladimir Davydov authored
This patch adds a 'latency' field to index.info. It shows the latency of reads from the index. The latency is computed as the 99th percentile of all delays incurred by vy_read_iterator_next(). Needed for #1662
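A minimal sketch of the accounting, assuming a histogram with a percentile query (Tarantool has such a histogram internally; the helper names here are illustrative):

```c
#include <stdint.h>

struct histogram_stub;

/* Hypothetical stand-ins. */
void histogram_collect(struct histogram_stub *h, int64_t value);
int64_t histogram_percentile(struct histogram_stub *h, int pct);
int64_t clock_monotonic_ns(void);

/* Called after each vy_read_iterator_next() invocation, with the
 * timestamp taken before it. */
static void
latency_account(struct histogram_stub *h, int64_t start_ns)
{
    histogram_collect(h, clock_monotonic_ns() - start_ns);
}

/* Reported as the 'latency' field of index.info. */
static int64_t
latency_99p(struct histogram_stub *h)
{
    return histogram_percentile(h, 99);
}
```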
-
Vladimir Davydov authored
Replace box.info.vinyl().performance.upsert_{squashed,applied} with per-index index.info().upsert.{squashed,applied}. Needed for #1662
-
Vladimir Davydov authored
Add the following counters to index.info:

lookup    # number of lookups (read iter start)
get       # number of statements read (read iter next)
  rows
  bytes
put       # number of statements written
  rows
  bytes

Needed for #1662
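Each rows/bytes pair maps naturally onto one small counter struct; a minimal sketch of how such counters are kept (Tarantool has an equivalent internally; the names here are illustrative):

```c
#include <stdint.h>

/* One rows/bytes pair from the index.info output. */
struct stmt_counter_stub {
    int64_t rows;
    int64_t bytes;
};

/* Per-index read/write statistics, as reported above. */
struct index_stat_stub {
    int64_t lookup;               /* read iterator starts */
    struct stmt_counter_stub get; /* statements read */
    struct stmt_counter_stub put; /* statements written */
};

static void
stmt_counter_acct(struct stmt_counter_stub *c, int64_t stmt_size)
{
    c->rows++;
    c->bytes += stmt_size;
}
```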
-
Eugine Blikh authored
closes gh-2516
-
Konstantin Osipov authored
-
- Jun 27, 2017
-
-
Konstantin Osipov authored
-
Vladimir Davydov authored
This patch adds the following counters to index.info:

txw
  count      # number of statements in the TX write set
    rows
    bytes
  iterator
    lookup   # number of lookups in the TX write set
    get      # number of statements returned by the iterator
      rows
      bytes

Needed for #1662
-
Vladimir Davydov authored
This patch adds the cache section to index.info with the following counters in it:

cache
  rows         # number of tuples in the cache
  bytes        # cache memory size
  lookup       # lookups in the cache
  get          # reads from the cache
    rows
    bytes
  put          # writes to the cache
    rows
    bytes
  invalidate   # overwrites in the cache
    rows
    bytes
  evict        # evictions due to memory quota
    rows
    bytes

Needed for #1662
-
Vladimir Davydov authored
Using vy_quota, which was implemented to support watermarking, throttling, and timeouts, for accounting cached tuples is overkill. Replace it with mem_used and mem_quota counters.
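A minimal sketch of the replacement accounting, with illustrative names: two plain counters and a fit check, instead of the watermark/throttle machinery of vy_quota:

```c
#include <stdbool.h>
#include <stdint.h>

/* Plain cache accounting: no watermarks, no throttling, no timeouts. */
struct cache_env_stub {
    int64_t mem_used;
    int64_t mem_quota;
};

/* True if a tuple of the given size fits without eviction. */
static bool
cache_fits(const struct cache_env_stub *env, int64_t tuple_size)
{
    return env->mem_used + tuple_size <= env->mem_quota;
}

static void
cache_acct_add(struct cache_env_stub *env, int64_t tuple_size)
{
    env->mem_used += tuple_size;
}

static void
cache_acct_evict(struct cache_env_stub *env, int64_t tuple_size)
{
    env->mem_used -= tuple_size;
}
```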
-
Vladimir Davydov authored
This patch adds the following counters to the disk section of index.info:

dump         # dump statistics:
  count      # number of invocations
  in         # number of input statements
    rows
    bytes
  out        # number of output statements
    rows
    bytes
compact      # compaction statistics:
  count      # number of invocations
  in         # number of input statements
    rows
    bytes
  out        # number of output statements
    rows
    bytes

Needed for #1662
-
Vladimir Davydov authored
Replace the box.info.vinyl().performance.iterator.{run,mem} global counters with the following per-index counters:

memory
  iterator
    lookup   # number of lookups in the memory tree
    get      # number of statements returned by the mem iterator
      rows
      bytes
disk
  iterator
    lookup   # number of lookups in the page index
    get      # number of statements returned by the run iterator
      rows
      bytes
    bloom    # number of times the bloom filter
      hit    #   allowed to avoid a disk read
      miss   #   failed to prevent a disk read
    read     # number of statements actually read from disk
      rows
      bytes
      bytes_compressed
      pages

Needed for #1662
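How the bloom hit/miss pair is fed can be sketched as follows; bloom_maybe_has() and disk_read() are hypothetical stand-ins:

```c
#include <stdbool.h>
#include <stdint.h>

struct bloom_stat_stub {
    int64_t hit;   /* probes that avoided a disk read */
    int64_t miss;  /* probes that failed to prevent one */
};

/* Hypothetical helpers. */
bool bloom_maybe_has(const void *filter, const char *key);
int disk_read(const char *key);

static int
lookup_with_bloom(const void *filter, struct bloom_stat_stub *stat,
                  const char *key)
{
    if (!bloom_maybe_has(filter, key)) {
        /* Bloom filters give no false negatives: the key is
         * definitely not on disk, so the read can be skipped. */
        stat->hit++;
        return 0;
    }
    /* False positives are possible, so this read may still be wasted. */
    stat->miss++;
    return disk_read(key);
}
```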
-
Vladimir Davydov authored
It's useless - it only checks that certain counters are present, but doesn't give a damn about what they show. At the same time, I have to update its output every time I modify vinyl statistics, which I'm doing pretty often these days. So let's ditch it - I'll rewrite it from scratch after I'm done reworking vinyl statistics.
-
Konstantin Osipov authored
gh-2520: add comments explaining the test case for gh-2520 (upsert caching).
-