- May 22, 2017
-
-
bigbes authored
The original behavior can be enabled with `./test-run -j -1`. See https://github.com/tarantool/test-run/issues/56
-
Roman Tsisyk authored
Follow-up to the previous commits. No semantic changes.
-
Roman Tsisyk authored
Disable tuple_hash() optimization for non-sequential keys. See #2084
-
Ilya authored
* Refactor templates in tuple_compare.cc to generate optimized versions for various field types
* Add a function pointer in key_def to select the optimized version at runtime

Fixes #2084
-
Ilya authored
* Implement tuple_extract_key_sequential() for sequential keys
* Add a function pointer in key_def to select the optimized version at runtime

Closes #2048
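Roughly speaking, a key is "sequential" when its parts are the first tuple fields, in order, so it can be copied out as one contiguous slice instead of being extracted field by field. A minimal Lua sketch (space and index names are illustrative):

```lua
-- Illustrative only: sequential vs. non-sequential key definitions.
local s = box.schema.space.create('example')

-- sequential key: parts cover fields 1 and 2, in order
s:create_index('pk', {parts = {1, 'unsigned', 2, 'unsigned'}})

-- non-sequential key: parts are fields 3 and 1, so the generic,
-- field-by-field extraction/comparison path is used
s:create_index('sk', {parts = {3, 'unsigned', 1, 'unsigned'}, unique = false})
```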
-
- May 19, 2017
-
-
Vladimir Davydov authored
It uses error injection, which is compiled out in release builds. Also, improve the test output by adding s:count() after each space modification to check whether the statement was actually inserted.
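A sketch of the test pattern described above, assuming a debug build (error injection is compiled out in release builds); the injection name here is an assumption, the actual test may use a different one:

```lua
local s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk')

box.error.injection.set('ERRINJ_WAL_IO', true)
pcall(s.replace, s, {1})  -- expected to fail
s:count()                 -- 0: the statement was not inserted
box.error.injection.set('ERRINJ_WAL_IO', false)

s:replace{2}
s:count()                 -- 1: the statement was inserted
```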
-
- May 18, 2017
-
-
Konstantin Osipov authored
* update comments, messages
* add a test for changing index id
-
Vladislav Shpilevoy authored
-
Vladislav Shpilevoy authored
Allow altering any index opts that don't change the key_def of the index.

Closes #1931
Closes #2109
Closes #2149
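A minimal sketch, assuming a vinyl space; renaming is one example of an alter that leaves the key_def (parts, types) untouched:

```lua
local s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk', {parts = {1, 'unsigned'}})

-- allowed: this option doesn't change the key_def
s.index.pk:alter({name = 'primary'})
```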
-
Alexandr Lyapunov authored
Needed for extracting the write iterator into a separate file. Also move stat accounting from the function to the caller code.

Change-Id: I73af73e6a34f9d7431a3c8056030c9858226b0ad
-
Roman Tsisyk authored
time_t is long on all major platforms, not int.

Fixes #2443
Change-Id: I7ca029ccfe87124bdc7cdd2ee1db06404f38fe39
-
Konstantin Osipov authored
Simplify check_param_table() arguments in index.create() and index.alter().
-
Vladislav Shpilevoy authored
Closes #2148
Change-Id: I10d8d93a12e8bbd91c36f6eefc0f0e8e07636de5
-
- May 17, 2017
-
-
Vladimir Davydov authored
Currently, only keys from vy_log_key_mask[type] may be present in a record of the given type; extra keys result in an error. In order to rework space truncate, we will have to add a new key (space_version) to the VY_LOG_CREATE_INDEX record. To be able to start from a vylog generated by an older version, we need to allow extra keys.

So this patch deletes vy_log_key_mask[] and makes vy_log_record_encode() encode only those keys whose value differs from the default one (similarly to request_encode()). Checking for mandatory keys is now up to vy_recovery_process_record() (currently there is only one key that needs to be checked: VY_LOG_KEY_DEF).
-
Vladimir Davydov authored
vy_recovery_iterate() doesn't clean vy_log_record before proceeding to the next log entry. As a result, a run record passed to the callback by vy_recovery_iterate() contains extra info left over from the index this run belongs to: index_lsn, index_id, and space_id. We use this in the gc and backup callbacks to format file names.

The problem is that vy_recovery_iterate() is also used internally for log rotation. Including extra keys in records doesn't result in writing them to the log file on rotation, because for each record type we have a mask of keys corresponding to the record (vy_log_key_mask). In order to allow optional keys in the vylog, the following patch will change the meaning of the mask so that it only contains mandatory keys, while a key will be written to the log only if its value differs from the default (similarly to request_encode). Thus, to avoid writing keys not relevant to a record type, we need to clean vy_log_record within vy_recovery_iterate() before jumping to the next record.

So this patch weans gc and backup off this feature: let them save index_id and space_id in the context, as we do on replication and recovery.
-
Vladislav Shpilevoy authored
info_append_double() is needed to print the vinyl index.info.bloom_fpr later.
-
Vladimir Davydov authored
It was helpful when a vinyl index could have a custom path. Currently, that is forbidden, so we can format the index path in place.
-
Vladimir Davydov authored
We can format it in place when needed.
-
Vladimir Davydov authored
There must be a reasonable timeout on quota wait time, otherwise we risk throttling a client forever, e.g. in case of a disk error. This patch introduces a quota timeout. The timeout is configured via the vinyl_timeout configuration option and is set to 60 seconds by default.

Closes #2014
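A minimal usage sketch: vinyl_timeout is given in seconds and bounds how long a writer may wait for memory quota.

```lua
-- 60 seconds is the default; set it explicitly for illustration
box.cfg{vinyl_timeout = 60}
box.cfg.vinyl_timeout  -- 60
```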
-
Vladimir Davydov authored
Use a separate function for each kind of callback. The next patch will add an argument and a return value to the throttle callback, so a single callback can no longer handle all cases.
-
Vladimir Davydov authored
Currently, we reserve quota after allocating memory, which can result in exceeding the memory limit. E.g. after the following script is done:

    box.cfg{vinyl_memory = 1024 * 1024}
    s = box.schema.space.create('test', {engine = 'vinyl'})
    s:create_index('pk')
    pad = string.rep('x', 2 * box.cfg.vinyl_memory / 3)
    s:auto_increment{pad}
    s:auto_increment{pad}

box.info.vinyl().memory.used reports that 1447330 bytes are allocated. Fix this by reserving quota before allocation.

A test is added later in the series.
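A sketch of the invariant the fix is meant to restore, using the stats API referenced in the message above:

```lua
-- after the fix, allocated vinyl memory should stay within the limit
assert(box.info.vinyl().memory.used <= box.cfg.vinyl_memory)
```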
-
Vladimir Davydov authored
Vinyl dump is disabled during local recovery from WAL. This is OK, because we shouldn't exceed the quota provided we don't replay statements that were dumped to disk before restart. Check that.
-
- May 16, 2017
-
-
Alexandr Lyapunov authored
-
- May 15, 2017
-
-
Vladimir Davydov authored
Although upsert optimizers (both sync and background) replace the last UPSERT with a REPLACE and do not insert new statements, they use vy_index_insert_stmt(), which increments index->stmt_count. As a result, index->stmt_count is incremented twice.

Closes #2421
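A sketch of the kind of workload that exercises the upsert squash path described above (space and key are illustrative):

```lua
local s = box.schema.space.create('test', {engine = 'vinyl'})
s:create_index('pk')
s:replace{1, 0}
for _ = 1, 100 do
    -- repeated upserts on the same key get squashed into a single REPLACE
    -- by the optimizer instead of being kept as separate statements
    s:upsert({1, 0}, {{'+', 2, 1}})
end
```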
-
- May 12, 2017
-
-
Roman Tsisyk authored
See #2429
-
Vladislav Shpilevoy authored
-
Roman Tsisyk authored
Force a tarantool-common upgrade to support systemd notifications. Also add the missing "s" suffix to the TimeoutStartSec= option.

See #1923
-
Roman Tsisyk authored
Check that there are no statements between prev_stmt and stmt in the cache on vy_cache_add() before trying to build a chain. This workaround makes vy_cache_add() more fool-proof for cases when vy_read_iterator skips some keys during restoration.
-
Vladislav Shpilevoy authored
Between the prepare and commit of a transaction it is possible that some prepared statements are read from mem. A prepared statement has an abnormal lsn > MAX_LSN. If the index version changes after a prepared statement has been returned and the iterator is sent to a read view, a restart and restore on such a prepared statement could lead to a restore on an LSN bigger than the vlsn.

Example:

    FIBER 1                         FIBER 2
    box.begin()
    replace {1}, lsn=MAX_LSN+1
    replace {2}, lsn=MAX_LSN+1
    prepare vy_tx
    ...             ->->->          open iterator
                                    read {1},lsn=MAX_LSN+1 from mem
                                    yield
    ...             <-<-<-
    commit vy_tx
    replace {1, 1},lsn=100
    send iterator to read view
                    ->->->          iterator read view = 100
                                    index version changed -
                                    restore on last_stmt
                                    last_stmt = {1},lsn=MAX_LSN+1
                                    last_stmt LSN > iterator VLSN
                                    ??????????????????????

Let's return the next key in such situations.
-
Vladislav Shpilevoy authored
If vy_mem_iterator was not started, vy_mem_iterator_restore could iterate to the same key as the restore target and then call vy_mem_iterator_next_lsn_impl to find an older lsn. But vy_mem_iterator_start_from did not set the 'search_started' flag, so next_lsn_impl would restart the mem iterator again, back to the first statement, which could be an incorrect result of the restore.

Example:

    cache statements: {1}, {2}, {3}
    mem statements:   {1}, {2}, {3}
    iterator: GE, key: {1}

- merge_iterator returns all cached data and then tries to restore mem on {3};
- during the restore, mem_iterator calls vy_mem_iterator_start_from and sees tuple {3};
- mem_iterator then tries to find an older lsn and calls vy_mem_iterator_next_lsn_impl, which sees 'search_started = false' and restarts the iterator to {1};
- read_iterator then returns {1} after {3} - ERROR!

Let's set the 'search_started' flag inside vy_mem_iterator_start_from.
-
Vladislav Shpilevoy authored
Under heavy load it is possible that mem_iterator_restore inside read_iterator_next returns a statement with the same key as the previous one. Check keys in vy_read_iterator_merge_next_key.
-
Vladislav Shpilevoy authored
If the mem_iterator is started but has no curr_stmt (maybe it was finished), it could return its first statement regardless of last_stmt. Let's call start_from() instead of start() when last_stmt is not NULL.
-
Alexandr Lyapunov authored
Mem iterator restoration in the LE and LT cases was completely wrong. Additionally, I added more asserts and found that restoration is called too frequently. Fix them both.

Fixes #2207
-
bigbes authored
-
bigbes authored
-
Vladimir Davydov authored
The code calculating the quota watermark was written long ago, when we didn't have the common memory level and didn't use the lsregion allocator, and hence it was possible to free all memory used by any range. Things have changed drastically since then: now it's impossible to free memory occupied by a range or even an index, because statements are shared between indexes, so the scheduler effectively dumps all memory on exceeding the quota. Fix the quota calculation accordingly.
-
Vladimir Davydov authored
We should use fiber_sleep(0) to yield periodically, not fiber_reschedule() - the latter doesn't advance the event loop.
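The same idea shows up at the Lua level; a minimal sketch with an illustrative workload, where a long loop calls fiber.sleep(0) now and then so the event loop and other fibers keep running:

```lua
local fiber = require('fiber')

local function long_job(n)
    for i = 1, n do
        -- ... a chunk of work ...
        if i % 1000 == 0 then
            fiber.sleep(0)  -- yield and let the event loop advance
        end
    end
end

fiber.create(long_job, 1e6)
```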
-
Roman Tsisyk authored
See 56462bca
-
Georgy Kirichenko authored