- May 05, 2017
-
-
Nick Zavaritsky authored
* Do not include autotools build files.
* sql: [#2387] [#2267] Clean up unused SQLite files.
* sql: Check in SQLite test coverage.
* sql: Remove TCL-based tests. Remove the sqlite-tcl test suite along with all TCL-related libs. Clean up sqlite's CMakeLists and remove redundant TCL-related sources.
  * src/lib/sqlite/CMakeLists.txt: Remove dependency on the TCL library.
  * src/lib/sqlite/src/CMakeLists.txt: Remove the testfixture target.
  * src/lib/sqlite/src/test.*: Remove.
  * src/lib/sqlite/src/sqlite3.rc: Ditto.
  * src/lib/sqlite/src/tclsqlite.c: Ditto.
  * src/lib/sqlite/ext: Ditto.
  * test/sqlite-tcl: Ditto.
* Add the -o option (output file name) to lemon. This is necessary for out-of-source CMake builds.
* Use a dummy commit date and UUID in sqlite3.h. The last commit date and UUID are included in the generated sqlite3.h; we don't distribute standalone sqlite, and Tarantool itself is already version-stamped.
* sqlite: Add VERSION.
* Implement CMake build rules for sqlite.
-
Konstantin Osipov authored
-
Roman Tsisyk authored
* `./test-run.py -j [JOBS]` runs tests simultaneously using up to JOBS worker threads. If the `-j` option is given without an argument, test-run uses all available cores on your host.
* `./test-run.py` without `-j` works as usual in sequential mode.
* In parallel mode the `test/var/00N_XXX` directory contains logs and data files produced by the [00N_XXX] thread, like var/ in sequential mode.
* `./test-run.py --reproduce ./test/reproduce/00N_XXX.yml` reproduces the failed sequence of 00N_XXX.

Fixes https://github.com/tarantool/test-run/issues/7
-
bigbes authored
An analogue of Python's modules:
* https://docs.python.org/3.7/library/pwd.html
* https://docs.python.org/3.7/library/grp.html

getgrnam and getgrgid are merged into `getgr`, which accepts both group names and gids. getgrall returns all groups and caches the result (the `force` flag reloads the cached version). Similar rules apply to getpw.

Closes #2213
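A minimal usage sketch of the interface described above; the module name, function signatures, and returned fields are assumptions based on this commit message, not a verified API reference.

```lua
-- Hypothetical usage of the pwd/grp analogue described above.
local pwd = require('pwd')

local user  = pwd.getpw('root')   -- accepts a user name or a uid
local group = pwd.getgr(0)        -- getgrnam/getgrgid merged: name or gid
local all   = pwd.getgrall()      -- cached; a `force` flag reloads the cache

print(user.name, group.name, #all)
```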
-
Roman Tsisyk authored
-
Vladimir Davydov authored
vy_stmt_iterator_iface->restore may yield (it does in the case of the run iterator), which opens a time window for a concurrent dump or compaction task to delete a merge iterator's source. Therefore we must check the index and range versions after each call to ->restore, but vy_merge_iterator_restore() doesn't, which may result in a crash.
-
Roman Tsisyk authored
-
Roman Tsisyk authored
* Use pointers instead of offsets
* Ban empty separator
* Remove extra invocation of memmem() when maxsplit is used
-
bigbes authored
* `string.startswith(inp, head[, begin[, end]])` - returns true if the string (or a substring of it) starts with head
* `string.endswith(inp, tail[, begin[, end]])` - returns true if the string (or a substring of it) ends with tail

Closes gh-2215
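A short sketch of the new predicates, assuming Tarantool's extensions are already loaded into the built-in string library; the exact indexing semantics of the optional begin/end offsets are an assumption.

```lua
-- Illustrative calls; expected results follow the description above.
require('string')

print(string.startswith('foobar', 'foo'))      -- true
print(string.endswith('foobar', 'baz'))        -- false
print(string.startswith('foobar', 'bar', 4))   -- true (assumed: check starts at byte 4)
```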
-
bigbes authored
* `string.split(inp[, sep[, max]])` - returns a table with the split results

Closes gh-2211
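A minimal sketch of the helper; edge-case behaviour (default separator, empty fields) is an assumption, not taken from this commit.

```lua
-- Illustrative calls for string.split as described above.
local parts = string.split('a,b,c', ',')        -- {'a', 'b', 'c'}
local limited = string.split('a,b,c', ',', 1)   -- {'a', 'b,c'}: at most `max` splits
```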
-
bigbes authored
* `string.ljust(inp, width[, char])` - returns a left-justified string filled with the character 'char' (' ' by default)
* `string.rjust(inp, width[, char])` - returns a right-justified string filled with the character 'char' (' ' by default)
* `string.center(inp, width[, char])` - returns a centered string filled with the character 'char' (' ' by default)

Closes gh-2214
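Illustrative calls; how center() distributes padding for odd widths is an assumption.

```lua
-- Expected output sketched from the description above.
print(string.ljust('id', 6, '.'))   -- "id...."
print(string.rjust('42', 6, '0'))   -- "000042"
print(string.center('hi', 6))       -- "  hi  "
```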
-
- May 04, 2017
-
-
Konstantin Nazarov authored
-
Roman Tsisyk authored
Fixes #2412
-
Roman Tsisyk authored
-
Roman Tsisyk authored
Fixes #2265
-
Konstantin Nazarov authored
-
Vladimir Davydov authored
We do it in order to reuse the code that starts the iteration for restore. This looks ugly; instead we'd better pass the contrived type and key as function arguments. Also, rename vy_{run,mem}_iterator_do_start() to vy_{run,mem}_iterator_start_from().

Closes #2405
-
Kirill Yukhin authored
* src/lua/init.c: Explicitly include <ctype.h>. `isspace` is used in this source file. Normally <ctype.h> is included through <readline/readline.h>, but the native `Readline` on OSX doesn't include <ctype.h>.
-
Roman Tsisyk authored
The output doesn't fit into the 4 MB size limit for logs.
-
Roman Tsisyk authored
Apply a patch from Yura Sokolov:

The default "fast" string hash function samples only a few positions in a string; the remaining bytes don't affect the function's result. The function performs well for short strings; however, long strings can yield extremely high collision rates.

An adaptive scheme was implemented: two hash functions are used simultaneously. A bucket is picked based on the output of the fast hash function. If an item is to be inserted into a collision chain longer than a certain threshold, another bucket is picked based on the stronger hash function. Since two hash functions are used simultaneously, insert should consider two buckets. The second bucket is often NOT considered thanks to the bloom filter. The filter is rebuilt during the GC cycle.
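A self-contained Lua sketch of the adaptive idea described above; the hash functions, the chain threshold, and the plain set standing in for the bloom filter are all illustrative, not the actual C implementation.

```lua
local SIZE, CHAIN_THRESHOLD = 1024, 8

local function fast_hash(s)   -- samples only a few positions, like the "fast" hash
    return (#s * 31 + (s:byte(1) or 0) + (s:byte(#s) or 0)) % SIZE
end

local function strong_hash(s) -- mixes every byte of the string
    local h = 5381
    for i = 1, #s do h = (h * 33 + s:byte(i)) % SIZE end
    return h
end

local buckets, maybe_rehashed = {}, {}  -- plain set instead of a bloom filter

local function insert(key, value)
    local b = fast_hash(key)
    buckets[b] = buckets[b] or {}
    if #buckets[b] < CHAIN_THRESHOLD then
        table.insert(buckets[b], {key, value})
    else
        -- long chain: fall back to the stronger hash and remember that
        -- lookups for such keys must also check the second bucket
        local b2 = strong_hash(key)
        buckets[b2] = buckets[b2] or {}
        table.insert(buckets[b2], {key, value})
        maybe_rehashed[key] = true
    end
end

local function find(key)
    local chains = {buckets[fast_hash(key)]}
    if maybe_rehashed[key] then          -- second bucket is rarely considered
        table.insert(chains, buckets[strong_hash(key)])
    end
    for _, chain in ipairs(chains) do
        for _, kv in ipairs(chain or {}) do
            if kv[1] == key then return kv[2] end
        end
    end
end
```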
-
Roman Tsisyk authored
Highlights from Mike Pall [1]:

The major addition over beta2 is the LJ_GC64 mode JIT compiler backend contributed by Peter Cawley. Previously, only the x64 and ARM64 interpreters could be built in this mode. This mode removes the 32 bit limitation for garbage collected memory on 64 bit systems. LuaJIT for x64 can optionally be built for LJ_GC64 mode by enabling the -DLUAJIT_ENABLE_GC64 line in src/Makefile or via 'msvcbuild.bat gc64'.

Cisco Systems, Inc. and Linaro have sponsored the development of the JIT compiler backend for ARM64. Contributors are Djordje Kovacevic and Stefan Pejic from RT-RK, Charles Baylis from Linaro and Zheng Xu from ARM. ARM64 big endian mode is now supported, too.

Cisco Systems, Inc. has sponsored the development of the MIPS64 interpreter and JIT compiler backend. Contributors are Djordje Kovacevic and Stefan Pejic from RT-RK.

Peter Cawley has contributed the changes for full exception interoperability on Windows/x86 (32 bit).

François Perrad has contributed various extensions from Lua 5.2 and Lua 5.3. Note: some left-over compatibility defines for Lua 5.0 have been removed from the header files.

[1]: https://www.freelists.org/post/luajit/LuaJIT210beta3

In context of #2396
-
Roman Tsisyk authored
See https://github.com/LuaJIT/LuaJIT/commit/dc320ca70f

In context of #2393
-
- May 03, 2017
-
-
Vladimir Davydov authored
Currently, we take a reference to vy_slice while waiting for IO in the run iterator to avoid use-after-free. Since a slice references a run, we also need a reference counter in vy_run. We can't use the same reference counter for counting the number of active slices, because it includes deleted slices that stay allocated only because they are pinned by iterators, hence on top of that we add vy_run->slice_count. And all this machinery exists solely for the sake of the run iterator!

This patch reworks this as follows. It removes vy_run->refs and vy_slice->refs, leaving only vy_run->slice_count since it is needed for detecting unused runs. Instead it adds vy_slice->pin_count, similar to vy_mem->pin_count. As long as pin_count > 0, the slice can't be deleted. Whoever wants to delete the slice (compaction, split, index removal) has to wait until the slice is unpinned. The run iterator pins the slice while waiting for IO.

All in all, this should make the code easier to follow.
-
Alexandr Lyapunov authored
Patch f57151941ab9abc103c1d5f79d24c48238ab39cc introduced generation of reproduce code and dumping it to the log. The problem is that the code is initially built up in one big Lua string using repeated concatenation in a loop, and such use of Lua strings performs poorly. Avoid repeated concatenation of Lua strings in tx_serial.test.
-
Vladimir Davydov authored
The test now generates Lua code that reproduces the found problem. The generated code is saved to the log. Copied from tx_serial.test.
-
Vladimir Davydov authored
We don't need a doubly-linked list for this. Singly-linked will do.
-
Vladimir Davydov authored
The loop over all ranges can take a long time, so we should yield once in a while in order not to stall the TX thread. The problem is that we can't delete dumped in-memory trees until we've added a slice of the new run to each range, so if we yield while adding slices, a concurrent fiber will see a range with a slice containing statements that are also present in in-memory trees, which breaks the merge iterator's assumption that its sources don't have duplicates. Handle this by filtering out newly dumped runs by LSN in vy_read_iterator_add_disk().
-
Vladimir Davydov authored
Adding an empty slice to a range is pointless; besides, it triggers compaction for no reason, which is especially harmful in the case of a time-series-like workload. On dump we can omit creating slices for ranges that are not intersected by the new run. Note how this affects the coalesce test: now we have to insert a statement into each range to trigger compaction, not just into the first one.
-
Vladimir Davydov authored
When replaying the local WAL, we filter out statements that were written to disk before restart by checking stmt->lsn against run->max_lsn: if the latter is greater, the statement was dumped. Although this is undoubtedly true, the check isn't quite correct. The thing is, run->max_lsn might be less than the actual LSN at the time the run was dumped, because max_lsn is computed as the maximum among all statements present in the run file, which doesn't include deleted statements. If this happens, we might replay some statements for nothing: they will cancel each other anyway. This may be dangerous, because the number of such statements can be huge. Suppose a whole run consists of deleted statements, i.e. there's no run file at all. Then we replay all statements in-memory, which might result in OOM, because the scheduler isn't started until local recovery is completed. To avoid that, introduce a new record type in the metadata log, VY_LOG_DUMP_INDEX, which is written on each index dump, even if no file is created, and contains the LSN of the dump. Use this LSN on recovery to detect statements that don't need to be replayed.
-
Vladimir Davydov authored
This reverts commit a366b5bb ("vinyl: keep track of empty runs"). The former single memory level design required knowledge of the max LSN of each run. Since this information can't be extracted from the run file in general (the newest key might have been deleted by compaction), we added it to the metadata log. Since we can get an empty run (i.e. a run w/o a file on disk) as a result of compaction or dump, we had to add a special per-run flag to the log, is_empty, so that we could store a run record while omitting loading the run file. Thanks to the concept of slices, this is not needed any more, so we can move min/max LSN back to the index file and remove the is_empty flag from the log. This patch starts by removing the is_empty flag.
-
Vladimir Davydov authored
Currently, we use a fixed size buffer, which can accommodate up to 64 records. With the single memory level it can easily overflow, as we create a slice for each range on dump in a single transaction, i.e. if there are > 64 ranges in an index, we may get a panic. So this patch makes vylog use a list of dynamically allocated records instead of a static array.
-
Vladimir Davydov authored
Closes #2394
-
Roman Tsisyk authored
Closes #2386
-
Roman Tsisyk authored
Rename `remote_check` to `check_remote_arg` to follow conventions in schema.lua
-
Roman Tsisyk authored
Change the conn:call() and conn:eval() API to accept a Lua table instead of varargs for function/expression arguments:

    conn:call(func_name, arg1, arg2, ...) => conn:call(func_name, {arg1, arg2, ...}, opts)
    conn:eval(expr, arg1, arg2, ...)      => conn:eval(expr, {arg1, arg2, ...}, opts)

This breaking change is needed to extend the call() and eval() API with per-request options, like `timeout` and `buffer` (see #2195):

    c:call("echo", {1, 2, 3}, {timeout = 0.2})
    c:call("echo", {1, 2, 3}, {buffer = ibuf})
    ibuf.rpos, result = msgpack.ibuf_decode(ibuf.rpos)
    result

Tarantool 1.6.x behaviour can be turned on by the `call_16` per-connection option:

    c = net.connect(box.cfg.listen, {call_16 = true})
    c:call('echo', 1, 2, 3)

This is a breaking change for 1.7.x.

Needed for #2285
Closes #2195
-
Konstantin Nazarov authored
Getting the space format should be safe, as it is tied to schema_id, and net.box makes sure that schema_id stays consistent. It means that when you receive a tuple from net.box, you may be sure that its space format is consistent with the remote. Fixes #2402
-
Roman Tsisyk authored
Fixes #2391
-
Konstantin Nazarov authored
Previously the format in space:format() wasn't allowed to be nil. In context of #2391
-
- May 02, 2017
-
-
Vladimir Davydov authored
- In-memory trees are now created per index, not per range as before.
- Dump is scheduled per index and writes the whole in-memory tree to a single run file. Upon completion it creates a slice for each range of the index.
- Compaction is scheduled per range as before, but now it doesn't include in-memory trees, only on-disk runs (via slices). Compaction and dump of the same index can happen simultaneously.
- Range split, just like coalescing, is done immediately by creating new slices and doesn't require long-term operations involving disk writes.
-