- Apr 06, 2016
  - Konstantin Osipov authored
    This makes the engines less dependent on the global recovery object (eventually this dependency will be removed altogether).
  - Konstantin Osipov authored
    * introduce MemtxEngine::m_snap_dir
    * begin using it during recovery
    * in MemtxEngine::join(), create a separate xdir object to avoid races (the method is invoked from the 'relay' thread)
    * remove recovery_setup_panic() and pass panic_on_wal_error to recovery_new()
    * remember the last checkpoint in the memtx engine state and use it for replication join calls, rather than running xdir_scan() to find the latest snapshot (see the sketch after this list)
    * remove recovery_has_data()
    * move recovery_last_checkpoint() to memtx_engine.cc
    * remove recovery::snap_dir
    * use the vclock from the memtx engine in join
    * a different error message is now thrown when the snapshot file is missing
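    A minimal C sketch of the checkpoint bookkeeping, with illustrative names (memtx_engine_sketch, memtx_on_checkpoint, and the stand-in vclock are not the real identifiers):

    ```c
    #include <stdbool.h>
    #include <string.h>

    /* Minimal stand-in for a vclock; the real one maps replica id to LSN. */
    struct vclock { long lsn[32]; };

    static void
    vclock_copy(struct vclock *dst, const struct vclock *src)
    {
        memcpy(dst, src, sizeof(*dst));
    }

    struct memtx_engine_sketch {
        struct vclock last_checkpoint; /* vclock of the latest snapshot */
        bool checkpoint_done;          /* at least one snapshot exists */
    };

    /* Called after a snapshot is successfully written: remember its
     * vclock so a later join can open that file directly instead of
     * rescanning the snapshot directory. */
    static void
    memtx_on_checkpoint(struct memtx_engine_sketch *e, const struct vclock *vc)
    {
        vclock_copy(&e->last_checkpoint, vc);
        e->checkpoint_done = true;
    }
    ```

    The relay thread then builds its own local xdir object rather than sharing one, which is what removes the race.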
- Apr 05, 2016
  - Konstantin Osipov authored
    Use a global variable holding the server id.
  - Konstantin Osipov authored
    * do not use snap_dir to load bootstrap.snap; any dir object will do
    * move snap_io_rate_limit to memtx_engine
    * remove box_set_panic_on_wal_error(): the dynamic setting is used for relays, so there is no point in updating the global recovery instance after box.cfg{}
- Apr 02, 2016
  - Konstantin Osipov authored
  - Konstantin Osipov authored
    This is necessary to ensure the request stays alive until request_encode(), which will be moved to the WAL thread in the future.
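    A hedged sketch of the lifetime pattern being described; all types here are illustrative stand-ins, not Tarantool's:

    ```c
    /* The decoded request is kept inside the message that travels to the
     * WAL thread, so it remains valid until request_encode() runs there,
     * independent of the network input buffer it was parsed from. */
    struct cmsg { struct cmsg *next; };     /* stand-in message header */
    struct request { int type; /* ... */ }; /* stand-in decoded request */

    struct request_msg {
        struct cmsg base;       /* routing header */
        struct request request; /* owned copy, alive until encoded */
    };
    ```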
  - Konstantin Osipov authored
- Apr 01, 2016
  - Dmitry Simonenko authored
  - Roman Tsisyk authored
- Mar 31, 2016
  - Konstantin Osipov authored
- Mar 30, 2016
  - Konstantin Osipov authored
  - Konstantin Osipov authored
    - don't use fiber_yield_timeout(), it is too slow; use a single idle timer and wake the fiber that has been idle the longest, so that it can time out
    - chain up the running fibers in a pool so that coro_transfer() goes to the next fiber, not to the sched fiber, most of the time
    Together these improve cbus speed and reduce the CPU time spent in coro_transfer() (see the sketch after this list).
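    A minimal sketch of the single-idle-timer idea, assuming libev; the pool and list types are illustrative, not the actual implementation:

    ```c
    #include <ev.h>
    #include <stddef.h>

    /* Idle fibers sit in an LRU-ordered list; one timer serves the pool. */
    struct idle_fiber {
        struct idle_fiber *prev, *next; /* LRU list links */
        double last_active;             /* when it last ran */
    };

    struct fiber_pool_sketch {
        struct idle_fiber *lru_tail; /* the fiber idle the longest */
        ev_timer idle_timer;         /* one timer for the whole pool */
        double idle_timeout;         /* seconds before an idle fiber exits */
    };

    /* Timer callback: instead of each fiber arming its own timeout via
     * fiber_yield_timeout(), wake only the longest-idle fiber so it can
     * notice the timeout and exit. */
    static void
    idle_timer_cb(struct ev_loop *loop, ev_timer *w, int revents)
    {
        struct fiber_pool_sketch *pool = (struct fiber_pool_sketch *)w->data;
        if (pool->lru_tail != NULL) {
            /* fiber_wakeup(pool->lru_tail) in the real thing */
        }
        (void)loop; (void)revents;
    }
    ```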
- Mar 29, 2016
  - Georgy Kirichenko authored
- Mar 28, 2016
  - Roman Tsisyk authored
  - bigbes authored
  - Roman Tsisyk authored
    eio_req must be freed from the req->destroy callback, not from req->finish.
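    A hedged sketch of this contract, assuming libeio's eio_custom() and the finish/destroy request fields; the task type and helpers are illustrative:

    ```c
    #include <eio.h>
    #include <stdlib.h>

    struct task { char *payload; };

    /* destroy: invoked when libeio frees the request; release user data here. */
    static void
    task_destroy(eio_req *req)
    {
        struct task *t = (struct task *)req->data;
        free(t->payload);
        free(t);
    }

    /* finish: invoked on completion; report the result, but do NOT free
     * here: the request object is still owned by libeio at this point. */
    static int
    task_finish(eio_req *req)
    {
        (void)req;
        return 0;
    }

    /* execute: runs in a libeio worker thread. */
    static void
    task_execute(eio_req *req)
    {
        (void)req;
    }

    static void
    task_start(struct task *t)
    {
        eio_req *req = eio_custom(task_execute, 0, task_finish, t);
        req->destroy = task_destroy; /* safe: callbacks fire from eio_poll()
                                        in this thread, not before we return */
    }
    ```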
  - Alexandr Lyapunov authored
  - Alexandr Lyapunov authored
- Mar 25, 2016
  - Konstantin Osipov authored
  - Konstantin Osipov authored
    Ideally, the batch should be written to the WAL in a single write.
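    A hedged illustration of single-write batching with plain POSIX writev(); this is the shape of the idea, not Tarantool's WAL code:

    ```c
    #include <sys/uio.h>

    #define BATCH_MAX 64 /* hypothetical batch size */

    struct wal_batch {
        struct iovec iov[BATCH_MAX]; /* one iovec per encoded row */
        int count;
    };

    /* Submit the whole batch with one syscall instead of one write per
     * row: fewer syscalls, and the rows land in the log contiguously. */
    static int
    wal_batch_flush(int fd, struct wal_batch *batch)
    {
        ssize_t rc = writev(fd, batch->iov, batch->count);
        batch->count = 0;
        return rc < 0 ? -1 : 0;
    }
    ```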
  - Konstantin Osipov authored
  - Konstantin Osipov authored
    Link all ready fibers into a call chain, so that fiber_yield() directly invokes the next ready fiber, eventually returning to the fiber that initiated the schedule. This halves the number of coro_transfer() calls in fiber_schedule_list().
    Fix a possible race condition in the checks for the pool->fetch_output event.
    Limit the iproto pipe max input to ensure smooth operation when there are many messages.
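    A minimal sketch of the call chain, with illustrative names; the commented-out coro_transfer() calls mark where the real context switches would happen:

    ```c
    #include <stddef.h>

    struct chained_fiber {
        struct chained_fiber *next; /* next ready fiber, NULL at chain end */
        /* coroutine context lives here in the real implementation */
    };

    static struct chained_fiber *current;

    /* fiber_yield() analogue: hop straight to the successor when there is
     * one; only the last fiber in the chain returns to the scheduler, so
     * running a chain of N fibers costs about N transfers instead of 2N. */
    static void
    yield_sketch(void)
    {
        struct chained_fiber *next = current->next;
        if (next != NULL) {
            current = next;
            /* coro_transfer(&prev->ctx, &next->ctx); */
        } else {
            /* coro_transfer(&current->ctx, &sched_ctx); */
        }
    }
    ```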
  - Nick Zavaritsky authored
  - Nick Zavaritsky authored
    * extra arguments are silently ignored
    * :get and :put interpret a nil timeout as infinity
  - Dmitry Simonenko authored
  - Konstantin Osipov authored
  - Konstantin Osipov authored
  - Konstantin Osipov authored
- Mar 24, 2016
  - Konstantin Osipov authored
    Since a shared pool can deadlock if it runs out of fibers, shift the responsibility for pipeline congestion control from the fiber pool to the iproto thread, and make the pool size effectively unlimited (see the sketch after this list).
    - optimize fiber pool yield performance even further under a busy workload
    - remove the reciprocal message fetch
    - use a shared fiber pool
    - temporarily disable lock statistics
    - use distinct mutexes for every cord
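    A hedged sketch of iproto-side congestion control; the limit value and names are illustrative:

    ```c
    #include <stdbool.h>

    enum { MSG_MAX = 768 }; /* hypothetical in-flight request limit */

    struct iproto_sketch {
        int in_flight; /* requests sent to tx and not yet answered */
    };

    /* Before decoding more client input, check the budget; when it is
     * exhausted, stop the read event and resume it as responses return.
     * The fiber pool itself no longer throttles, so it cannot deadlock
     * on an exhausted pool. */
    static bool
    iproto_may_read(const struct iproto_sketch *io)
    {
        return io->in_flight < MSG_MAX;
    }
    ```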
  - bigbes authored
  - Konstantin Osipov authored
  - Nick Zavaritsky authored
- Mar 23, 2016
  - ocelot-inc authored
  - ocelot-inc authored
  - bigbes authored
  - ocelot-inc authored
- Mar 22, 2016
  - Konstantin Osipov authored
  - Nick Zavaritsky authored
    Using Mike Pall's advanced readline patch: http://smbolton.com/lua.html#readline
- Mar 21, 2016
  - Konstantin Osipov authored
  - Konstantin Osipov authored
    Don't start the wakeup event; it is only used internally with custom events.
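    A minimal sketch of the pattern, assuming libev's ev_async and ev_feed_event(); the watcher and callback names are illustrative:

    ```c
    #include <ev.h>

    /* The watcher is initialized but never started with ev_async_start();
     * events are delivered to it by hand with a custom event mask. */
    static void
    on_wakeup(struct ev_loop *loop, ev_async *w, int revents)
    {
        (void)loop; (void)w; (void)revents; /* handle the internal wakeup */
    }

    int
    main(void)
    {
        struct ev_loop *loop = EV_DEFAULT;
        ev_async wakeup;
        ev_async_init(&wakeup, on_wakeup);
        /* no ev_async_start(loop, &wakeup): feed the event directly */
        ev_feed_event(loop, &wakeup, EV_CUSTOM);
        ev_run(loop, EVRUN_NOWAIT); /* invokes the pending callback */
        return 0;
    }
    ```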