box: rework tuple reference count
Tuples usually have a very low reference count (most likely the majority of tuples have fewer than 10 references), and we can rely on that fact for optimization. On the other hand, nothing actually prohibits a tuple from having a big reference count, so the code must handle that case properly. The obvious solution is to store a narrow reference counter right in struct tuple and move it somewhere else once it hits a threshold.

The previous implementation had a 15-bit counter and a 1-bit flag indicating that the actual counter is stored in a separate array. That worked fine, except that 15 bits are still overkill for real reference counts. That solution also introduced unions into struct tuple, which, generally speaking, causes UB, since by the standard it is UB to access one union member after setting another.

The new solution stores an 8-bit counter and a 1-bit flag. The external storage is a hash table to which a portion of the counter is uploaded (or from which it is acquired back) very seldom. This makes the counter in the tuple more compact, keeps it fast (fastest for low reference counts), and removes limitations such as a bounded number of tuples that may have big reference counts.

Part of #5385
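For illustration only, below is a minimal, self-contained C sketch of the scheme described above: an 8-bit in-tuple counter, a 1-bit flag, and a chained hash table that stores whole "portions" of the counter for the rare tuples that overflow. The names, the portion size, and the hash function are assumptions made for the sketch, not the actual code in src/box/tuple.c.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

enum {
	TUPLE_REF_MAX = UINT8_MAX,	/* 8-bit in-tuple counter */
	REF_PORTION = 128,		/* refs moved per upload/acquire */
	BIGREF_BUCKETS = 1024,		/* toy hash table size */
};

struct tuple {
	uint8_t refs;			/* low part of the counter */
	uint8_t has_bigref : 1;		/* rest lives in the hash table */
};

/* External storage entry: tuple -> number of offloaded portions. */
struct bigref {
	struct tuple *tuple;
	uint32_t portions;		/* each worth REF_PORTION refs */
	struct bigref *next;
};

static struct bigref *bigref_hash[BIGREF_BUCKETS];

static size_t
bigref_bucket(const struct tuple *t)
{
	return ((uintptr_t)t >> 4) % BIGREF_BUCKETS;
}

static struct bigref *
bigref_find(const struct tuple *t)
{
	struct bigref *e = bigref_hash[bigref_bucket(t)];
	while (e != NULL && e->tuple != t)
		e = e->next;
	return e;
}

void
tuple_ref(struct tuple *t)
{
	if (t->refs < TUPLE_REF_MAX) {
		t->refs++;		/* fast path: fits in 8 bits */
		return;
	}
	/* Slow path: upload REF_PORTION refs to the external table. */
	struct bigref *e = bigref_find(t);
	if (e == NULL) {
		e = calloc(1, sizeof(*e));
		assert(e != NULL);
		e->tuple = t;
		size_t b = bigref_bucket(t);
		e->next = bigref_hash[b];
		bigref_hash[b] = e;
		t->has_bigref = 1;
	}
	e->portions++;
	t->refs -= REF_PORTION;
	t->refs++;
}

void
tuple_unref(struct tuple *t)
{
	if (t->refs > 0) {
		t->refs--;		/* fast path */
		return;
	}
	/* Slow path: acquire REF_PORTION refs back from the table. */
	assert(t->has_bigref);
	struct bigref *e = bigref_find(t);
	assert(e != NULL && e->portions > 0);
	e->portions--;
	t->refs = REF_PORTION - 1;
	if (e->portions == 0) {
		/* Unlink and free the entry, clear the flag. */
		struct bigref **link = &bigref_hash[bigref_bucket(t)];
		while (*link != e)
			link = &(*link)->next;
		*link = e->next;
		free(e);
		t->has_bigref = 0;
	}
}
```

In this sketch the fast paths touch only the byte stored in the tuple itself; the hash table is consulted at most once per REF_PORTION operations even for heavily referenced tuples, which matches the "uploaded (or acquired) very seldom" property described above.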
Showing 7 changed files:
- src/box/memtx_engine.c (2 additions, 2 deletions)
- src/box/tuple.c (100 additions, 103 deletions)
- src/box/tuple.h (87 additions, 32 deletions)
- src/box/vy_stmt.c (4 additions, 4 deletions)
- src/box/vy_stmt.h (2 additions, 2 deletions)
- test/unit/tuple_bigref.c (212 additions, 89 deletions)
- test/unit/tuple_bigref.result (37 additions, 13 deletions)