vinyl: fix dump bandwidth calculation
We compute dump bandwidth based on the time it takes a run writing task to complete. While this used to work when we had no data compression and indexes didn't share in-memory tuples, today the logic behind dump bandwidth calculation is completely flawed:

- Due to data compression, the amount of memory we dump may be much greater than the amount of data we write to disk, in which case dump bandwidth will be underestimated.

- If a space has several indexes, dumping it may result in writing more data than is actually stored in memory, because tuples of the same space are shared among its indexes in memory, but stored separately when written to disk. In this case, dump bandwidth will be overestimated.

This results in the quota watermark being set incorrectly and, as a consequence, either stalled transactions or non-stop memory dumps.

Obviously, to resolve both issues, we need to account memory freed per unit of time instead of data written to disk. So this patch makes vy_scheduler_trigger_dump() remember the time when the dump was started and vy_scheduler_complete_dump() update dump bandwidth based on the amount of memory dumped and the time the dump took (see the sketch below).
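A minimal sketch of the idea, not the exact patch: the struct fields and helper names used here (dump_start, dump_bandwidth, the `now` parameters) are illustrative assumptions, only vy_scheduler_trigger_dump() and vy_scheduler_complete_dump() come from the description above.

```c
#include <stddef.h>

struct vy_scheduler {
	/* ... other members elided ... */
	double dump_start;	/* when the current dump began, seconds (assumed field) */
	size_t dump_bandwidth;	/* memory dumped per second, bytes (assumed field) */
};

static void
vy_scheduler_trigger_dump(struct vy_scheduler *scheduler, double now)
{
	/* Remember when the dump was started. */
	scheduler->dump_start = now;
}

static void
vy_scheduler_complete_dump(struct vy_scheduler *scheduler,
			   size_t mem_dumped, double now)
{
	double dump_duration = now - scheduler->dump_start;
	if (dump_duration > 0) {
		/*
		 * Account memory freed per unit of time rather than
		 * bytes written to disk, so that compression and
		 * multiple indexes per space do not skew the estimate.
		 */
		scheduler->dump_bandwidth = mem_dumped / dump_duration;
	}
}
```

With this, the quota watermark can be derived from how fast in-memory data is actually freed, independent of how the same data is laid out or compressed on disk.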