# TiKV 3.0 GA Release Notes
Release date: June 28, 2019
TiKV version: 3.0.0
## TiKV

### Engine
- Introduce Titan, a key-value plugin that improves write performance for scenarios with value sizes greater than 1 KiB, and relieves write amplification to a certain degree
- Optimize memory management to reduce memory allocation and copying for `Iterator Key Bound Option`
- Support `block cache` sharing among different column families
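Titan is opt-in. A minimal configuration sketch for enabling it, with section and key names as documented for the TiKV 3.0 series (tune `min-blob-size` to your value-size profile):

```toml
# Enable the Titan key-value plugin.
[rocksdb.titan]
enabled = true

# Only values at or above this size are separated into blob files;
# Titan targets workloads with values larger than ~1 KiB.
[rocksdb.defaultcf.titan]
min-blob-size = "1KB"
```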
### Server
- Support reversed `raw_scan` and `raw_batch_scan`
- Support batch receiving and sending Raft messages, improving TPS by 7% for write-intensive scenarios
- Support getting monitoring information via HTTP
- Support Local Reader in RawKV to improve performance
- Reduce context switch overhead from `batch commands`
### Raftstore
- Support Multi-thread Raftstore and Multi-thread Apply to improve scalability, concurrency capacity, and resource usage within a single node. Performance improves by 70% under the same level of pressure
- Support checking RocksDB Level 0 files before applying snapshots to avoid write stall
- Support Hibernate Regions to optimize CPU consumption from Raftstore (Experimental)
- Remove the local reader thread
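Both the thread-pool features and Hibernate Regions are controlled from the `[raftstore]` section of the TiKV configuration file. A sketch, with key names and defaults assumed from the TiKV 3.0 documentation:

```toml
[raftstore]
# Thread pool sizes for Multi-thread Raftstore and Multi-thread Apply.
store-pool-size = 2
apply-pool-size = 2
# Hibernate Regions (experimental): idle Regions stop Raft ticking to save CPU.
hibernate-regions = true
```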
### Transaction
- Support distributed GC and concurrent lock resolving for improved GC performance
- Support the pessimistic transaction model (Experimental)
- Modify the semantics of `Insert` to allow Prewrite to succeed only when there is no Key
- Remove `txn scheduler`
- Add monitoring items related to `read index` and `GC worker`
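The pessimistic transaction model is off by default in 3.0. A hedged sketch of the TiKV-side switch (key names as documented for the 3.0 series; the cluster also needs the corresponding TiDB-side setting, not shown here):

```toml
# Experimental in 3.0: enable pessimistic locking support in TiKV.
[pessimistic-txn]
enabled = true
# How long a pessimistic lock request waits on a conflicting lock, in milliseconds.
wait-for-lock-timeout = 1000
```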
### Coprocessor
- Refactor the computation framework to implement vector operators, computation using vector expressions, and vector aggregations to improve performance
- Support providing operator execution status for the `EXPLAIN ANALYZE` statement in TiDB
- Switch to the `work-stealing` thread pool model to reduce context switch cost
### Misc
- Develop a unified log format specification with restructured log system to facilitate collection and analysis by tools
- Add performance metrics related to configuration information and key bound crossing
## PD
- Support re-creating a cluster from a single node
- Migrate Region metadata from etcd to the go-leveldb storage engine to solve the storage bottleneck in etcd for large-scale clusters
### API
- Add the `remove-tombstone` API to clear Tombstone stores
- Add the `ScanRegions` API to batch query Region information
- Add the `GetOperator` API to query running operators
- Optimize the performance of the `GetStores` API
### Configurations
- Optimize configuration check logic to avoid configuration item errors
- Add `enable-one-way-merge` to control the direction of Region merge
- Add `hot-region-schedule-limit` to control the scheduling rate for hot Regions
- Add `hot-region-cache-hits-threshold` to identify a hotspot when it hits multiple thresholds consecutively
- Add the `store-balance-rate` configuration item to control the maximum number of balance Region operators allowed per minute
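The new items above live in the `[schedule]` section of the PD configuration file. A sketch with values assumed to be the 3.0 defaults (check the PD documentation for your exact version):

```toml
[schedule]
# Allow Region merge in one direction only.
enable-one-way-merge = true
# Concurrent hot-Region scheduling tasks.
hot-region-schedule-limit = 4
# A Region counts as a hotspot after this many consecutive hot-cache hits.
hot-region-cache-hits-threshold = 3
# Maximum balance Region operators allowed per minute for each store.
store-balance-rate = 15
```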
### Scheduler Optimizations
- Add the store limit mechanism for separately controlling the speed of operators for each store
- Support the `waitingOperator` queue to optimize the resource race among different schedulers
- Support scheduling rate limits to actively send scheduling operations to TiKV. This improves the scheduling rate by limiting the number of concurrent scheduling tasks on a single node.
- Optimize `Region Scatter` scheduling so that it is not restrained by the limit mechanism
- Add the `shuffle-hot-region` scheduler to facilitate TiKV stability testing in scenarios of poor hotspot scheduling
### Simulator
- Add simulator for data import scenarios
- Support setting different heartbeat intervals for the Store
### Others
- Upgrade etcd to solve the issues of inconsistent log output formats, Leader selection failure in prevote, and lease deadlocking.
- Develop a unified log format specification with restructured log system to facilitate collection and analysis by tools
- Add monitoring metrics including scheduling parameters, cluster label information, time consumed by PD to process TSO requests, Store ID and address information, etc.