# TiKV 3.0 GA Release Notes
Release date: June 28, 2019
TiKV version: 3.0.0
## TiKV

### Engine
- Introduce Titan, a key-value plugin that improves write performance for scenarios with value sizes greater than 1 KiB and relieves write amplification to a certain degree
- Optimize memory management to reduce memory allocation and copying for `Iterator Key Bound Option`
- Support `block cache` sharing among different column families
### Server
- Support reversed `raw_scan` and `raw_batch_scan`
- Support batch receiving and sending of Raft messages, improving TPS by 7% in write-intensive scenarios
- Support getting monitoring information via HTTP (see the sketch after this list)
- Support Local Reader in RawKV to improve performance
- Reduce context switch overhead from `batch commands`
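
The new HTTP interface can be queried directly. A minimal Go sketch of reading monitoring information, assuming TiKV's status server listens on the default `status-addr` of `127.0.0.1:20180` and serves Prometheus-format metrics at `/metrics`:

```go
// A minimal sketch: fetch TiKV monitoring information over HTTP.
// The address is an assumption; point it at your instance's status-addr.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:20180/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body) // Prometheus text format
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body)
}
```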
### Raftstore
- Support Multi-thread Raftstore and Multi-thread Apply to improve scalability, concurrency capacity, and resource usage within a single node. Performance improves by 70% under the same level of pressure
- Support checking RocksDB Level 0 files before applying snapshots to avoid write stall
- Support Hibernate Regions to reduce the CPU consumption of Raftstore (Experimental)
- Remove the local reader thread
### Transaction
- Support distributed GC and concurrent lock resolving for improved GC performance
- Support the pessimistic transaction model (Experimental; see the sketch after this list)
- Modify the semantics of `Insert` so that Prewrite succeeds only when there is no existing Key
- Remove `txn scheduler`
- Add monitoring items related to `read index` and `GC worker`
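
The pessimistic model is driven from TiDB, which acquires row locks in TiKV while statements execute rather than at prewrite time. A minimal Go sketch, assuming a TiDB instance at `127.0.0.1:4000` (a hypothetical address) with the experimental feature enabled in its configuration and an existing table `t(id INT PRIMARY KEY, v INT)`:

```go
// A minimal sketch of exercising the pessimistic transaction model through
// TiDB, which translates it into pessimistic lock requests to TiKV.
// Address, credentials, and table are assumptions.
package main

import (
	"context"
	"database/sql"

	_ "github.com/go-sql-driver/mysql" // MySQL-protocol driver, works with TiDB
)

func main() {
	ctx := context.Background()
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Pin one connection: BEGIN/COMMIT must run on the same session.
	conn, err := db.Conn(ctx)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Opt this transaction into the pessimistic model; row locks are taken
	// in TiKV while the UPDATE executes instead of at commit time.
	mustExec(ctx, conn, "BEGIN PESSIMISTIC")
	mustExec(ctx, conn, "UPDATE t SET v = v + 1 WHERE id = 1")
	mustExec(ctx, conn, "COMMIT")
}

func mustExec(ctx context.Context, conn *sql.Conn, stmt string) {
	if _, err := conn.ExecContext(ctx, stmt); err != nil {
		panic(err)
	}
}
```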
### Coprocessor
- Refactor the computation framework to implement vector operators, computation using vector expressions, and vector aggregations to improve performance
- Support providing operator execution status for the `EXPLAIN ANALYZE` statement in TiDB (see the sketch after this list)
- Switch to the `work-stealing` thread pool model to reduce context switch cost
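
The per-operator execution status surfaces in TiDB's plan output. A minimal Go sketch that prints the rows returned by `EXPLAIN ANALYZE`, assuming a TiDB instance at `127.0.0.1:4000` and an existing table `t` (both assumptions):

```go
// A minimal sketch: view coprocessor operator execution status via TiDB's
// EXPLAIN ANALYZE. Column count varies by version, so scan generically.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Execution statistics come back as extra columns alongside the plan.
	rows, err := db.Query("EXPLAIN ANALYZE SELECT COUNT(*) FROM t")
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		panic(err)
	}
	vals := make([]sql.RawBytes, len(cols))
	ptrs := make([]interface{}, len(cols))
	for i := range vals {
		ptrs[i] = &vals[i]
	}
	for rows.Next() {
		if err := rows.Scan(ptrs...); err != nil {
			panic(err)
		}
		for i, v := range vals {
			fmt.Printf("%s=%s ", cols[i], v)
		}
		fmt.Println()
	}
	if err := rows.Err(); err != nil {
		panic(err)
	}
}
```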
### Misc
- Develop a unified log format specification with restructured log system to facilitate collection and analysis by tools
- Add performance metrics related to configuration information and key bound crossing
## PD

- Support re-creating a cluster from a single node
- Migrate Region metadata from etcd to the go-leveldb storage engine to solve the storage bottleneck in etcd for large-scale clusters
### API
- Add the `remove-tombstone` API to clear Tombstone stores (see the sketch after this list)
- Add the `ScanRegions` API to batch query Region information
- Add the `GetOperator` API to query running operators
- Optimize the performance of the `GetStores` API
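
These APIs are exposed over PD's HTTP and gRPC interfaces. A minimal Go sketch of clearing Tombstone stores over HTTP, assuming PD serves its API at `127.0.0.1:2379`; the method and path follow PD's v1 API, so verify them against your version:

```go
// A minimal sketch: call PD's remove-tombstone API over HTTP.
// The PD address is an assumption.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodDelete,
		"http://127.0.0.1:2379/pd/api/v1/stores/remove-tombstone", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 200 OK: Tombstone stores cleared
}
```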
### Configurations
- Optimize configuration check logic to avoid configuration item errors
- Add `enable-one-way-merge` to control the direction of Region merge
- Add `hot-region-schedule-limit` to control the scheduling rate for hot Regions
- Add `hot-region-cache-hits-threshold` to identify a hotspot only after the threshold is hit multiple times consecutively
- Add the `store-balance-rate` configuration item to control the maximum number of balance Region operators allowed per minute (see the sketch after this list)
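
These scheduling options can also be changed at runtime through PD's config API, which is the same endpoint pd-ctl's `config set` drives. A minimal Go sketch, assuming PD at `127.0.0.1:2379`; the `POST /pd/api/v1/config` path and JSON body shape follow PD's v1 API, so verify them against your version:

```go
// A minimal sketch: update a PD scheduling option at runtime via the
// config API. The PD address is an assumption.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Allow at most 15 balance Region operators per minute per store.
	body := []byte(`{"store-balance-rate": 15}`)
	resp, err := http.Post("http://127.0.0.1:2379/pd/api/v1/config",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```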
### Scheduler Optimizations
- Add the store limit mechanism for separately controlling the speed of operators for each store
- Support the `waitingOperator` queue to reduce resource contention among different schedulers
- Support scheduling rate limiting to actively send scheduling operations to TiKV. This improves the scheduling rate by limiting the number of concurrent scheduling tasks on a single node
- Optimize `Region Scatter` scheduling so that it is not restrained by the limit mechanism
- Add the `shuffle-hot-region` scheduler to facilitate TiKV stability testing in scenarios of poor hotspot scheduling
### Simulator
- Add a simulator for data import scenarios
- Support setting different heartbeat intervals for the Store
### Others
- Upgrade etcd to solve the issues of inconsistent log output formats, Leader election failure in prevote, and lease deadlocking
- Develop a unified log format specification with restructured log system to facilitate collection and analysis by tools
- Add monitoring metrics including scheduling parameters, cluster label information, time consumed by PD to process TSO requests, Store ID and address information, etc.