This document describes the following common operations when you operate and maintain a TiKV cluster using TiUP.
You can manage multiple TiKV clusters with the TiUP cluster component.
To view all the deployed TiKV clusters, run the following command:
tiup cluster list
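Each cluster in the list is operated on by its name. For example, the display command described later in this document shows the topology and status of a single cluster:
tiup cluster display ${cluster-name}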
To start the cluster, run the following command:
tiup cluster start ${cluster-name}
The components in the TiKV cluster are started by TiUP in the following order:
PD -> TiKV -> Prometheus -> Grafana -> Node Exporter -> Blackbox Exporter
You can start only some of the components by adding the -R or -N parameter to the command. For example:
This command starts only the PD component:
tiup cluster start ${cluster-name} -R pd
This command starts only the PD components on the 1.2.3.4 and 1.2.3.5 hosts:
tiup cluster start ${cluster-name} -N 1.2.3.4:2379,1.2.3.5:2379
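Similarly, you can start a single instance by its host:port node ID as reported by tiup cluster display. The following sketch assumes a TiKV instance listening on its default port 20160:
tiup cluster start ${cluster-name} -N 1.2.3.4:20160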
If you start specific components by using the -R or -N parameters, make sure the order of the components is correct. For example, start the PD component before the TiKV component. Otherwise, the start might fail.
After starting the cluster, check the status of each component to ensure that they are up and running. TiUP provides the display command for this purpose, so you do not have to log in to every machine to view the component status.
tiup cluster display ${cluster-name}
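The display command also accepts the same -R and -N filters as the start and stop commands (check tiup cluster display --help for your TiUP version). For example, to show only the TiKV instances:
tiup cluster display ${cluster-name} -R tikv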
When the cluster is in operation, if you need to modify the parameters of a component, run the edit-config command. The detailed steps are as follows:
Open the configuration file of the cluster in editing mode:
tiup cluster edit-config ${cluster-name}
Configure the parameters:
If the configuration is globally effective for a component, edit server_configs:
server_configs:
  tikv:
    server.status-thread-pool-size: 2
If the configuration takes effect on a specific node, edit the configuration in config of the node:
tikv_servers:
  - host: 10.0.1.11
    port: 4000
    config:
      server.status-thread-pool-size: 2
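If the same parameter is set both globally in server_configs and in the config of a node, the node-level value takes precedence for that node. The following fragment is a minimal sketch with illustrative values only:
server_configs:
  tikv:
    server.status-thread-pool-size: 2   # default for all TiKV nodes
tikv_servers:
  - host: 10.0.1.11
    config:
      server.status-thread-pool-size: 4   # overrides the global value on this node only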
For the parameter format, see the TiUP parameter template. For more information on the configuration parameters of components, refer to TiKV config.toml.example and PD config.toml.example.
Perform a rolling update of the configuration and restart the corresponding components by running the reload command:
tiup cluster reload ${cluster-name} [-N <nodes>] [-R <roles>]
If you want to set the status thread pool size parameter (status-thread-pool-size in the server module) to 2 in tikv-server, edit the configuration as follows:
server_configs:
  tikv:
    server.status-thread-pool-size: 2
Then, run the tiup cluster reload ${cluster-name} -R tikv command to perform a rolling restart of the TiKV component.
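If you want to verify the change on a single TiKV node before rolling it out everywhere, you can restrict the reload with -N. The node ID is the host:port value shown by tiup cluster display; 20160 below is the default TiKV port and is used only as an illustration:
tiup cluster reload ${cluster-name} -N 10.0.1.11:20160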
After deploying and starting the cluster, you can rename the cluster using the tiup cluster rename command:
tiup cluster rename ${cluster-name} ${new-name}
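For example, with illustrative names, the following command renames the cluster test-cluster to tikv-test. All subsequent TiUP commands must then use the new name:
tiup cluster rename test-cluster tikv-test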
To stop the cluster, run the following command:
tiup cluster stop ${cluster-name}
The components in the TiKV cluster are stopped by TiUP in the following order:
Grafana -> Prometheus -> TiKV -> PD -> Node Exporter -> Blackbox Exporter
Similar to the start command, the stop command supports stopping some of the components by adding the -R or -N parameters. For example:
This command stops only the TiKV component:
tiup cluster stop ${cluster-name} -R tikv
This command stops only the components on the 1.2.3.4 and 1.2.3.5 hosts:
tiup cluster stop ${cluster-name} -N 1.2.3.4:4000,1.2.3.5:4000
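Multiple roles can be passed to -R as a comma-separated list, in the same way as nodes are passed to -N. For example, the following command stops only the monitoring components, assuming the standard prometheus and grafana role names reported by tiup cluster display:
tiup cluster stop ${cluster-name} -R prometheus,grafana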
The operation of cleaning up cluster data stops all the services and cleans up the data directory, the log directory, or both. The operation cannot be reverted, so proceed with caution.
Clean up the data of all services in the cluster, but keep the logs:
tiup cluster clean ${cluster-name} --data
Clean up the logs of all services in the cluster, but keep the data:
tiup cluster clean ${cluster-name} --log
Clean up the data and logs of all services in the cluster:
tiup cluster clean ${cluster-name} --all
Clean up the logs and data of all services except Prometheus:
tiup cluster clean ${cluster-name} --all --ignore-role prometheus
Clean up the logs and data of all services except the 172.16.13.11:9000 instance:
tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000
Clean up the logs and data of all services except the 172.16.13.12 node:
tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.12
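Because the clean operation leaves all services stopped, you typically start the cluster again afterwards and confirm its status, for example:
tiup cluster start ${cluster-name}
tiup cluster display ${cluster-name}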
The destroy operation stops the services and clears the data directory and deployment directory. The operation cannot be reverted, so proceed with caution.
tiup cluster destroy ${cluster-name}
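After the destroy operation completes, the cluster no longer appears in TiUP's deployment list, which you can confirm with:
tiup cluster list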