Setting Up a TiDB Environment

This section builds a TiDB test environment on a single machine.
The deployment runs entirely on a cloud host; the operating system is CentOS 7.6 and the user is root.

1. Modify the SSH configuration

# Raise the SSH session limit
vi /etc/ssh/sshd_config
MaxSessions 20

# Restart sshd
service sshd restart
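
TiUP opens multiple concurrent SSH sessions per host during deployment, which is why the default MaxSessions of 10 can be too low for a single-machine topology. One quick way to confirm the new limit took effect is to dump sshd's effective configuration:

# Print the effective sshd configuration and check the session limit
sshd -T | grep -i maxsessions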

2. Install TiUP

# Update the system
yum -y update

# Install TiUP via the official install script
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

# Reload the profile so tiup is on PATH, then install the cluster component
source .bash_profile
tiup cluster
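
Running tiup cluster with no arguments downloads the cluster component on first use and prints its help text. As a quick sanity check that the tool is installed and reachable:

# Confirm the TiUP binary is on PATH
tiup --version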

3. Create the cluster topology file

# Create the topology file
vi mytidb.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.1.111

tidb_servers:
 - host: 192.168.1.111

tikv_servers:
 - host: 192.168.1.111
   port: 20160
   status_port: 20180

 - host: 192.168.1.111
   port: 20161
   status_port: 20181

 - host: 192.168.1.111
   port: 20162
   status_port: 20182

tiflash_servers:
 - host: 192.168.1.111

monitoring_servers:
 - host: 192.168.1.111

grafana_servers:
 - host: 192.168.1.111
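
Before deploying, it can save a failed run to confirm the file parses as valid YAML. A minimal sketch of such a check, assuming Python with the PyYAML module happens to be available on the host (it is not part of a stock CentOS 7 install):

# Syntax-check the topology file (assumes PyYAML is installed)
python -c "import yaml; yaml.safe_load(open('mytidb.yaml')); print('mytidb.yaml parses OK')"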

4. Deploy and start the cluster

# Deploy the cluster from the topology file
tiup cluster deploy mytidb v4.0.0 ./mytidb.yaml --user root -i hwk8s.pem
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster deploy mytidb v4.0.0 ./mytidb.yaml --user root -i hwk8s.pem
Please confirm your topology:
TiDB Cluster: mytidb
TiDB Version: v4.0.0
Type        Host           Ports                            OS/Arch       Directories
----        ----           -----                            -------       -----------
pd          192.168.1.111  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.1.111  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.1.111  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.1.111  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.1.111  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.1.111  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.1.111  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.1.111  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.0 (linux/amd64) ... Done
  - Download tikv:v4.0.0 (linux/amd64) ... Done
  - Download tidb:v4.0.0 (linux/amd64) ... Done
  - Download tiflash:v4.0.0 (linux/amd64) ... Done
  - Download prometheus:v4.0.0 (linux/amd64) ... Done
  - Download grafana:v4.0.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.1.111:22 ... Done
+ Copy files
  - Copy pd -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tidb -> 192.168.1.111 ... Done
  - Copy tiflash -> 192.168.1.111 ... Done
  - Copy prometheus -> 192.168.1.111 ... Done
  - Copy grafana -> 192.168.1.111 ... Done
  - Copy node_exporter -> 192.168.1.111 ... Done
  - Copy blackbox_exporter -> 192.168.1.111 ... Done
+ Check status
Deployed cluster `mytidb` successfully, you can start the cluster via `tiup cluster start mytidb`

# Start the cluster
tiup cluster start mytidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster start mytidb
Starting cluster mytidb...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:60 APITimeout:300 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
        Starting instance pd 192.168.1.111:2379
        Start pd 192.168.1.111:2379 success
Starting component node_exporter
        Starting instance 192.168.1.111
        Start 192.168.1.111 success
Starting component blackbox_exporter
        Starting instance 192.168.1.111
        Start 192.168.1.111 success
Starting component tikv
        Starting instance tikv 192.168.1.111:20162
        Starting instance tikv 192.168.1.111:20161
        Starting instance tikv 192.168.1.111:20160
        Start tikv 192.168.1.111:20162 success
        Start tikv 192.168.1.111:20161 success
        Start tikv 192.168.1.111:20160 success
Starting component tidb
        Starting instance tidb 192.168.1.111:4000
        Start tidb 192.168.1.111:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.1.111:9000
        Start tiflash 192.168.1.111:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.1.111:9090
        Start prometheus 192.168.1.111:9090 success
Starting component grafana
        Starting instance grafana 192.168.1.111:3000
        Start grafana 192.168.1.111:3000 success
Checking service state of pd
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:37 CST; 13s ago
Checking service state of tikv
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
Checking service state of tidb
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:42 CST; 9s ago
Checking service state of tiflash
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:45 CST; 5s ago
Checking service state of prometheus
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:47 CST; 4s ago
Checking service state of grafana
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:47 CST; 4s ago
+ [ Serial ] - UpdateTopology: cluster=mytidb
Started cluster `mytidb` successfully

5. Check the cluster status

# List deployed clusters
tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster list
Name    User  Version  Path                                         PrivateKey
----    ----  -------  ----                                         ----------
mytidb  tidb  v4.0.0   /root/.tiup/storage/cluster/clusters/mytidb  /root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa


# Show cluster details
tiup cluster display mytidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster display mytidb
TiDB Cluster: mytidb
TiDB Version: v4.0.0
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.1.111:3000   grafana     192.168.1.111  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.1.111:2379   pd          192.168.1.111  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.1.111:9090   prometheus  192.168.1.111  9090                             linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.1.111:4000   tidb        192.168.1.111  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.1.111:9000   tiflash     192.168.1.111  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.1.111:20160  tikv        192.168.1.111  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.1.111:20161  tikv        192.168.1.111  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.1.111:20162  tikv        192.168.1.111  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
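
For reference, the same tiup cluster tool drives the rest of the lifecycle. The subcommands below are standard; note that destroy deletes the data and deploy directories, so it is only suitable for a throwaway test cluster like this one:

# Stop all components without removing data
tiup cluster stop mytidb

# Restart the whole cluster
tiup cluster restart mytidb

# Tear the cluster down completely (removes data and deploy dirs)
tiup cluster destroy mytidb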

6. Access TiDB with the MySQL client

# Install the MySQL yum repository
wget https://repo.mysql.com//mysql80-community-release-el7-3.noarch.rpm
rpm -Uvh mysql80-community-release-el7-3.noarch.rpm
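
Before installing the client, a quick check that the repository registered correctly:

# Confirm the MySQL community repo is now enabled
yum repolist enabled | grep -i mysql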

# Install the MySQL client
yum install mysql-community-client.x86_64

# Log in to TiDB (the default root password is empty)
mysql -h 192.168.1.111 -P 4000 -u root

# Day-to-day usage differs very little from ordinary MySQL
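
A short smoke test confirms the session really lands on TiDB and that ordinary SQL round-trips work. tidb_version() is a TiDB built-in function; the mytest database and table here are throwaway names used only for illustration:

# Verify the server is TiDB rather than stock MySQL
mysql -h 192.168.1.111 -P 4000 -u root -e "SELECT tidb_version();"

# Create a scratch table, insert a row, and read it back
mysql -h 192.168.1.111 -P 4000 -u root -e "CREATE DATABASE IF NOT EXISTS mytest; CREATE TABLE mytest.t (id INT PRIMARY KEY, v VARCHAR(20)); INSERT INTO mytest.t VALUES (1, 'hello'); SELECT * FROM mytest.t;"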

7. Open the TiDB web interfaces

# Performance monitoring (Grafana)
http://192.168.1.111:3000
Default login: admin / admin

# TiDB Dashboard
http://192.168.1.111:2379/dashboard
Default login: root with an empty password
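
If either page fails to load, probing the HTTP endpoints from the shell helps narrow things down; both of the status APIs below are documented (PD serves the dashboard from the same port as its API):

# PD members API on the dashboard port
curl http://192.168.1.111:2379/pd/api/v1/members

# TiDB server status endpoint on the status port
curl http://192.168.1.111:10080/status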
