CEPH Environment Setup 05

This section uses nfs-kernel-server to export a CephFS mount and an RBD-backed filesystem over NFS.

1. Install the NFS packages

apt-get install nfs-kernel-server
apt-get install nfs-common

2. Mount CephFS (via FUSE) and the RBD device on ceph-0001

ceph-fuse /mnt/fuse
rbd map r1
mount -t ext4 /dev/rbd0 /mnt/rbd
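Note that the mount points must already exist; if not, create them first (a minimal preparation sketch; the rbd pool and the r1 image are the ones created in part 03):

mkdir -p /mnt/fuse /mnt/rbd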

3. Configure the exports file

vi /etc/exports
/mnt/fuse         192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check,fsid=0)
/mnt/rbd          192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

4. Apply the configuration

exportfs -a

systemctl restart nfs-kernel-server

showmount -e
Export list for ceph-0001:
/mnt/rbd  192.168.1.0/24
/mnt/fuse 192.168.1.0/24

5. Mount from another machine

mount -t nfs  ceph-0001:/mnt/fuse  /mnt/fuse --verbose
mount.nfs: timeout set for Fri Jun  5 19:06:27 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.1.101,clientaddr=192.168.1.103'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.1.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.101 prog 100005 vers 3 prot UDP port 39630

mount -t nfs  ceph-0001:/mnt/rbd  /mnt/rbd --verbose
mount.nfs: timeout set for Fri Jun  5 19:06:36 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.1.101,clientaddr=192.168.1.103'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.1.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.101 prog 100005 vers 3 prot UDP port 39630

ls /mnt/fuse
fuse.txt  volumes

ls /mnt/rbd
lost+found  rbd1.txt
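To make the client mounts persist across reboots, entries can be added to the client's /etc/fstab (a sketch, assuming the server name and paths above; _netdev defers mounting until the network is up):

# /etc/fstab on the client
ceph-0001:/mnt/fuse  /mnt/fuse  nfs  defaults,_netdev  0  0
ceph-0001:/mnt/rbd   /mnt/rbd   nfs  defaults,_netdev  0  0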

CEPH Environment Setup 04

This section tests object storage, one of Ceph's three storage interfaces.

1. Create the zone and enable RGW

# Create the realm
radosgw-admin realm create --rgw-realm=myrealm --default
# Create the zonegroup
radosgw-admin zonegroup create --rgw-zonegroup=myzg --endpoints=http://ceph01:8080 --rgw-realm=myrealm --master --default
# Create the zone
radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --endpoints=http://ceph01:8080 --master --default
# Enable RGW on ceph01
ceph orch apply rgw myrealm myzone --placement="1 ceph01"
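To verify that the RGW daemon is serving requests, an anonymous request to the endpoint should return a ListAllMyBucketsResult XML document (a quick check; this assumes the daemon listens on port 80, which is also what the boto script below uses):

curl http://ceph01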

2. Create an S3 user

radosgw-admin user create --uid=s3user --display-name=s3user  --system
{
  "user_id": "s3user",
  "display_name": "s3user",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
    {
      "user": "s3user",
      "access_key": "6IUA1DMFDTP5BG9ZMIR8",
      "secret_key": "zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP"
    }
  ],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "system": "true",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": {
    "enabled": false,
    "check_on_raw": false,
    "max_size": -1,
    "max_size_kb": 0,
    "max_objects": -1
  },
  "user_quota": {
    "enabled": false,
    "check_on_raw": false,
    "max_size": -1,
    "max_size_kb": 0,
    "max_objects": -1
  },
  "temp_url_keys": [],
  "type": "rgw"
}

3. Create a Swift subuser

sudo radosgw-admin subuser create --uid=s3user --subuser=s3user:swift --access=full
{
  "user_id": "s3user",
  "display_name": "s3user",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [
    {
      "id": "s3user:swift",
      "permissions": "full-control"
    }
  ],
  "keys": [
    {
      "user": "s3user",
      "access_key": "6IUA1DMFDTP5BG9ZMIR8",
      "secret_key": "zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP"
    }
  ],
  "swift_keys": [
    {
      "user": "s3user:swift",
      "secret_key": "2wou5DxQ6WiBYyHf8qb3QIMX9BnhhBd5Njlj6LJX"
    }
  ],
  "caps": [],
  "op_mask": "read, write, delete",
  "system": "true",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": {
    "enabled": false,
    "check_on_raw": false,
    "max_size": -1,
    "max_size_kb": 0,
    "max_objects": -1
  },
  "user_quota": {
    "enabled": false,
    "check_on_raw": false,
    "max_size": -1,
    "max_size_kb": 0,
    "max_objects": -1
  },
  "temp_url_keys": [],
  "type": "rgw"
}

4. Create a bucket via the S3 API

sudo apt-get install python-boto

# Edit s3test.py
vi s3test.py

import boto.s3.connection

access_key = '6IUA1DMFDTP5BG9ZMIR8'
secret_key = 'zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP'

conn = boto.connect_s3(
  aws_access_key_id=access_key,
  aws_secret_access_key=secret_key,
  host='ceph01', port=80,
  is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
  )

bucket = conn.create_bucket('mybucket')
for bucket in conn.get_all_buckets():
  print "{name} {created}".format(
    name=bucket.name,
    created=bucket.creation_date,
    )

# Run it
python s3test.py
mybucket 2020-05-19T20:01:59.139Z

5. List buckets via the Swift API

sudo apt-get install python-pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient

# List buckets with the swift client
swift -V 1 -A http://172.16.172.101:80/auth -U s3user:swift -K '2wou5DxQ6WiBYyHf8qb3QIMX9BnhhBd5Njlj6LJX' list
mybucket

6. List buckets with s3cmd

sudo apt-get install s3cmd

# Generate the configuration
s3cmd --configure

# List buckets with s3cmd
s3cmd ls
2020-05-19 20:01  s3://mybucket
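With the configuration in place, objects can be round-tripped as well (a usage sketch; hello.txt is a hypothetical local file):

echo 'hello rgw' > hello.txt
s3cmd put hello.txt s3://mybucket
s3cmd ls s3://mybucket
s3cmd get s3://mybucket/hello.txt hello-copy.txt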

CEPH Environment Setup 03

This section tests block storage, one of Ceph's three storage interfaces.

1. Create a storage pool and an RBD image

sudo ceph osd pool ls
sudo ceph osd pool create rbd

sudo rados df

sudo rbd ls
sudo rbd create --size 1024 rbd/r1
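The new image can be inspected before mapping (optional check):

sudo rbd info r1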

2. Map the image to a block device

#Mapping directly fails with an error
#sudo rbd map r1
#rbd: sysfs write failed
#RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable r1 object-map fast-diff deep-flatten".
#In some cases useful info is found in syslog - try "dmesg | tail".
#rbd: map failed: (6) No such device or address

#Disable the unsupported features and map again
sudo rbd feature disable r1 object-map fast-diff deep-flatten
sudo rbd map r1
/dev/rbd0

#Check the mapping
sudo rbd showmapped
id pool image snap device
0  rbd  r1    -    /dev/rbd0

3. Initialize the block device

#Inspect the device with fdisk
sudo fdisk -l /dev/rbd0

#Format the device as ext4
sudo mkfs.ext4 -m0 /dev/rbd0

#Mount the block device
sudo mkdir -p /mnt/rbd/r1
sudo mount -t ext4 /dev/rbd0 /mnt/rbd/r1

4. Perform some basic operations

sudo ls /mnt/rbd/r1

sudo vi /mnt/rbd/r1/hi.txt

sudo cat /mnt/rbd/r1/hi.txt
hello rbd
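When finished, the device can be released again (a cleanup sketch):

sudo umount /mnt/rbd/r1
sudo rbd unmap /dev/rbd0
sudo rbd showmapped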

CEPH Environment Setup 02

This section adds the storage devices to Ceph for management and tests CephFS, one of Ceph's three storage interfaces.

1. Check device status

sudo ceph osd status
ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
0  ceph01  1027M   298G      0        0       0        0   exists,up
1  ceph02  1027M   298G      0        0       0        0   exists,up
2  ceph03  1027M   298G      0        0       0        0   exists,up
3  ceph04  1027M   298G      0        0       0        0   exists,up

sudo ceph orch device ls
HOST    PATH      TYPE   SIZE  DEVICE                             AVAIL  REJECT REASONS
ceph01  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB434b1565-528a303a  True
ceph01  /dev/sda  hdd    300G  VBOX_HARDDISK_VB3eec2162-9aed4ffc  False  locked
ceph02  /dev/sdb  hdd    300G  VBOX_HARDDISK_VBa6445865-c497aa8e  True
ceph02  /dev/sda  hdd    300G  VBOX_HARDDISK_VB64e04201-60c7209f  False  locked
ceph03  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB20fd0c04-b14ef3fa  True
ceph03  /dev/sda  hdd    300G  VBOX_HARDDISK_VB6f4439ab-85f80c78  False  locked
ceph04  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB2c293541-3183e992  True
ceph04  /dev/sda  hdd    300G  VBOX_HARDDISK_VBd81d45d4-a88d6ff3  False  locked

2. Add the storage devices as OSDs

sudo ceph orch apply osd --all-available-devices
Scheduled osd update...

sudo ceph orch device ls --refresh
HOST    PATH      TYPE   SIZE  DEVICE                             AVAIL  REJECT REASONS
ceph01  /dev/sda  hdd    300G  VBOX_HARDDISK_VB3eec2162-9aed4ffc  False  locked
ceph01  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB434b1565-528a303a  False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph02  /dev/sda  hdd    300G  VBOX_HARDDISK_VB64e04201-60c7209f  False  locked
ceph02  /dev/sdb  hdd    300G  VBOX_HARDDISK_VBa6445865-c497aa8e  False  LVM detected, locked, Insufficient space (<5GB) on vgs
ceph03  /dev/sda  hdd    300G  VBOX_HARDDISK_VB6f4439ab-85f80c78  False  locked
ceph03  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB20fd0c04-b14ef3fa  False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph04  /dev/sda  hdd    300G  VBOX_HARDDISK_VBd81d45d4-a88d6ff3  False  locked
ceph04  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB2c293541-3183e992  False  LVM detected, locked, Insufficient space (<5GB) on vgs

3. Check OSD status

sudo ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META   AVAIL    %USE  VAR   PGS  STATUS
0    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    0      up
1    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
2    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
3    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
TOTAL  1.2 TiB  4.0 GiB   14 MiB   0 B  4 GiB  1.2 TiB  0.33
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

sudo ceph osd utilization
avg 0.75
stddev 0.433013 (expected baseline 0.75)
min osd.0 with 0 pgs (0 * mean)
max osd.1 with 1 pgs (1.33333 * mean)
sudo ceph osd pool stats
pool device_health_metrics id 1
nothing is going on

sudo ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         1.17200  root default
-3         0.29300      host ceph01
0    hdd  0.29300          osd.0        up   1.00000  1.00000
-5         0.29300      host ceph02
1    hdd  0.29300          osd.1        up   1.00000  1.00000
-7         0.29300      host ceph03
2    hdd  0.29300          osd.2        up   1.00000  1.00000
-9         0.29300      host ceph04
3    hdd  0.29300          osd.3        up   1.00000  1.00000

sudo ceph pg stat
1 pgs: 1 active+clean; 0 B data, 14 MiB used, 1.2 TiB / 1.2 TiB avail

4. Create a CephFS

sudo ceph fs volume ls
[]
sudo ceph fs volume create  v1
sudo ceph fs volume ls
[
    {
        "name": "v1"
    }
]

sudo ceph fs subvolumegroup create v1 g1
sudo ceph fs subvolumegroup ls v1

sudo ceph fs subvolume create v1 sv1
sudo ceph fs subvolume ls v1
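The full path of a subvolume inside the filesystem can be resolved with getpath (optional check):

sudo ceph fs subvolume getpath v1 sv1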

sudo ceph fs ls
name: v1, metadata pool: cephfs.v1.meta, data pools: [cephfs.v1.data ]

5. Mount the CephFS

sudo apt-get install ceph-fuse

# Mount CephFS with ceph-fuse
sudo mkdir -p /mnt/ceph/ceph_fuse
sudo ceph-fuse /mnt/ceph/ceph_fuse
ceph-fuse[24512]: starting ceph client
2020-05-18 05:57:36.039818 7f7d221a2500 -1 init, newargv = 0x559708e0e2e0 newargc=9
ceph-fuse[24512]: starting fuse

# Check the mount
sudo mount | grep ceph
ceph-fuse on /mnt/ceph/ceph_fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

# It can now be used like a local disk for everyday operations
sudo ls /mnt/ceph/ceph_fuse/
volumes
sudo ls /mnt/ceph/ceph_fuse/volumes
g1  _nogroup
sudo ls /mnt/ceph/ceph_fuse/volumes/g1

sudo vi /mnt/ceph/ceph_fuse/volumes/g1/hi.txt
sudo cat /mnt/ceph/ceph_fuse/volumes/g1/hi.txt
hello ceph fuse
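Besides ceph-fuse, the kernel client can mount the same filesystem (a sketch, assuming mount.ceph from ceph-common and the admin keyring in /etc/ceph; the monitor addresses are read from ceph.conf):

sudo mkdir -p /mnt/ceph/ceph_kernel
sudo mount -t ceph :/ /mnt/ceph/ceph_kernel -o name=admin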

CEPH Environment Setup 01

1. Initial environment

Prepare four nodes (adjust hosts and hostname on each node accordingly):

ceph-0001 172.16.172.101
ceph-0002 172.16.172.102
ceph-0003 172.16.172.103
ceph-0004 172.16.172.104

Run on every node:

sudo apt-get update
sudo apt-get install docker.io

2. Install cephadm on the primary node

#The officially recommended method has a problem:
#sudo ./cephadm add-repo --release octopus
#The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
#sudo rm /etc/apt/trusted.gpg.d/ceph.release.gpg

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod 711 cephadm
sudo ./cephadm install

sudo cephadm install ceph-common

3. Bootstrap

sudo mkdir -p /etc/ceph

sudo cephadm bootstrap --mon-ip 172.16.172.101 --allow-overwrite
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 7bffaaf6-9688-11ea-ac24-080027b4217f
INFO:cephadm:Verifying IP 172.16.172.101 port 3300 ...
INFO:cephadm:Verifying IP 172.16.172.101 port 6789 ...
INFO:cephadm:Mon IP 172.16.172.101 is in CIDR network 172.16.172.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

URL: https://localhost:8443/
User: admin
Password: mdbewc14gq

INFO:cephadm:You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 7bffaaf6-9688-11ea-ac24-080027b4217f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

4. You can now log in to the dashboard with the information printed at the end

5. Edit the configuration file

sudo vi /etc/ceph/ceph.conf

[global]
fsid = a4547d9d-f1a1-4753-b5cc-df0e043ebc65
mon_initial_members = ceph01
#The generated mon_host seems to be problematic
#mon_host = [v2:ceph01:3300/0,v1:ceph:6789/0]
mon_host = 172.16.172.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.172.0/24

6. Check Ceph status

sudo ceph status
  cluster:
    id:     7bffaaf6-9688-11ea-ac24-080027b4217f
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01 (age 35m)
    mgr: ceph01.lreqdw(active, since 33m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

7. Prepare the three other nodes

#On ceph01
#Copy ceph.pub to the other three nodes (shown for ceph02; repeat for ceph03 and ceph04)
scp /etc/ceph/ceph.pub  neohope@ceph02:~/authorized_keys

#On ceph02
#Unlock the root account
sudo passwd -u root
#Install root's authorized_keys (these operations need root privileges)
sudo mv authorized_keys /root/.ssh/
sudo chown root:root /root/.ssh/authorized_keys
sudo chmod 0600 /root/.ssh/authorized_keys
#Allow root SSH login
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh restart

#On ceph01
#Fetch the SSH private key used by cephadm
sudo ceph config-key get mgr/cephadm/ssh_identity_key > ceph.pem
chmod 0600 ceph.pem
#Test root login
ssh -i ceph.pem root@ceph02

8. Add the three nodes to the cluster

sudo ceph orch host add ceph02
Added host 'ceph02'

sudo ceph orch host add ceph03
Added host 'ceph03'

sudo ceph orch host add ceph04
Added host 'ceph04'

sudo ceph orch host ls
HOST    ADDR    LABELS  STATUS
ceph01  ceph01
ceph02  ceph02
ceph03  ceph03
ceph04  ceph04

9. Set up the monitors

ceph orch apply mon 4
ceph orch apply mon ceph01,ceph02,ceph03,ceph04
sudo ceph status
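Once the orchestrator has converged, the daemon placement can also be verified:

sudo ceph orch ps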

ISTIO Environment Setup 02

It took three cloud providers before the final example ran to completion...

1. Download the sample source

git clone https://github.com/istio/istio.git
Cloning into 'istio'...

2. Build the images

cd istio/samples/helloworld/src
./build_service.sh
Sending build context to Docker daemon  7.168kB
Step 1/8 : FROM python:2-onbuild
2-onbuild: Pulling from library/python
......

sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
istio/examples-helloworld-v2         latest              2c7736ccfb8b        45 seconds ago      713MB
istio/examples-helloworld-v1         latest              20be3b24eab7        46 seconds ago      713MB

3. Distribute the images to the other nodes

# Save the images
sudo docker save -o hello1.tar 20be3b24eab7
sudo docker save -o hello2.tar 2c7736ccfb8b

# Send the images to the other 3 nodes and load them there
# Repeat the following for each node
scp -i ~/hwk8s.pem hello1.tar root@192.168.1.229:~/
scp -i ~/hwk8s.pem hello2.tar root@192.168.1.229:~/

ssh -i ~/hwk8s.pem root@192.168.1.229

sudo docker load -i hello1.tar
sudo docker tag 20be3b24eab7 istio/examples-helloworld-v1:latest

sudo docker load -i hello2.tar
sudo docker tag 2c7736ccfb8b istio/examples-helloworld-v2:latest

exit

4. Deploy helloworld

kubectl apply -f helloworld.yaml
service/helloworld created
deployment.apps/helloworld-v1 created
deployment.apps/helloworld-v2 created

kubectl apply -f helloworld-gateway.yaml
gateway.networking.istio.io/helloworld-gateway created
virtualservice.networking.istio.io/helloworld created

kubectl get pods
kubectl get deployments

5. Test and generate traffic

# Set environment variables
# Use the internal IP here
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL=192.168.1.124:$INGRESS_PORT

# Test: successive requests are served by different versions
curl http://$GATEWAY_URL/hello
Hello version: v1, instance: helloworld-v1-5b75657f75-9dss5
curl http://$GATEWAY_URL/hello
Hello version: v2, instance: helloworld-v2-7855866d4f-rd2tr

# You can also browse from outside the cluster
# Use the external IP here
# Likewise, refreshing the browser switches between service versions
http://159.138.135.216:INGRESS_PORT/hello

# Generate traffic
./loadgen.sh
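loadgen.sh in the helloworld sample simply keeps requesting the service in a loop, roughly equivalent to (a sketch, not the script's exact contents):

while true; do curl -s -o /dev/null http://$GATEWAY_URL/hello; done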

6. Inspect with Kiali

#kiali 20001
istioctl dashboard kiali

#Adjust the internal port in the nginx config as described in part 01
#Reload the configuration
nginx -s reload

# Browse to
http://159.138.135.216:8000

7. The other dashboards can be accessed the same way

#grafana 3000
istioctl dashboard grafana
#jaeger  16686
istioctl dashboard jaeger
#kiali 20001
istioctl dashboard kiali
#prometheus 9090
istioctl dashboard prometheus
#controlz 9876 (pass a pod id)
istioctl dashboard controlz podid
#envoy 15000 (pass a pod id)
istioctl dashboard envoy podid
#zipkin
istioctl dashboard zipkin

#Adjust the internal port in the nginx config as in part 01 (see the sketch below)
#Reload the configuration
nginx -s reload
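For example, to point the proxy at Grafana instead of Kiali, switch the upstream port and reload (a sketch against the nginx.conf from part 01):

sudo sed -i 's/server 127.0.0.1:20001;/server 127.0.0.1:3000;/' /etc/nginx/nginx.conf
nginx -s reload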

# Browse to
http://159.138.135.216:8000

ISTIO Environment Setup 01

1. First, set up a working k8s environment following the earlier Kubernetes tutorials:
Kubernetes Environment Setup 01

Kubernetes Environment Setup 02

k8s-0001 159.138.135.216 192.168.1.124
k8s-0002 159.138.139.37 192.168.1.229
k8s-0003 159.138.31.39 192.168.1.187
k8s-0004 119.8.113.135 192.168.1.83

2. Download and deploy Istio

#Download and deploy Istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.5.2
export PATH=$PWD/bin:$PATH
istioctl manifest apply --set profile=demo
Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
- Applying manifest for component EgressGateways...
- Applying manifest for component IngressGateways...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component EgressGateways.
✔ Finished applying manifest for component AddonComponents.
✔ Finished applying manifest for component IngressGateways.
✔ Installation complete

#Tell Istio to auto-inject the Envoy sidecar into pods in the default namespace
kubectl label namespace default istio-injection=enabled
namespace/default labeled

3. Deploy the demo

#Deploy
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

#Check the pods
kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6fc55d65c9-kxxpm       2/2     Running   0          106s
productpage-v1-7f44c4d57c-h6h7p   2/2     Running   0          105s
ratings-v1-6f855c5fff-2rjz9       2/2     Running   0          105s
reviews-v1-54b8794ddf-tq5vm       2/2     Running   0          106s
reviews-v2-c4d6568f9-q8mvs        2/2     Running   0          106s
reviews-v3-7f66977689-ccp9c       2/2     Running   0          106s

#Check the services
kubectl get services
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.104.68.235   <none>        9080/TCP   89s
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    31m
productpage   ClusterIP   10.106.255.85   <none>        9080/TCP   89s
ratings       ClusterIP   10.103.19.155   <none>        9080/TCP   89s
reviews       ClusterIP   10.110.79.44    <none>        9080/TCP   89s

# Enable external access
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

#Check the gateway
kubectl get gateway
NAME               AGE
bookinfo-gateway   7s

4. Configure ingress

# Check whether an external IP has been assigned
kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.105.220.60   <pending>     15020:32235/TCP,80:30266/TCP,443:30265/TCP,15029:30393/TCP,15030:30302/TCP,15031:30789/TCP,15032:31411/TCP,31400:30790/TCP,15443:31341/TCP   5m30s

#Use the node address as the host (configure either this or the LB, only one is needed)
export INGRESS_HOST=47.57.158.253

#Use the LB address as the host (configure either this or the node, only one is needed)
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

#Configure the HTTP port
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

#Configure the HTTPS port
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

#Set and print the external access URL
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo http://$GATEWAY_URL/productpage

#The deployed example is now reachable via the node IP
#Open the URL printed above in a browser
#http://47.57.158.253:30266/productpage

5. Open the dashboards

#Start Kiali
istioctl dashboard kiali

#Install nginx (package installation assumed; e.g. sudo apt-get install nginx on a Debian/Ubuntu node)
#and set up a reverse proxy
vi /etc/nginx/nginx.conf
http {

  upstream backend {
    # the local port being proxied
    server 127.0.0.1:20001;
  }

  server {
    # the external port to listen on
    listen 8000;
    location / {
      proxy_pass http://backend;
    }
  }

}

# The Kiali UI is now reachable through the reverse proxy on port 8000
# http://47.57.158.253:8000

PS:
The following TCP ports must be open:

8000  nginx proxy port
8001  k8s default proxy port
30266 bookinfo demo port (changes per deployment)

Kubernetes Environment Setup with OpenShift 03

Common commands:

#Help
oc help

#Diagnostics
oc adm diagnostics

#Modify policy
oc adm policy

#Start a private registry
oc adm registry
oc adm registry --config=admin.kubeconfig --service-account=registry
oc adm registry --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig  --service-account=registry

#Start a router
oc adm router

#Start/stop the cluster
oc cluster up
oc cluster up --public-hostname=172.31.36.215
oc cluster down

#Delete
oc delete all --selector app=ruby-ex
oc delete services/ruby-ex

#Describe
oc describe builds/ruby-ex-1
oc describe pod/deployment-example-1-deploy
oc describe secret registry-token-q8dfm

#Expose services
oc expose svc/nodejs-ex
oc expose svc/ruby-ex

#Get information
oc get
oc get all
oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get all --selector app=registry
oc get all --selector app=ruby-ex
oc get builds
oc get events
oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get secrets

#Log in
oc login
oc login -u developer
oc login -u system:admin
oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc login https://127.0.0.1:8443 -u developer
oc login https://172.31.36.215:8443 --token=tMgeqgvyGkpxhEH-MhP2AdChbTXCDDHzD-27JvZPfzQ
oc login https://172.31.36.215:8443 -u system:admin

#View logs
oc logs -f bc/nodejs-ex
oc logs -f bc/ruby-ex

#Deploy apps
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc new-app deployment-example:latest
oc new-app https://github.com/sclorg/nodejs-ex -l name=myapp
oc new-app openshift/deployment-example
oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git

#Create a project
oc new-project test

#rollout
oc rollout latest docker-registry

#View status
oc status
oc status --suggest
oc status -v

#Tag images
oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest

#Show version
oc version

#Show the current user
oc whoami

Kubernetes Environment Setup with OpenShift 02

1. Deploy an application from an image

#Log in: username developer, any password
./oc login -u developer
./oc whoami

#Deploy the application
#Method 1
./oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest
#Method 2
./oc tag docker.io/openshift/deployment-example:v1 deployment-example:latest
./oc new-app deployment-example:latest
./oc status
curl http://172.30.192.169:8080

#Update the application
#Method 1
./oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
#Method 2
./oc tag docker.io/openshift/deployment-example:v2 deployment-example:latest
curl http://172.30.192.169:8080

#Check the status
./oc get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/deployment-example-3-4wk9x   1/1       Running   0          3m

NAME                                         DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1   0         0         0         18m
replicationcontroller/deployment-example-2   0         0         0         15m
replicationcontroller/deployment-example-3   1         1         1         4m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.82.203   <none>        8080/TCP   18m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   3          1         1         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                    TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   172.30.1.1:5000/myproject/deployment-example   latest    4 minutes ago
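If an update misbehaves, the deployment config can also be rolled back to the previous revision (optional; a sketch using oc rollback):

./oc rollback deployment-example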

2. Build an image and deploy the application

#Log in
./oc login https://IP:8443 -u developer

#Deploy the application
./oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git
--> Found Docker image b3b1ce7 (2 years old) from Docker Hub for "openshift/nodejs-010-centos7"

Node.js 0.10
------------
Platform for building and running Node.js 0.10 applications

Tags: builder, nodejs, nodejs010

* An image stream tag will be created as "nodejs-010-centos7:latest" that will track the source image
* A source build using source code from https://github.com/sclorg/nodejs-ex.git will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Every time "nodejs-010-centos7:latest" changes a new build will be triggered
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"

--> Creating resources ...
imagestream.image.openshift.io "nodejs-010-centos7" created
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.

#Expose the service
./oc expose svc/nodejs-ex
route.route.openshift.io/nodejs-ex exposed

#Check the status
./oc status
In project My Project (myproject) on server https://IP:8443

http://nodejs-ex-myproject.IP.nip.io to pod port 8080-tcp (svc/nodejs-ex)
dc/nodejs-ex deploys istag/nodejs-ex:latest <-
bc/nodejs-ex source builds https://github.com/sclorg/nodejs-ex.git on istag/nodejs-010-centos7:latest
build #1 pending for about a minute
deployment #1 waiting on image or update

2 infos identified, use 'oc status --suggest' to see details.

#Access the service
curl  http://nodejs-ex-myproject.127.0.0.1.nip.io

Kubernetes Environment Setup with OpenShift 01

1. Prepare the environment
OS: CentOS 7.7

2. Install the required packages

sudo yum update
sudo yum install curl telnet git docker

3. Configure Docker to allow the private registry

sudo vi /etc/docker/daemon.json
#Contents:
{
  "insecure-registries" : [ "172.30.0.0/16" ]
}

4. Start Docker

sudo systemctl start docker
sudo systemctl status docker
sudo systemctl enable docker

5. Download the latest OpenShift Origin release

https://github.com/openshift/origin/releases/

wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -xf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz

6. Start the cluster

#Change into the extracted directory
cd openshift

#--public-hostname is the address other nodes use to reach this host, and the default address of the web console
sudo ./oc cluster up --public-hostname=172.31.36.215
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I1112 14:25:54.907027    1428 config.go:40] Running "create-master-config"
I1112 14:25:57.915599    1428 config.go:46] Running "create-node-config"
I1112 14:25:59.062042    1428 flags.go:30] Running "create-kubelet-flags"
I1112 14:25:59.521012    1428 run_kubelet.go:49] Running "start-kubelet"
I1112 14:25:59.721185    1428 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1112 14:26:21.735024    1428 interface.go:26] Installing "kube-proxy" ...
I1112 14:26:21.735053    1428 interface.go:26] Installing "kube-dns" ...
I1112 14:26:21.735061    1428 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1112 14:26:21.735068    1428 interface.go:26] Installing "openshift-apiserver" ...
I1112 14:26:21.735089    1428 apply_template.go:81] Installing "kube-proxy"
I1112 14:26:21.735098    1428 apply_template.go:81] Installing "openshift-apiserver"
I1112 14:26:21.735344    1428 apply_template.go:81] Installing "kube-dns"
I1112 14:26:21.736634    1428 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1112 14:26:25.755466    1428 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
I1112 14:27:47.998244    1428 run_self_hosted.go:242] openshift-apiserver available
I1112 14:27:47.998534    1428 interface.go:26] Installing "openshift-controller-manager" ...
I1112 14:27:47.998554    1428 apply_template.go:81] Installing "openshift-controller-manager"
I1112 14:27:51.521512    1428 interface.go:41] Finished installing "openshift-controller-manager"
Adding default OAuthClient redirect URIs ...
Adding sample-templates ...
Adding centos-imagestreams ...
Adding router ...
Adding web-console ...
Adding registry ...
Adding persistent-volumes ...
I1112 14:27:51.544935    1428 interface.go:26] Installing "sample-templates" ...
I1112 14:27:51.544947    1428 interface.go:26] Installing "centos-imagestreams" ...
I1112 14:27:51.544955    1428 interface.go:26] Installing "openshift-router" ...
I1112 14:27:51.544963    1428 interface.go:26] Installing "openshift-web-console-operator" ...
I1112 14:27:51.544973    1428 interface.go:26] Installing "openshift-image-registry" ...
I1112 14:27:51.544980    1428 interface.go:26] Installing "persistent-volumes" ...
I1112 14:27:51.545540    1428 interface.go:26] Installing "sample-templates/postgresql" ...
I1112 14:27:51.545551    1428 interface.go:26] Installing "sample-templates/cakephp quickstart" ...
I1112 14:27:51.545559    1428 interface.go:26] Installing "sample-templates/dancer quickstart" ...
I1112 14:27:51.545567    1428 interface.go:26] Installing "sample-templates/django quickstart" ...
I1112 14:27:51.545574    1428 interface.go:26] Installing "sample-templates/rails quickstart" ...
I1112 14:27:51.545580    1428 interface.go:26] Installing "sample-templates/jenkins pipeline ephemeral" ...
I1112 14:27:51.545587    1428 interface.go:26] Installing "sample-templates/sample pipeline" ...
I1112 14:27:51.545595    1428 interface.go:26] Installing "sample-templates/mongodb" ...
I1112 14:27:51.545602    1428 interface.go:26] Installing "sample-templates/mysql" ...
I1112 14:27:51.545609    1428 interface.go:26] Installing "sample-templates/nodejs quickstart" ...
I1112 14:27:51.545616    1428 interface.go:26] Installing "sample-templates/mariadb" ...
I1112 14:27:51.545665    1428 apply_list.go:67] Installing "sample-templates/mariadb"
I1112 14:27:51.545775    1428 apply_list.go:67] Installing "centos-imagestreams"
I1112 14:27:51.552201    1428 apply_list.go:67] Installing "sample-templates/rails quickstart"
I1112 14:27:51.552721    1428 apply_template.go:81] Installing "openshift-web-console-operator"
I1112 14:27:51.553283    1428 apply_list.go:67] Installing "sample-templates/postgresql"
I1112 14:27:51.553420    1428 apply_list.go:67] Installing "sample-templates/cakephp quickstart"
I1112 14:27:51.553539    1428 apply_list.go:67] Installing "sample-templates/dancer quickstart"
I1112 14:27:51.553653    1428 apply_list.go:67] Installing "sample-templates/django quickstart"
I1112 14:27:51.553900    1428 apply_list.go:67] Installing "sample-templates/mysql"
I1112 14:27:51.554028    1428 apply_list.go:67] Installing "sample-templates/jenkins pipeline ephemeral"
I1112 14:27:51.554359    1428 apply_list.go:67] Installing "sample-templates/nodejs quickstart"
I1112 14:27:51.554567    1428 apply_list.go:67] Installing "sample-templates/mongodb"
I1112 14:27:51.554692    1428 apply_list.go:67] Installing "sample-templates/sample pipeline"
I1112 14:28:06.634946    1428 interface.go:41] Finished installing "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/dancer quickstart" "sample-templates/django quickstart" "sample-templates/rails quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/sample pipeline" "sample-templates/mongodb" "sample-templates/mysql" "sample-templates/nodejs quickstart" "sample-templates/mariadb"
I1112 14:28:28.673589    1428 interface.go:41] Finished installing "sample-templates" "centos-imagestreams" "openshift-router" "openshift-web-console-operator" "openshift-image-registry" "persistent-volumes"
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
https://172.31.36.215:8443

You are logged in as:
User:     developer
Password: <any value>

To login as administrator:
oc login -u system:admin

7. Log in to the UI

https://172.31.36.215:8443/console
system/admin

8. Administrator access

#Log in
sudo ./oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig

#Check status
sudo ./oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                                READY     STATUS      RESTARTS   AGE
pod/docker-registry-1-rvv44         1/1       Running     0          29m
pod/persistent-volume-setup-88c5t   0/1       Completed   0          30m
pod/router-1-x527s                  1/1       Running     0          29m

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/docker-registry-1   1         1         1         29m
replicationcontroller/router-1            1         1         1         29m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
service/docker-registry   ClusterIP   172.30.1.1      <none>        5000/TCP                  30m
service/kubernetes        ClusterIP   172.30.0.1      <none>        443/TCP                   31m
service/router            ClusterIP   172.30.190.49   <none>        80/TCP,443/TCP,1936/TCP   29m

NAME                                DESIRED   SUCCESSFUL   AGE
job.batch/persistent-volume-setup   1         1            30m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/docker-registry   1          1         1         config
deploymentconfig.apps.openshift.io/router            1          1         1         config

#List projects
sudo ./oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                            DISPLAY NAME   STATUS
default                                        Active
kube-dns                                       Active
kube-proxy                                     Active
kube-public                                    Active
kube-system                                    Active
myproject                       My Project     Active
openshift                                      Active
openshift-apiserver                            Active
openshift-controller-manager                   Active
openshift-core-operators                       Active
openshift-infra                                Active
openshift-node                                 Active
openshift-service-cert-signer                  Active
openshift-web-console                          Active

9. List the containers

sudo docker ps -a
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED              STATUS                        PORTS               NAMES
c347c56d2a7c        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   14 seconds ago       Up 13 seconds                                     k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_1
7a079835fd87        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   16 seconds ago       Up 15 seconds                                     k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_1
33edea80b969        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   18 seconds ago       Up 17 seconds                                     k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_1
c5c4b4a30927        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   20 seconds ago       Up 19 seconds                                     k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_1
9bf5456b9a97        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   22 seconds ago       Up 21 seconds                                     k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_1
66f27274adb4        openshift/nodejs-010-centos7@sha256:bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332                             "container-entrypo..."   About a minute ago   Exited (0) 43 seconds ago                         s2i_openshift_nodejs_010_centos7_sha256_bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332_eaab5bb0
e4c52a772a9f        be30b6cce5fa                                                                                                                     "/usr/bin/origin-w..."   About a minute ago   Exited (137) 2 seconds ago                        k8s_webconsole_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
a778ec40561e        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           About a minute ago   Exited (0) 2 seconds ago                          k8s_POD_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
e15062eac455        docker.io/openshift/origin-docker-registry@sha256:5c2fe22619668face238d1ba8602a95b3102b81e667b54ba2888f1f0ee261ffd               "/bin/sh -c '/usr/..."   6 minutes ago        Up 6 minutes                                      k8s_registry_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
861c4c49572a        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
c6ebd5ad0bba        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   7 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
cddd662f7d86        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
bdca70a2b67f        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   7 minutes ago        Exited (255) 23 seconds ago                       k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
9d671211845b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
8561b5a28a35        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-proxy_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
a240a1ac6457        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-dns_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
2233dff0c201        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   8 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
b622c82b5ef3        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
9303e90d164c        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
02f9425b8c7b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
f279a265ee20        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "/bin/bash -c '#!/..."   9 minutes ago        Up 9 minutes                                      k8s_etcd_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
7376f93cadce        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   9 minutes ago        Exited (1) 24 seconds ago                         k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
0d250ebb56eb        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   9 minutes ago        Exited (255) 23 seconds ago                       k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
78f161557ef8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
adc1aa2a86d8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
62e223931bbc        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
9b30e2734938        openshift/origin-node:v3.11                                                                                                      "hyperkube kubelet..."   9 minutes ago        Up 9 minutes                                      origin

10. Cleanup

#Stop the cluster
sudo ./oc cluster down
#Remove the configuration
sudo rm -rf openshift.local.clusterup