Setting Up a Kubernetes Environment with OpenShift 03

Common commands:

# Help
oc help

# Diagnostics
oc adm diagnostics

# Modify policy
oc adm policy

# Start the private registry
oc adm registry
oc adm registry --config=admin.kubeconfig --service-account=registry
oc adm registry --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig  --service-account=registry

# Start the router
oc adm router

# Start/stop the cluster
oc cluster up
oc cluster up --public-hostname=172.31.36.215
oc cluster down

# Delete resources
oc delete all --selector app=ruby-ex
oc delete services/ruby-ex

# Describe resources
oc describe builds/ruby-ex-1
oc describe pod/deployment-example-1-deploy
oc describe secret registry-token-q8dfm

# Expose a service
oc expose svc/nodejs-ex
oc expose svc/ruby-ex

# Get information
oc get
oc get all
oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get all --selector app=registry
oc get all --selector app=ruby-ex
oc get builds
oc get events
oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get secrets

# Log in
oc login
oc login -u developer
oc login -u system:admin
oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc login https://127.0.0.1:8443 -u developer
oc login https://172.31.36.215:8443 --token=tMgeqgvyGkpxhEH-MhP2AdChbTXCDDHzD-27JvZPfzQ
oc login https://172.31.36.215:8443 -u system:admin

# View logs
oc logs -f bc/nodejs-ex
oc logs -f bc/ruby-ex

# Deploy an app
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc new-app deployment-example:latest
oc new-app https://github.com/sclorg/nodejs-ex -l name=myapp
oc new-app openshift/deployment-example
oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git

# Create a new project
oc new-project test

#rollout
oc rollout latest docker-registry

# View status
oc status
oc status --suggest
oc status -v

# Tag an image
oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest

# Show version
oc version

# Show the logged-in user
oc whoami

Setting Up a Kubernetes Environment with OpenShift 02

1. Deploy an application from an image

# Log in: username developer, any password
./oc login -u developer
./oc whoami

# Deploy the application
# Method 1
./oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest
# Method 2
./oc tag docker.io/openshift/deployment-example:v1 deployment-example:latest
./oc new-app deployment-example:latest
./oc status
curl http://172.30.192.169:8080

# Update the application
# Method 1
./oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
# Method 2
oc tag docker.io/openshift/deployment-example:v2 deployment-example:latest
curl http://172.30.192.169:8080

# Check the result
./oc get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/deployment-example-3-4wk9x   1/1       Running   0          3m

NAME                                         DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1   0         0         0         18m
replicationcontroller/deployment-example-2   0         0         0         15m
replicationcontroller/deployment-example-3   1         1         1         4m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.82.203   <none>        8080/TCP   18m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   3          1         1         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                    TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   172.30.1.1:5000/myproject/deployment-example   latest    4 minutes ago

2. Build an image and deploy an application

# Log in
./oc login https://IP:8443 -u developer

# Deploy the application
./oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git
--> Found Docker image b3b1ce7 (2 years old) from Docker Hub for "openshift/nodejs-010-centos7"

Node.js 0.10
------------
Platform for building and running Node.js 0.10 applications

Tags: builder, nodejs, nodejs010

* An image stream tag will be created as "nodejs-010-centos7:latest" that will track the source image
* A source build using source code from https://github.com/sclorg/nodejs-ex.git will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Every time "nodejs-010-centos7:latest" changes a new build will be triggered
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"

--> Creating resources ...
imagestream.image.openshift.io "nodejs-010-centos7" created
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.

# Expose the service
./oc expose svc/nodejs-ex
route.route.openshift.io/nodejs-ex exposed

# Check the status
./oc status
In project My Project (myproject) on server https://IP:8443

http://nodejs-ex-myproject.IP.nip.io to pod port 8080-tcp (svc/nodejs-ex)
dc/nodejs-ex deploys istag/nodejs-ex:latest <-
bc/nodejs-ex source builds https://github.com/sclorg/nodejs-ex.git on istag/nodejs-010-centos7:latest
build #1 pending for about a minute
deployment #1 waiting on image or update

2 infos identified, use 'oc status --suggest' to see details.

# Access the service
curl  http://nodejs-ex-myproject.127.0.0.1.nip.io

Setting Up a Kubernetes Environment with OpenShift 01

1. Environment preparation
Operating system: CentOS 7.7

2. Install the required software

sudo yum update
sudo yum install curl telnet git docker

3. Modify the Docker configuration to allow the private registry

sudo vi /etc/docker/daemon.json
# Contents:
{
  "insecure-registries" : ["172.30.0.0/16"]
}

4. Start Docker

sudo systemctl start docker
sudo systemctl status docker
sudo systemctl enable docker

5. Download the latest OpenShift Origin release

https://github.com/openshift/origin/releases/

wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -xf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz

6. Start the cluster

# Change into the directory
cd openshift

# --public-hostname is the address other nodes use to reach this host, and also the default web console address
sudo ./oc cluster up --public-hostname=172.31.36.215
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I1112 14:25:54.907027    1428 config.go:40] Running "create-master-config"
I1112 14:25:57.915599    1428 config.go:46] Running "create-node-config"
I1112 14:25:59.062042    1428 flags.go:30] Running "create-kubelet-flags"
I1112 14:25:59.521012    1428 run_kubelet.go:49] Running "start-kubelet"
I1112 14:25:59.721185    1428 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1112 14:26:21.735024    1428 interface.go:26] Installing "kube-proxy" ...
I1112 14:26:21.735053    1428 interface.go:26] Installing "kube-dns" ...
I1112 14:26:21.735061    1428 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1112 14:26:21.735068    1428 interface.go:26] Installing "openshift-apiserver" ...
I1112 14:26:21.735089    1428 apply_template.go:81] Installing "kube-proxy"
I1112 14:26:21.735098    1428 apply_template.go:81] Installing "openshift-apiserver"
I1112 14:26:21.735344    1428 apply_template.go:81] Installing "kube-dns"
I1112 14:26:21.736634    1428 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1112 14:26:25.755466    1428 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
I1112 14:27:47.998244    1428 run_self_hosted.go:242] openshift-apiserver available
I1112 14:27:47.998534    1428 interface.go:26] Installing "openshift-controller-manager" ...
I1112 14:27:47.998554    1428 apply_template.go:81] Installing "openshift-controller-manager"
I1112 14:27:51.521512    1428 interface.go:41] Finished installing "openshift-controller-manager"
Adding default OAuthClient redirect URIs ...
Adding sample-templates ...
Adding centos-imagestreams ...
Adding router ...
Adding web-console ...
Adding registry ...
Adding persistent-volumes ...
I1112 14:27:51.544935    1428 interface.go:26] Installing "sample-templates" ...
I1112 14:27:51.544947    1428 interface.go:26] Installing "centos-imagestreams" ...
I1112 14:27:51.544955    1428 interface.go:26] Installing "openshift-router" ...
I1112 14:27:51.544963    1428 interface.go:26] Installing "openshift-web-console-operator" ...
I1112 14:27:51.544973    1428 interface.go:26] Installing "openshift-image-registry" ...
I1112 14:27:51.544980    1428 interface.go:26] Installing "persistent-volumes" ...
I1112 14:27:51.545540    1428 interface.go:26] Installing "sample-templates/postgresql" ...
I1112 14:27:51.545551    1428 interface.go:26] Installing "sample-templates/cakephp quickstart" ...
I1112 14:27:51.545559    1428 interface.go:26] Installing "sample-templates/dancer quickstart" ...
I1112 14:27:51.545567    1428 interface.go:26] Installing "sample-templates/django quickstart" ...
I1112 14:27:51.545574    1428 interface.go:26] Installing "sample-templates/rails quickstart" ...
I1112 14:27:51.545580    1428 interface.go:26] Installing "sample-templates/jenkins pipeline ephemeral" ...
I1112 14:27:51.545587    1428 interface.go:26] Installing "sample-templates/sample pipeline" ...
I1112 14:27:51.545595    1428 interface.go:26] Installing "sample-templates/mongodb" ...
I1112 14:27:51.545602    1428 interface.go:26] Installing "sample-templates/mysql" ...
I1112 14:27:51.545609    1428 interface.go:26] Installing "sample-templates/nodejs quickstart" ...
I1112 14:27:51.545616    1428 interface.go:26] Installing "sample-templates/mariadb" ...
I1112 14:27:51.545665    1428 apply_list.go:67] Installing "sample-templates/mariadb"
I1112 14:27:51.545775    1428 apply_list.go:67] Installing "centos-imagestreams"
I1112 14:27:51.552201    1428 apply_list.go:67] Installing "sample-templates/rails quickstart"
I1112 14:27:51.552721    1428 apply_template.go:81] Installing "openshift-web-console-operator"
I1112 14:27:51.553283    1428 apply_list.go:67] Installing "sample-templates/postgresql"
I1112 14:27:51.553420    1428 apply_list.go:67] Installing "sample-templates/cakephp quickstart"
I1112 14:27:51.553539    1428 apply_list.go:67] Installing "sample-templates/dancer quickstart"
I1112 14:27:51.553653    1428 apply_list.go:67] Installing "sample-templates/django quickstart"
I1112 14:27:51.553900    1428 apply_list.go:67] Installing "sample-templates/mysql"
I1112 14:27:51.554028    1428 apply_list.go:67] Installing "sample-templates/jenkins pipeline ephemeral"
I1112 14:27:51.554359    1428 apply_list.go:67] Installing "sample-templates/nodejs quickstart"
I1112 14:27:51.554567    1428 apply_list.go:67] Installing "sample-templates/mongodb"
I1112 14:27:51.554692    1428 apply_list.go:67] Installing "sample-templates/sample pipeline"
I1112 14:28:06.634946    1428 interface.go:41] Finished installing "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/dancer quickstart" "sample-templates/django quickstart" "sample-templates/rails quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/sample pipeline" "sample-templates/mongodb" "sample-templates/mysql" "sample-templates/nodejs quickstart" "sample-templates/mariadb"
I1112 14:28:28.673589    1428 interface.go:41] Finished installing "sample-templates" "centos-imagestreams" "openshift-router" "openshift-web-console-operator" "openshift-image-registry" "persistent-volumes"
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
https://172.31.36.215:8443

You are logged in as:
User:     developer
Password: <any value>

To login as administrator:
oc login -u system:admin

7. Log in to the web UI

https://172.31.36.215:8443/console
system/admin

8. Administrator access

# Log in
sudo ./oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig

# Check the status
sudo ./oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                                READY     STATUS      RESTARTS   AGE
pod/docker-registry-1-rvv44         1/1       Running     0          29m
pod/persistent-volume-setup-88c5t   0/1       Completed   0          30m
pod/router-1-x527s                  1/1       Running     0          29m

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/docker-registry-1   1         1         1         29m
replicationcontroller/router-1            1         1         1         29m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
service/docker-registry   ClusterIP   172.30.1.1      <none>        5000/TCP                  30m
service/kubernetes        ClusterIP   172.30.0.1      <none>        443/TCP                   31m
service/router            ClusterIP   172.30.190.49   <none>        80/TCP,443/TCP,1936/TCP   29m

NAME                                DESIRED   SUCCESSFUL   AGE
job.batch/persistent-volume-setup   1         1            30m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/docker-registry   1          1         1         config
deploymentconfig.apps.openshift.io/router            1          1         1         config

# List the projects
sudo ./oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                            DISPLAY NAME   STATUS
default                                        Active
kube-dns                                       Active
kube-proxy                                     Active
kube-public                                    Active
kube-system                                    Active
myproject                       My Project     Active
openshift                                      Active
openshift-apiserver                            Active
openshift-controller-manager                   Active
openshift-core-operators                       Active
openshift-infra                                Active
openshift-node                                 Active
openshift-service-cert-signer                  Active
openshift-web-console                          Active

9. List the containers

sudo docker ps -a
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED              STATUS                        PORTS               NAMES
c347c56d2a7c        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   14 seconds ago       Up 13 seconds                                     k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_1
7a079835fd87        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   16 seconds ago       Up 15 seconds                                     k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_1
33edea80b969        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   18 seconds ago       Up 17 seconds                                     k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_1
c5c4b4a30927        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   20 seconds ago       Up 19 seconds                                     k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_1
9bf5456b9a97        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   22 seconds ago       Up 21 seconds                                     k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_1
66f27274adb4        openshift/nodejs-010-centos7@sha256:bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332                             "container-entrypo..."   About a minute ago   Exited (0) 43 seconds ago                         s2i_openshift_nodejs_010_centos7_sha256_bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332_eaab5bb0
e4c52a772a9f        be30b6cce5fa                                                                                                                     "/usr/bin/origin-w..."   About a minute ago   Exited (137) 2 seconds ago                        k8s_webconsole_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
a778ec40561e        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           About a minute ago   Exited (0) 2 seconds ago                          k8s_POD_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
e15062eac455        docker.io/openshift/origin-docker-registry@sha256:5c2fe22619668face238d1ba8602a95b3102b81e667b54ba2888f1f0ee261ffd               "/bin/sh -c '/usr/..."   6 minutes ago        Up 6 minutes                                      k8s_registry_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
861c4c49572a        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
c6ebd5ad0bba        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   7 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
cddd662f7d86        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
bdca70a2b67f        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   7 minutes ago        Exited (255) 23 seconds ago                       k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
9d671211845b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
8561b5a28a35        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-proxy_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
a240a1ac6457        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-dns_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
2233dff0c201        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   8 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
b622c82b5ef3        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
9303e90d164c        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
02f9425b8c7b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
f279a265ee20        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "/bin/bash -c '#!/..."   9 minutes ago        Up 9 minutes                                      k8s_etcd_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
7376f93cadce        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   9 minutes ago        Exited (1) 24 seconds ago                         k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
0d250ebb56eb        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   9 minutes ago        Exited (255) 23 seconds ago                       k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
78f161557ef8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
adc1aa2a86d8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
62e223931bbc        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
9b30e2734938        openshift/origin-node:v3.11                                                                                                      "hyperkube kubelet..."   9 minutes ago        Up 9 minutes                                      origin

10. Cleanup

# Stop the cluster
sudo ./oc cluster down
# Remove the configuration
sudo rm -rf openshift.local.clusterup

Setting Up a Kubernetes Environment with Rancher

1. Install Docker

sudo apt-get update
sudo apt-get install docker.io

2. Run Rancher

mkdir /home/ubuntu/rancher
sudo docker run -d -v /home/ubuntu/rancher:/var/lib/rancher/ --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable

3. Log in

http://OUTER_IP
The default username is admin; you must set a password on first login.
For the server URL, the internal address is usually used, e.g. https://172.31.33.84

4. Create a cluster with the wizard; it generates the corresponding commands
4.1. Following the instructions, run controlplane and worker on node1

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --controlplane --worker

4.2. Following the instructions, run etcd and worker on node2

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --etcd --worker

5. The Rancher UI shows that both nodes joined successfully

6. Configure the firewall so Rancher traffic can get through
I opened ports 2379 and 10250 (the other K8S ports had already been configured); see the sketch below.
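
The exact commands depend on the firewall in use; a minimal sketch, assuming Ubuntu's ufw is active (any cloud security-group rules must allow the same ports):

# etcd client traffic
sudo ufw allow 2379/tcp
# kubelet API
sudo ufw allow 10250/tcp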

7. Deploy an APP

Much more convenient than deploying K8S directly!

Setting Up a Kubernetes Environment 05

1. Disable swap

sudo swapoff -a
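
Note: swapoff -a only lasts until the next reboot. A sketch for keeping swap off permanently, assuming swap is listed in /etc/fstab (verify the file afterwards):

sudo sed -i '/ swap / s/^/#/' /etc/fstab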

2. Enable docker.service at boot

sudo systemctl enable docker.service

3. Switch the cgroup driver to systemd

# See https://kubernetes.io/docs/setup/cri/

sudo vi /etc/docker/daemon.json
# Contents:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
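
The new cgroup driver only takes effect after Docker restarts; to apply and verify (a sketch, assuming systemd):

sudo systemctl daemon-reload
sudo systemctl restart docker
# Should now report "Cgroup Driver: systemd"
sudo docker info | grep -i cgroup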

4. Some useful commands

kubeadm init
kubeadm reset

kubectl api-versions

kubectl config view

kubectl cluster-info
kubectl cluster-info dump

kubectl get nodes
kubectl get nodes -o wide
kubectl describe node mynode

kubectl get rc,namespace

kubectl get pods
kubectl get pods --all-namespaces -o wide
kubectl describe pod mypod

kubectl get deployments
kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
kubectl describe deployment kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl expose deployment hikub01 --type=LoadBalancer

kubectl get services
kubectl get service -n kube-system
kubectl describe services kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl proxy
kubectl proxy --address='172.172.172.101' --accept-hosts='.*' --accept-paths='.*'

kubectl run hikub01 --image=myserver:1.0.0 --port=8080
kubectl create -f  myserver-deployment.yaml
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

kubectl delete deployment mydeployment
kubectl delete node mynode
kubectl delete pod mypod

kubectl get events --namespace=kube-system

kubectl taint node mynode node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl edit service myservice
kubectl edit service kubernetes-dashboard -n kube-system

kubectl get secret -n kube-system | grep neohope | awk '{print $1}'

Setting Up a Kubernetes Environment 04

In this section, we deploy an application using a YAML file.

1. Write the file
vi hikub01-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hikub01-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myserver
        image: myserver:1.0.0
        ports:
        - containerPort: 8080

2. Create

kubectl create -f hikub01-deployment.yaml

3. Expose the port

kubectl expose deployment hikub01-deployment --type=LoadBalancer

4. Test

# View pods
kubectl get pods -o wide

# View deployments
kubectl get deployments -o wide

# View services
kubectl get services -o wide

# Based on the output, access the service in a browser or with wget/curl
curl http://ip:port

5. Cleanup

kubectl delete -f hikub01-deployment.yaml

Setting Up a Kubernetes Environment 03

In this section, we try deploying some services.

1. First, we prepare our own Docker image
1.1. Prepare the files
vi Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

vi myserver.js

var http = require('http');

var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

1.2. Test myserver.js

nodejs myserver.js

1.3. Create the image

# Build the image
sudo docker build -t myserver:1.0.0 .

1.4. Test the container
sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0
curl localhost:8080

2. Export the image

docker images
sudo docker save 0fb19de44f41 -o myserver.tar

3. Import into the other two nodes

scp myserver.tar ubuntu@node01:/home/ubuntu
ssh node01
sudo docker load -i myserver.tar
sudo docker tag 0fb19de44f41 myserver:1.0.0

4. Deploy the service with kubectl

# Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

# Expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

# View pods
kubectl get pods -o wide

# View deployments
kubectl get deployments -o wide

# View services
kubectl get services -o wide

# Based on the output, access the service in a browser or with wget/curl
curl http://ip:port

5. Cleanup

# Delete the service
kubectl delete service hikub01

# Delete the deployment
kubectl delete deployment hikub01

# Delete the pod
kubectl delete pod hikub01

Setting Up a Kubernetes Environment 02

In the previous section we set up the environment; in this section we deploy some k8s add-ons. The official add-on list is here:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

This time we deploy two add-ons: Calico and Dashboard.

1. Since resources are limited, allow scheduling on the master as well

kubectl taint nodes --all node-role.kubernetes.io/master-

2. Deploy Calico
2.1. Deploy

kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

2.2. Watch the rollout and wait for it to succeed

watch kubectl get pods --all-namespaces

3. Deploy the Dashboard
3.1. Deploy

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

3.2. Watch the rollout and wait for it to succeed

watch kubectl get pods --all-namespaces

3.3. Start the proxy

kubectl proxy

3.4. The login page appears in the browser

http://IP:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

There is a pitfall here: the dashboard requires HTTPS for login, but the proxy serves plain HTTP, so login only succeeds when the IP is localhost. I wasted quite some time on this.
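
One workaround, a sketch that is not part of the original steps (it assumes SSH access to the master, here as user ubuntu), is to tunnel port 8001 so the browser genuinely connects to localhost:

# Run on your workstation, then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
ssh -L 8001:127.0.0.1:8001 ubuntu@MASTER_IP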

3.5. Create a user

vi neohope-account.yaml

# File contents
apiVersion: v1
kind: ServiceAccount
metadata:
  name: neohope
  namespace: kube-system

kubectl create -f neohope-account.yaml

3.6. Configure the user's role

vi neohope-role.yaml

# File contents
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: neohope
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: neohope
  namespace: kube-system

kubectl create -f  neohope-role.yaml

3.7. Get the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep neohope | awk '{print $1}')
Name:         neohope-token-2khbb
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: neohope
kubernetes.io/service-account.uid: fc842f0e-0ef4-4c41-9f30-8a5409c866c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImtIRjFiZnI5V3NiRlpZQXpzUk9DanA4cHBCQnFOcFNrek5xTjltUGRLeDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuZW9ob3BlLXRva2VuLTJraGJiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im5lb2hvcGUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzg0MmYwZS0wZWY0LTRjNDEtOWYzMC04YTU0MDljODY2YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06bmVvaG9wZSJ9.Zsk4C3hs58JmLa0rRTdfoQKlY524kMtnlzEHxgMryv7u9kPHS51BA0xiVC1nMLDcbMp1U3YHlnz0-IJkFzVeaboq0qEFea56nnqASMSEtCB1c7IE52zip-4tDWdZ-jYwf7KN5Gwq_4ZUqa4gRf1znVH7nlsxTpaoQ_-yjJsQpqDyUb1BLgGrUGcWOF2hGMHrNPHbZfLyhsPp_ijOvmRviAq57nyrGYiVG9ZiMoGV_1Y5rvn2-L0BHCdgZjSzK6nlfvoMlpnqhQXkrxE0d9EJbeukfx5sOF3xUPkQx-6dKm3QrkqBNXInbDxJXJbj27JalGarpRDA9tsPg1mUqAb-7g

3.8. When logging in from localhost, the token above is all you need

http://IP:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

3.9. When not logging in from localhost, there are three options

#A. Expose a port
#B. Proxy the access through the API server
#https://IP:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
#C. Access through nginx or a similar reverse proxy, via an add-on
#To keep things simple, option A is used here

3.10. Expose the port

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
Change ClusterIP to NodePort, then save; a non-interactive alternative is sketched below.
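
A non-interactive equivalent, sketched with kubectl patch (same effect as the interactive edit):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'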

3.11. Check the service

kubectl get service -n kubernetes-dashboard -o wide
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.102.175.21    <none>        8000/TCP        17h   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.102.129.248   <none>        443:31766/TCP   17h   k8s-app=kubernetes-dashboard
# Port 31766 can be found here

kubectl get pod -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-566cddb686-vkxvx   1/1     Running   0          17h   192.168.201.133   master   <none>           <none>
kubernetes-dashboard-7b5bf5d559-m6xt7        1/1     Running   0          17h   192.168.201.132   master   <none>           <none>
# The host can be found here

3.12. Now the service can be accessed directly via the master's address

https://MASTER_IP:31766

Ignore all HTTPS security warnings.
Log in with the token.

Setting Up a Kubernetes Environment 01

1. Hardware and network environment
This setup uses cloud VMs to build the K8S environment.
The minimum VM spec is 2 cores and 4 GB of RAM.

Node name   Internal address
master      172.16.172.101
node01      172.16.172.102
node02      172.16.172.103
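
If these node names do not resolve through DNS, a sketch of /etc/hosts entries for all three nodes (assuming the addresses above):

172.16.172.101 master
172.16.172.102 node01
172.16.172.103 node02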

2. Configure the k8s repository on all three nodes

sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main

3. Install the required software on all three nodes

sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl

4. Initialize the master node
4.1. Optionally pre-pull the images

kubeadm config images pull

4.2. Run kubeadm init

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.525105 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1zdpa5.vmcsacag4wj3a0gv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
--discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

4.3. Configure kubectl access on the master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join the two worker nodes

sudo kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
>    --discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Check node status on the master (nodes stay NotReady until a pod network add-on such as Calico is installed; see section 02)

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   15m   v1.16.2
node01   NotReady   <none>   10m   v1.16.2
node02   NotReady   <none>   10m   v1.16.2

Setting Up a Kubernetes Environment with Minikube 02_MacOS

The macOS setup method is essentially:

minikube(vm-driver=xhyve)

1. First install the Docker client and xhyve

# Install the Docker client
curl -Lo docker.tgz https://download.docker.com/mac/static/stable/x86_64/docker-17.09.0-ce.tgz
# Extract docker.tgz to get the docker binary (I used a GUI tool for this)
chmod +x docker
sudo mv docker /usr/local/bin/

# Install xhyve
#https://brew.sh/
brew install docker-machine-driver-xhyve

2. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.8.4/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

3. Start minikube

minikube version

# Direct connection
minikube start --vm-driver=xhyve

# Via a proxy
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

4. Prepare your own Docker image
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Create the image

# Point the shell at minikube's Docker environment
eval $(minikube docker-env)

# Build the image; it is built inside the minikube VM
docker build -t myserver:1.0.0 .

# Run the container in the minikube VM (no sudo, or the DOCKER_HOST environment set above is lost)
docker run -itd --name=myserver -p 8080:8080 myserver:1.0.0

# Test
wget ip:8080

5. Deploy the service with kubectl

# Switch the context
kubectl config use-context minikube

# Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

# View pods
kubectl get pods

# View deployments
kubectl get deployments

# Expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

# View services
kubectl get services

# View service details
minikube service hikub01

# Based on the output, access the service in a browser or with wget

6. View the dashboard

minikube dashboard

7. Cleanup

# Leave minikube's Docker environment
eval $(minikube docker-env -u)

# Delete the service
kubectl delete service hikub01

# Delete the deployment
kubectl delete deployment hikub01

# Stop minikube
minikube stop

# Remove everything that was downloaded
minikube delete

Common problems:
1. If pods are never created, the required images cannot be pulled from Google and Docker needs a proxy.

# Check pod status
kubectl get pods

# Test connectivity
curl --proxy "" https://cloud.google.com/container-registry/

2. There are two ways to solve this

2.1. Use a proxy

# Test whether the proxy works
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

# If the proxy works, specify it when starting minikube
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port
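
This macOS setup passes the proxy into the minikube VM via the --docker-env flags above. On a Linux host where the Docker daemon itself needs the proxy, the usual mechanism is a systemd drop-in; a sketch, substituting the real proxy address:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://ip:port"
Environment="HTTPS_PROXY=http://ip:port"

# Reload and restart to apply
sudo systemctl daemon-reload
sudo systemctl restart docker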

2.2. Use a domestic mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0
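
After pulling from the mirror, the image normally has to be re-tagged to the name the kubelet expects; a sketch, assuming this cluster generation looks for gcr.io/google_containers/pause-amd64:3.0:

sudo docker tag registry.aliyuncs.com/archon/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0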