About neohope

Still working hard, never thought about giving up...

Costly Operations Incidents of 2020

December 2020: Google Cloud global service outage
What happened: On December 14, multiple core Google services (including YouTube, Gmail, Google Drive, and Google Search) went down worldwide for about an hour. It was Google's third global outage in roughly five months.
Root cause: failure of an internal infrastructure component (believed to be the authentication or load-balancing service).
Impact: hundreds of millions of users worldwide lost access to key productivity and entertainment services, renewing enterprise concerns about the reliability of the major public-cloud providers.

November 2020: Amazon AWS US East region outage
What happened: a software error in the AWS Kinesis data-streaming service triggered cascading failures, taking down a large number of websites and applications that depend on it for more than 4 hours.
Root cause: an operational failure triggered by a software defect.
Impact: business interruptions for many companies, damage to AWS's reputation, and customer claims for compensation.

September 2020: Tesla global outage
What happened: starting at 11 a.m. on September 23, Tesla's systems suffered a global outage lasting about 4 hours. Owners in many countries could not connect to their vehicles through the mobile app, solar and storage battery customers could not monitor their systems, some owners were locked out of their cars, and one was stuck at a charging station for nearly two hours.
Root cause: a system-level failure.
Impact: no specific financial losses were disclosed, but the outage seriously disrupted owners' normal use of their vehicles and dealt a visible blow to the brand's image and user trust.

August 2020: CenturyLink configuration error disrupts the global internet
What happened: a data-center misconfiguration at US internet service provider CenturyLink set off a chain reaction; global internet traffic dropped by 3.5%, affecting services including Cloudflare, AWS, and Garmin. The fault was resolved after 7 hours.
Root cause: a BGP routing misconfiguration.
Impact: one of the largest internet outages on record, with a huge number of services unreachable worldwide.

August 2020: Zoom video-conferencing outage
What happened: On August 24, at the height of global remote work and online teaching, Zoom suffered a partial outage that left users unable to access scheduled meetings and live video conferences for 3 hours.
Root cause: Zoom only said on its status page that it had "identified and resolved the issue", without clarifying whether it was a code defect or a capacity-planning problem.
Impact: failing at the moment of peak user dependence, the outage severely disrupted online meetings for companies, classes for schools, and business negotiations worldwide.

June 2020: T-Mobile US nationwide outage
What happened: On June 15, T-Mobile's US network suffered a nationwide failure lasting 13 hours, the longest and most widespread outage in the company's history, leaving millions of users unable to make voice calls or send text messages.
Root cause: a botched network configuration change. The trigger was a third-party fiber circuit failure in the Southeast, but T-Mobile's own network redundancy failed to kick in, and a subsequent load-balancing configuration problem overloaded the IP pools and eventually brought down the entire network.
Impact: voice and SMS service was disrupted across the US; coming in the middle of the pandemic, it seriously affected users' emergency communications and daily lives and damaged the company's reputation.

May 2020: large-scale AWS outage
What happened: AWS suffered a severe failure that affected Amazon.com and many other websites and services.
Root cause: a routing-table configuration error; an incorrect routing table pushed during a backbone network update created a traffic black hole.
Impact: a large number of websites and apps worldwide were unreachable for hours, hitting e-commerce and online services hard.

April 2020: widespread Huawei Cloud outage
What happened: On April 10, the Huawei Cloud login and management console became unreachable, affecting users in Beijing, Guangzhou, Shanghai, and elsewhere. The outage lasted about 3 hours, and customer workloads gradually recovered after the fault was fixed.
Root cause: some hosts behaved abnormally; the technical details were not disclosed.
Impact: several companies could not keep their services running, hurting business continuity.

April 2020: GitHub outage
What happened: On April 21, several services of Microsoft-owned GitHub were inaccessible for an hour and a half, one of several outages that month.
Root cause: not publicly disclosed.
Impact: developers were unable to store, commit, or collaborate on source code, disrupting project work.

March 2020: Microsoft Azure US East data center outage
What happened: On March 3, Microsoft's US East data center was down for 6 hours and customers in the northern US could not use Azure cloud services. Service was restored by resetting the cooling-system controllers and rebooting hardware.
Root cause: a cooling-system failure; the building automation controls malfunctioned, reducing airflow, and the resulting temperature spike in the data center degraded equipment performance.
Impact: compute and storage instances were unreachable, disrupting the businesses that depended on cloud services in that region.

March 2020: Microsoft Teams outage
What happened: On March 16, as the COVID-19 pandemic drove a flood of new users onto Teams, the service went down in Europe for 2 hours.
Root cause: insufficient service capacity to absorb the sudden surge in users.
Impact: a significant blow to companies relying on remote work, disrupting normal office routines.

March 2020: Google Cloud Platform outage
What happened: On March 26, several Google cloud services were inaccessible, with users repeatedly hitting 500 and 502 error codes; users on the US East Coast were affected most severely.
Root cause: failure of an infrastructure component.
Impact: large numbers of users could not use Google cloud services, stalling their work.

March 2020: Tencent Classroom (腾讯课堂) crash
What happened: On March 4, Tencent Classroom users could not log in after some machines failed during an overnight system upgrade; service was restored at 8:30 that morning after emergency repairs.
Root cause: a mishandled system upgrade in which some machines failed mid-upgrade.
Impact: online classes were disrupted, delaying teaching schedules for teachers and students.

February 2020: malicious database deletion by a Weimob employee
What happened: On February 23, a core member of the operations team in Weimob's R&D center, surnamed He, logged into an internal jump server through his personal VPN and deleted all data on the production servers within 4 minutes, leaving more than 3 million users unable to use Weimob's SaaS products. The outage lasted 8 days and 14 hours.
Root cause: the employee, troubled by personal and psychological problems, deliberately sabotaged the systems, abusing his operations privileges to run destructive delete operations. The data was eventually recovered with help from Tencent Cloud.
Impact: roughly 2.8 billion yuan wiped off the company's market value, an estimated 150 million yuan in compensation, and direct economic losses of more than 22.6 million yuan, including data-recovery costs and merchant compensation.
Side note: the engineer reportedly had online-loan debts he could not repay and had been drinking heavily that day; he was sentenced to 6 years in prison.

Git06: Failure When Transferring a Large Repository

1. I recently took over a project, and cloning the code kept failing with this error:

git clone https://e.coding.net/xxx/xxx.git
error: RPC failed; curl 18 transfer closed with outstanding read data remaining

2. One suggestion was to enlarge the HTTP post buffer, but it did not help:

#524288000 is in bytes, i.e. 500MB
git config --global http.postBuffer 524288000

#1GB
git config --global http.postBuffer 1048576000

3. In the end, switching the download method from HTTPS to SSH fixed it:

ssh-keygen -t rsa -C "neohope@yahoo.com"
GIT_SSH_COMMAND="ssh -i /PATH_TO_ISA/xxx.rsa" git clone git@e.coding.net:xxx/xxx.git
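
Rather than passing GIT_SSH_COMMAND on every command, the key can also be configured once in ~/.ssh/config. A minimal sketch (the Host entry mirrors the repository above; the key path is the same placeholder and should be adjusted to your own setup):

#~/.ssh/config
Host e.coding.net
    HostName e.coding.net
    User git
    IdentityFile /PATH_TO_ISA/xxx.rsa

#afterwards a plain clone over SSH works
git clone git@e.coding.net:xxx/xxx.git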

What Is a Middle Platform (中台)?

In my view, a middle platform is not the sum of an enterprise's data, nor any specific technology; it should not even be treated as a purely IT concept.

A middle platform is, at its core, a way of organizing a company: its organizational structure and the way its business is arranged. It borrows the idea of code and module reuse from IT and applies it across every dimension of the business, so that capabilities are highly integrated and reused. IT is an important part of this, but only as a key enabling mechanism.

Building a middle platform therefore starts with changing the organizational structure and the way business is organized: breaking down the silos, allocating resources from an overall strategic and tactical view, keeping every business link interconnected, strengthening capability reuse, and avoiding duplicate construction so the company runs efficiently. This is, without question, a project that only the top leadership can drive.

The ultimate goal of a middle platform is the ability to package business capabilities. By packaging the common functions of each domain, new business areas and business models can be supported quickly. Projected onto technology, this means not reinventing the wheel: reuse the large body of existing services to assemble systems quickly and support new business.

And when we talk about building a middle platform, the end result should always be a business middle platform. The various technology "platforms" belong to the platform layer and exist to serve the business middle platform.

Setting Up a Kubernetes Environment with OpenShift, Part 03

Commonly used commands:

#help
oc help

#diagnostics
oc adm diagnostics

#modify policies
oc adm policy

#start a private registry
oc adm registry
oc adm registry --config=admin.kubeconfig --service-account=registry
oc adm registry --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig  --service-account=registry

#start the router
oc adm router

#start/stop the cluster
oc cluster up
oc cluster up --public-hostname=172.31.36.215
oc cluster down

#delete resources
oc delete all --selector app=ruby-ex
oc delete services/ruby-ex

#describe resources
oc describe builds/ruby-ex-1
oc describe pod/deployment-example-1-deploy
oc describe secret registry-token-q8dfm

#expose a service
oc expose svc/nodejs-ex
oc expose svc/ruby-ex

#get information
oc get
oc get all
oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get all --selector app=registry
oc get all --selector app=ruby-ex
oc get builds
oc get events
oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc get secrets

#log in
oc login
oc login -u developer
oc login -u system:admin
oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
oc login https://127.0.0.1:8443 -u developer
oc login https://172.31.36.215:8443 --token=tMgeqgvyGkpxhEH-MhP2AdChbTXCDDHzD-27JvZPfzQ
oc login https://172.31.36.215:8443 -u system:admin

#view logs
oc logs -f bc/nodejs-ex
oc logs -f bc/ruby-ex

#deploy an app
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc new-app deployment-example:latest
oc new-app https://github.com/sclorg/nodejs-ex -l name=myapp
oc new-app openshift/deployment-example
oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git

#create a new project
oc new-project test

#rollout
oc rollout latest docker-registry

#view status
oc status
oc status --suggest
oc status -v

#tag an image
oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest

#show the version
oc version

#current user
oc whoami

Setting Up a Kubernetes Environment with OpenShift, Part 02

1. Deploying an application from an image

#log in; the user name is developer, any password works
./oc login -u developer
./oc whoami

#deploy the application
#method 1
./oc tag --source=docker openshift/deployment-example:v1 deployment-example:latest
#method 2
./oc tag docker.io/openshift/deployment-example:v1 deployment-example:latest
./oc new-app deployment-example:latest
./oc status
curl http://172.30.192.169:8080

#update the application
#method 1
./oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
#method 2
oc tag docker.io/openshift/deployment-example:v2 deployment-example:latest
curl http://172.30.192.169:8080

#check the status
./oc get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/deployment-example-3-4wk9x   1/1       Running   0          3m

NAME                                         DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1   0         0         0         18m
replicationcontroller/deployment-example-2   0         0         0         15m
replicationcontroller/deployment-example-3   1         1         1         4m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.82.203   <none>        8080/TCP   18m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   3          1         1         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                    TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   172.30.1.1:5000/myproject/deployment-example   latest    4 minutes ago
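
When you are done with this example, the objects it created can be removed with a label selector, since oc new-app labels the resources it creates with app=<name>; a small cleanup sketch (assuming the default label is in place):

#remove everything created for the example deployment
./oc delete all --selector app=deployment-example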

2. Building an image and deploying an application

#log in
./oc login https://IP:8443 -u developer

#deploy the application
./oc new-app openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git
--> Found Docker image b3b1ce7 (2 years old) from Docker Hub for "openshift/nodejs-010-centos7"

Node.js 0.10
------------
Platform for building and running Node.js 0.10 applications

Tags: builder, nodejs, nodejs010

* An image stream tag will be created as "nodejs-010-centos7:latest" that will track the source image
* A source build using source code from https://github.com/sclorg/nodejs-ex.git will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Every time "nodejs-010-centos7:latest" changes a new build will be triggered
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"

--> Creating resources ...
imagestream.image.openshift.io "nodejs-010-centos7" created
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.

#expose the service
./oc expose svc/nodejs-ex
route.route.openshift.io/nodejs-ex exposed

#check the status
./oc status
In project My Project (myproject) on server https://IP:8443

http://nodejs-ex-myproject.IP.nip.io to pod port 8080-tcp (svc/nodejs-ex)
dc/nodejs-ex deploys istag/nodejs-ex:latest <-
bc/nodejs-ex source builds https://github.com/sclorg/nodejs-ex.git on istag/nodejs-010-centos7:latest
build #1 pending for about a minute
deployment #1 waiting on image or update

2 infos identified, use 'oc status --suggest' to see details.

#access the service
curl  http://nodejs-ex-myproject.127.0.0.1.nip.io
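
The curl above only succeeds once the source build has finished and the pod is running; as the new-app output suggests, the progress can be followed like this:

#follow the build, then check the pods and the route
./oc logs -f bc/nodejs-ex
./oc get pods
./oc get routes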

Setting Up a Kubernetes Environment with OpenShift, Part 01

1. Environment
Operating system: CentOS 7.7

2. Install the required software

sudo yum update
sudo yum install curl telnet git docker

3. Modify the Docker configuration to allow the private registry

sudo vi /etc/docker/daemon.json
#with the following content
{
  "insecure-registries" : [ "172.30.0.0/16" ]
}

4. Start Docker

sudo systemctl start docker
sudo systemctl status docker
sudo systemctl enable docker
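
With Docker running, it is worth confirming that the insecure-registry setting from step 3 was actually picked up; one quick check (the grep range is just a convenience):

#the 172.30.0.0/16 range should appear under "Insecure Registries"
sudo docker info | grep -A 3 "Insecure Registries"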

5. Download the latest release of OpenShift Origin

https://github.com/openshift/origin/releases/

wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -xf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
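
The archive unpacks into a versioned directory; here I assume it is renamed to openshift so the cd in the next step matches (check the actual directory name produced by tar before running this):

#rename the extracted directory for convenience (directory name is an assumption)
mv openshift-origin-server-v3.11.0-0cbc58b-linux-64bit openshift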

6. Start the cluster

#change into the extracted directory
cd openshift

#--public-hostname is the address other nodes use to reach this host, and also the default address of the web console
sudo ./oc cluster up --public-hostname=172.31.36.215
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I1112 14:25:54.907027    1428 config.go:40] Running "create-master-config"
I1112 14:25:57.915599    1428 config.go:46] Running "create-node-config"
I1112 14:25:59.062042    1428 flags.go:30] Running "create-kubelet-flags"
I1112 14:25:59.521012    1428 run_kubelet.go:49] Running "start-kubelet"
I1112 14:25:59.721185    1428 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1112 14:26:21.735024    1428 interface.go:26] Installing "kube-proxy" ...
I1112 14:26:21.735053    1428 interface.go:26] Installing "kube-dns" ...
I1112 14:26:21.735061    1428 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1112 14:26:21.735068    1428 interface.go:26] Installing "openshift-apiserver" ...
I1112 14:26:21.735089    1428 apply_template.go:81] Installing "kube-proxy"
I1112 14:26:21.735098    1428 apply_template.go:81] Installing "openshift-apiserver"
I1112 14:26:21.735344    1428 apply_template.go:81] Installing "kube-dns"
I1112 14:26:21.736634    1428 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1112 14:26:25.755466    1428 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
I1112 14:27:47.998244    1428 run_self_hosted.go:242] openshift-apiserver available
I1112 14:27:47.998534    1428 interface.go:26] Installing "openshift-controller-manager" ...
I1112 14:27:47.998554    1428 apply_template.go:81] Installing "openshift-controller-manager"
I1112 14:27:51.521512    1428 interface.go:41] Finished installing "openshift-controller-manager"
Adding default OAuthClient redirect URIs ...
Adding sample-templates ...
Adding centos-imagestreams ...
Adding router ...
Adding web-console ...
Adding registry ...
Adding persistent-volumes ...
I1112 14:27:51.544935    1428 interface.go:26] Installing "sample-templates" ...
I1112 14:27:51.544947    1428 interface.go:26] Installing "centos-imagestreams" ...
I1112 14:27:51.544955    1428 interface.go:26] Installing "openshift-router" ...
I1112 14:27:51.544963    1428 interface.go:26] Installing "openshift-web-console-operator" ...
I1112 14:27:51.544973    1428 interface.go:26] Installing "openshift-image-registry" ...
I1112 14:27:51.544980    1428 interface.go:26] Installing "persistent-volumes" ...
I1112 14:27:51.545540    1428 interface.go:26] Installing "sample-templates/postgresql" ...
I1112 14:27:51.545551    1428 interface.go:26] Installing "sample-templates/cakephp quickstart" ...
I1112 14:27:51.545559    1428 interface.go:26] Installing "sample-templates/dancer quickstart" ...
I1112 14:27:51.545567    1428 interface.go:26] Installing "sample-templates/django quickstart" ...
I1112 14:27:51.545574    1428 interface.go:26] Installing "sample-templates/rails quickstart" ...
I1112 14:27:51.545580    1428 interface.go:26] Installing "sample-templates/jenkins pipeline ephemeral" ...
I1112 14:27:51.545587    1428 interface.go:26] Installing "sample-templates/sample pipeline" ...
I1112 14:27:51.545595    1428 interface.go:26] Installing "sample-templates/mongodb" ...
I1112 14:27:51.545602    1428 interface.go:26] Installing "sample-templates/mysql" ...
I1112 14:27:51.545609    1428 interface.go:26] Installing "sample-templates/nodejs quickstart" ...
I1112 14:27:51.545616    1428 interface.go:26] Installing "sample-templates/mariadb" ...
I1112 14:27:51.545665    1428 apply_list.go:67] Installing "sample-templates/mariadb"
I1112 14:27:51.545775    1428 apply_list.go:67] Installing "centos-imagestreams"
I1112 14:27:51.552201    1428 apply_list.go:67] Installing "sample-templates/rails quickstart"
I1112 14:27:51.552721    1428 apply_template.go:81] Installing "openshift-web-console-operator"
I1112 14:27:51.553283    1428 apply_list.go:67] Installing "sample-templates/postgresql"
I1112 14:27:51.553420    1428 apply_list.go:67] Installing "sample-templates/cakephp quickstart"
I1112 14:27:51.553539    1428 apply_list.go:67] Installing "sample-templates/dancer quickstart"
I1112 14:27:51.553653    1428 apply_list.go:67] Installing "sample-templates/django quickstart"
I1112 14:27:51.553900    1428 apply_list.go:67] Installing "sample-templates/mysql"
I1112 14:27:51.554028    1428 apply_list.go:67] Installing "sample-templates/jenkins pipeline ephemeral"
I1112 14:27:51.554359    1428 apply_list.go:67] Installing "sample-templates/nodejs quickstart"
I1112 14:27:51.554567    1428 apply_list.go:67] Installing "sample-templates/mongodb"
I1112 14:27:51.554692    1428 apply_list.go:67] Installing "sample-templates/sample pipeline"
I1112 14:28:06.634946    1428 interface.go:41] Finished installing "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/dancer quickstart" "sample-templates/django quickstart" "sample-templates/rails quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/sample pipeline" "sample-templates/mongodb" "sample-templates/mysql" "sample-templates/nodejs quickstart" "sample-templates/mariadb"
I1112 14:28:28.673589    1428 interface.go:41] Finished installing "sample-templates" "centos-imagestreams" "openshift-router" "openshift-web-console-operator" "openshift-image-registry" "persistent-volumes"
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
https://172.31.36.215:8443

You are logged in as:
User:     developer
Password: <any value>

To login as administrator:
oc login -u system:admin

7. Log in to the web UI

https://172.31.36.215:8443/console
system/admin

8. Accessing the cluster as administrator

#log in
sudo ./oc login -u system:admin --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig

#check the state of the cluster
sudo ./oc get all --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                                READY     STATUS      RESTARTS   AGE
pod/docker-registry-1-rvv44         1/1       Running     0          29m
pod/persistent-volume-setup-88c5t   0/1       Completed   0          30m
pod/router-1-x527s                  1/1       Running     0          29m

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/docker-registry-1   1         1         1         29m
replicationcontroller/router-1            1         1         1         29m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
service/docker-registry   ClusterIP   172.30.1.1      <none>        5000/TCP                  30m
service/kubernetes        ClusterIP   172.30.0.1      <none>        443/TCP                   31m
service/router            ClusterIP   172.30.190.49   <none>        80/TCP,443/TCP,1936/TCP   29m

NAME                                DESIRED   SUCCESSFUL   AGE
job.batch/persistent-volume-setup   1         1            30m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/docker-registry   1          1         1         config
deploymentconfig.apps.openshift.io/router            1          1         1         config

#list the projects
sudo ./oc get projects --config=/home/centos/openshift3/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig
NAME                            DISPLAY NAME   STATUS
default                                        Active
kube-dns                                       Active
kube-proxy                                     Active
kube-public                                    Active
kube-system                                    Active
myproject                       My Project     Active
openshift                                      Active
openshift-apiserver                            Active
openshift-controller-manager                   Active
openshift-core-operators                       Active
openshift-infra                                Active
openshift-node                                 Active
openshift-service-cert-signer                  Active
openshift-web-console                          Active

9. List the containers

sudo docker ps -a
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED              STATUS                        PORTS               NAMES
c347c56d2a7c        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   14 seconds ago       Up 13 seconds                                     k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_1
7a079835fd87        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   16 seconds ago       Up 15 seconds                                     k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_1
33edea80b969        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   18 seconds ago       Up 17 seconds                                     k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_1
c5c4b4a30927        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   20 seconds ago       Up 19 seconds                                     k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_1
9bf5456b9a97        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   22 seconds ago       Up 21 seconds                                     k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_1
66f27274adb4        openshift/nodejs-010-centos7@sha256:bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332                             "container-entrypo..."   About a minute ago   Exited (0) 43 seconds ago                         s2i_openshift_nodejs_010_centos7_sha256_bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332_eaab5bb0
e4c52a772a9f        be30b6cce5fa                                                                                                                     "/usr/bin/origin-w..."   About a minute ago   Exited (137) 2 seconds ago                        k8s_webconsole_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
a778ec40561e        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           About a minute ago   Exited (0) 2 seconds ago                          k8s_POD_webconsole-5594d5b67f-8l4b8_openshift-web-console_b5515962-05fc-11ea-84e4-062e09fba9f6_0
e15062eac455        docker.io/openshift/origin-docker-registry@sha256:5c2fe22619668face238d1ba8602a95b3102b81e667b54ba2888f1f0ee261ffd               "/bin/sh -c '/usr/..."   6 minutes ago        Up 6 minutes                                      k8s_registry_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
861c4c49572a        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_docker-registry-1-wmp47_default_9cfdaf50-05fc-11ea-84e4-062e09fba9f6_0
c6ebd5ad0bba        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift experi..."   7 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
cddd662f7d86        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-web-console-operator-664b974ff5-fr7x2_openshift-core-operators_97dcc42f-05fc-11ea-84e4-062e09fba9f6_0
bdca70a2b67f        docker.io/openshift/origin-hypershift@sha256:81ca9b40f0c5ad25420792f128f8ae5693416171b26ecd9af2e581211c0bd070                    "hypershift opensh..."   7 minutes ago        Exited (255) 23 seconds ago                       k8s_c_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
9d671211845b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           7 minutes ago        Up 7 minutes                                      k8s_POD_openshift-controller-manager-v25zz_openshift-controller-manager_8fd42f89-05fc-11ea-84e4-062e09fba9f6_0
8561b5a28a35        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-proxy_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
a240a1ac6457        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "openshift start n..."   8 minutes ago        Up 8 minutes                                      k8s_kube-dns_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
2233dff0c201        docker.io/openshift/origin-service-serving-cert-signer@sha256:699e649874fb8549f2e560a83c4805296bdf2cef03a5b41fa82b3820823393de   "service-serving-c..."   8 minutes ago        Exited (255) 24 seconds ago                       k8s_operator_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
b622c82b5ef3        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-proxy-z9622_kube-proxy_67da606f-05fc-11ea-84e4-062e09fba9f6_0
9303e90d164c        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_kube-dns-5xlrh_kube-dns_67da7e68-05fc-11ea-84e4-062e09fba9f6_0
02f9425b8c7b        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           8 minutes ago        Up 8 minutes                                      k8s_POD_openshift-service-cert-signer-operator-6d477f986b-jdhpp_openshift-core-operators_67c4fe2f-05fc-11ea-84e4-062e09fba9f6_0
f279a265ee20        docker.io/openshift/origin-control-plane@sha256:da776a9c4280b820d1b32246212f55667ff34a4370fe3da35e8730e442206be0                 "/bin/bash -c '#!/..."   9 minutes ago        Up 9 minutes                                      k8s_etcd_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
7376f93cadce        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-sc..."   9 minutes ago        Exited (1) 24 seconds ago                         k8s_scheduler_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
0d250ebb56eb        docker.io/openshift/origin-hyperkube@sha256:ab28b06f9e98d952245f0369c8931fa3d9a9318df73b6179ec87c0894936ecef                     "hyperkube kube-co..."   9 minutes ago        Exited (255) 23 seconds ago                       k8s_controllers_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
78f161557ef8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-scheduler-localhost_kube-system_498d5acc6baf3a83ee1103f42f924cbe_0
adc1aa2a86d8        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_kube-controller-manager-localhost_kube-system_2a0b2be7d0b54a4f34226da11ad7dd6b_0
62e223931bbc        openshift/origin-pod:v3.11                                                                                                       "/usr/bin/pod"           9 minutes ago        Up 9 minutes                                      k8s_POD_master-etcd-localhost_kube-system_c1cc5d01ac323a05089a07a6082dbe54_0
9b30e2734938        openshift/origin-node:v3.11                                                                                                      "hyperkube kubelet..."   9 minutes ago        Up 9 minutes                                      origin

10. Cleanup

#stop the cluster
sudo ./oc cluster down
#remove the local cluster configuration
sudo rm -rf openshift.local.clusterup

Setting Up a Kubernetes Environment with Rancher

1. Install Docker

sudo apt-get update
sudo apt-get install docker.io

2. Run Rancher

mkdir /home/ubuntu/rancker
sudo docker run -d -v /home/ubuntu/rancker:/var/lib/rancher/ --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable

3. Log in

http://OUTER_IP
The default user name is admin; you are asked to set a password on first login.
For the server URL, the internal address is usually used, e.g. https://172.31.33.84

4. Create a new cluster through the wizard; it generates the commands to run on each node
4.1. Following the instructions, run the controlplane and worker roles on node1

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --controlplane --worker

4.2. Following the instructions, run the etcd and worker roles on node2

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --etcd --worker

5. The Rancher UI should now show both nodes joining successfully

6. Adjust the firewall so Rancher traffic can get through
I opened ports 2379 and 10250 (the other Kubernetes ports had been opened earlier); see the ufw sketch below.
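
A minimal sketch, assuming the nodes run Ubuntu with ufw as the active firewall (use the equivalent rules for firewalld or cloud security groups):

#allow etcd client traffic and the kubelet port
sudo ufw allow 2379/tcp
sudo ufw allow 10250/tcp
sudo ufw reload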

7. Choose and deploy apps from the UI

Much more convenient than deploying Kubernetes by hand!

Setting Up a Kubernetes Environment, Part 05

1. Disable swap

sudo swapoff -a
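
swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab can be commented out as well (a sketch; inspect the file before editing it):

#comment out the swap line in /etc/fstab so swap stays disabled after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab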

2. Enable docker.service at boot

sudo systemctl enable docker.service

3. Switch the cgroup driver to systemd

#see https://kubernetes.io/docs/setup/cri/

sudo vi /etc/docker/daemon.json
#with the following content
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
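
The new daemon.json only takes effect after Docker restarts; a quick way to apply it and confirm the cgroup driver:

sudo systemctl daemon-reload
sudo systemctl restart docker
#should report "Cgroup Driver: systemd"
sudo docker info | grep -i cgroup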

4. Some useful commands

kubeadm init
kubeadm reset

kubectl api-versions

kubectl config view

kubectl cluster-info
kubectl cluster-info dump

kubectl get nodes
kubectl get nodes -o wide
kubectl describe node mynode

kubectl get rc,namespace

kubectl get pods
kubectl get pods --all-namespaces -o wide
kubectl describe pod mypod

kubectl get deployments
kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
kubectl describe deployment kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl expose deployment hikub01 --type=LoadBalancer

kubectl get services
kubectl get service -n kube-system
kubectl describe services kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl proxy
kubectl proxy --address='172.172.172.101' --accept-hosts='.*' --accept-paths='.*'

kubectl run hikub01 --image=myserver:1.0.0 --port=8080
kubectl create -f  myserver-deployment.yaml
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

kubectl delete deployment mydeployment
kubectl delete node mynode
kubectl delete pod mypod

kubectl get events --namespace=kube-system

kubectl taint node mynode node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl edit service myservice
kubectl edit service kubernetes-dashboard -n kube-system

kubectl get secret -n kube-system | grep neohope | awk '{print $1}'
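
The last pipeline only prints a secret name; it is usually wrapped in $(...) to read the token of the matching service account, for example (assuming a secret whose name contains neohope exists in kube-system):

kubectl -n kube-system describe secret $(kubectl get secret -n kube-system | grep neohope | awk '{print $1}')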

Setting Up a Kubernetes Environment, Part 04

In this part we deploy an application from a YAML file.

1. Write the manifest
vi hikub01-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hikub01-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myserver
        image: myserver:1.0.0
        ports:
        - containerPort: 8080

2. Create it

kubectl create -f hikub01-deployment.yaml

3. Expose the port

kubectl expose deployment hikub01-deployment --type=LoadBalancer
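
Before testing, it helps to wait for the rollout to finish and see which port the service was assigned; a quick check:

kubectl rollout status deployment/hikub01-deployment
kubectl get service hikub01-deployment -o wide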

4. Test

#view pods
kubectl get pods -o wide

#view deployments
kubectl get deployments -o wide

#view services
kubectl get services -o wide

#using the output above, access the service in a browser or with curl/wget
curl http://ip:port

5. Cleanup

kubectl delete -f hikub01-deployment.yaml

Setting Up a Kubernetes Environment, Part 03

In this part we try deploying a few services.

1. First, prepare our own Docker image
1.1. Prepare the files
vi Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

vi myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

1.2. Test myserver.js

nodejs myserver.js

1.3. Build the image

#build the image
sudo docker build -t myserver:1.0.0 .

1.4. Test the container
sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0
curl localhost:8080
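
Once the local test works, the throwaway container can be stopped and removed so it no longer holds host port 8080 (optional cleanup):

sudo docker stop myserver
sudo docker rm myserver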

2. Export the image

docker images
sudo docker save 0fb19de44f41 -o myserver.tar

3. Import it on the other two nodes

scp myserver.tar ubuntu@node01:/home/ubuntu
ssh node01
sudo docker load -i myserver.tar
sudo docker tag 0fb19de44f41 myserver:1.0.0
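
A quick check on each node that the image arrived and carries the tag the deployment will ask for:

sudo docker images | grep myserver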

4. Deploy the service with kubectl

#create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

#expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

#view pods
kubectl get pods -o wide

#view deployments
kubectl get deployments -o wide

#view services
kubectl get services -o wide

#using the output above, access the service in a browser or with curl/wget
curl http://ip:port

5. Cleanup

#delete the service
kubectl delete service hikub01

#delete the deployment
kubectl delete deployment hikub01

#delete the pod
kubectl delete pod hikub01