Setting Up a Kubernetes Environment 01

1. Hardware and network environment
This setup uses cloud hosts to build the K8s environment.
Minimum cloud host configuration: 2 cores, 4 GB RAM.

Node name   Internal IP
master      172.16.172.101
node01      172.16.172.102
node02      172.16.172.103
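
If the cloud hosts cannot resolve each other by name, an /etc/hosts fragment like the following can be added on each node (a sketch; the hostnames are the node names from the table above):

```text
172.16.172.101 master
172.16.172.102 node01
172.16.172.103 node02
```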

2. Configure the Kubernetes apt repository (on all three nodes)

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vi /etc/apt/sources.list.d/kubernetes.list
# add this line:
deb http://apt.kubernetes.io/ kubernetes-xenial main

3. Install the required packages (on all three nodes)

sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl

4. Initialize the master node
4.1. Optionally pre-pull the images

sudo kubeadm config images pull

4.2. Initialize with kubeadm

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.525105 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1zdpa5.vmcsacag4wj3a0gv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
--discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

4.3. Configure kubectl on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join the two worker nodes

sudo kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
    --discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97
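
The token printed by kubeadm init expires after 24 hours; a new one can be created on the master with `kubeadm token create`, and the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA certificate. The sketch below demonstrates the hash computation against a throwaway self-signed certificate so it can run anywhere; on a real master you would point openssl at /etc/kubernetes/pki/ca.crt instead:

```shell
# Demo only: generate a throwaway CA certificate (on a real master,
# use /etc/kubernetes/pki/ca.crt instead of /tmp/demo-ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, DER-encode it, and take its sha256 digest --
# this hex string is what kubeadm expects after "sha256:".
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```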

[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Check node status on the master

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   15m   v1.16.2
node01   NotReady   <none>   10m   v1.16.2
node02   NotReady   <none>   10m   v1.16.2

All nodes report NotReady because no pod network add-on has been installed yet; they become Ready once a CNI plugin such as Calico (whose default pod CIDR matches the 192.168.0.0/16 passed to kubeadm init) is applied, as prompted in the kubeadm init output.

Setting Up a Docker Swarm Environment 02

1. This section covers stack operations

2. Environment

Hostname  IP address      Node role
kub01     172.16.172.71   manager
kub02     172.16.172.72   worker
kub03     172.16.172.73   worker

3. Start the environment

# initialize the swarm on kub01; this prints a join token
sudo docker swarm init --advertise-addr 172.16.172.71

# join kub02 and kub03 to the swarm
sudo docker swarm join \
    --token SWMTKN-1-249jjodetz6acnl0mrvotp3ifl4jnd2s53buweoasfedx695jm-cdjp3v2jjq2ndfxlv8o2g49n9 \
    172.16.172.71:2377

# view node information
sudo docker node ls

4. Create a docker-compose.yml file

version: "3"
services:
  web:
    image: myserver:1.0.0
    deploy:
      replicas: 10
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 64M
    ports:
      - "8080:8080"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8090:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis:3.2
    ports:
      - "6379:6379"
    volumes:
      - /home/hiup/dockervisual/data:/data
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:

5. Things to verify
All three images must be available on every node.
The Redis data directory must exist and be writable.
All three services share the same network (webnet).

6. Deploy the stack

# deploy the stack
sudo docker stack deploy -c docker-compose.yml mystack01

# list stacks
sudo docker stack ls

# list services
sudo docker service ls

7. Open the web service on local port 8080; the visualizer, published on port 8090, shows the status of each container in a browser

8. Stop the stack

# remove the stack
sudo docker stack rm mystack01

9. Nodes leave the swarm

# kub02 and kub03 leave the swarm
sudo docker swarm leave

# kub01 leaves last; since the final manager node has left, the swarm is dissolved
sudo docker swarm leave --force

Setting Up a Docker Swarm Environment 01

1. Docker 1.12 and later support Swarm mode natively; no additional software is required

2. Environment

Hostname  IP address      Node role
kub01     172.16.172.71   manager
kub02     172.16.172.72   worker
kub03     172.16.172.73   worker

3. Start the environment

# initialize the swarm on kub01; this prints a join token
sudo docker swarm init --advertise-addr 172.16.172.71

# join kub02 and kub03 to the swarm
sudo docker swarm join \
    --token SWMTKN-1-249jjodetz6acnl0mrvotp3ifl4jnd2s53buweoasfedx695jm-cdjp3v2jjq2ndfxlv8o2g49n9 \
    172.16.172.71:2377

4. Node operations

# list nodes
sudo docker node ls

# inspect a node
sudo docker node inspect kub02 --pretty

# update a node's availability
sudo docker node update --availability <active|pause|drain> kub02

# promote a node to manager
sudo docker node promote kub02

# demote a node back to worker
sudo docker node demote kub02

5. Service operations

# create a service
sudo docker service create --name my01 myserver:1.0.0

# list services
sudo docker service ls

# remove the service
sudo docker service remove my01

# create a service with two replicas
sudo docker service create --name my02 --replicas 2 --publish 8080:8080 myserver:1.0.0

# remove the service
sudo docker service remove my02

# global mode: one myserver task runs on every node
sudo docker service create --name my03 --mode global --publish 8080:8080 myserver:1.0.0

# remove the service
sudo docker service remove my03

6. Nodes leave the swarm

# kub02 and kub03 leave the swarm
sudo docker swarm leave

# kub01 leaves last; since the final manager node has left, the swarm is dissolved
sudo docker swarm leave --force

Viewing Docker Daemon Logs

Ubuntu (old, using upstart)
/var/log/upstart/docker.log

Ubuntu (new, using systemd)
journalctl -u docker.service

Boot2Docker
/var/log/docker.log

Debian GNU/Linux
/var/log/daemon.log

CentOS
grep docker /var/log/daemon.log

CoreOS
journalctl -u docker.service

Fedora
journalctl -u docker.service

Red Hat Enterprise Linux Server
grep docker /var/log/messages

OpenSuSE
journalctl -u docker.service

OSX
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log

Windows
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time

Setting Up Kubernetes with Minikube 02_MacOS

On macOS the setup is effectively:

minikube (vm-driver=xhyve)

1. Install the Docker client and xhyve

# install the Docker client
curl -Lo docker.tgz https://download.docker.com/mac/static/stable/x86_64/docker-17.09.0-ce.tgz
# extract docker.tgz to get the docker binary (I used a GUI tool to extract it)
chmod +x docker
sudo mv docker /usr/local/bin/

# install xhyve (via Homebrew, https://brew.sh/)
brew install docker-machine-driver-xhyve

2. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.8.4/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

3. Start minikube

minikube version

# direct connection
minikube start --vm-driver=xhyve

# via a proxy
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

4. Prepare your own Docker image
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

# point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# build the image; it is built inside the minikube VM
docker build -t myserver:1.0.0 .

# run a container inside the minikube VM (no sudo, so the docker-env variables stay in effect)
docker run -itd --name=myserver -p 8080:8080 myserver:1.0.0

# test (use the minikube VM's IP in place of ip)
wget ip:8080

5. Deploy the service with kubectl

# switch context
kubectl config use-context minikube

# create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

# list pods
kubectl get pods

# list deployments
kubectl get deployments

# expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

# list services
kubectl get services

# show the service URL and open it
minikube service hikub01

# using the output above, the service can be reached from a browser or with wget

6. Open the dashboard

minikube dashboard

7. Clean up

# leave minikube's docker environment
eval $(minikube docker-env -u)

# delete the service
kubectl delete service hikub01

# delete the deployment
kubectl delete deployment hikub01

# stop minikube
minikube stop

# remove everything that was downloaded
minikube delete

Common problems:
1. If pods never get created, the required images cannot be pulled from Google; Docker needs a proxy.

# check pod status
kubectl get pods

# test connectivity
curl --proxy "" https://cloud.google.com/container-registry/

2. There are two ways to solve this

2.1. Use a proxy

# test whether the proxy is reachable
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

# if the proxy works, start minikube with it
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

2.2. Use a domestic (China) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0

Setting Up Kubernetes with Minikube 01_Ubuntu

Since my Ubuntu runs inside a VirtualBox VM, this setup is effectively:

minikube (vm-driver=none) + docker

1. Install docker

sudo apt update
sudo apt upgrade
sudo apt-get install docker.io

2. Prepare your own Docker image
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

# test myserver.js
nodejs myserver.js

# build the image
sudo docker build -t myserver:1.0.0 .

# run a container
sudo docker run -itd --name=myserver -p 8080:8080 myserver:1.0.0

# test
wget localhost:8080

3. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

4. Start minikube

minikube version
sudo minikube start --vm-driver=none

5. Deploy the service with kubectl

# switch context
kubectl config use-context minikube

# create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

# list pods
kubectl get pods

# list deployments
kubectl get deployments

# expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

# list services
kubectl get services

# show the service URL and open it
minikube service hikub01

# using the output above, the service can be reached from a browser or with wget

6. Dashboard

kubectl proxy --address='172.16.172.71' --accept-hosts='.*' --accept-paths='.*'

Open in a browser:
http://172.16.172.71:8001/
http://172.16.172.71:8001/ui

7. Clean up

# delete the service
kubectl delete service hikub01

# delete the deployment
kubectl delete deployment hikub01

# stop minikube
minikube stop

# remove everything that was downloaded
minikube delete

Common problems:
1. If pods never get created, the required images cannot be pulled from Google; Docker needs a proxy.

# check pod status
kubectl get pods

# check docker logs
journalctl -u docker.service

# test connectivity
curl --proxy "" https://cloud.google.com/container-registry/

There are two ways to solve this

1.1. Use a proxy

# test whether the proxy is reachable
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

# if the proxy works, configure the docker daemon to use it
sudo vim /etc/default/docker
# add these two lines
http_proxy="ip:port"
https_proxy="ip:port"

# restart the docker daemon
sudo service docker restart
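
On systemd-based Ubuntu releases, /etc/default/docker may be ignored by the docker unit; the equivalent is a systemd drop-in file (a sketch, with ip:port as a placeholder for your proxy):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://ip:port"
Environment="HTTPS_PROXY=http://ip:port"
```

Then run `sudo systemctl daemon-reload` and `sudo systemctl restart docker`.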

1.2. Use a domestic (China) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0

2. auplink is not installed

Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH

sudo apt-get install cgroup-lite
sudo apt-get install aufs-tools

Setting Up a Private Docker Registry

1. Install the registry

$ sudo apt-get install docker docker-registry

2. Push an image
2.1. Allow HTTP on the client

$ sudo vi /etc/default/docker
# add this line
DOCKER_OPTS="--insecure-registry 192.168.130.191:5000"
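
On newer Docker versions, the same setting is usually placed in /etc/docker/daemon.json instead (a sketch; restart the docker daemon afterwards):

```json
{
  "insecure-registries": ["192.168.130.191:5000"]
}
```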

2.2. Push the image

# list images
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB

# tag the image for the private registry
$ sudo docker tag elasticsearch:5.1 192.168.130.191:5000/elasticsearch:5.1

# list images
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB
192.168.130.191:5000/elasticsearch   5.1                 747929f3b12a        2 weeks ago         352.6 MB

# push the image
$ sudo docker push 192.168.130.191:5000/elasticsearch:5.1
The push refers to a repository [192.168.130.191:5000/elasticsearch]
cea33faf9668: Pushed
c3707daa9b07: Pushed
a56b404460eb: Pushed
5e48ecb24792: Pushed
f86173bb67f3: Pushed
c87433dfa8d7: Pushed
c9dbd14c23f0: Pushed
b5b4ba1cb64d: Pushed
15ba1125d6c0: Pushed
bd25fcff1b2c: Pushed
8d9c6e6ceb37: Pushed
bc3b6402e94c: Pushed
223c0d04a137: Pushed
fe4c16cbf7a4: Pushed
5.1: digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0 size: 3248

3. Query images

$ curl -X GET http://192.168.130.191:5000/v2/_catalog
{"repositories":["alpine","elasticsearch","jetty","mongo","mysql","nginx","openjdk","redis","registry","ubuntu","zookeeper"]}

$ curl -X GET http://192.168.130.191:5000/v2/elasticsearch/tags/list
{"name":"elasticsearch","tags":["5.1"]}

# the v2 registry API has no search endpoint, so the following queries always return 404
$ curl -X GET http://192.168.130.191:5000/v2/search?q=elasticsearch
$ sudo docker search 192.168.130.191:5000/elasticsearch

4. Pull the image

$ sudo docker pull 192.168.130.191:5000/elasticsearch:5.1
5.1: Pulling from elasticsearch
386a066cd84a: Pull complete
75ea84187083: Pull complete
3e2e387eb26a: Pull complete
eef540699244: Pull complete
1624a2f8d114: Pull complete
7018f4ec6e0a: Pull complete
6ca3bc2ad3b3: Pull complete
424638b495a6: Pull complete
2ff72d0b7bea: Pull complete
d0d6a2049bf2: Pull complete
003b957bd67f: Pull complete
14d23bc515af: Pull complete
923836f4bd50: Pull complete
c0b5750bf0f7: Pull complete
Digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0
Status: Downloaded newer image for 192.168.130.191:5000/elasticsearch:5.1

5. Delete an image

# the v2 API deletes manifests by digest rather than tag; use the digest reported by the push
$ curl -X DELETE http://192.168.130.191:5000/v2/elasticsearch/manifests/sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0

Reference: the registry project on GitHub
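
Note that deletion is disabled by default in the v2 registry; the DELETE call above only works if the registry runs with deletion enabled, e.g. via this config.yml fragment (a sketch, assuming a registry:2-based deployment; the REGISTRY_STORAGE_DELETE_ENABLED=true environment variable is equivalent):

```yaml
# registry config.yml fragment enabling manifest deletion
storage:
  delete:
    enabled: true
```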

The official elasticsearch Docker image fails to run

The official elasticsearch 5.1 Docker image errors out at startup with:

ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This happens because vm.max_map_count is below Elasticsearch's minimum of 262144. There are two ways to change it:

# takes effect immediately (until reboot)
sudo sysctl -w vm.max_map_count=262144

# permanent
sudo vi /etc/sysctl.conf
# add this line
vm.max_map_count=262144

# reload the configuration
sudo sysctl -p
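
A quick check (a sketch, assuming a Linux host with /proc mounted) to verify whether the current value already meets Elasticsearch's requirement:

```shell
# Read the current limit and compare it with Elasticsearch's minimum.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current is sufficient"
else
  echo "vm.max_map_count=$current is too low; need at least $required"
fi
```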