Setting Up Docker Swarm, Part 02

1. This post covers basic stack operations.

2. Environment

Hostname IP address Role
kub01 172.16.172.71 manager
kub02 172.16.172.72 worker
kub03 172.16.172.73 worker

3. Bring up the swarm

#Initialize the swarm on kub01; this prints a join token
sudo docker swarm init --advertise-addr 172.16.172.71

#Join kub02 and kub03 to the swarm
sudo docker swarm join \
    --token SWMTKN-1-249jjodetz6acnl0mrvotp3ifl4jnd2s53buweoasfedx695jm-cdjp3v2jjq2ndfxlv8o2g49n9 \
    172.16.172.71:2377

#List the nodes
sudo docker node ls
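
If the joins succeeded, the node list should look roughly like this (the IDs here are illustrative):

ID           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123... *  kub01      Ready    Active         Leader
def456...    kub02      Ready    Active
ghi789...    kub03      Ready    Active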

4. Create a docker-compose.yml file

version: "3"
services:
  web:
    image: myserver:1.0.0
    deploy:
      replicas: 10
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 64M
    ports:
      - "8080:8080"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8090:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis:3.2
    ports:
      - "6379:6379"
    volumes:
      - /home/hiup/dockervisual/data:/data
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:

5. Things to verify first
All three images must be available on every node.
The Redis data directory must exist and be writable.
All three services must be on the same network (webnet here); see the quick checks below.
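
A few quick checks, using the image tag and data path from the compose file above:

#confirm the image is present (run on every node)
sudo docker images myserver:1.0.0

#confirm the Redis data directory exists
ls -ld /home/hiup/dockervisual/data

#after deployment, the stack's overlay network appears as mystack01_webnet
sudo docker network ls | grep webnet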

6. Deploy the stack

#Deploy the stack
sudo docker stack deploy -c docker-compose.yml mystack01

#List stacks
sudo docker stack ls

#List services
sudo docker service ls
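
To see which node each task landed on and its current state, docker stack ps is a useful extra check:

#list the stack's tasks and their placement
sudo docker stack ps mystack01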

7. Open port 8080 in a browser to reach the web service, and port 8090 to reach the visualizer, which shows where each container is running (the compose file maps the visualizer's internal port 8080 to host port 8090).
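
Because the routing mesh publishes port 8080 on every node, repeated requests are balanced across the web replicas. A quick check against the manager's address from the environment table (the response body depends on what myserver returns):

for i in 1 2 3 4 5; do curl -s http://172.16.172.71:8080; echo; done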

8. Remove the stack

#Remove the stack
sudo docker stack rm mystack01

9. Leaving the swarm

#kub02 and kub03 leave the swarm
sudo docker swarm leave

#kub01 leaves the swarm; since the last manager node has left, the swarm is dissolved
sudo docker swarm leave --force

Setting Up Docker Swarm, Part 01

1. Docker 1.12 and later support Swarm mode natively; no additional software is required.

2. Environment

Hostname IP address Role
kub01 172.16.172.71 manager
kub02 172.16.172.72 worker
kub03 172.16.172.73 worker

3. Bring up the swarm

#Initialize the swarm on kub01; this prints a join token
sudo docker swarm init --advertise-addr 172.16.172.71

#Join kub02 and kub03 to the swarm
sudo docker swarm join \
    --token SWMTKN-1-249jjodetz6acnl0mrvotp3ifl4jnd2s53buweoasfedx695jm-cdjp3v2jjq2ndfxlv8o2g49n9 \
    172.16.172.71:2377

4. Node operations

#List nodes
sudo docker node ls

#Show node details
sudo docker node inspect kub02 --pretty

#Change a node's availability; valid values are active, pause, and drain
sudo docker node update --availability drain kub02

#Promote a node to manager
sudo docker node promote kub02

#Demote a node back to worker
sudo docker node demote kub02

5. Service operations

#Create a service
sudo docker service create --name my01 myserver:1.0.0

#List services
sudo docker service ls

#Remove the service
sudo docker service remove my01

#Create a service with two replicas and publish port 8080
sudo docker service create --name my02 --replicas 2 --publish 8080:8080 myserver:1.0.0
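
The replica count can also be changed after creation; a short example:

#scale the service to four replicas
sudo docker service scale my02=4

#see where each task is running
sudo docker service ps my02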

#Remove the service
sudo docker service remove my02

#Global mode starts one myserver task on every node
sudo docker service create --name my03 --mode global --publish 8080:8080 myserver:1.0.0

#Remove the service
sudo docker service remove my03

6. Leaving the swarm

#kub02 and kub03 leave the swarm
sudo docker swarm leave

#kub01 leaves the swarm; since the last manager node has left, the swarm is dissolved
sudo docker swarm leave --force

Viewing Docker Daemon Logs

Ubuntu (older releases, using upstart)
/var/log/upstart/docker.log

Ubuntu (newer releases, using systemd)
journalctl -u docker.service

Boot2Docker
/var/log/docker.log

Debian GNU/Linux
/var/log/daemon.log

CentOS
grep docker /var/log/daemon.log

CoreOS
journalctl -u docker.service

Fedora
journalctl -u docker.service

Red Hat Enterprise Linux Server
grep docker /var/log/messages

OpenSuSE
journalctl -u docker.service

OSX
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log

Windows
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time
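
On the systemd-based systems above it is often handy to follow the daemon log live; for example:

#follow the log as it is written
journalctl -u docker.service -f

#or show only recent entries
journalctl -u docker.service --since "10 min ago"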

Setting Up Kubernetes with Minikube, Part 02: macOS

On macOS the setup boils down to:

minikube (vm-driver=xhyve)

1. Install the Docker client and xhyve

#Install the Docker client
curl -Lo docker.tgz https://download.docker.com/mac/static/stable/x86_64/docker-17.09.0-ce.tgz
#Unpack docker.tgz to get the docker binary (I used a GUI tool for this)
chmod +x docker
sudo mv docker /usr/local/bin/

#Install xhyve (via Homebrew)
#https://brew.sh/
brew install docker-machine-driver-xhyve

2. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.8.4/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

3. Start minikube

minikube version

#Direct connection
minikube start --vm-driver=xhyve

#Through a proxy
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

4. Prepare your own Docker image
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

#Point the shell at minikube's Docker daemon
eval $(minikube docker-env)

#Build the image; it ends up inside the VM that minikube started
docker build -t myserver:1.0.0 .

#Run a container inside the minikube VM (no sudo, so the docker-env variables stay in effect)
docker run -itd --name=myserver -p 8080:8080 myserver:1.0.0

#Test (ip is the minikube VM's address; minikube ip prints it)
wget ip:8080
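
To confirm the image really landed in the minikube VM rather than in a local daemon, list images from the same shell (the docker-env variables must still be set):

docker images | grep myserver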

5. Deploy the service with kubectl

#Switch to the minikube context
kubectl config use-context minikube

#Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

#List pods
kubectl get pods

#List deployments
kubectl get deployments

#Expose the deployment as a service
kubectl expose deployment hikub01 --type=LoadBalancer

#List services
kubectl get services

#Open the service (prints its URL)
minikube service hikub01

#Using the printed URL, you can hit the service from a browser or with wget
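
For scripting, minikube can also print just the URL instead of opening the service; a small sketch:

#request the service using its printed URL
curl $(minikube service hikub01 --url)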

6. Open the dashboard

minikube dashboard

7. Clean up

#Undo the minikube docker-env settings
eval $(minikube docker-env -u)

#Delete the service
kubectl delete service hikub01

#Delete the deployment
kubectl delete deployment hikub01

#Stop minikube
minikube stop

#Delete everything minikube downloaded
minikube delete

Common problems:
1. If pods never get created, the usual cause is that the required images cannot be downloaded from Google; Docker needs a proxy.

#Check pod status
kubectl get pods

#Test direct connectivity
curl --proxy "" https://cloud.google.com/container-registry/

2. There are two ways to fix this

2.1 Via a proxy

#Test whether the proxy can reach it
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

#If the proxy works, specify it when starting minikube
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

2.2 Via a domestic (China) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0
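
Pulling the mirror alone is usually not enough: the image also needs to carry the gcr.io name the kubelet asks for. A sketch of the usual retag step (the target name below is an assumption; check it against the pod's error message):

#retag the mirrored image with the name kubelet expects
sudo docker tag registry.aliyuncs.com/archon/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0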

Setting Up Kubernetes with Minikube, Part 01: Ubuntu

Since my Ubuntu machine runs inside a VirtualBox VM, this setup boils down to:

minikube (vm-driver=none) + docker

1. Install Docker

apt update
apt upgrade
#the Docker engine package on Ubuntu is docker.io
apt-get install docker.io

2. Prepare your own Docker image
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

#Test myserver.js first
nodejs myserver.js

#Build the image
sudo docker build -t myserver:1.0.0 .

#Run a container
sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0

#Test
wget localhost:8080

3. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

4. Start minikube

minikube version
minikube start --vm-driver=none

5. Deploy the service with kubectl

#Switch to the minikube context
kubectl config use-context minikube

#Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

#List pods
kubectl get pods

#List deployments
kubectl get deployments

#Expose the deployment as a service
kubectl expose deployment hikub01 --type=LoadBalancer

#List services
kubectl get services

#Open the service (prints its URL)
minikube service hikub01

#Using the printed URL, you can hit the service from a browser or with wget

6. Dashboard

kubectl proxy --address='172.16.172.71'  --accept-hosts='.*' --accept-paths='.*'

Open in a browser:
http://172.16.172.71:8001/
http://172.16.172.71:8001/ui

7. Clean up

#Delete the service
kubectl delete service hikub01

#Delete the deployment
kubectl delete deployment hikub01

#Stop minikube
minikube stop

#Delete everything minikube downloaded
minikube delete

Common problems:
1. If pods never get created, the usual cause is that the required images cannot be downloaded from Google; the Docker daemon needs a proxy.

#Check pod status
kubectl get pods

#Check the Docker daemon log
journalctl -u docker.service

#Test direct connectivity
curl --proxy "" https://cloud.google.com/container-registry/

There are two ways to fix this

1.1 Via a proxy

#Test whether the proxy can reach it
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

#If the proxy works, configure the Docker daemon to use it
sudo vim /etc/default/docker
#add the following two lines to the file
export http_proxy="http://ip:port"
export https_proxy="http://ip:port"

#Restart the Docker daemon
sudo service docker restart

1.2 Via a domestic (China) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0
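
As in the macOS notes, the mirrored image usually needs to be retagged to the gcr.io name the kubelet requests (an assumption; verify against the error in the Docker log):

sudo docker tag registry.aliyuncs.com/archon/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0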

2. auplink is not installed

Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH 
sudo apt-get install cgroup-lite
sudo apt-get install aufs-tools

Building a Private Cloud with OpenStack, Part 10

This post covers basic object storage operations; everything is done on CT01.

. user01-openrc
#Check status
swift stat
#Create a container
openstack container create container01
#Upload a file
openstack object create container01 hi.txt
#List objects
openstack object list container01
#Show object details
openstack object show container01 hi.txt
#Set a property (tag)
openstack object set --property owner=neohope container01 hi.txt
#Show object details
openstack object show container01 hi.txt
#Unset the property
openstack object unset --property owner container01 hi.txt
#Show object details
openstack object show container01 hi.txt
#Download the object again
mv hi.txt hi.txt.bak
openstack object save container01 hi.txt
#Delete the object
openstack object delete container01 hi.txt
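
Once the container is empty, it can be removed as well:

#delete the now-empty container
openstack container delete container01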

PS:
If you run into permission problems, you can try relaxing the SELinux context on /srv/node:

#chcon -R system_u:object_r:swift_data_t:s0 /srv/node

Building a Private Cloud with OpenStack, Part 09

This post installs Swift, which manages object storage; work is required on CT01, OS01, and OS02.
I. Install the modules on CT01
1. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project serviceproject --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region Region01 object-store public http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store internal http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store admin http://CT01:8080/v1

2. Install the packages

apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

3. Edit the configuration files
3.1 Create the directory /etc/swift and download the sample file

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

3.2 Edit the configuration file
/etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = swift
password = swift
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = CT01:11211

II. Install the modules on OS01 and OS02
1. Initialize the disks (each VM gets two extra disks)

apt-get install xfsprogs rsync
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

2. Edit /etc/fstab

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

3. Mount the disks

mount /srv/node/sdb
mount /srv/node/sdc

4. Edit /etc/rsyncd.conf (address is each node's own management IP; 10.0.3.13 below is OS01, so use 10.0.3.14 on OS02)

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.3.13

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

5. Edit /etc/default/rsync

RSYNC_ENABLE=true

6. Start rsync

service rsync start

7. Install the Swift packages and sample configs

apt-get install swift swift-account swift-container swift-object
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

8. Edit /etc/swift/account-server.conf (bind_ip is likewise each node's own IP)

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

9. Edit /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

10. Edit /etc/swift/object-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

11. Fix ownership and permissions

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

III. Configure on CT01
1. Build the rings (the arguments to create are the partition power, here 10, i.e. 2^10 partitions; the replica count, 3; and the minimum number of hours before a partition may be moved again, 1)

cd /etc/swift

swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance

2. Distribute the ring files
Copy account.ring.gz, container.ring.gz, and object.ring.gz to /etc/swift on OS01 and OS02.

3. Download the configuration file

sudo curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

4. Edit /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = neohope
swift_hash_path_prefix = neohope

[storage-policy:0]
name = Policy-0
default = yes

5. Copy swift.conf to /etc/swift on every node.

6. On the non-storage (proxy) node, run

chown -R root:swift /etc/swift
service memcached restart
service swift-proxy restart

7. On the object storage nodes, run

chown -R root:swift /etc/swift
swift-init all start

Building a Private Cloud with OpenStack, Part 08

This post launches instances from the command line; everything is done on CT01.

I. Network setup
1. Create the virtual network (external/provider)

. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

2. Confirm the configuration is correct (external)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:enp0s8

3. Create the subnet (external)

openstack subnet create --network provider --allocation-pool start=192.168.12.100,end=192.168.12.120 --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.12.0/24 provider

4. Create the virtual network (internal/self-service)

openstack network create selfservice

5. Confirm the configuration is correct (internal)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000

6. Create the subnet (internal)

openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.13.0/24 selfservice

7. Create a router so the internal network can reach the outside world through the provider network

. admin-openrc
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider
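
The two neutron commands above attach the self-service subnet to the router and set the provider network as its gateway. Newer openstackclient releases expose the same operations under the unified CLI, roughly as follows (an assumption to check against your client version):

openstack router add subnet router selfservice
openstack router set router --external-gateway provider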

ip netns
neutron router-port-list router
ping -c 4 192.168.12.107

II. Flavor setup

openstack flavor list
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 2 flavor02

III. Keypair setup

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

IV. Security group setup

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

V. Review the configuration

openstack flavor list
openstack image list
openstack network list
openstack security group list

VI. Create and access instances
1. Instance on the provider network (PROVIDER_NET_ID is the provider network's ID from openstack network list)

openstack server create --flavor flavor02 --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance

openstack server list
openstack console url show provider-instance
ping -c 4 192.168.12.1
ping -c 4 openstack.org

ping -c 4 192.168.12.104 
ssh cirros@192.168.12.104 

2. Instance on the self-service network (SELFSERVICE_NET_ID is the selfservice network's ID)

openstack server create --flavor flavor02 --image cirros --nic net-id=SELFSERVICE_NET_ID --security-group default --key-name mykey selfservice-instance

openstack server list
openstack console url show selfservice-instance
ping -c 4 192.168.13.1
ping -c 4 openstack.org

openstack floating ip create provider
openstack server add floating ip selfservice-instance 192.168.12.106
openstack server list
ping -c 4 192.168.12.106
ssh cirros@192.168.12.106

VII. Create and attach block storage
1. Create and attach

. admin-openrc
openstack volume create --size 2 volumeA
openstack volume list
openstack server add volume provider-instance volumeA

2. Verify inside the instance

sudo fdisk -l
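
The attached volume typically shows up as a new virtio disk such as /dev/vdb (the exact name depends on the hypervisor; check the fdisk output). A minimal sketch for putting it to use, assuming /dev/vdb:

#format and mount the new volume (this erases anything on it)
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt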