Viewing the Docker daemon logs

Ubuntu (older releases, using upstart)
/var/log/upstart/docker.log

Ubuntu (newer releases, using systemd)
journalctl -u docker.service

Boot2Docker
/var/log/docker.log

Debian GNU/Linux
/var/log/daemon.log

CentOS
grep docker /var/log/daemon.log

CoreOS
journalctl -u docker.service

Fedora
journalctl -u docker.service

Red Hat Enterprise Linux Server
grep docker /var/log/messages

OpenSuSE
journalctl -u docker.service

OSX
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log

Windows
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time
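
On systemd-based distributions it is often handier to follow the daemon log live, for example:

#follow the daemon log as new entries arrive
journalctl -fu docker.service
#or show only the entries since the last boot
journalctl -u docker.service -b --no-pager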

Setting up a Kubernetes Environment with Minikube 02: macOS

On macOS, the setup is essentially:

minikube(vm-driver=xhyve)

1. First, install the Docker client and xhyve

#Install the Docker client
curl -Lo docker.tgz https://download.docker.com/mac/static/stable/x86_64/docker-17.09.0-ce.tgz
#Extract docker.tgz to get the docker binary (I simply used a GUI tool to extract it)
chmod +x docker
sudo mv docker /usr/local/bin/

#Install the xhyve driver
#https://brew.sh/
brew install docker-machine-driver-xhyve
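
Note: the Homebrew formula usually prints a caveat saying the xhyve driver must run as root; the commands it suggests look roughly like the following (check the actual caveat output, since the install prefix may differ):

sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve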

2. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.8.4/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
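
A quick sanity check that kubectl is on the PATH:

kubectl version --client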

3. Start minikube

minikube version

#Direct connection
minikube start --vm-driver=xhyve

#Via a proxy
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

4. Prepare a Docker image of your own
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

#Point the local docker client at minikube's Docker daemon
eval $(minikube docker-env)

#Build the image; it is built inside the VM started by minikube
docker build -t myserver:1.0.0 .

#Run the container inside the minikube VM
sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0

#Test (ip is the address of the minikube VM)
wget ip:8080
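
If you are unsure of the VM's address, minikube can print it, so the test can also be written as:

#the VM IP is reported by minikube itself
wget $(minikube ip):8080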

5. Deploy the service with kubectl

#Switch the kubectl context
kubectl config use-context minikube

#Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

#List pods
kubectl get pods

#List deployments
kubectl get deployments

#Expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

#List services
kubectl get services

#Show how to reach the service
minikube service hikub01

#Based on the output, the service can be reached from a browser or with wget
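
If you only need the URL (for scripting or wget) rather than opening a browser, minikube can print it directly:

#print the service URL only
minikube service hikub01 --url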

6. Open the management UI (Dashboard)

minikube dashboard

7. Cleanup

#Leave minikube's Docker environment
eval $(minikube docker-env -u)

#Delete the service
kubectl delete service hikub01

#Delete the deployment
kubectl delete deployment hikub01

#Stop minikube
minikube stop

#Remove everything that was downloaded
minikube delete

Common problems:
1. If pods never get created, the required images cannot be pulled from Google, and you need to configure a proxy for Docker.

#Check pod status
kubectl get pods

#Test whether the registry is reachable directly
curl --proxy "" https://cloud.google.com/container-registry/

2. There are two ways to fix this

2.1 Use a proxy

#Test whether the proxy can reach it
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

#If the proxy works, start minikube with the proxy settings
minikube start --vm-driver=xhyve --docker-env HTTP_PROXY=http://ip:port --docker-env HTTPS_PROXY=http://ip:port

2.2 Use a domestic (Chinese) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0
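
A sketch of making the mirrored image usable: kubelet looks for the image under its original name, so after pulling from the mirror, re-tag it (assuming the expected name is gcr.io/google_containers/pause-amd64:3.0; check the pod events or Docker logs for the exact name your cluster asks for):

#re-tag the mirrored image to the name kubelet expects (assumed name)
sudo docker tag registry.aliyuncs.com/archon/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0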

Setting up a Kubernetes Environment with Minikube 01: Ubuntu

Since my Ubuntu system itself runs inside a VirtualBox VM, this setup is essentially:

minikube(vm-driver=none) + docker

1. First, install Docker

apt update
apt upgrade
#on Ubuntu the Docker Engine package is docker.io (the package named "docker" is an unrelated program)
apt-get install docker.io

2. Prepare a Docker image of your own
Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

Build the image

#Test myserver.js locally first
nodejs myserver.js

#Build the image
sudo docker build -t myserver:1.0.0 .

#Run the container
sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0

#Test
wget localhost:8080

3. Download minikube and kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

4. Start minikube

minikube version
#the none driver runs the Kubernetes components directly on the host
minikube start --vm-driver=none

5. Deploy the service with kubectl

#Switch the kubectl context
kubectl config use-context minikube

#Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080
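
For reference, kubectl run above creates a Deployment imperatively; a roughly equivalent declarative manifest, applied via a heredoc, might look like this sketch (apps/v1 assumes Kubernetes 1.9 or newer; older clusters would use extensions/v1beta1):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hikub01
spec:
  replicas: 1
  selector:
    matchLabels:
      run: hikub01
  template:
    metadata:
      labels:
        run: hikub01
    spec:
      containers:
      - name: hikub01
        image: myserver:1.0.0
        ports:
        - containerPort: 8080
EOF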

#List pods
kubectl get pods

#List deployments
kubectl get deployments

#Expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

#List services
kubectl get services

#Show how to reach the service
minikube service hikub01

#Based on the output, the service can be reached from a browser or with wget

6. Management UI (Dashboard)

kubectl proxy --address='172.16.172.71' --accept-hosts='.*' --accept-paths='.*'

Open in a browser:
http://172.16.172.71:8001/
http://172.16.172.71:8001/ui

7. Cleanup

#Delete the service
kubectl delete service hikub01

#Delete the deployment
kubectl delete deployment hikub01

#Stop minikube
minikube stop

#Remove everything that was downloaded
minikube delete

Common problems:
1. If pods never get created, the required images cannot be pulled from Google, and you need to configure a proxy for Docker.

#Check pod status
kubectl get pods

#Check the Docker daemon logs
journalctl -u docker.service

#Test whether the registry is reachable directly
curl --proxy "" https://cloud.google.com/container-registry/

There are two ways to fix this

1.1 Use a proxy

#Test whether the proxy can reach it
curl --proxy "http://ip:port" https://cloud.google.com/container-registry/

#If the proxy works, configure the Docker daemon to use it
sudo vim /etc/default/docker
#Add the following two lines to the file
http_proxy="ip:port"
https_proxy="ip:port"

#Restart the Docker daemon
sudo service docker restart
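
Note: /etc/default/docker is only honored by some packagings (Ubuntu's docker.io unit sources it); if your daemon is managed by systemd and ignores that file, the documented alternative is a drop-in unit, roughly:

sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nEnvironment="HTTP_PROXY=http://ip:port" "HTTPS_PROXY=http://ip:port"\n' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker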

1.2 Use a domestic (Chinese) mirror

sudo docker pull registry.aliyuncs.com/archon/pause-amd64:3.0

2. auplink is not installed

Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH

#Fix: install the missing tools
sudo apt-get install cgroup-lite
sudo apt-get install aufs-tools

Building a Private Cloud with OpenStack 10

This section covers basic object storage operations; everything is run on CT01 only.

. user01-openrc
#Check status
swift stat
#Create a container
openstack container create container01
#Upload a file
openstack object create container01 hi.txt
#List files
openstack object list container01
#Show file details
openstack object show container01 hi.txt
#Set a property (tag)
openstack object set --property owner=neohope container01 hi.txt
#Show file details
openstack object show container01 hi.txt
#Unset the property
openstack object unset --property owner container01 hi.txt
#Show file details
openstack object show container01 hi.txt
#Download the file back
mv hi.txt hi.txt.bak
openstack object save container01 hi.txt
#Delete the file
openstack object delete container01 hi.txt
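
To finish cleaning up, the (now empty) container can be removed as well:

#delete the container
openstack container delete container01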

PS:
If you run into permission problems, you can try relaxing the security restrictions on /srv/node:

#chcon -R system_u:object_r:swift_data_t:s0 /srv/node

Building a Private Cloud with OpenStack 09

This section installs Swift, which manages object storage; it requires work on CT01, OS01, and OS02.
I. Install the required components on CT01
1. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project serviceproject --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region Region01 object-store public http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store internal http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store admin http://CT01:8080/v1

2. Install

apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

3. Edit the configuration files
3.1 Create the directory /etc/swift and download the sample file

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

3.2 Edit the configuration file
/etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = swift
password = swift
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = CT01:11211

II. Install the required components on OS01 and OS02
1. Initialize the disks (each VM has two extra disks)

apt-get install xfsprogs rsync
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

2. Edit /etc/fstab

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

3. Mount the disks

mount /srv/node/sdb
mount /srv/node/sdc
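
Optionally confirm both filesystems are mounted:

df -h /srv/node/sdb /srv/node/sdc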

4. Edit /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.3.13

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

5. Edit /etc/default/rsync

RSYNC_ENABLE=true

6. Restart rsync

service rsync start

7. Install the software

apt-get install swift swift-account swift-container swift-object
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

8. Edit /etc/swift/account-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

9. Edit /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

10. Edit /etc/swift/object-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

11. Set permissions

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

III. Configure on CT01
1. Create the ring files

cd /etc/swift

#create arguments are: part_power, replicas, min_part_hours
swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance

2. Copy the ring files
Copy account.ring.gz, container.ring.gz, and object.ring.gz to /etc/swift on OS01 and OS02

3. Download the configuration file

sudo curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

4. Edit /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = neohope
swift_hash_path_prefix = neohope

[storage-policy:0]
name = Policy-0
default = yes

5. Copy swift.conf to /etc/swift on every node

6. On the non-object-storage nodes, run

chown -R root:swift /etc/swift
service memcached restart
service swift-proxy restart

7. On the object storage nodes, run

chown -R root:swift /etc/swift
swift-init all start

Building a Private Cloud with OpenStack 08

This section launches virtual machines from the command line; everything is run on CT01 only.

I. Network configuration
1. Create the virtual network (external/provider)

. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

2. Confirm the configuration files are correct (external network)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:enp0s8

3. Create the subnet (external network)

openstack subnet create --network provider --allocation-pool start=192.168.12.100,end=192.168.12.120 --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.12.0/24 provider

4. Create the virtual network (internal/self-service)

openstack network create selfservice

5. Confirm the configuration files are correct (internal network)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000

6. Create the subnet (internal network)

openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.13.0/24 selfservice

7. Create a router so the internal network can reach the outside through the external network

. admin-openrc
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider

ip netns
neutron router-port-list router
ping -c 4 192.168.12.107

II. Flavor configuration

openstack flavor list
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 2 flavor02

III. Key pair configuration

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

IV. Security group configuration

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

V. Review the configuration

openstack flavor list
openstack image list
openstack network list
openstack security group list

VI. Create instances and access them
1. Instance on the external network

#PROVIDER_NET_ID is the provider network's ID, as shown by "openstack network list"
openstack server create --flavor flavor02 --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance

openstack server list
openstack console url show provider-instance
ping -c 4 192.168.12.1
ping -c 4 openstack.org

ping -c 4 192.168.12.104 
ssh cirros@192.168.12.104 

2. Instance on the internal (self-service) network

openstack server create --flavor flavor02 --image cirros --nic net-id=SELFSERVICE_NET_ID --security-group default --key-name mykey selfservice-instance

openstack server list
openstack console url show selfservice-instance
ping -c 4 192.168.13.1
ping -c 4 openstack.org

openstack floating ip create provider
openstack server add floating ip selfservice-instance 192.168.12.106
openstack server list
ping -c 4 192.168.12.106
ssh cirros@192.168.12.106

VII. Create and attach block storage
1. Create and attach

. admin-openrc
openstack volume create --size 2 volumeA
openstack volume list
openstack server add volume provider-instance volumeA

2. Verify inside the instance

sudo fdisk -l
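
A minimal sketch of actually using the attached volume inside the instance, assuming the new disk shows up as /dev/vdb (check the fdisk output; the minimal cirros image may only ship mkfs.ext3):

#format and mount the attached volume (device name assumed)
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/volumeA
sudo mount /dev/vdb /mnt/volumeA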

Building a Private Cloud with OpenStack 07

This section installs Cinder, which manages block storage; it requires work on CT01 and BS01.

I. Install the required components on CT01
1. Create the database

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project serviceproject --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region Region01 volume public http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume internal http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume admin http://CT01:8776/v1/%\(tenant_id\)s

openstack endpoint create --region Region01 volumev2 public http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 internal http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 admin http://CT01:8776/v2/%\(tenant_id\)s

3. Install

apt install cinder-api cinder-scheduler

4. Edit the configuration
4.1 /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
my_ip = 10.0.3.10

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4.2 /etc/nova/nova.conf

[cinder]
os_region_name = Region01

5. Populate the database and restart the services

sudo su -s /bin/sh -c "cinder-manage db sync" cinder

service nova-api restart
service cinder-scheduler restart
service apache2 restart

II. Install the required components on BS01
1. Install lvm2 and initialize the disk

apt install lvm2

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
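
A quick optional check that the volume group was created:

vgs cinder-volumes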

2. Edit the LVM configuration file
/etc/lvm/lvm.conf

devices {
    filter = [ "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "r/.*/"]
}

3. Install cinder-volume

apt install cinder-volume

4. Edit the configuration file
/etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.12
enabled_backends = lvm
glance_api_servers = http://CT01:9292

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address=10.0.3.12

5. Restart the services

service tgt restart
service cinder-volume restart

III. Verify on CT01

. admin-openrc
openstack volume service list

After that, you can create and attach block storage volumes from the Dashboard.

Building a Private Cloud with OpenStack 06

This section installs the Dashboard (Horizon) for managing OpenStack; everything is done on CT01 only.

1. Install

apt install openstack-dashboard

2. Edit the configuration
/etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "CT01"
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

#Hosts allowed to access the Dashboard
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'CT01:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

TIME_ZONE = "Asia/Shanghai"

3. Restart the service

service apache2 reload

4. Open the page in a browser
http://CT01/horizon
You can log in as admin or as user01

PS: if you hit a 500 error

#The Apache logs show a permission problem on the file below; fixing its ownership resolves the error
sudo chown www-data:www-data /var/lib/openstack-dashboard/secret_key

5. Create an instance with the following steps:
create a network, create a configuration (flavor), then create the instance

6. Once the instance has started, click into it and you can connect to it through the console.

Building a Private Cloud with OpenStack 05

This section installs the Neutron service, which manages virtual networking; the relevant components are installed on CT01 and PC01 respectively.

I. Install the required components on CT01
1. Create the database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project serviceproject --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region Region01 network public http://CT01:9696
openstack endpoint create --region Region01 network internal http://CT01:9696
openstack endpoint create --region Region01 network admin http://CT01:9696

3. Install

apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

4. Edit the configuration
4.1 /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:neutron@CT01/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

[nova]
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = nova
password = nova

4.2 /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

4.3 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4.4 /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

4.5 /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

4.6 /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = CT01
metadata_proxy_shared_secret = metadata

4.7 /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata

5. Populate the database and restart the services

sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart

II. Install the required components on PC01
1. Install

apt install neutron-linuxbridge-agent

2. Edit the configuration files
2.1 /etc/neutron/neutron.conf

[database]
#Comment out the option below
#connection

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

2.2 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.3 /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron

3. Restart the services

service nova-compute restart
service neutron-linuxbridge-agent restart

III. Verify on CT01
1. Verify

. admin-openrc
openstack extension list --network
openstack network agent list

Building a Private Cloud with OpenStack 04

This section installs the Nova service, which manages virtual compute; the relevant components are installed on CT01 and PC01 respectively.

I. First, install the required components on CT01

1. Create the databases

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

2. Create the users and endpoints

. admin-openrc
openstack user create --domain default --password-prompt nova
openstack role add --project serviceproject --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

openstack endpoint create --region Region01 compute public http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute internal http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute admin http://CT01:8774/v2.1

openstack user create --domain default --password-prompt placement
openstack role add --project serviceproject --user placement admin
openstack service create --name placement --description "Placement API" placement

openstack endpoint create --region Region01 placement public http://CT01:8778
openstack endpoint create --region Region01 placement internal http://CT01:8778
openstack endpoint create --region Region01 placement admin http://CT01:8778

3. Install Nova

apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api

4. Edit the configuration file
/etc/nova/nova.conf

[api_database]
connection = mysql+pymysql://nova:nova@CT01/nova_api

[database]
connection = mysql+pymysql://nova:nova@CT01/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#Remove the following option
#log_dir

5. Initialize the databases

sudo su -s /bin/sh -c "nova-manage api_db sync" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
sudo su -s /bin/sh -c "nova-manage db sync" nova
sudo nova-manage cell_v2 list_cells

6. Restart the services

sudo service nova-api restart
sudo service nova-consoleauth restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart

II. Then install the required components on PC01
1. Install

apt install nova-compute
apt install nova-compute-qemu

2. Edit the configuration
2.1 /etc/nova/nova.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#log_dir (remove this option here as well)

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://CT01:6080/vnc_auto.html

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

2.2 /etc/nova/nova-compute.conf

[libvirt]
#egrep -c '(vmx|svm)' /proc/cpuinfo
#If the command above prints 0, set this to qemu
virt_type = qemu

3. Restart the service

service nova-compute restart

III. Then, on CT01, carry out the following
1. Bring PC01 under management
1A. Run the commands

. admin-openrc
openstack hypervisor list
sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

1B. Edit the configuration file
/etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

2. Verify the installation

. admin-openrc
openstack compute service list
openstack catalog list
openstack image list