Building a Private Cloud with OpenStack, Part 07

In this part we install Cinder, which manages block storage. The steps are performed on CT01 and BS01.

I. Install the required modules on CT01
1. Create the database

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project serviceproject --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region Region01 volume public http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume internal http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume admin http://CT01:8776/v1/%\(tenant_id\)s

openstack endpoint create --region Region01 volumev2 public http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 internal http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 admin http://CT01:8776/v2/%\(tenant_id\)s

3. Install the packages

apt install cinder-api cinder-scheduler

4. Edit the configuration files
4.1 /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
my_ip = 10.0.3.10

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4.2 /etc/nova/nova.conf

[cinder]
os_region_name = Region01

5. Populate the database and restart the services

sudo su -s /bin/sh -c "cinder-manage db sync" cinder

service nova-api restart
service cinder-scheduler restart
service apache2 restart

II. Install the required modules on BS01
1. Install lvm2 and initialize the volume group

apt install lvm2

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
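
To confirm that the physical volume and volume group were created (optional; sizes will match your second disk):

pvs
vgs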

2. Edit the LVM configuration file
/etc/lvm/lvm.conf

devices {
    # Accept only /dev/sdb (the Cinder volume disk) and reject all other devices
    filter = [ "a/sdb/", "r/.*/"]
    # If the operating system disk (/dev/sda) also uses LVM, accept it as well:
    #filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    # On a node whose OS disk uses LVM but which has no /dev/sdb, accept only sda:
    #filter = [ "a/sda/", "r/.*/"]
}

3. Install cinder-volume

apt install cinder-volume

4. Edit the configuration file
/etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.12
enabled_backends = lvm
glance_api_servers = http://CT01:9292

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address = 10.0.3.12

5. Restart the services

service tgt restart
service cinder-volume restart

III. Verify on CT01

. admin-openrc
openstack volume service list

After that, block storage volumes can be created and attached from the Dashboard.
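
The service can also be exercised from the CLI first (a minimal sketch; the volume name and the 1 GB size are arbitrary):

openstack volume create --size 1 testvol
openstack volume list
openstack volume delete testvol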

Building a Private Cloud with OpenStack, Part 06

In this part we install the Dashboard (Horizon), which provides a web UI for managing OpenStack. All steps are performed on CT01 only.

1. Install

apt install openstack-dashboard

2. Edit the configuration
/etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "CT01"
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Hosts allowed to access the Dashboard node
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'CT01:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

TIME_ZONE = "Asia/Shanghai"

3. Restart the service

service apache2 reload

4. Open the Dashboard in a browser
http://CT01/horizon
Log in as admin or as user01.

PS: If you get a 500 error

# The Apache log shows that the file below has the wrong owner; fixing the ownership resolves it
sudo chown www-data:www-data /var/lib/openstack-dashboard/secret_key

5. Create an instance with the following steps
Create a network, create a flavor, then create the instance (a rough CLI equivalent is sketched below).
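
A CLI sketch of those three steps, for reference only; the network, flavor, and instance names and sizes here are arbitrary examples, and the cirros image is the one uploaded in the Glance part:

. admin-openrc
# Network (skip if one already exists)
openstack network create net01
openstack subnet create --network net01 --subnet-range 192.168.1.0/24 subnet01
# Flavor
openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.nano
# Instance
openstack server create --flavor m1.nano --image cirros --network net01 vm01
openstack server list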

6. Once the instance is running, click into it to connect through the console.

Building a Private Cloud with OpenStack, Part 05

In this part we install the Neutron service, which manages virtual networking. The relevant modules are installed on CT01 and PC01.

I. Install the required modules on CT01
1. Create the database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project serviceproject --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region Region01 network public http://CT01:9696
openstack endpoint create --region Region01 network internal http://CT01:9696
openstack endpoint create --region Region01 network admin http://CT01:9696

3. Install the packages

apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

4. Edit the configuration files
4.1 /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:neutron@CT01/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

[nova]
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = nova
password = nova

4.2 /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

4.3 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4.4 /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

4.5 /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

4.6 /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = CT01
metadata_proxy_shared_secret = metadata

4.7 /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata

5. Populate the database and restart the services

sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart

II. Install the required modules on PC01
1. Install

apt install neutron-linuxbridge-agent

2. Edit the configuration files
2.1 /etc/neutron/neutron.conf

[database]
# Comment out the connection option below
#connection

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

2.2 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.3 /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron

3. Restart the services

service nova-compute restart
service neutron-linuxbridge-agent restart

III. Verify on CT01
1. Verification

. admin-openrc
openstack extension list --network
openstack network agent list
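
As an optional smoke test, a self-service network and router can also be created from the CLI (a sketch; the names and the 172.16.1.0/24 range are arbitrary):

openstack network create selfservice
openstack subnet create --network selfservice --subnet-range 172.16.1.0/24 selfservice-subnet
openstack router create router01
openstack router add subnet router01 selfservice-subnet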

Building a Private Cloud with OpenStack, Part 04

In this part we install the Nova service, which manages compute. The relevant modules are installed on CT01 and PC01.

I. First, install the required modules on CT01

1. Create the databases

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

2. Create the users and endpoints

. admin-openrc
openstack user create --domain default --password-prompt nova
openstack role add --project serviceproject --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

openstack endpoint create --region Region01 compute public http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute internal http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute admin http://CT01:8774/v2.1

openstack user create --domain default --password-prompt placement
openstack role add --project serviceproject --user placement admin
openstack service create --name placement --description "Placement API" placement

openstack endpoint create --region Region01 placement public http://CT01:8778
openstack endpoint create --region Region01 placement internal http://CT01:8778
openstack endpoint create --region Region01 placement admin http://CT01:8778

3. Install Nova

apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api

4. Edit the configuration file
/etc/nova/nova.conf

[api_database]
connection = mysql+pymysql://nova:nova@CT01/nova_api

[database]
connection = mysql+pymysql://nova:nova@CT01/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# Remove the log_dir option below if it is present
#log_dir

5. Initialize the databases

sudo su -s /bin/sh -c "nova-manage api_db sync" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
sudo su -s /bin/sh -c "nova-manage db sync" nova
sudo nova-manage cell_v2 list_cells

6. Restart the services

sudo service nova-api restart
sudo service nova-consoleauth restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart

II. Then install the required modules on PC01
1. Install

apt install nova-compute
apt install nova-compute-qemu

2. Edit the configuration files
2.1 /etc/nova/nova.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#log_dir

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://CT01:6080/vnc_auto.html

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

2.2 /etc/nova/nova-compute.conf

[libvirt]
#egrep -c '(vmx|svm)' /proc/cpuinfo
# If the command above returns 0 (no hardware virtualization support), set this to qemu
virt_type = qemu

3. Restart the service

service nova-compute restart

III. Back on CT01
1. Add PC01 to the cell database
1A. Run the commands

. admin-openrc
openstack hypervisor list
sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

1B. Edit the configuration file
/etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300

2. Verify the installation

. admin-openrc
openstack compute service list
openstack catalog list
openstack image list
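
Ocata also ships the nova-status tool, which checks the cells v2 and Placement setup; it makes a useful extra sanity check:

sudo nova-status upgrade check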

Building a Private Cloud with OpenStack, Part 03

In this part we install the Glance service, which manages VM images. All steps are performed on CT01 only.

1. Create the database

CREATE DATABASE glance CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

2. Create the OpenStack user and endpoints

. admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project serviceproject --user glance admin
openstack service create --name glance --description "OpenStack Image" image

openstack endpoint create --region Region01 image public http://CT01:9292
openstack endpoint create --region Region01 image internal http://CT01:9292
openstack endpoint create --region Region01 image admin http://CT01:9292

3. Install Glance

apt install glance

4. Edit the configuration files
4.1 /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
# Comment out any other options in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

4.2 /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
# Comment out any other options in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

5. Populate the database and restart the services

sudo su -s /bin/sh -c "glance-manage db_sync" glance

service glance-registry restart
service glance-api restart

6. Download a system image and upload it

. admin-openrc

wget -O cirros-0.3.5-x86_64-disk.img http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

openstack image list
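
To inspect the uploaded image in more detail (its status should be active):

openstack image show cirros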

Building a Private Cloud with OpenStack, Part 02

In this part we install the Keystone service, which handles identity and authorization for the whole OpenStack deployment. All steps are performed on CT01 only.

1. Install MySQL and PyMySQL

# Install MySQL
apt-get install mysql-server

# Edit the configuration file
vi /etc/mysql/my.cnf
# Add the following
[client]
default-character-set=utf8
[mysqld]
character-set-server=utf8

# Restart MySQL
/etc/init.d/mysql restart

# Install PyMySQL
pip install pymysql

2. Install RabbitMQ

# Install
apt install rabbitmq-server

# Create the openstack user and grant it permissions
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
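
Optionally, confirm that the user exists and has the expected permissions:

rabbitmqctl list_users
rabbitmqctl list_permissions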

3. Install memcached

# Install
apt install memcached python-memcache

# Edit the configuration file and change the listen address to the controller
vi /etc/memcached.conf
-l CT01

# Restart the service
service memcached restart

4. Create the Keystone database

CREATE DATABASE keystone CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

5. Install Keystone

apt install keystone

6. Edit the Keystone configuration file
/etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:keystone@CT01/keystone
[token]
provider = fernet

7. Initialize

# Populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet key repositories and bootstrap the identity service
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password bootstrap --bootstrap-admin-url http://CT01:35357/v3/ --bootstrap-internal-url http://CT01:5000/v3/ --bootstrap-public-url http://CT01:5000/v3/ --bootstrap-region-id Region01

# Remove the SQLite database that is no longer needed
rm -f /var/lib/keystone/keystone.db

# Run the configuration step
keystone-install-configure

8. Run the following commands

export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3

9. Create projects, a user, and a role

openstack project create --domain default --description "service os project" serviceproject
openstack project create --domain default --description "user os project" userproject

openstack user create --domain default --password-prompt user01
openstack role create user
openstack role add --project userproject --user user01 user

10. Disable admin token authentication
/etc/keystone/keystone-paste.ini

# Remove admin_token_auth from the pipeline line in each of the following sections
[pipeline:public_api], [pipeline:admin_api], [pipeline:api_v3]
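
If you prefer to script that edit, a sed one-liner along these lines removes the token from every pipeline line (a sketch; it keeps a .bak backup so you can review the change):

sudo sed -i.bak 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini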

11. Verify the installation

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://CT01:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://CT01:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name userproject --os-username user01 token issue

12. Create two credential scripts
12.1 admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.2 user01-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=userproject
export OS_USERNAME=user01
export OS_PASSWORD=user01
export OS_AUTH_URL=http://CT01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.3 Verify

. admin-openrc
openstack token issue

. user01-openrc
openstack token issue

Building a Private Cloud with OpenStack, Part 01

1. Overview of the main components

openstackclient   command-line client
keystone          identity and access management
glance            image management
nova              compute
placement         resource tracking
neutron           networking
cinder            block storage
swift             object storage

2. Host planning
Five virtual machines are used: one controller, one compute node (hardware virtualization must be enabled for it), one block storage node, and two object storage nodes.
Each VM has two network adapters: a host-only adapter for internal communication and a NAT adapter for package installation.

HostName    HostOnly IP    NAT IP
CT01        10.0.3.10      172.16.172.70
PC01        10.0.3.11      172.16.172.71
BS01        10.0.3.12      172.16.172.72
OS01        10.0.3.13      172.16.172.73
OS02        10.0.3.14      172.16.172.74

3. IP and hostname configuration
Using the controller as an example; every node must be configured in the same way.
/etc/hostname

CT01

/etc/hosts

10.0.3.10   CT01
10.0.3.11   PC01
10.0.3.12   BS01
10.0.3.13   OS01
10.0.3.14   OS02

/etc/network/interfaces

#hostonly
auto enp0s3
iface enp0s3 inet static
address 10.0.3.10
netmask 255.255.255.0

#nat
auto enp0s8
iface enp0s8 inet static
address 172.16.172.70
netmask 255.255.0.0
dns-nameserver 8.8.8.8
dns-nameserver 114.114.114.114

4. System upgrade
Run on every node.

apt install software-properties-common
add-apt-repository cloud-archive:ocata
apt update
apt dist-upgrade

5. Time synchronization
5.1 Controller node

# Install chrony
apt install chrony

# Edit the configuration file and modify the following lines
vi /etc/chrony/chrony.conf
server 52.187.51.163 iburst
allow 10.0.3.0/24
allow 172.16.172.0/24

# Restart the service and check synchronization
service chrony restart
chronyc sources

5.2 Other nodes

# Install chrony
apt install chrony

# Edit the configuration file and modify the following line
vi /etc/chrony/chrony.conf
server CT01 iburst

# Restart the service and check synchronization
service chrony restart
chronyc sources

6. Install python-openstackclient
Run on every node.

apt install python-openstackclient

Setting Up a Private Docker Registry

1. Install the registry

# sudo apt-get install docker docker-registry

2. Push images
2.1 Allow HTTP on the client

$ sudo vi /etc/default/docker
# Add this line
DOCKER_OPTS="--insecure-registry 192.168.130.191:5000"
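
The Docker daemon must be restarted for the option to take effect:

$ sudo service docker restart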

2.2 Push the image

# List the local images
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB

# Tag the image for the private registry
$ sudo docker tag elasticsearch:5.1 192.168.130.191:5000/elasticsearch:5.1

# List the local images again
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB
192.168.130.191:5000/elasticsearch   5.1                 747929f3b12a        2 weeks ago         352.6 MB

# Push the image
$ sudo docker push 192.168.130.191:5000/elasticsearch:5.1
The push refers to a repository [192.168.130.191:5000/elasticsearch]
cea33faf9668: Pushed
c3707daa9b07: Pushed
a56b404460eb: Pushed
5e48ecb24792: Pushed
f86173bb67f3: Pushed
c87433dfa8d7: Pushed
c9dbd14c23f0: Pushed
b5b4ba1cb64d: Pushed
15ba1125d6c0: Pushed
bd25fcff1b2c: Pushed
8d9c6e6ceb37: Pushed
bc3b6402e94c: Pushed
223c0d04a137: Pushed
fe4c16cbf7a4: Pushed
5.1: digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0 size: 3248

3. Query the registry

$ curl -X GET http://192.168.130.191:5000/v2/_catalog
{"repositories":["alpine","elasticsearch","jetty","mongo","mysql","nginx","openjdk","redis","registry","ubuntu","zookeeper"]}

$ curl -X GET http://192.168.130.191:5000/v2/elasticsearch/tags/list
{"name":"elasticsearch","tags":["5.1"]}

# The queries below always return 404; the v2 API documentation does not list a search endpoint either, which is a bit odd
$ curl -X GET http://192.168.130.191:5000/v2/search?q=elasticsearch
$ sudo docker search 192.168.130.191:5000/elasticsearch

4. Pull an image

$ sudo docker pull 192.168.130.191:5000/elasticsearch:5.1
5.1: Pulling from elasticsearch
386a066cd84a: Pull complete
75ea84187083: Pull complete
3e2e387eb26a: Pull complete
eef540699244: Pull complete
1624a2f8d114: Pull complete
7018f4ec6e0a: Pull complete
6ca3bc2ad3b3: Pull complete
424638b495a6: Pull complete
2ff72d0b7bea: Pull complete
d0d6a2049bf2: Pull complete
003b957bd67f: Pull complete
14d23bc515af: Pull complete
923836f4bd50: Pull complete
c0b5750bf0f7: Pull complete
Digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0
Status: Downloaded newer image for 192.168.130.191:5000/elasticsearch:5.1

5. Delete an image

# The v2 API deletes a manifest by digest (the one reported when the image was pushed), not by tag
$ curl -X DELETE http://192.168.130.191:5000/v2/elasticsearch/manifests/sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0
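
Two caveats, assuming a stock registry v2: deletes must be enabled in the registry configuration (storage.delete.enabled: true), and the digest for a tag can be read from the Docker-Content-Digest header of a manifest request, for example:

$ curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://192.168.130.191:5000/v2/elasticsearch/manifests/5.1 | grep Docker-Content-Digest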

Reference: GitHub

The Official Elasticsearch Docker Image Fails to Start

The official elasticsearch 5.1 Docker image reports the following error at startup:

ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This happens because vm.max_map_count on the host is below Elasticsearch's minimum requirement of 262144. There are two ways to fix it:

# Takes effect immediately (lost after a reboot)
sudo sysctl -w vm.max_map_count=262144
# Takes effect permanently
sudo vi /etc/sysctl.conf
# Add this line
vm.max_map_count=262144

# Load the new configuration
sudo sysctl -p
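
To confirm the new value is active:

sysctl vm.max_map_count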

Common Docker Image Commands (Compose)

1. Install Docker and pull the images

#ubuntu-16.04.1-server-amd64
sudo apt-get install docker
sudo apt-get install docker-compose
# Pull the images
sudo docker pull mysql:5.7
sudo docker pull redis:3.2
sudo docker pull mongo:3.4
sudo docker pull jetty:9.3-jre8
sudo docker pull nginx:1.11
sudo docker pull elasticsearch:5.1
sudo docker pull ubuntu:16.04

2. Create a network

sudo docker network create hiup
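
Optionally, check that the network exists:

sudo docker network ls
sudo docker network inspect hiup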

3. Prepare the Compose files
3.1 Compose file (version 1 format)

h01-mysql02:
  image: mysql:5.7
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-mysql02
  net: hiup
  ports:
    - "3306:3306"
  volumes:
   - /home/hiup/docker/data/mysql/var/lib/mysql:/var/lib/mysql
   - /home/hiup/docker/data/mysql/etc/mysql/conf.d:/etc/mysql/conf.d
  environment:
   - MYSQL_ROOT_PASSWORD=hiup
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-redis02:
  image: redis:3.2
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-redis02
  net: hiup
  volumes:
   - /home/hiup/docker/data/redis/etc/redis/:/etc/redis/
   - /home/hiup/docker/data/redis/data:/data
  ports:
   - "6379:6379"
  command: redis-server /etc/redis/redis.conf
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-mongo02:
  image: mongo:3.4
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-mongo02
  net: hiup
  ports:
   - "27017:27017"
  volumes:
   - /home/hiup/docker/data/mongo/etc/mongod.conf:/etc/mongod.conf
   - /home/hiup/docker/data/mongo/data/db:/data/db
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-jetty02:
  image: jetty:9.3-jre8
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-jetty02
  net: hiup
  ports:
   - "8080:8080"
  volumes:
   - /home/hiup/docker/data/jetty/usr/local/jetty/etc:/usr/local/jetty/etc
   - /home/hiup/docker/data/jetty/webapps:/var/lib/jetty/webapps
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-nginx02:
  image: nginx:1.11
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-nginx02
  net: hiup
  ports:
   - "80:80"
  volumes:
   - /home/hiup/docker/data/nginx/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
   - /home/hiup/docker/data/nginx/usr/share/nginx/html:/usr/share/nginx/html
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-es02:
  image: elasticsearch:5.1
  mem_limit: 640m
  cpu_shares: 100
  tty: true
  hostname: h01-es02
  net: hiup
  ports:
   - "9200:9200"
   - "9300:9300"
  volumes:
   - /home/hiup/docker/data/es/usr/share/elasticsearch/config:/usr/share/elasticsearch/config
   - /home/hiup/docker/data/es/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: elasticsearch
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-ubuntu02:
  image: ubuntu:16.04
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-ubuntu02
  net: hiup
  #ports:
  #volumes:
  command: /bin/bash
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

3.2 Compose file (version 2 format)

version: '2'
services:
  h01-mysql02:
    image: mysql:5.7
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-mysql02
    network_mode: hiup
    ports:
      - "3306:3306"
    volumes:
     - /home/hiup/docker/data/mysql/var/lib/mysql:/var/lib/mysql
     - /home/hiup/docker/data/mysql/etc/mysql/conf.d:/etc/mysql/conf.d
    environment:
     - MYSQL_ROOT_PASSWORD=hiup
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-redis02:
    image: redis:3.2
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-redis02
    network_mode: hiup
    volumes:
     - /home/hiup/docker/data/redis/etc/redis/:/etc/redis/
     - /home/hiup/docker/data/redis/data:/data
    ports:
     - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-mongo02:
    image: mongo:3.4
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-mongo02
    network_mode: hiup
    ports:
     - "27017:27017"
    volumes:
     - /home/hiup/docker/data/mongo/etc/mongod.conf:/etc/mongod.conf
     - /home/hiup/docker/data/mongo/data/db:/data/db
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-jetty02:
    image: jetty:9.3-jre8
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-jetty02
    network_mode: hiup
    ports:
     - "8080:8080"
    volumes:
     - /home/hiup/docker/data/jetty/usr/local/jetty/etc:/usr/local/jetty/etc
     - /home/hiup/docker/data/jetty/webapps:/var/lib/jetty/webapps
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-nginx02:
    image: nginx:1.11
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-nginx02
    network_mode: hiup
    ports:
     - "80:80"
    volumes:
     - /home/hiup/docker/data/nginx/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
     - /home/hiup/docker/data/nginx/usr/share/nginx/html:/usr/share/nginx/html
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-es02:
    image: elasticsearch:5.1
    mem_limit: 640m
    cpu_shares: 100
    tty: true
    hostname: h01-es02
    network_mode: hiup
    ports:
     - "9200:9200"
     - "9300:9300"
    volumes:
     - /home/hiup/docker/data/es/usr/share/elasticsearch/config:/usr/share/elasticsearch/config
     - /home/hiup/docker/data/es/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
    command: elasticsearch
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-ubuntu02:
    image: ubuntu:16.04
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-ubuntu02
    network_mode: hiup
    #ports:
    #volumes:
    command: /bin/bash
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"

4. Run

sudo docker-compose up -d
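
A few companion commands are handy once the stack is up (run them from the directory containing the Compose file; h01-mysql02 below is just one of the services defined above):

sudo docker-compose ps
sudo docker-compose logs -f h01-mysql02
sudo docker-compose stop
sudo docker-compose down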

5. Configuration and data directory layout

.
├── es
│   └── usr
│       └── share
│           └── elasticsearch
│               ├── config
│               │   ├── elasticsearch.yml
│               │   ├── log4j2.properties
│               │   └── scripts
│               └── data
│                   └── nodes
│                       └── 0
│                           ├── node.lock
│                           └── _state
│                               ├── global-0.st
│                               └── node-0.st
├── jetty
│   ├── usr
│   │   └── local
│   │       └── jetty
│   │           └── etc
│   │               ├── example-quickstart.xml
│   │               ├── gcloud-memcached-session-context.xml
│   │               ├── gcloud-session-context.xml
│   │               ├── hawtio.xml
│   │               ├── home-base-warning.xml
│   │               ├── jamon.xml
│   │               ├── jdbcRealm.properties
│   │               ├── jetty-alpn.xml
│   │               ├── jetty-annotations.xml
│   │               ├── jetty-cdi.xml
│   │               ├── jetty.conf
│   │               ├── jetty-debuglog.xml
│   │               ├── jetty-debug.xml
│   │               ├── jetty-deploy.xml
│   │               ├── jetty-gcloud-memcached-sessions.xml
│   │               ├── jetty-gcloud-session-idmgr.xml
│   │               ├── jetty-gcloud-sessions.xml
│   │               ├── jetty-gzip.xml
│   │               ├── jetty-http2c.xml
│   │               ├── jetty-http2.xml
│   │               ├── jetty-http-forwarded.xml
│   │               ├── jetty-https.xml
│   │               ├── jetty-http.xml
│   │               ├── jetty-infinispan.xml
│   │               ├── jetty-ipaccess.xml
│   │               ├── jetty-jaas.xml
│   │               ├── jetty-jdbc-sessions.xml
│   │               ├── jetty-jmx-remote.xml
│   │               ├── jetty-jmx.xml
│   │               ├── jetty-logging.xml
│   │               ├── jetty-lowresources.xml
│   │               ├── jetty-monitor.xml
│   │               ├── jetty-nosql.xml
│   │               ├── jetty-plus.xml
│   │               ├── jetty-proxy-protocol-ssl.xml
│   │               ├── jetty-proxy-protocol.xml
│   │               ├── jetty-proxy.xml
│   │               ├── jetty-requestlog.xml
│   │               ├── jetty-rewrite-customizer.xml
│   │               ├── jetty-rewrite.xml
│   │               ├── jetty-setuid.xml
│   │               ├── jetty-spring.xml
│   │               ├── jetty-ssl-context.xml
│   │               ├── jetty-ssl.xml
│   │               ├── jetty-started.xml
│   │               ├── jetty-stats.xml
│   │               ├── jetty-threadlimit.xml
│   │               ├── jetty.xml
│   │               ├── jminix.xml
│   │               ├── jolokia.xml
│   │               ├── krb5.ini
│   │               ├── README.spnego
│   │               ├── rewrite-compactpath.xml
│   │               ├── spnego.conf
│   │               ├── spnego.properties
│   │               └── webdefault.xml
│   └── webapps
│       └── jvmjsp.war
├── mongo
│   ├── data
│   │   └── db
│   │       ├── collection-0-4376730799513530636.wt
│   │       ├── collection-2-4376730799513530636.wt
│   │       ├── collection-5-4376730799513530636.wt
│   │       ├── diagnostic.data
│   │       │   └── metrics.2016-12-27T08-57-50Z-00000
│   │       ├── index-1-4376730799513530636.wt
│   │       ├── index-3-4376730799513530636.wt
│   │       ├── index-4-4376730799513530636.wt
│   │       ├── index-6-4376730799513530636.wt
│   │       ├── journal
│   │       │   ├── WiredTigerLog.0000000001
│   │       │   ├── WiredTigerPreplog.0000000001
│   │       │   └── WiredTigerPreplog.0000000002
│   │       ├── _mdb_catalog.wt
│   │       ├── mongod.lock
│   │       ├── sizeStorer.wt
│   │       ├── storage.bson
│   │       ├── WiredTiger
│   │       ├── WiredTigerLAS.wt
│   │       ├── WiredTiger.lock
│   │       ├── WiredTiger.turtle
│   │       └── WiredTiger.wt
│   └── etc
│       └── mongod.conf
├── mysql
│   ├── etc
│   │   └── mysql
│   │       └── conf.d
│   │           ├── docker.cnf
│   │           └── mysql.cnf
│   └── var
│       └── lib
│           └── mysql
│               ├── auto.cnf
│               ├── ib_buffer_pool
│               ├── ibdata1
│               ├── ib_logfile0
│               ├── ib_logfile1
│               ├── mysql [error opening dir]
│               ├── performance_schema [error opening dir]
│               └── sys [error opening dir]
├── nginx
│   ├── etc
│   │   └── nginx
│   │       └── nginx.conf
│   └── usr
│       └── share
│           └── nginx
│               └── html
│                   ├── 50x.html
│                   └── index.html
└── redis
    ├── data
    │   └── dump.rdb
    └── etc
        └── redis
            ├── redis.conf
            └── sentinel.conf

6. Install local client tools

sudo apt-get install mysql-client
sudo apt-get install redis-tools
sudo apt-get install mongodb-clients
sudo apt-get install curl