Building a Private Cloud with OpenStack (03)

This part installs the Glance service. Glance manages virtual machine images. All steps run on CT01 only.

1. Create the database

CREATE DATABASE glance CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

2. Create the OpenStack user and endpoints

. admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project serviceproject --user glance admin
openstack service create --name glance --description "OpenStack Image" image

openstack endpoint create --region Region01 image public http://CT01:9292
openstack endpoint create --region Region01 image internal http://CT01:9292
openstack endpoint create --region Region01 image admin http://CT01:9292
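The three endpoint commands above differ only in the interface name, so a loop avoids copy/paste slips. The sketch below just echoes the commands instead of running them (drop the `echo` to actually create the endpoints):

```shell
# Print one endpoint-create command per interface type
for iface in public internal admin; do
  echo openstack endpoint create --region Region01 image "$iface" http://CT01:9292
done
```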

3. Install Glance

apt install glance

4. Edit the configuration files
4.1. /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
# Comment out everything else in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
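Instead of editing these files in vi, the same options can be set from a script. `iniset` below is a hypothetical helper, not an OpenStack tool (crudini is a real alternative), and it is deliberately naive: it appends under the section header rather than replacing an existing key. A sketch against a throwaway file:

```shell
#!/bin/sh
# iniset FILE SECTION KEY VALUE: put "KEY = VALUE" right under [SECTION],
# creating the section if it does not exist yet (uses GNU sed -i)
iniset() {
  file=$1; section=$2; key=$3; value=$4
  grep -q "^\[$section\]" "$file" || printf '\n[%s]\n' "$section" >> "$file"
  sed -i "/^\[$section\]/a $key = $value" "$file"
}

conf=$(mktemp)
printf '[DEFAULT]\n' > "$conf"
iniset "$conf" database connection 'mysql+pymysql://glance:glance@CT01/glance'
iniset "$conf" paste_deploy flavor keystone
grep -A1 '^\[database\]' "$conf"
```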

4.2. /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
# Comment out everything else in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

5. Populate the database and restart the services

sudo su -s /bin/sh -c "glance-manage db_sync" glance

service glance-registry restart
service glance-api restart

6. Download a system image and upload it

. admin-openrc

wget -O cirros-0.3.5-x86_64-disk.img http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

openstack image list
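Before uploading, it is easy to sanity-check that the file really is qcow2, the format passed to `--disk-format`: a qcow2 header starts with the magic bytes `QFI\xfb`. The sketch below uses stand-in bytes so it runs even without the real image (it compares only the printable `QFI` prefix):

```shell
# A qcow2 header starts with the 4 magic bytes "QFI\xfb" (0x51 0x46 0x49 0xfb);
# comparing the printable 3-byte prefix is enough for a quick check
is_qcow2() { [ "$(head -c 3 "$1")" = "QFI" ]; }

img=$(mktemp)
printf 'QFI\373fake-header-bytes' > "$img"   # stand-in for a real image file
if is_qcow2 "$img"; then echo "looks like qcow2"; fi
```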

Building a Private Cloud with OpenStack (02)

This part installs the Keystone service. Keystone manages identity and authorization for the whole OpenStack deployment. All steps run on CT01 only.

1. Install MySQL and PyMySQL

# Install MySQL
apt-get install mysql-server

# Edit the configuration file
vi /etc/mysql/my.cnf
# Add the following lines
[client]
default-character-set=utf8
[mysqld]
character-set-server=utf8
 
# Restart MySQL
/etc/init.d/mysql restart

# Install PyMySQL
pip install pymysql

2. Install RabbitMQ

# Install
apt install rabbitmq-server

# Create the user and grant full configure/write/read permissions
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

3. Install Memcached

# Install
apt install memcached python-memcache

# Edit the configuration file (change the listen address)
vi /etc/memcached.conf
-l CT01

# Restart the service
service memcached restart

4. Create the Keystone database

CREATE DATABASE keystone CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

5. Install Keystone

apt install keystone

6. Edit the Keystone configuration file
/etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:keystone@CT01/keystone
[token]
provider = fernet

7. Initialize

# Populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet keys and bootstrap the identity service
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password bootstrap \
  --bootstrap-admin-url http://CT01:35357/v3/ \
  --bootstrap-internal-url http://CT01:5000/v3/ \
  --bootstrap-public-url http://CT01:5000/v3/ \
  --bootstrap-region-id Region01

# Remove the default SQLite database, which is no longer needed
rm -f /var/lib/keystone/keystone.db

# Run the configuration step
keystone-install-configure

8. Run the following commands

export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3

9. Create projects, a user, and a role

openstack project create --domain default --description "service os project" serviceproject
openstack project create --domain default --description "user os project" userproject

openstack user create --domain default --password-prompt user01
openstack role create user
openstack role add --project userproject --user user01 user

10. Disable admin_token_auth
/etc/keystone/keystone-paste.ini

# Remove admin_token_auth from the following pipeline sections
[pipeline:public_api], [pipeline:admin_api], [pipeline:api_v3]

11. Verify the installation

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://CT01:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://CT01:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name userproject --os-username user01 token issue

12. Write two credential scripts
12.1. admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.2. user01-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=userproject
export OS_USERNAME=user01
export OS_PASSWORD=user01
export OS_AUTH_URL=http://CT01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.3. Verify

. admin-openrc
openstack token issue

. user01-openrc
openstack token issue
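An openrc file is nothing more than a list of export statements; `. admin-openrc` sources it into the current shell, which is what makes the variables visible to the openstack client. A self-contained sketch with a throwaway file:

```shell
# Write a throwaway openrc and source it, exactly as ". admin-openrc" does
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
EOF
. "$rc"
echo "$OS_USERNAME $OS_PROJECT_NAME $OS_IDENTITY_API_VERSION"
```

Because the file is sourced (not executed), the exports land in the calling shell; running it as `./admin-openrc` would set the variables only in a child process and have no effect.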

Building a Private Cloud with OpenStack (01)

1. Common components

openstackclient  command-line client
keystone         identity and access management
glance           image management
nova             compute
placement        resource tracking
neutron          virtual networking
cinder           block storage
swift            object storage

2. Host planning
Five VMs in total: one controller, one compute node (hardware virtualization must be enabled), one block-storage node, and two object-storage nodes.
Each VM has two NICs: a host-only NIC for internal traffic and a NAT NIC for installing software.

HostName   HostOnly IP   NAT IP
CT01       10.0.3.10     172.16.172.70
PC01       10.0.3.11     172.16.172.71
BS01       10.0.3.12     172.16.172.72
OS01       10.0.3.13     172.16.172.73
OS02       10.0.3.14     172.16.172.74

3. IP and hostname configuration
Using the controller as the example; configure every node.
/etc/hostname

CT01

/etc/hosts

10.0.3.10   CT01
10.0.3.11   PC01
10.0.3.12   BS01
10.0.3.13   OS01
10.0.3.14   OS02
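The /etc/hosts block just restates the planning table; generating it from one list keeps the two from drifting apart. A sketch:

```shell
# Generate hosts-file lines from the host/IP plan
hosts=$(while read -r name ip; do
  printf '%s\t%s\n' "$ip" "$name"
done <<'EOF'
CT01 10.0.3.10
PC01 10.0.3.11
BS01 10.0.3.12
OS01 10.0.3.13
OS02 10.0.3.14
EOF
)
printf '%s\n' "$hosts"
```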

/etc/network/interfaces

#hostonly
auto enp0s3
iface enp0s3 inet static
address 10.0.3.10
netmask 255.255.255.0

#nat
auto enp0s8
iface enp0s8 inet static
address 172.16.172.70
netmask 255.255.0.0
dns-nameserver 8.8.8.8
dns-nameserver 114.114.114.114

4. System upgrade
Run on every node

apt install software-properties-common
add-apt-repository cloud-archive:ocata
apt update
apt dist-upgrade

5. Time synchronization
5.1. Controller node

# Install chrony
apt install chrony

# Edit the configuration file and change the following lines
vi /etc/chrony/chrony.conf
server 52.187.51.163 iburst
allow 10.0.3.0/24
allow 172.16.172.0/24

# Restart the service and verify the time sources
service chrony restart
chronyc sources

5.2. Other nodes

# Install chrony
apt install chrony

# Edit the configuration file and change the following line
vi /etc/chrony/chrony.conf
server CT01 iburst

# Restart the service and verify the time sources
service chrony restart
chronyc sources
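The one-line edit to chrony.conf can also be scripted; a sed sketch against an inline copy of the relevant line:

```shell
# Point an existing "server" line at CT01, as done by hand above
f=$(mktemp)
printf 'server 52.187.51.163 iburst\n' > "$f"
sed -i 's/^server .*/server CT01 iburst/' "$f"
cat "$f"
```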

6. Install python-openstackclient
Run on every node

apt install python-openstackclient

Setting Up a Private Docker Registry

1. Install the registry

# sudo apt-get install docker docker-registry

2. Push an image
2.1. Allow HTTP on the client

$ sudo vi /etc/default/docker
# Add this line
DOCKER_OPTS="--insecure-registry 192.168.130.191:5000"

2.2. Push the image

# List local images
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB

# Tag the image for the private registry
$ sudo docker tag elasticsearch:5.1 192.168.130.191:5000/elasticsearch:5.1

# List images again
$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                        5.1                 747929f3b12a        2 weeks ago         352.6 MB
192.168.130.191:5000/elasticsearch   5.1                 747929f3b12a        2 weeks ago         352.6 MB

# Push the image
$ sudo docker push 192.168.130.191:5000/elasticsearch:5.1
The push refers to a repository [192.168.130.191:5000/elasticsearch]
cea33faf9668: Pushed
c3707daa9b07: Pushed
a56b404460eb: Pushed
5e48ecb24792: Pushed
f86173bb67f3: Pushed
c87433dfa8d7: Pushed
c9dbd14c23f0: Pushed
b5b4ba1cb64d: Pushed
15ba1125d6c0: Pushed
bd25fcff1b2c: Pushed
8d9c6e6ceb37: Pushed
bc3b6402e94c: Pushed
223c0d04a137: Pushed
fe4c16cbf7a4: Pushed
5.1: digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0 size: 3248

3. Query images

$ curl -X GET http://192.168.130.191:5000/v2/_catalog
{"repositories":["alpine","elasticsearch","jetty","mongo","mysql","nginx","openjdk","redis","registry","ubuntu","zookeeper"]}

$ curl -X GET http://192.168.130.191:5000/v2/elasticsearch/tags/list
{"name":"elasticsearch","tags":["5.1"]}
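The v2 API answers in JSON. For a quick look, the tag list can be pulled out with sed (jq is the cleaner tool if it is installed); the response string is inlined from above:

```shell
# Pull the tag list out of the (inlined) v2 tags/list response
resp='{"name":"elasticsearch","tags":["5.1"]}'
tags=$(printf '%s' "$resp" | sed 's/.*"tags":\[\([^]]*\)\].*/\1/' | tr -d '"')
echo "$tags"
```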

# The queries below always return 404: the v2 registry API has no search endpoint, and `docker search` only works against Docker Hub
$ curl -X GET http://192.168.130.191:5000/v2/search?q=elasticsearch
$ sudo docker search 192.168.130.191:5000/elasticsearch

4. Pull the image

$ sudo docker pull 192.168.130.191:5000/elasticsearch:5.1
5.1: Pulling from elasticsearch
386a066cd84a: Pull complete
75ea84187083: Pull complete
3e2e387eb26a: Pull complete
eef540699244: Pull complete
1624a2f8d114: Pull complete
7018f4ec6e0a: Pull complete
6ca3bc2ad3b3: Pull complete
424638b495a6: Pull complete
2ff72d0b7bea: Pull complete
d0d6a2049bf2: Pull complete
003b957bd67f: Pull complete
14d23bc515af: Pull complete
923836f4bd50: Pull complete
c0b5750bf0f7: Pull complete
Digest: sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0
Status: Downloaded newer image for 192.168.130.191:5000/elasticsearch:5.1

5. Delete an image

# Deletion is addressed by manifest digest (printed after push/pull above), not by tag,
# and the registry must be started with deletion enabled (e.g. REGISTRY_STORAGE_DELETE_ENABLED=true)
$ curl -X DELETE http://192.168.130.191:5000/v2/elasticsearch/manifests/sha256:14ec0b594c0bf1b007debc12e3a16a99aee74964724ac182bc851fec3fc5d2b0

See the registry project on GitHub for reference

The official elasticsearch Docker image fails to start

The official elasticsearch 5.1 Docker image errors out at startup:

ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

That is because vm.max_map_count is below the elasticsearch minimum of 262144. There are two ways to fix it:

# Takes effect immediately (lost on reboot)
sudo sysctl -w vm.max_map_count=262144
# Permanent
sudo vi /etc/sysctl.conf
# Add this line
vm.max_map_count=262144

# Reload the configuration
sudo sysctl -p
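The check elasticsearch performs can be sketched as a tiny shell function (262144 is the minimum it enforces), which is handy as a preflight test before starting the container:

```shell
# elasticsearch 5.x refuses to start unless vm.max_map_count >= 262144
meets_es_minimum() {
  [ "$1" -ge 262144 ]
}

# Current value on this machine (falls back to 0 if sysctl is unavailable)
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if meets_es_minimum "$current"; then
  echo "vm.max_map_count=$current is high enough"
else
  echo "vm.max_map_count=$current is too low, raise it to 262144"
fi
```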

Common Docker Image Commands (Compose)

1. Pull images

#ubuntu-16.04.1-server-amd64
sudo apt-get install docker
sudo apt-get install docker-compose
# Pull the images
sudo docker pull mysql:5.7
sudo docker pull redis:3.2
sudo docker pull mongo:3.4
sudo docker pull jetty:9.3-jre8
sudo docker pull nginx:1.11
sudo docker pull elasticsearch:5.1
sudo docker pull ubuntu:16.04

2. Create a network

sudo docker network create hiup

3. Compose files
3.1. yml, version 1

h01-mysql02:
  image: mysql:5.7
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-mysql02
  net: hiup
  ports:
    - "3306:3306"
  volumes:
   - /home/hiup/docker/data/mysql/var/lib/mysql:/var/lib/mysql
   - /home/hiup/docker/data/mysql/etc/mysql/conf.d:/etc/mysql/conf.d
  environment:
   - MYSQL_ROOT_PASSWORD=hiup
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-redis02:
  image: redis:3.2
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-redis02
  net: hiup
  volumes:
   - /home/hiup/docker/data/redis/etc/redis/:/etc/redis/
   - /home/hiup/docker/data/redis/data:/data
  ports:
   - "6379:6379"
  command: redis-server /etc/redis/redis.conf
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-mongo02:
  image: mongo:3.4
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-mongo02
  net: hiup
  ports:
   - "27017:27017"
  volumes:
   - /home/hiup/docker/data/mongo/etc/mongod.conf:/etc/mongod.conf
   - /home/hiup/docker/data/mongo/data/db:/data/db
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-jetty02:
  image: jetty:9.3-jre8
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-jetty02
  net: hiup
  ports:
   - "8080:8080"
  volumes:
   - /home/hiup/docker/data/jetty/usr/local/jetty/etc:/usr/local/jetty/etc
   - /home/hiup/docker/data/jetty/webapps:/var/lib/jetty/webapps
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-nginx02:
  image: nginx:1.11
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-nginx02
  net: hiup
  ports:
   - "80:80"
  volumes:
   - /home/hiup/docker/data/nginx/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
   - /home/hiup/docker/data/nginx/usr/share/nginx/html:/usr/share/nginx/html
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-es02:
  image: elasticsearch:5.1
  mem_limit: 640m
  cpu_shares: 100
  tty: true
  hostname: h01-es02
  net: hiup
  ports:
   - "9200:9200"
   - "9300:9300"
  volumes:
   - /home/hiup/docker/data/es/usr/share/elasticsearch/config:/usr/share/elasticsearch/config
   - /home/hiup/docker/data/es/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: elasticsearch
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

h01-ubuntu02:
  image: ubuntu:16.04
  mem_limit: 128m
  cpu_shares: 100
  tty: true
  hostname: h01-ubuntu02
  net: hiup
  #ports:
  #volumes:
  command: /bin/bash
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
    max-file: "10"

3.2. yml, version 2
Compared with version 1: everything moves under a top-level `services:` key, `net` becomes `network_mode`, and `log_driver`/`log_opt` merge into a `logging` block.

version: '2'
services:
  h01-mysql02:
    image: mysql:5.7
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-mysql02
    network_mode: hiup
    ports:
      - "3306:3306"
    volumes:
     - /home/hiup/docker/data/mysql/var/lib/mysql:/var/lib/mysql
     - /home/hiup/docker/data/mysql/etc/mysql/conf.d:/etc/mysql/conf.d
    environment:
     - MYSQL_ROOT_PASSWORD=hiup
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-redis02:
    image: redis:3.2
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-redis02
    network_mode: hiup
    volumes:
     - /home/hiup/docker/data/redis/etc/redis/:/etc/redis/
     - /home/hiup/docker/data/redis/data:/data
    ports:
     - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-mongo02:
    image: mongo:3.4
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-mongo02
    network_mode: hiup
    ports:
     - "27017:27017"
    volumes:
     - /home/hiup/docker/data/mongo/etc/mongod.conf:/etc/mongod.conf
     - /home/hiup/docker/data/mongo/data/db:/data/db
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-jetty02:
    image: jetty:9.3-jre8
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-jetty02
    network_mode: hiup
    ports:
     - "8080:8080"
    volumes:
     - /home/hiup/docker/data/jetty/usr/local/jetty/etc:/usr/local/jetty/etc
     - /home/hiup/docker/data/jetty/webapps:/var/lib/jetty/webapps
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-nginx02:
    image: nginx:1.11
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-nginx02
    network_mode: hiup
    ports:
     - "80:80"
    volumes:
     - /home/hiup/docker/data/nginx/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
     - /home/hiup/docker/data/nginx/usr/share/nginx/html:/usr/share/nginx/html
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-es02:
    image: elasticsearch:5.1
    mem_limit: 640m
    cpu_shares: 100
    tty: true
    hostname: h01-es02
    network_mode: hiup
    ports:
     - "9200:9200"
     - "9300:9300"
    volumes:
     - /home/hiup/docker/data/es/usr/share/elasticsearch/config:/usr/share/elasticsearch/config
     - /home/hiup/docker/data/es/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
    command: elasticsearch
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  
  h01-ubuntu02:
    image: ubuntu:16.04
    mem_limit: 128m
    cpu_shares: 100
    tty: true
    hostname: h01-ubuntu02
    network_mode: hiup
    ports:
    volumes:
    command: /bin/bash
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"

4. Run

sudo docker-compose up -d

5. Configuration directory layout

.
├── es
│   └── usr
│       └── share
│           └── elasticsearch
│               ├── config
│               │   ├── elasticsearch.yml
│               │   ├── log4j2.properties
│               │   └── scripts
│               └── data
│                   └── nodes
│                       └── 0
│                           ├── node.lock
│                           └── _state
│                               ├── global-0.st
│                               └── node-0.st
├── jetty
│   ├── usr
│   │   └── local
│   │       └── jetty
│   │           └── etc
│   │               ├── example-quickstart.xml
│   │               ├── gcloud-memcached-session-context.xml
│   │               ├── gcloud-session-context.xml
│   │               ├── hawtio.xml
│   │               ├── home-base-warning.xml
│   │               ├── jamon.xml
│   │               ├── jdbcRealm.properties
│   │               ├── jetty-alpn.xml
│   │               ├── jetty-annotations.xml
│   │               ├── jetty-cdi.xml
│   │               ├── jetty.conf
│   │               ├── jetty-debuglog.xml
│   │               ├── jetty-debug.xml
│   │               ├── jetty-deploy.xml
│   │               ├── jetty-gcloud-memcached-sessions.xml
│   │               ├── jetty-gcloud-session-idmgr.xml
│   │               ├── jetty-gcloud-sessions.xml
│   │               ├── jetty-gzip.xml
│   │               ├── jetty-http2c.xml
│   │               ├── jetty-http2.xml
│   │               ├── jetty-http-forwarded.xml
│   │               ├── jetty-https.xml
│   │               ├── jetty-http.xml
│   │               ├── jetty-infinispan.xml
│   │               ├── jetty-ipaccess.xml
│   │               ├── jetty-jaas.xml
│   │               ├── jetty-jdbc-sessions.xml
│   │               ├── jetty-jmx-remote.xml
│   │               ├── jetty-jmx.xml
│   │               ├── jetty-logging.xml
│   │               ├── jetty-lowresources.xml
│   │               ├── jetty-monitor.xml
│   │               ├── jetty-nosql.xml
│   │               ├── jetty-plus.xml
│   │               ├── jetty-proxy-protocol-ssl.xml
│   │               ├── jetty-proxy-protocol.xml
│   │               ├── jetty-proxy.xml
│   │               ├── jetty-requestlog.xml
│   │               ├── jetty-rewrite-customizer.xml
│   │               ├── jetty-rewrite.xml
│   │               ├── jetty-setuid.xml
│   │               ├── jetty-spring.xml
│   │               ├── jetty-ssl-context.xml
│   │               ├── jetty-ssl.xml
│   │               ├── jetty-started.xml
│   │               ├── jetty-stats.xml
│   │               ├── jetty-threadlimit.xml
│   │               ├── jetty.xml
│   │               ├── jminix.xml
│   │               ├── jolokia.xml
│   │               ├── krb5.ini
│   │               ├── README.spnego
│   │               ├── rewrite-compactpath.xml
│   │               ├── spnego.conf
│   │               ├── spnego.properties
│   │               └── webdefault.xml
│   └── webapps
│       └── jvmjsp.war
├── mongo
│   ├── data
│   │   └── db
│   │       ├── collection-0-4376730799513530636.wt
│   │       ├── collection-2-4376730799513530636.wt
│   │       ├── collection-5-4376730799513530636.wt
│   │       ├── diagnostic.data
│   │       │   └── metrics.2016-12-27T08-57-50Z-00000
│   │       ├── index-1-4376730799513530636.wt
│   │       ├── index-3-4376730799513530636.wt
│   │       ├── index-4-4376730799513530636.wt
│   │       ├── index-6-4376730799513530636.wt
│   │       ├── journal
│   │       │   ├── WiredTigerLog.0000000001
│   │       │   ├── WiredTigerPreplog.0000000001
│   │       │   └── WiredTigerPreplog.0000000002
│   │       ├── _mdb_catalog.wt
│   │       ├── mongod.lock
│   │       ├── sizeStorer.wt
│   │       ├── storage.bson
│   │       ├── WiredTiger
│   │       ├── WiredTigerLAS.wt
│   │       ├── WiredTiger.lock
│   │       ├── WiredTiger.turtle
│   │       └── WiredTiger.wt
│   └── etc
│       └── mongod.conf
├── mysql
│   ├── etc
│   │   └── mysql
│   │       └── conf.d
│   │           ├── docker.cnf
│   │           └── mysql.cnf
│   └── var
│       └── lib
│           └── mysql
│               ├── auto.cnf
│               ├── ib_buffer_pool
│               ├── ibdata1
│               ├── ib_logfile0
│               ├── ib_logfile1
│               ├── mysql [error opening dir]
│               ├── performance_schema [error opening dir]
│               └── sys [error opening dir]
├── nginx
│   ├── etc
│   │   └── nginx
│   │       └── nginx.conf
│   └── usr
│       └── share
│           └── nginx
│               └── html
│                   ├── 50x.html
│                   └── index.html
└── redis
    ├── data
    │   └── dump.rdb
    └── etc
        └── redis
            ├── redis.conf
            └── sentinel.conf

6. Install local client tools

sudo apt-get install mysql-client
sudo apt-get install redis-tools
sudo apt-get install mongodb-clients
sudo apt-get install curl

Common Docker Image Commands (Shell)

1. Pull images

#ubuntu-16.04.1-server-amd64
sudo apt-get install docker
sudo apt-get install docker-compose
# Pull the images
sudo docker pull mysql:5.7
sudo docker pull redis:3.2
sudo docker pull mongo:3.4
sudo docker pull jetty:9.3-jre8
sudo docker pull nginx:1.11
sudo docker pull elasticsearch:5.1
sudo docker pull ubuntu:16.04

2. Create a network

sudo docker network create hiup

3. Start the containers
3.1. First start

#mysql
sudo docker run --net=hiup --name h01-mysql01 -h h01-mysql01 -p3306:3306 -c 100 -m 128m -e MYSQL_ROOT_PASSWORD=hiup -v /home/hiup/docker/data/mysql/var/lib/mysql:/var/lib/mysql -v /home/hiup/docker/data/mysql/etc/mysql/conf.d:/etc/mysql/conf.d -itd mysql:5.7

#redis
sudo docker run --net=hiup --name h01-redis01 -h h01-redis01 -p6379:6379 -c 100 -m 128m  -v /home/hiup/docker/data/redis/etc/redis/:/etc/redis/ -v /home/hiup/docker/data/redis/data:/data -itd redis:3.2 redis-server /etc/redis/redis.conf
# The option below enables persistence:
#redis-server --appendonly yes

#mongodb
sudo docker run --net=hiup --name h01-mongo01 -h h01-mongo01 -p27017:27017 -c 100 -m 128m -v /home/hiup/docker/data/mongo/etc/mongod.conf:/etc/mongod.conf -v /home/hiup/docker/data/mongo/data/db:/data/db -itd mongo:3.4
# Enable authentication with:
#--auth

#jetty
sudo docker run --net=hiup --name h01-jetty01 -h h01-jetty01 -p8080:8080 -c 100 -m 128m -v /home/hiup/docker/data/jetty/usr/local/jetty/etc:/usr/local/jetty/etc -v /home/hiup/docker/data/jetty/webapps:/var/lib/jetty/webapps -itd jetty:9.3-jre8
# Default environment variables
#JETTY_HOME    =  /usr/local/jetty
#JETTY_BASE    =  /var/lib/jetty
#TMPDIR        =  /tmp/jetty
#Deploy dir is /var/lib/jetty/webapps
# Memory settings
#-e JAVA_OPTIONS="-Xmx1g"
# List the available configuration
#--list-config

#nginx
sudo docker run --net=hiup --name h01-nginx01 -h h01-nginx01 -p80:80 -c 100 -m 128m -v /home/hiup/docker/data/nginx/etc/nginx/nginx.conf:/etc/nginx/nginx.conf -v /home/hiup/docker/data/nginx/usr/share/nginx/html:/usr/share/nginx/html -itd nginx:1.11

#elasticsearch
sudo docker run --net=hiup --name h01-es01 -h h01-es01 -p9200:9200 -p9300:9300 -c 100 -m 640m -v /home/hiup/docker/data/es/usr/share/elasticsearch/config:/usr/share/elasticsearch/config -v /home/hiup/docker/data/es/usr/share/elasticsearch/data:/usr/share/elasticsearch/data -itd elasticsearch:5.1

#ubuntu
sudo docker run --net=hiup --name h01-ubuntu01 -h h01-ubuntu01 -c 100 -m 128m -itd ubuntu:16.04
sudo docker attach h01-ubuntu01

3.2. Subsequent starts (n > 1)

sudo docker start h01-mysql01
sudo docker start h01-redis01
sudo docker start h01-mongo01
sudo docker start h01-jetty01
sudo docker start h01-nginx01
sudo docker start h01-es01
sudo docker start h01-ubuntu01

4. Configuration directory layout

.
├── es
│   └── usr
│       └── share
│           └── elasticsearch
│               ├── config
│               │   ├── elasticsearch.yml
│               │   ├── log4j2.properties
│               │   └── scripts
│               └── data
│                   └── nodes
│                       └── 0
│                           ├── node.lock
│                           └── _state
│                               ├── global-0.st
│                               └── node-0.st
├── jetty
│   ├── usr
│   │   └── local
│   │       └── jetty
│   │           └── etc
│   │               ├── example-quickstart.xml
│   │               ├── gcloud-memcached-session-context.xml
│   │               ├── gcloud-session-context.xml
│   │               ├── hawtio.xml
│   │               ├── home-base-warning.xml
│   │               ├── jamon.xml
│   │               ├── jdbcRealm.properties
│   │               ├── jetty-alpn.xml
│   │               ├── jetty-annotations.xml
│   │               ├── jetty-cdi.xml
│   │               ├── jetty.conf
│   │               ├── jetty-debuglog.xml
│   │               ├── jetty-debug.xml
│   │               ├── jetty-deploy.xml
│   │               ├── jetty-gcloud-memcached-sessions.xml
│   │               ├── jetty-gcloud-session-idmgr.xml
│   │               ├── jetty-gcloud-sessions.xml
│   │               ├── jetty-gzip.xml
│   │               ├── jetty-http2c.xml
│   │               ├── jetty-http2.xml
│   │               ├── jetty-http-forwarded.xml
│   │               ├── jetty-https.xml
│   │               ├── jetty-http.xml
│   │               ├── jetty-infinispan.xml
│   │               ├── jetty-ipaccess.xml
│   │               ├── jetty-jaas.xml
│   │               ├── jetty-jdbc-sessions.xml
│   │               ├── jetty-jmx-remote.xml
│   │               ├── jetty-jmx.xml
│   │               ├── jetty-logging.xml
│   │               ├── jetty-lowresources.xml
│   │               ├── jetty-monitor.xml
│   │               ├── jetty-nosql.xml
│   │               ├── jetty-plus.xml
│   │               ├── jetty-proxy-protocol-ssl.xml
│   │               ├── jetty-proxy-protocol.xml
│   │               ├── jetty-proxy.xml
│   │               ├── jetty-requestlog.xml
│   │               ├── jetty-rewrite-customizer.xml
│   │               ├── jetty-rewrite.xml
│   │               ├── jetty-setuid.xml
│   │               ├── jetty-spring.xml
│   │               ├── jetty-ssl-context.xml
│   │               ├── jetty-ssl.xml
│   │               ├── jetty-started.xml
│   │               ├── jetty-stats.xml
│   │               ├── jetty-threadlimit.xml
│   │               ├── jetty.xml
│   │               ├── jminix.xml
│   │               ├── jolokia.xml
│   │               ├── krb5.ini
│   │               ├── README.spnego
│   │               ├── rewrite-compactpath.xml
│   │               ├── spnego.conf
│   │               ├── spnego.properties
│   │               └── webdefault.xml
│   └── webapps
│       └── jvmjsp.war
├── mongo
│   ├── data
│   │   └── db
│   │       ├── collection-0-4376730799513530636.wt
│   │       ├── collection-2-4376730799513530636.wt
│   │       ├── collection-5-4376730799513530636.wt
│   │       ├── diagnostic.data
│   │       │   └── metrics.2016-12-27T08-57-50Z-00000
│   │       ├── index-1-4376730799513530636.wt
│   │       ├── index-3-4376730799513530636.wt
│   │       ├── index-4-4376730799513530636.wt
│   │       ├── index-6-4376730799513530636.wt
│   │       ├── journal
│   │       │   ├── WiredTigerLog.0000000001
│   │       │   ├── WiredTigerPreplog.0000000001
│   │       │   └── WiredTigerPreplog.0000000002
│   │       ├── _mdb_catalog.wt
│   │       ├── mongod.lock
│   │       ├── sizeStorer.wt
│   │       ├── storage.bson
│   │       ├── WiredTiger
│   │       ├── WiredTigerLAS.wt
│   │       ├── WiredTiger.lock
│   │       ├── WiredTiger.turtle
│   │       └── WiredTiger.wt
│   └── etc
│       └── mongod.conf
├── mysql
│   ├── etc
│   │   └── mysql
│   │       └── conf.d
│   │           ├── docker.cnf
│   │           └── mysql.cnf
│   └── var
│       └── lib
│           └── mysql
│               ├── auto.cnf
│               ├── ib_buffer_pool
│               ├── ibdata1
│               ├── ib_logfile0
│               ├── ib_logfile1
│               ├── mysql [error opening dir]
│               ├── performance_schema [error opening dir]
│               └── sys [error opening dir]
├── nginx
│   ├── etc
│   │   └── nginx
│   │       └── nginx.conf
│   └── usr
│       └── share
│           └── nginx
│               └── html
│                   ├── 50x.html
│                   └── index.html
└── redis
    ├── data
    │   └── dump.rdb
    └── etc
        └── redis
            ├── redis.conf
            └── sentinel.conf

5. Install local client tools

sudo apt-get install mysql-client
sudo apt-get install redis-tools
sudo apt-get install mongodb-clients
sudo apt-get install curl

Docker Networking Modes

1. Common Docker networking concepts
Port expose:
an image declares that it listens on a given port

Port binding:
maps a container port to a port on the host

Linking:
if container B links to container A, then B can reach A

Network:
the container's networking mode

2. Docker's built-in networking modes:
none: no networking
host: shares the host's network stack
bridge: routes through docker0
container: shares another container's network namespace; containers sharing one can reach each other
user-defined network: containers on the same network can reach each other and know each other's hostnames
user-defined overlay network: mainly for traffic between containers on different hosts

Sleepy; this is just an outline for now...

Building a Docker Image from Scratch (Part 2)

First, to be clear once more: the right way to do this is to generate the image you need with tooling; try not to roll it by hand.
I did it by hand anyway, purely to tinker.

The goal of part two is to add the common Linux networking tools on top of neodeb01.

1. Create a directory neodeb02 and put the files you want inside the Docker image under it. My layout, for example:

├── bin
│   ├── dnsdomainname
│   ├── domainname
│   ├── ip
│   ├── netstat
│   ├── ping
│   └── ping6
├── build.sh
├── Dockerfile
├── etc
│   ├── hosts
│   ├── network
│   │   └── interfaces
│   ├── resolvconf
│   │   └── update-libc.d
│   │       └── avahi-daemon
│   └── resolv.conf
├── lib
│   ├── libip4tc.so.0
│   ├── libip4tc.so.0.1.0
│   ├── libip6tc.so.0
│   ├── libip6tc.so.0.1.0
│   ├── libxtables.so.10
│   ├── libxtables.so.10.0.0
│   └── x86_64-linux-gnu
│       ├── libcom_err.so.2
│       ├── libcom_err.so.2.1
│       ├── libdns-export.so.100
│       ├── libgcc_s.so.1
│       ├── libgnutls-deb0.so.28
│       ├── libgnutls-deb0.so.28.41.0
│       ├── libirs-export.so.91
│       ├── libirs-export.so.91.0.0
│       ├── libisccfg-export.so.90
│       ├── libisccfg-export.so.90.1.0
│       ├── libisc-export.so.95
│       ├── libisc-export.so.95.5.0
│       ├── libkeyutils.so.1
│       ├── libkeyutils.so.1.5
│       ├── liblzma.so.5
│       ├── liblzma.so.5.0.0
│       ├── libnss_dns-2.19.so
│       ├── libnss_dns.so.2
│       ├── libresolv-2.19.so
│       └── libresolv.so.2
├── sbin
│   ├── dhclient
│   ├── ifconfig
│   ├── ifdown
│   ├── ifup
│   ├── ip
│   ├── iptables
│   └── route
└── usr
    ├── bin
    │   ├── base64
    │   ├── host
    │   ├── nslookup
    │   ├── traceroute
    │   ├── traceroute6
    │   ├── wget
    │   └── whois
    ├── lib
    │   ├── libdns.so.100
    │   ├── libdns.so.100.2.2
    │   ├── libisccc.so.90
    │   ├── libisccc.so.90.0.6
    │   ├── libisccfg.so.90
    │   ├── libisccfg.so.90.1.0
    │   ├── libisc.so.95
    │   ├── libisc.so.95.5.0
    │   ├── liblwres.so.90
    │   ├── liblwres.so.90.0.7
    │   └── x86_64-linux-gnu
    │       ├── libbind9.so.90
    │       ├── libbind9.so.90.0.9
    │       ├── libcrypto.a
    │       ├── libcrypto.so
    │       ├── libcrypto.so.1.0.0
    │       ├── libffi.so.6
    │       ├── libffi.so.6.0.2
    │       ├── libGeoIP.so.1
    │       ├── libGeoIP.so.1.6.2
    │       ├── libgnutls-openssl.so.27
    │       ├── libgnutls-openssl.so.27.0.2
    │       ├── libgssapi_krb5.so.2
    │       ├── libgssapi_krb5.so.2.2
    │       ├── libhogweed.so.2
    │       ├── libhogweed.so.2.5
    │       ├── libicudata.so.52
    │       ├── libicudata.so.52.1
    │       ├── libicuuc.so.52
    │       ├── libicuuc.so.52.1
    │       ├── libidn.so.11
    │       ├── libidn.so.11.6.12
    │       ├── libk5crypto.so.3
    │       ├── libk5crypto.so.3.1
    │       ├── libkrb5.so.26
    │       ├── libkrb5.so.26.0.0
    │       ├── libkrb5.so.3
    │       ├── libkrb5.so.3.3
    │       ├── libkrb5support.so.0
    │       ├── libkrb5support.so.0.1
    │       ├── libnettle.so.4
    │       ├── libnettle.so.4.7
    │       ├── libp11-kit.so.0
    │       ├── libp11-kit.so.0.0.0
    │       ├── libpsl.so.0
    │       ├── libpsl.so.0.2.2
    │       ├── libstdc++.so.6
    │       ├── libstdc++.so.6.0.20
    │       ├── libtasn1.so.6
    │       ├── libtasn1.so.6.3.2
    │       ├── libxml2.so.2
    │       ├── libxml2.so.2.9.1
    │       └── openssl-1.0.0
    │           └── engines
    │               ├── lib4758cca.so
    │               ├── libaep.so
    │               ├── libatalla.so
    │               ├── libcapi.so
    │               ├── libchil.so
    │               ├── libcswift.so
    │               ├── libgmp.so
    │               ├── libgost.so
    │               ├── libnuron.so
    │               ├── libpadlock.so
    │               ├── libsureware.so
    │               └── libubsec.so
    └── sbin
        ├── arp
        └── arpd

2. The Dockerfile

FROM	neodeb01
ENV	PATH /bin:/sbin:/usr/bin:/usr/sbin
COPY	.	/
CMD	/bin/bash

3. build.sh

#!/bin/sh
sudo docker build -t neodeb02 .

4. .dockerignore

Dockerfile
build.sh
*.swp

5. Build the image and run it

sudo docker build -t neodeb02 .
sudo docker run -it neodeb02

PS:
If nslookup and similar tools complain that they cannot initialize the security plugin, the cause is usually that this directory is missing:
/usr/lib/x86_64-linux-gnu/openssl-1.0.0

PS1:
如果你遇到了可以解析域名,可以ping通ip,但无法ping通域名的时候,除了修改常用的一些网络配置文件。
可以尝试增加libnss_dns。
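The reason ldd never flags this library: glibc loads NSS modules such as libnss_dns with dlopen at runtime, so they are invisible to static dependency listing and must be copied explicitly. A minimal sketch, assuming a glibc host and the neodeb02 tree from step 1 (the multiarch path is Debian/Ubuntu x86_64 flavored; adjust for your distribution):

```shell
#!/bin/sh
# Sketch: locate libnss_dns on the host and copy it into the image
# tree. The search list covers common glibc layouts.
TREE=neodeb02
DEST="$TREE/lib/x86_64-linux-gnu"

mkdir -p "$DEST"
for d in /lib/x86_64-linux-gnu /lib64 /lib /usr/lib; do
    if [ -f "$d/libnss_dns.so.2" ]; then
        cp "$d/libnss_dns.so.2" "$DEST/"
        break
    fi
done
```

The tree's etc/nsswitch.conf must also list dns on its hosts: line, or the module is never consulted.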

Building a Docker Image from Scratch (Part 1)

I recently tried building a Docker image from scratch. Overall it is not hard: take the files you need from a Linux machine that is running well and put them into the Docker image. The proper approach is to use a tool that generates the image for you, rather than fiddling by hand, but I did it by hand precisely because I wanted to tinker.

The goal of part one is to get basic Linux functionality working.

1. Create a directory named neodeb01 and put the files you want inside the Docker image under it. My layout, for example, looks like this:

├── bin
│   ├── bash
│   ├── cat
│   ├── chmod
│   ├── chown
│   ├── cp
│   ├── date
│   ├── dd
│   ├── df
│   ├── dir
│   ├── echo
│   ├── egrep
│   ├── false
│   ├── grep
│   ├── hostname
│   ├── kill
│   ├── less
│   ├── ln
│   ├── login
│   ├── ls
│   ├── mkdir
│   ├── more
│   ├── mount
│   ├── mv
│   ├── ps
│   ├── pwd
│   ├── rm
│   ├── rmdir
│   ├── sed
│   ├── sh
│   ├── sleep
│   ├── su
│   ├── systemd
│   ├── touch
│   ├── true
│   ├── umount
│   ├── uname
│   └── which
├── boot
├── build.sh
├── dev
├── Dockerfile
├── etc
│   ├── bash.bashrc
│   ├── default
│   │   ├── cron
│   │   ├── locale
│   │   ├── networking
│   │   └── useradd
│   ├── group
│   ├── group-
│   ├── gshadow
│   ├── gshadow-
│   ├── hostname
│   ├── init
│   │   └── networking.conf
│   ├── init.d
│   │   ├── hostname.sh
│   │   └── networking
│   ├── login.defs
│   ├── nsswitch.conf
│   ├── pam.conf
│   ├── pam.d
│   │   ├── atd
│   │   ├── chfn
│   │   ├── chpasswd
│   │   ├── chsh
│   │   ├── common-account
│   │   ├── common-auth
│   │   ├── common-password
│   │   ├── common-session
│   │   ├── common-session-noninteractive
│   │   ├── cron
│   │   ├── gdm-autologin
│   │   ├── gdm-launch-environment
│   │   ├── gdm-password
│   │   ├── login
│   │   ├── newusers
│   │   ├── other
│   │   ├── passwd
│   │   ├── polkit-1
│   │   ├── ppp
│   │   ├── runuser
│   │   ├── runuser-l
│   │   ├── sshd
│   │   ├── su
│   │   ├── sudo
│   │   └── systemd-user
│   ├── passwd
│   ├── passwd-
│   ├── profile
│   ├── security
│   │   ├── access.conf
│   │   ├── group.conf
│   │   ├── limits.conf
│   │   ├── limits.d
│   │   ├── namespace.conf
│   │   ├── namespace.d
│   │   ├── namespace.init
│   │   ├── opasswd
│   │   ├── pam_env.conf
│   │   ├── pwquality.conf
│   │   ├── sepermit.conf
│   │   └── time.conf
│   ├── services
│   ├── shadow
│   ├── shadow-
│   ├── skel
│   ├── subgid
│   └── subuid
├── home
├── lib
│   ├── terminfo
│   │   └── l
│   │       └── linux
│   └── x86_64-linux-gnu
│       ├── libacl.so.1
│       ├── libattr.so.1
│       ├── libaudit.so.1
│       ├── libaudit.so.1.0.0
│       ├── libblkid.so.1
│       ├── libbz2.so.1
│       ├── libbz2.so.1.0
│       ├── libbz2.so.1.0.4
│       ├── libcap.so.2
│       ├── libcrypt-2.19.so
│       ├── libcryptsetup.so.4
│       ├── libcryptsetup.so.4.6.0
│       ├── libcrypt.so.1
│       ├── libc.so.6
│       ├── libdl-2.19.so
│       ├── libdl.so.2
│       ├── libkmod.so.2
│       ├── libkmod.so.2.2.8
│       ├── libmount.so.1
│       ├── libm.so.6
│       ├── libncurses.so.5
│       ├── libncurses.so.5.9
│       ├── libnsl-2.19.so
│       ├── libnsl.so.1
│       ├── libnss_compat-2.19.so
│       ├── libnss_compat.so.2
│       ├── libpamc.so.0
│       ├── libpamc.so.0.82.1
│       ├── libpam_misc.so.0
│       ├── libpam_misc.so.0.82.0
│       ├── libpam.so.0
│       ├── libpam.so.0.83.1
│       ├── libpcre.so.3
│       ├── libprocps.so.3
│       ├── libpthread.so.0
│       ├── libreadline.so.6
│       ├── libreadline.so.6.3
│       ├── librt.so.1
│       ├── libselinux.so.1
│       ├── libsepol.so.1
│       ├── libtinfo.so.5
│       ├── libuuid.so.1
│       ├── libz.so.1
│       └── libz.so.1.2.8
├── lib64
│   └── ld-linux-x86-64.so.2
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── srv
├── sys
├── tmp
├── usr
│   ├── bin
│   │   ├── awk
│   │   ├── basename
│   │   ├── clear
│   │   ├── diff
│   │   ├── diff3
│   │   ├── du
│   │   ├── env
│   │   ├── find
│   │   ├── gawk
│   │   ├── id
│   │   ├── passwd
│   │   ├── size
│   │   ├── sort
│   │   ├── time
│   │   ├── vi
│   │   ├── which
│   │   ├── who
│   │   └── whoami
│   ├── lib
│   │   ├── libbfd-2.25-system.so
│   │   └── x86_64-linux-gnu
│   │       ├── libgmp.so.10
│   │       ├── libgmp.so.10.2.0
│   │       ├── libmpfr.so.4
│   │       ├── libmpfr.so.4.1.2
│   │       ├── libsemanage.so.1
│   │       ├── libsigsegv.so.2
│   │       ├── libsigsegv.so.2.0.3
│   │       └── libustr-1.0.so.1
│   └── sbin
│       ├── chgpasswd
│       ├── chpasswd
│       ├── chroot
│       ├── groupadd
│       ├── groupdel
│       ├── groupmod
│       ├── service
│       ├── useradd
│       ├── userdel
│       └── usermod
└── var
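Filling the bin and lib lists above by hand is tedious; ldd can do most of the work. A minimal sketch, assuming the files come from the build host and the target tree is named neodeb01 as above (/bin/sh stands in for any binary you want to ship):

```shell
#!/bin/sh
# Sketch: copy a binary plus every shared library ldd resolves for it
# into the image tree, preserving the directory layout so the dynamic
# loader can find them at runtime.
TREE=neodeb01
BIN=/bin/sh

mkdir -p "$TREE/bin"
cp "$BIN" "$TREE/bin/"

# ldd output has two shapes we care about:
#   libX.so => /path/libX.so (0x...)    -> take field 3
#   /lib64/ld-linux-x86-64.so.2 (0x..)  -> take field 1
ldd "$BIN" | awk '$2 == "=>" && $3 ~ /^\// {print $3}
                  $1 ~ /^\// {print $1}' |
while read -r lib; do
    mkdir -p "$TREE$(dirname "$lib")"
    cp "$lib" "$TREE$lib"
done
```

Libraries loaded at runtime with dlopen (NSS modules, OpenSSL engines) never show up in ldd output and still have to be copied by hand.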

2. The Dockerfile

FROM	scratch
ENV	PATH /bin:/sbin:/usr/bin:/usr/sbin:$PATH
COPY	.	/
RUN	/bin/ln -s /lib64/ld-linux-x86-64.so.2	lib/x86_64-linux-gnu/
CMD	/bin/bash

3. build.sh

#!/bin/sh
sudo docker build -t neodeb01 .

4. .dockerignore

Dockerfile
build.sh
*.swp

5. Build the image and run it

sudo docker build -t neodeb01 .
sudo docker run -it neodeb01

PS:
Be sure to put ld-linux-x86-64.so.2 into the image first; otherwise, no matter what command you run, the image tells you:

System error: no such file or directory

I wasted quite a bit of time on this one; lesson learned.

PS1:
If you hit the "I have no name!" error, it is usually caused by broken configuration files under etc: shadow, gshadow, group, and passwd.
Stripping out libnss_compat also triggers the same error.
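A minimal set of account files is enough to make the name lookup succeed. A sketch that writes a single root entry into the neodeb01 tree (field values are placeholders, not taken from a real system):

```shell
#!/bin/sh
# Sketch: minimal account files for the image tree. One root entry is
# enough for the shell prompt and `whoami` to resolve a user name.
TREE=neodeb01
mkdir -p "$TREE/etc"
printf 'root:x:0:0:root:/root:/bin/bash\n' > "$TREE/etc/passwd"
printf 'root:x:0:\n'                       > "$TREE/etc/group"
printf 'root:*:18000:0:99999:7:::\n'       > "$TREE/etc/shadow"
printf 'root:*::\n'                        > "$TREE/etc/gshadow"
# shadow files must not be world-readable
chmod 600 "$TREE/etc/shadow" "$TREE/etc/gshadow"
```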

PS2:
If you hit an "unknown terminal type" problem, two changes are needed. One is in .bashrc:

export TERM=linux

The other is to copy the entries you need from the /lib/terminfo directory into the image as well.
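Both fixes can be scripted. A sketch, assuming a Debian-style host where the entry lives under /lib/terminfo (other distributions use /usr/share/terminfo or /etc/terminfo, hence the fallbacks):

```shell
#!/bin/sh
# Sketch: set TERM in the image's .bashrc and copy the "linux"
# terminfo entry into the tree so ncurses programs (less, clear)
# can look it up.
TREE=neodeb01
mkdir -p "$TREE/root" "$TREE/lib/terminfo/l"
echo 'export TERM=linux' >> "$TREE/root/.bashrc"

# Search the usual terminfo locations on the host.
for d in /lib/terminfo /usr/share/terminfo /etc/terminfo; do
    if [ -f "$d/l/linux" ]; then
        cp "$d/l/linux" "$TREE/lib/terminfo/l/"
        break
    fi
done
```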