About neohope

Still working hard, and never thought about giving up...

OpenKM 6 Setup Guide

Recently I noticed that document management at my company is quite chaotic, so I set out to find a document management tool.

I looked at several categories of tools: first, wikis; second, content management systems (CMS); third, document management systems (DMS); fourth, enterprise document management systems (EDMS); and fifth, online collaboration software.

Our requirements were:
1. Must handle a large number of existing legacy documents
2. Documents must stay on premises, not in the cloud
3. Preferably with built-in version management and search
4. Preferably free

After reviewing the mainstream vendors in each category, only the DMS products turned out to be a good fit for us:
1. Wikis and most CMS products expect everyone to write with their online editors; we have a large number of Word and Excel documents that are hard to import directly
2. Online collaboration software hosts documents in the cloud, which does not suit us
3. The EDMS products I looked at have far more features than we need, so they do not suit us either

In the end I settled on a DMS and evaluated the free editions of Alfresco, LogicalDOC and OpenKM. I set them up and tested them over the weekend, and found that my way of thinking clashed completely with Alfresco; its design philosophy is just too unconventional. LogicalDOC and OpenKM were both fine, but OpenKM felt smoother to use, so OpenKM it is.

OpenKM can be set up in several ways:
1. With OKMInstaller.jar
https://sourceforge.net/projects/openkm/files/common/
2. With the prebuilt installer packages, available for Linux and Windows (recommended)
https://sourceforge.net/projects/openkm/files/6.3.2/
3. With the bundle
https://sourceforge.net/projects/openkm/files/6.3.2/
4. Fully by hand, starting from the WAR package
https://sourceforge.net/projects/openkm/files/6.3.4/

This article covers option 3, since options 1 and 2 are relatively simple, and option 4 is essentially the same as option 3 (just make sure to download Tomcat from the common directory mentioned above).

1. First, download the latest bundle, choose openkm-6.3.2-community-tomcat-bundle.zip, and unpack it
https://sourceforge.net/projects/openkm/files/6.3.2/

2. Download the extras and unpack them
https://sourceforge.net/projects/openkm/files/common/

3. Install MySQL, create a database, create a user, and grant privileges (see the SQL sketch below)
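
A minimal SQL sketch, assuming the database, user and password are all "openkm" to match the data source configured in server.xml in step 5:

CREATE DATABASE openkm DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON openkm.* TO 'openkm'@'localhost' IDENTIFIED BY 'openkm';
GRANT ALL PRIVILEGES ON openkm.* TO 'openkm'@'%' IDENTIFIED BY 'openkm';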

4. Edit the OpenKM.cfg file in the unpacked bundle folder

# OpenKM Hibernate configuration values
# Change the dialect to MySQL
hibernate.dialect=org.hibernate.dialect.MySQLDialect
# For the first run this must be set to create
# After the first run, the system automatically changes it back to none
hibernate.hbm2ddl=create

# Initial configuration - Linux
#system.imagemagick.convert=/usr/bin/convert
#system.openoffice.path=/usr/lib/libreoffice
#system.swftools.pdf2swf=/opt/openkm/bin/pdf2swf -f -T 9 -t -s storeallcharacters ${fileIn} -o ${fileOut}

# Initial configuration - Windows
# Adjust these paths according to where you unpacked the extras
system.imagemagick.convert=C:/NeoECM/OpenKM/Tomcat7/bin/convert.exe
system.openoffice.path=C:/NeoECM/OpenKM/extras/ApacheOpenOffice_4.1.1/Bin/OpenOffice 4
system.swftools.pdf2swf=C:/NeoECM/OpenKM/Tomcat7/bin/pdf2swf.exe -f -T 9 -t -s storeallcharacters ${fileIn} -o ${fileOut}

5. Edit conf/server.xml in the unpacked bundle folder and switch the database connection to MySQL

    <Resource name="jdbc/OpenKMDS" auth="Container" type="javax.sql.DataSource"
            maxActive="100" maxIdle="30" maxWait="10000" validationQuery="select 1"
            username="openkm" password="openkm" driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/openkm?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF8"/>
                
    <!--Resource name="jdbc/OpenKMDS" auth="Container" type="javax.sql.DataSource"
            maxActive="100" maxIdle="30" maxWait="10000" validationQuery="select 1 from INFORMATION_SCHEMA.SYSTEM_USERS"
            username="sa" password="" driverClassName="org.hsqldb.jdbcDriver"
            url="jdbc:hsqldb:${catalina.base}/repository/okmdb"/-->

6. Run startup.bat

7. If there are errors, check the logs. If there are none, you can log in with okmadmin/admin.

8. Close the console window

9. Copy the two files under bin/win-x64 into bin (see the sketch below)
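
On Windows this can be done from a command prompt opened in the Tomcat directory; a minimal sketch (the exact file names depend on the bundle):

copy bin\win-x64\*.* bin\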

10. Register it as a Windows service from the command line

service install OpenVM

11. Open the service configuration UI

tomcat7w //ES//OpenVM

12. Click Start to start the service

13. If you run into problems, check the following:
A. Whether the JVM and the Tomcat 7 binaries are both 32-bit or both 64-bit
B. Whether the jvm.dll selected in the tomcat7w UI is correct
C. If it still fails, copy the msvc*.dll files from under the JDK/JRE into the bin directory

Webmin Setup Guide

1. Download the package

wget http://prdownloads.sourceforge.net/webadmin/webmin_1.840_all.deb

2. Install the dependencies

apt-get install perl libnet-ssleay-perl openssl libauthen-pam-perl libpam-runtime libio-pty-perl apt-show-versions python libapt-pkg-perl

3. Install Webmin

dpkg --install webmin_1.840_all.deb

4. Open http://localhost:10000/ (or https://localhost:10000/ if SSL is enabled, which is the default)
Username: root; password: the system root password

What Each Spring Boot Module Does

1. spring-boot
Embedded web container integration, rapid development, and unified management of the application context, external configuration and logging

2. spring-boot-autoconfigure
Auto-configuration: automatically detects which jars are present and configures what is needed

3. spring-boot-actuator
Production environment management; exposes a range of endpoints over REST:
mappings/autoconfig/configprops/beans
env/info/health/heapdump/metrics
loggers/logfile/dump/trace
shutdown/auditevents

4. spring-boot-starters
Pre-configured feature modules, used to quickly assemble whatever functionality you need

5. spring-boot-loader
Loads jars nested inside a jar, making it possible to run a single jar/war
Note: when such a jar is nested inside another jar, it must be stored without being compressed again (see the sketch after this list)

6. spring-boot-cli
Rapid development with Groovy scripts

7. spring-boot-devtools
Convenient debugging, including remote debugging
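
A minimal shell sketch of the note in item 5: an executable fat jar is run directly and spring-boot-loader loads the jars nested inside it. The file names myapp.jar and mylib.jar are hypothetical; the Spring Boot Maven/Gradle plugins already store nested jars uncompressed, so the manual step is only needed when repackaging by hand.

# run a single executable jar; spring-boot-loader loads the jars nested under BOOT-INF/lib
java -jar myapp.jar
# when adding a nested jar by hand, use the jar tool's 0 option (store only, no compression)
jar uf0 myapp.jar BOOT-INF/lib/mylib.jar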

Building a Private Cloud with OpenStack (10)

This section covers basic object storage operations; everything is done on CT01 only

. user01-openrc
# Check the status
swift stat
# Create a container
openstack container create container01
# Upload a file
openstack object create container01 hi.txt
# List files
openstack object list container01
# Show file metadata
openstack object show container01 hi.txt
# Set a property (tag)
openstack object set --property owner=neohope container01 hi.txt
# Show file metadata
openstack object show container01 hi.txt
# Remove the property (tag)
openstack object unset --property owner container01 hi.txt
# Show file metadata
openstack object show container01 hi.txt
# Retrieve the file
mv hi.txt hi.txt.bak
openstack object save container01 hi.txt
# Delete the file
openstack object delete container01 hi.txt

PS:
If you run into permission problems, you can try relaxing the security context on /srv/node:

#chcon -R system_u:object_r:swift_data_t:s0 /srv/node

Building a Private Cloud with OpenStack (09)

This section installs Swift, which manages object storage; the work is done on CT01, OS01 and OS02
I. Install the relevant modules on CT01
1. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project serviceproject --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region Region01 object-store public http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store internal http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store admin http://CT01:8080/v1

2. Install the packages

apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

3. Edit the configuration files
3.1. Create the /etc/swift directory and download the sample configuration (see below)
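
If the /etc/swift directory does not already exist, create it first; a minimal sketch:

mkdir -p /etc/swift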

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

3.2. Edit the configuration file
/etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = swift
password = swift
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = CT01:11211

II. Install the relevant modules on OS01 and OS02
(the address 10.0.3.13 in the examples below is OS01's; on OS02 use its own address, 10.0.3.14)
1. Initialize the disks (each VM has two extra data disks)

apt-get install xfsprogs rsync
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

2. Edit /etc/fstab

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

3. Mount the disks

mount /srv/node/sdb
mount /srv/node/sdc

4. Edit /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.3.13

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

5. Edit /etc/default/rsync

RSYNC_ENABLE=true

6. Start the rsync service

service rsync start

7. Install the software

apt-get install swift swift-account swift-container swift-object
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

8. Edit /etc/swift/account-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

9. Edit /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

10. Edit /etc/swift/object-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

11. Set ownership and permissions

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

III. Configure on CT01
1. Build the ring configuration files

cd /etc/swift

swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance

2. Copy the ring files
Copy account.ring.gz, container.ring.gz and object.ring.gz to /etc/swift on OS01 and OS02 (see the sketch below)
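
A minimal sketch, assuming the hostnames resolve and root SSH access to the storage nodes is available:

scp /etc/swift/*.ring.gz OS01:/etc/swift/
scp /etc/swift/*.ring.gz OS02:/etc/swift/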

3. Download the configuration file

sudo curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

4. Edit /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = neohope
swift_hash_path_prefix = neohope

[storage-policy:0]
name = Policy-0
default = yes

5. Copy swift.conf to /etc/swift on every node (see the sketch below)
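
Again a minimal sketch, assuming the other nodes are OS01 and OS02 and root SSH access is available:

scp /etc/swift/swift.conf OS01:/etc/swift/
scp /etc/swift/swift.conf OS02:/etc/swift/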

6. Run the following on the non-object-storage node(s)

chown -R root:swift /etc/swift
service memcached restart
service swift-proxy restart

7. Run the following on the object storage nodes

chown -R root:swift /etc/swift
swift-init all start

Building a Private Cloud with OpenStack (08)

This section launches virtual machines from the command line; everything is done on CT01 only

I. Network configuration
1. Create a virtual network (external)

. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

2. Confirm the configuration files are correct (external)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp0s8

3. Create a subnet (external)

openstack subnet create --network provider --allocation-pool start=192.168.12.100,end=192.168.12.120 --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.12.0/24 provider

4. Create a virtual network (internal, self-service)

openstack network create selfservice

5. Confirm the configuration file is correct (internal)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000

6. Create a subnet (internal)

openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.13.0/24 selfservice

7. Create a router so the internal network can reach the outside through the external network

. admin-openrc
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider

ip netns
neutron router-port-list router
ping -c 4 192.168.12.107

II. VM flavor configuration

openstack flavor list
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 2 flavor02

III. VM keypair configuration

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

IV. VM security group configuration

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

V. Review the configuration

openstack flavor list
openstack image list
openstack network list
openstack security group list

VI. Create virtual machines and access them
1. VM on the external network (see the sketch below for filling in the net-id placeholders)
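
A minimal sketch of how the PROVIDER_NET_ID and SELFSERVICE_NET_ID placeholders below can be filled in; the shell variable names are only illustrative:

PROVIDER_NET_ID=$(openstack network show provider -f value -c id)
SELFSERVICE_NET_ID=$(openstack network show selfservice -f value -c id)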

openstack server create --flavor flavor02 --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance

openstack server list
openstack console url show provider-instance
ping -c 4 192.168.12.1
ping -c 4 openstack.org

ping -c 4 192.168.12.104 
ssh cirros@192.168.12.104 

2. VM on the internal (self-service) network

openstack server create --flavor flavor02 --image cirros --nic net-id=SELFSERVICE_NET_ID --security-group default --key-name mykey selfservice-instance

openstack server list
openstack console url show selfservice-instance
ping -c 4 192.168.13.1
ping -c 4 openstack.org

openstack floating ip create provider
openstack server add floating ip selfservice-instance 192.168.12.106
openstack server list
ping -c 4 192.168.12.106
ssh cirros@192.168.12.106

VII. Create and attach block storage
1. Create and attach a volume

. admin-openrc
openstack volume create --size 2 volumeA
openstack volume list
openstack server add volume provider-instance volumeA

2. Verify inside the VM

sudo fdisk -l

Building a Private Cloud with OpenStack (07)

This section installs Cinder, which manages block storage; the work is done on CT01 and BS01

I. Install the relevant modules on CT01
1. Create the database

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project serviceproject --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region Region01 volume public http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume internal http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume admin http://CT01:8776/v1/%\(tenant_id\)s

openstack endpoint create --region Region01 volumev2 public http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 internal http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 admin http://CT01:8776/v2/%\(tenant_id\)s

3. Install the packages

apt install cinder-api cinder-scheduler

4. Edit the configuration
4.1. /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
my_ip = 10.0.3.10

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4.2. /etc/nova/nova.conf

[cinder]
os_region_name = Region01

5. Populate the database and restart the services

sudo su -s /bin/sh -c "cinder-manage db sync" cinder

service nova-api restart
service cinder-scheduler restart
service apache2 restart

II. Install the relevant modules on BS01
1. Install lvm2 and initialize the physical volume and volume group

apt install lvm2

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

2. Edit the LVM configuration file
/etc/lvm/lvm.conf

devices {
    filter = [ "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "r/.*/"]
}

3. Install cinder-volume

apt install cinder-volume

4. Edit the configuration file
/etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.12
enabled_backends = lvm
glance_api_servers = http://CT01:9292

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address=10.0.3.12

5. Restart the services

service tgt restart
service cinder-volume restart

III. Verify on CT01

. admin-openrc
openstack volume service list

After that, you can create and attach block storage volumes from the Dashboard.

Building a Private Cloud with OpenStack (06)

This section installs the Dashboard, used to manage OpenStack; everything is done on CT01 only

1. Install the package

apt install openstack-dashboard

2. Edit the configuration
/etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "CT01"
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Hosts allowed to access the Dashboard
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'CT01:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

TIME_ZONE = "Asia/Shanghai"

3. Reload the web server

service apache2 reload

4. Open the page in a browser
http://CT01/horizon
You can log in as admin or as user01

PS: If you get a 500 error

# The Apache log showed that the permissions on the file below were wrong; fixing them solved the problem
sudo chown www-data:www-data /var/lib/openstack-dashboard/secret_key

5. Create an instance with the following steps:
create a network, create a configuration, then create the instance

6. Once the instance is running, click into it and you can connect to it through the console

Building a Private Cloud with OpenStack (05)

This section installs the Neutron service, which manages virtual networks; the relevant modules are installed on CT01 and PC01 respectively

I. Install the relevant modules on CT01
1. Create the database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project serviceproject --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region Region01 network public http://CT01:9696
openstack endpoint create --region Region01 network internal http://CT01:9696
openstack endpoint create --region Region01 network admin http://CT01:9696

3. Install the packages

apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

4. Edit the configuration
4.1. /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:neutron@CT01/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

[nova]
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = nova
password = nova

4.2. /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

4.3. /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4.4. /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

4.5. /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

4.6. /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = CT01
metadata_proxy_shared_secret = metadata

4.7. /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata

5. Populate the database and restart the services

sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart

II. Install the relevant modules on PC01
1. Install the package

apt install neutron-linuxbridge-agent

2. Edit the configuration files
2.1. /etc/neutron/neutron.conf

[database]
# Comment out the connection line below
#connection

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

2.2. /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.3. /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron

3. Restart the services

service nova-compute restart
service neutron-linuxbridge-agent restart

III. Verify on CT01
1. Verify

. admin-openrc
openstack extension list --network
openstack network agent list