Use Octavia to Implement HTTPS Health Monitors (by quqi99)
Problem
When implementing HTTPS Health Monitors with Neutron LBaaS v2, the resulting configuration looks like this (see the appendix - Neutron LBaaS v2 for the steps):
backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
    mode tcp
    balance leastconn
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    option ssl-hello-chk
    server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
    server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2
This configuration has a problem: with a self-signed certificate everything works, but with a certificate issued by a CA it fails. The reason is actually explained on this page (https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html):
HTTPS health monitors operate exactly like HTTP health monitors, but with ssl back-end servers. Unfortunately, this causes problems if the servers are performing client certificate validation, as HAProxy won’t have a valid cert. In this case, using TLS-HELLO type monitoring is an alternative.
TLS-HELLO health monitors simply ensure the back-end server responds to SSLv3 client hello messages. It will not check any other health metrics, like status code or body contents.
There are two ways to solve this problem:
1. Keep the "option httpchk" directive and, as the following two pages suggest, add "check check-ssl verify none" to the server lines so that the health check does not verify the certificate.
https://stackoverflow.com/questions/16719388/haproxy-https-health-checks
https://serverfault.com/questions/924477/haproxy-health-check-for-https-backend
2. Or remove the "option httpchk" directive and use TLS-HELLO mode instead.
Since LBaaS v2 offers no way to customize these parameters, neither solution can be implemented with it, so Octavia is needed.
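For reference, method 1 hand-applied to the backend above would look roughly like this (a sketch based on the linked answers, not something LBaaS v2 can render; with check-ssl the HTTP check itself runs over SSL, so ssl-hello-chk is dropped):
backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
    mode tcp
    balance leastconn
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check check-ssl verify none inter 5s fall 2
    server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check check-ssl verify none inter 5s fall 2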
Install Octavia
juju add-model bionic-barbican-octavia
./generate-bundle.sh --series bionic --barbican
#./generate-bundle.sh --series bionic --release rocky --barbican
juju deploy ./b/openstack.yaml --overlay ./b/o/barbican.yaml
#https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/overlays/loadbalancer-octavia.yaml
#NOTE: need to comment out the to:lxd related lines in loadbalancer-octavia.yaml, and change the number of nova-compute units to 3
juju deploy ./b/openstack.yaml --overlay ./overlays/loadbalancer-octavia.yaml
# Or deploy octavia by hand; give it enough memory, otherwise the unit can die with:
# 2018-12-25 03:30:39 DEBUG update-status fatal error: runtime: out of memory
juju deploy octavia --config openstack-origin=cloud:bionic:queens --constraints mem=4G
juju deploy octavia-dashboard
juju add-relation octavia-dashboard openstack-dashboard
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia neutron-openvswitch
juju add-relation octavia neutron-api
# Initialize and unseal vault
# https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-vault.html
# https://lingxiankong.github.io/2018-07-16-barbican-introduction.html
# /snap/vault/1315/bin/vault server -config /var/snap/vault/common/vault.hcl
sudo snap install vault
export VAULT_ADDR="http://$(juju run --unit vault/0 unit-get private-address):8200"
$ vault operator init -key-shares=5 -key-threshold=3
Unseal Key 1: UB7XDri5FRcMLirKBIysdUb2PN7Ia5EVMP0Z9wD9Hyll
Unseal Key 2: mD8Gnr3hdB2LjjNB4ugxvvsvb8+EQQ/0AXm2p+c2qYFT
Unseal Key 3: vymYLAdou3qky24IEKDufYsZXAIPLWtErAKy/RkfgghS
Unseal Key 4: xOwDbqgNLLipsZbp+FAmVhBc3ZxA8CI3DchRc4AClRyQ
Unseal Key 5: nRlZ8WX6CS9nOw2ct5U9o0Za5jlUAtjN/6XLxjf62CnR
Initial Root Token: s.VJKGhNvIFCTgHVbQ6WvL0OLe
vault operator unseal UB7XDri5FRcMLirKBIysdUb2PN7Ia5EVMP0Z9wD9Hyll
vault operator unseal mD8Gnr3hdB2LjjNB4ugxvvsvb8+EQQ/0AXm2p+c2qYFT
vault operator unseal vymYLAdou3qky24IEKDufYsZXAIPLWtErAKy/RkfgghS
export VAULT_TOKEN=s.VJKGhNvIFCTgHVbQ6WvL0OLe
$ vault token create -ttl=10m
Key Value
--- -----
token s.7ToXh9HqE6FiiJZybFhevL9v
token_accessor 6dPkFpsPmx4D7g8yNJXvEpKN
token_duration 10m
token_renewable true
token_policies ["root"]
identity_policies []
policies ["root"]
# Authorize the vault charm with a root token so that it can create secrets storage back-ends and roles allowing other apps to access vault
juju run-action vault/0 authorize-charm token=s.7ToXh9HqE6FiiJZybFhevL9v
# upload Amphora image
source ~/stsstack-bundles/openstack/novarc
http_proxy=http://squid.internal:3128 wget http://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
#openstack image create --tag octavia-amphora --disk-format=qcow2 --container-format=bare --private amphora-haproxy-xenial --file ./test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
glance image-create --tag octavia-amphora --disk-format qcow2 --name amphora-haproxy-xenial --file ./test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2 --visibility public --container-format bare --progress
cd stsstack-bundles/openstack/
./configure
./tools/sec_groups.sh
./tools/instance_launch.sh 2 xenial
neutron floatingip-create ext_net
neutron floatingip-associate $(neutron floatingip-list |grep 10.5.150.4 |awk '{print $2}') $(neutron port-list |grep '192.168.21.3' |awk '{print $2}')
cd ~/ca #https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html
juju config octavia \
lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
lb-mgmt-issuing-ca-key-passphrase=foobar \
lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
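The controller_ca.pem, controller_ca_key.pem and controller_cert_bundle.pem files in ~/ca are assumed to already exist; a minimal sketch for generating them (placeholder subjects, passphrase 'foobar' matching the config above; the linked deploy guide has the authoritative steps):
mkdir -p ~/ca && cd ~/ca
openssl genpkey -algorithm RSA -aes256 -pass pass:foobar -out controller_ca_key.pem
openssl req -x509 -passin pass:foobar -new -nodes -key controller_ca_key.pem -subj "/C=CN/ST=BJ/O=STS/CN=octavia-ca" -days 365 -out controller_ca.pem
openssl genpkey -algorithm RSA -out controller_key.pem
openssl req -new -key controller_key.pem -subj "/C=CN/ST=BJ/O=STS/CN=octavia-controller" -out controller.csr
openssl x509 -req -passin pass:foobar -in controller.csr -CA controller_ca.pem -CAkey controller_ca_key.pem -CAcreateserial -days 365 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem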
Configure resources:
# search the charm code for 'configure_resources'
juju config octavia create-mgmt-network
juju run-action --wait octavia/0 configure-resources
# some debug commands:
openstack security group rule create $(openstack security group show lb-mgmt-sec-grp -f value -c id) --protocol udp --dst-port 546 --ethertype IPv6
openstack security group rule create $(openstack security group show lb-mgmt-sec-grp -f value -c id) --protocol icmp --ethertype IPv6
neutron security-group-rule-create --protocol icmpv6 --direction egress --ethertype IPv6 lb-mgmt-sec-grp
neutron security-group-rule-create --protocol icmpv6 --direction ingress --ethertype IPv6 lb-mgmt-sec-grp
neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c status
neutron port-update --admin-state-up True octavia-health-manager-octavia-0-listen-port
AGENT=$(neutron l3-agent-list-hosting-router lb-mgmt -f value -c id)
neutron l3-agent-router-remove $AGENT lb-mgmt
neutron l3-agent-router-add $AGENT lb-mgmt
The configure-resources action above (juju run-action --wait octavia/0 configure-resources) automatically sets up the IPv6 management network, and creates a port named octavia-health-manager-octavia-0-listen-port whose binding:host is the octavia/0 node.
$ neutron router-list |grep mgmt
| 0a839377-6b19-419b-9868-616def4d749f | lb-mgmt | null | False | False |
$ neutron net-list |grep mgmt
| ae580dc8-31d6-4ec3-9d44-4a9c7b9e80b6 | lb-mgmt-net | ea9c7d5c-d224-4dd3-b40c-3acae9690657 fc00:4a9c:7b9e:80b6::/64 |
$ neutron subnet-list |grep mgmt
| ea9c7d5c-d224-4dd3-b40c-3acae9690657 | lb-mgmt-subnetv6 | fc00:4a9c:7b9e:80b6::/64 | {"start": "fc00:4a9c:7b9e:80b6::2", "end": "fc00:4a9c:7b9e:80b6:ffff:ffff:ffff:ffff"} |
$ neutron port-list |grep fc00
| 5cb6e3f3-ebe5-4284-9c05-ea272e8e599b | | fa:16:3e:9e:82:6a | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6::1"} |
| 983c56d2-46dd-416c-abc8-5096d76f75e2 | octavia-health-manager-octavia-0-listen-port | fa:16:3e:99:8c:ab | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab"} |
| af38a60d-a370-4ddb-80ac-517fda175535 | | fa:16:3e:5f:cd:ae | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe5f:cdae"} |
| b65f90d1-2e1f-4994-a0e9-2bb13ead4cab | | fa:16:3e:10:34:84 | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe10:3484"} |
An interface named o-hm0 is also created on octavia/0; its IP address is the same as that of the octavia-health-manager-octavia-0-listen-port port.
$ juju ssh octavia/0 -- ip addr show o-hm0 |grep global
Connection to 10.5.0.110 closed.
inet6 fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab/64 scope global dynamic mngtmpaddr noprefixroute
$ juju ssh octavia/0 -- sudo ovs-vsctl show
490bbb36-1c7d-412d-8b44-31e6f796306a
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "gre-0a05006b"
Interface "gre-0a05006b"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.110", out_key=flow, remote_ip="10.5.0.107"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-0a050016"
Interface "gre-0a050016"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.110", out_key=flow, remote_ip="10.5.0.22"}
Bridge br-data
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-data
Interface phy-br-data
type: patch
options: {peer=int-br-data}
Port br-data
Interface br-data
type: internal
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "o-hm0"
tag: 1
Interface "o-hm0"
type: internal
Port br-int
Interface br-int
type: internal
Port int-br-data
Interface int-br-data
type: patch
options: {peer=phy-br-data}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.10.0"
$ juju ssh neutron-gateway/0 -- sudo ovs-vsctl show
ec3e2cb6-5261-4c22-8afd-5bacb0e8ce85
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "tap62c03d3b-b1"
tag: 2
Interface "tap62c03d3b-b1"
Port "tapb65f90d1-2e"
tag: 3
Interface "tapb65f90d1-2e"
Port int-br-data
Interface int-br-data
type: patch
options: {peer=phy-br-data}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tap6f1478be-b1"
tag: 1
Interface "tap6f1478be-b1"
Port "tap01efd82b-53"
tag: 2
Interface "tap01efd82b-53"
Port "tap5cb6e3f3-eb"
tag: 3
Interface "tap5cb6e3f3-eb"
Port br-int
Interface br-int
type: internal
Bridge br-data
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "ens7"
Interface "ens7"
Port br-data
Interface br-data
type: internal
Port phy-br-data
Interface phy-br-data
type: patch
options: {peer=int-br-data}
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-0a05007a"
Interface "gre-0a05007a"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.122"}
Port "gre-0a05006b"
Interface "gre-0a05006b"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.107"}
Port br-tun
Interface br-tun
type: internal
Port "gre-0a050079"
Interface "gre-0a050079"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.121"}
Port "gre-0a05006e"
Interface "gre-0a05006e"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.110"}
ovs_version: "2.10.0"
$ juju ssh neutron-gateway/0 -- cat /var/lib/neutron/ra/0a839377-6b19-419b-9868-616def4d749f.radvd.conf
interface qr-5cb6e3f3-eb
{
AdvSendAdvert on;
MinRtrAdvInterval 30;
MaxRtrAdvInterval 100;
AdvLinkMTU 1458;
prefix fc00:4a9c:7b9e:80b6::/64
{
AdvOnLink on;
AdvAutonomous on;
};
};
$ openstack security group rule list lb-health-mgr-sec-grp
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 09a92cb2-9942-44d4-8a96-9449a6758967 | None | None | | None |
| 20daa06c-9de6-4c91-8a1e-59645f23953a | udp | None | 5555:5555 | None |
| 8f7b9966-c255-4727-a172-60f22f0710f9 | None | None | | None |
| 90f86b27-12f8-4a9a-9924-37b31d26cbd8 | icmpv6 | None | | None |
+--------------------------------------+-------------+----------+------------+-----------------------+
$ openstack security group rule list lb-mgmt-sec-grp
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 54f79f92-a6c5-411d-a309-a02b39cc384b | icmpv6 | None | | None |
| 574f595e-3d96-460e-a3f2-329818186492 | None | None | | None |
| 5ecb0f58-f5dd-4d52-bdfa-04fd56968bd8 | tcp | None | 22:22 | None |
| 7ead3a3a-bc45-4434-b7a2-e2a6c0dc3ce9 | None | None | | None |
| cf82d108-e0f8-4916-95d4-0c816b6eb156 | tcp | None | 9443:9443 | None |
+--------------------------------------+-------------+----------+------------+-----------------------+
$ source ~/novarc
$ openstack security group rule list default
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| 15b56abd-c2af-4c0a-8585-af68a8f09e3c | icmpv6 | None | | None |
| 2ad77fa3-32c7-4a20-a572-417bea782eff | icmp | 0.0.0.0/0 | | None |
| 2c2aec15-e4ad-4069-abd2-0191fe80f9bb | None | None | | None |
| 3b775807-3c61-45a3-9677-aaf9631db677 | udp | 0.0.0.0/0 | 3389:3389 | None |
| 3e9a6e7f-b9a2-47c9-97ca-042b22fbf308 | icmpv6 | None | | None |
| 42a3c09e-91c8-471d-b4a8-c1fe87dab066 | None | None | | None |
| 47f9cec2-4bc0-4d71-9a02-3a27d46b59f8 | icmp | None | | None |
| 94297175-9439-4df2-8c93-c5576e52e138 | udp | None | 546:546 | None |
| 9c6ac9d2-3b9e-4bab-a55a-04a1679b66be | None | None | | c48a1bf5-7b7e-4337-afdf-8057ae8025af |
| b6e95f76-1b64-4135-8b62-b058ec989f7e | None | None | | c48a1bf5-7b7e-4337-afdf-8057ae8025af |
| de5132a5-72e2-4f03-8b6a-dcbc2b7811c3 | tcp | 0.0.0.0/0 | 3389:3389 | None |
| e72bea9f-84ce-4e3a-8597-c86d40b9b5ef | tcp | 0.0.0.0/0 | 22:22 | None |
| ecf1415c-c6e9-4cf6-872c-4dac1353c014 | tcp | 0.0.0.0/0 | | None |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
On the underlying OpenStack environment (OpenStack over OpenStack), the following is needed (see: https://blog.csdn.net/quqi99/article/details/78437988):
openstack security group rule create $secgroup --protocol udp --dst-port 546 --ethertype IPv6
The most common problem is that the octavia-health-manager-octavia-0-listen-port port is DOWN, so the o-hm0 network is unreachable and cannot get an IP from the dhcp server. When the segment is unreachable, it is usually a problem with the flow rules on br-int; I ran into this several times, but after rebuilding the environment it somehow worked again.
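When this happens, a few checks (a sketch using the names created above) help narrow it down; the dump-flows output below is from a working environment, for comparison:
neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c status
juju ssh octavia/0 -- sudo ovs-vsctl get Interface o-hm0 external_ids
juju ssh octavia/0 -- sudo ovs-ofctl dump-flows br-int | grep o-hm0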
# ovs-ofctl dump-flows br-int
cookie=0x5dc634635bd398eb, duration=424018.932s, table=0, n_packets=978, n_bytes=76284, priority=10,icmp6,in_port="o-hm0",icmp_type=136 actions=resubmit(,24)
cookie=0x5dc634635bd398eb, duration=424018.930s, table=0, n_packets=0, n_bytes=0, priority=10,arp,in_port="o-hm0" actions=resubmit(,24)
cookie=0x5dc634635bd398eb, duration=425788.219s, table=0, n_packets=0, n_bytes=0, priority=2,in_port="int-br-data" actions=drop
cookie=0x5dc634635bd398eb, duration=424018.943s, table=0, n_packets=10939, n_bytes=2958167, priority=9,in_port="o-hm0" actions=resubmit(,25)
cookie=0x5dc634635bd398eb, duration=425788.898s, table=0, n_packets=10032, n_bytes=1608826, priority=0 actions=resubmit(,60)
cookie=0x5dc634635bd398eb, duration=425788.903s, table=23, n_packets=0, n_bytes=0, priority=0 actions=drop
cookie=0x5dc634635bd398eb, duration=424018.940s, table=24, n_packets=675, n_bytes=52650, priority=2,icmp6,in_port="o-hm0",icmp_type=136,nd_target=fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab actions=resubmit(,60)
cookie=0x5dc634635bd398eb, duration=424018.938s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="o-hm0",icmp_type=136,nd_target=fe80::f816:3eff:fe99:8cab actions=resubmit(,60)
cookie=0x5dc634635bd398eb, duration=425788.879s, table=24, n_packets=303, n_bytes=23634, priority=0 actions=drop
cookie=0x5dc634635bd398eb, duration=424018.951s, table=25, n_packets=10939, n_bytes=2958167, priority=2,in_port="o-hm0",dl_src=fa:16:3e:99:8c:ab actions=resubmit(,60)
cookie=0x5dc634635bd398eb, duration=425788.896s, table=60, n_packets=21647, n_bytes=4620009, priority=3 actions=NORMAL
# ovs-ofctl dump-flows br-data
cookie=0xb41c0c7781ded568, duration=426779.130s, table=0, n_packets=16816, n_bytes=3580386, priority=2,in_port="phy-br-data" actions=drop
cookie=0xb41c0c7781ded568, duration=426779.201s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
If o-hm0 can never get an IP, we can also try configuring an IPv4 management network manually.
neutron router-gateway-clear lb-mgmt
neutron router-interface-delete lb-mgmt lb-mgmt-subnetv6
neutron subnet-delete lb-mgmt-subnetv6
neutron port-list |grep fc00
#neutron port-delete 464e6d47-9830-4966-a2b7-e188c19c407a
openstack subnet create --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.2,end=192.168.0.200 --network lb-mgmt-net lb-mgmt-subnet
neutron router-interface-add lb-mgmt lb-mgmt-subnet
#neutron router-gateway-set lb-mgmt ext_net
neutron port-list |grep 192.168.0.1
#openstack security group create lb-mgmt-sec-grp --project $(openstack security group show lb-mgmt-sec-grp -f value -c project_id)
openstack security group rule create --protocol udp --dst-port 5555 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group show lb-mgmt-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol icmp lb-health-mgr-sec-grp
# create a management port o-hm0 on the octavia/0 node: first use neutron to allocate a port, then call ovs-vsctl to add it
LB_HOST=$(juju ssh octavia/0 -- hostname)
juju ssh octavia/0 -- sudo ovs-vsctl del-port br-int o-hm0
# The value of LB_HOST should replace juju-70ea4e-bionic-barbican-octavia-11 below; for some unknown reason neutron reported 'bind failed' when $LB_HOST was used directly
neutron port-create --name octavia-health-manager-octavia-0-listen-port --security-group $(openstack security group show lb-health-mgr-sec-grp -f value -c id) --device-owner Octavia:health-mgr --binding:host_id=juju-70ea4e-bionic-barbican-octavia-11 lb-mgmt-net --tenant-id $(openstack security group show lb-health-mgr-sec-grp -f value -c project_id)
juju ssh octavia/0 -- sudo ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$(neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c mac_address) -- set Interface o-hm0 external-ids:iface-id=$(neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c id)
juju ssh octavia/0 -- sudo ip link set dev o-hm0 address $(neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c mac_address)
juju ssh octavia/0 -- sudo ip link set o-hm0 mtu 1458
sudo mkdir -p /etc/octavia/dhcp
sudo bash -c 'cat >/etc/octavia/dhcp/dhclient.conf' <<EOF
request subnet-mask,broadcast-address,interface-mtu;
do-forward-updates false;
EOF
#dhclient -v o-hm0 -cf /etc/octavia/dhcp/dhclient.conf
ping 192.168.0.2
Install an HTTPS test service in the test VMs
# Prepare CA and ssl pairs for lb server
openssl genrsa -passout pass:password -out ca.key
openssl req -x509 -passin pass:password -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl genrsa -passout pass:password -out lb.key
openssl req -new -key lb.key -out lb.csr -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl x509 -req -in lb.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out lb.crt -days 3650
cat lb.crt lb.key > lb.pem
#openssl pkcs12 -export -inkey lb.key -in lb.crt -certfile ca.crt -passout pass:password -out lb.p12
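Optionally sanity-check the pair before installing it on the servers:
openssl verify -CAfile ca.crt lb.crt
openssl x509 -in lb.crt -noout -subject -dates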
# Create two test VMs and run the following on each
sudo apt install python-minimal -y
sudo bash -c 'cat >simple-https-server.py' <<EOF
#!/usr/bin/env python
# coding=utf-8
import BaseHTTPServer, SimpleHTTPServer
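# NOTE: Python 2 code (hence python-minimal above); Python 3 merged these modules into http.server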
import ssl
httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket (httpd.socket, certfile='./lb.pem', server_side=True)
httpd.serve_forever()
EOF
sudo bash -c 'cat >index.html' <<EOF
test1
EOF
nohup sudo python simple-https-server.py &
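On the second VM the same steps are repeated with test2 as the page body, which is what the curl output below assumes:
sudo bash -c 'echo test2 > index.html'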
$ curl -k https://10.5.150.4
test1
$ curl -k https://10.5.150.5
test2
$ curl --cacert ~/ca/ca.crt https://10.5.150.4
curl: (51) SSL: certificate subject name (www.quqi.com) does not match target host name '10.5.150.4'
$ curl --resolve www.quqi.com:443:10.5.150.4 --cacert ~/ca/ca.crt https://www.quqi.com
test1
$ curl --resolve www.quqi.com:443:10.5.150.4 -k https://www.quqi.com
test1
How to ssh into the amphora service VM
sudo mkdir -p /etc/octavia/.ssh && sudo chown -R $(id -u):$(id -g) /etc/octavia/.ssh
ssh-keygen -b 2048 -t rsa -N "" -f /etc/octavia/.ssh/octavia_ssh_key
openstack user list --domain service_domain
# NOTE: we must add the '--user' option to avoid the error 'Invalid key_name provided'
nova keypair-add --pub-key=/etc/octavia/.ssh/octavia_ssh_key.pub octavia_ssh_key --user $(openstack user show octavia --domain service_domain -f value -c id)
nova keypair-list --user $(openstack user show octavia --domain service_domain -f value -c id)
vim /etc/octavia/octavia.conf
vim /var/lib/juju/agents/unit-octavia-0/charm/templates/rocky/octavia.conf
vim /usr/lib/python3/dist-packages/octavia/compute/drivers/nova_driver.py
vim /usr/lib/python3/dist-packages/octavia/controller/worker/tasks/compute_tasks.py #import pdb;pdb.set_trace()
[controller_worker]
amp_ssh_key_name = octavia_ssh_key
amp_ssh_access_allowed = True
sudo ip netns exec qrouter-0a839377-6b19-419b-9868-616def4d749f ssh -6 -i ~/octavia_ssh_key [email protected]:4a9c:7b9e:80b6:f816:3eff:fe5f:cdae
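The amphora's lb-mgmt-net address used above can also be looked up via the Octavia API, assuming a client version that supports the amphora subcommand:
openstack loadbalancer amphora list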
Deploy a non-terminated HTTPS load balancer
sudo apt install python-octaviaclient
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
#lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id private_subnet)
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1
nova list --all
openstack loadbalancer listener create --name listener1 --protocol HTTPS --protocol-port 443 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS
#openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTPS --url-path / pool1
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TLS-HELLO pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.10 --protocol-port 443 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.12 --protocol-port 443 pool1
openstack loadbalancer member list pool1
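Once the members are up, the load balancer can be tested through its VIP (192.168.21.16, as seen in the haproxy.cfg below) from any host that can reach private_subnet, e.g. via the qrouter namespace trick used in the appendix:
curl -k https://192.168.21.16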
# ps -ef |grep haproxy
root 1459 1 0 04:34 ? 00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g
nobody 1677 1459 0 04:35 ? 00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g -Ds -sf 1625
nobody 1679 1677 0 04:35 ? 00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g -Ds -sf 1625
root 1701 1685 0 04:36 pts/0 00:00:00 grep --color=auto haproxy
# cat /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg
# Configuration for loadbalancer eda3efa5-dd91-437c-81d9-b73d28b5312f
global
daemon
user nobody
log /dev/log local0
log /dev/log local1 notice
stats socket /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.sock mode 0666 level user
maxconn 1000000
defaults
log global
retries 3
option redispatch
frontend b9d5a192-1a6a-4df7-83d4-fe96ac9574c0
option tcplog
maxconn 1000000
bind 192.168.21.16:443
mode tcp
default_backend 502b6689-40ad-4201-b704-f221e0fddd58
timeout client 50000
backend 502b6689-40ad-4201-b704-f221e0fddd58
mode tcp
balance roundrobin
timeout check 10s
option ssl-hello-chk
fullconn 1000000
option allbackups
timeout connect 5000
timeout server 50000
server 49f16402-69f4-49bb-8dc0-5ec13a0f1791 192.168.21.10:443 weight 1 check inter 5s fall 3 rise 4
server 1ab624e1-9cd8-49f3-9297-4fa031a3ca58 192.168.21.12:443 weight 1 check inter 5s fall 3 rise 4
Deploy a TLS-terminated HTTPS load balancer
openssl genrsa -passout pass:password -out ca.key
openssl req -x509 -passin pass:password -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl genrsa -passout pass:password -out lb.key
openssl req -new -key lb.key -out lb.csr -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl x509 -req -in lb.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out lb.crt -days 3650
cat lb.crt lb.key > lb.pem
openssl pkcs12 -export -inkey lb.key -in lb.crt -certfile ca.crt -passout pass:password -out lb.p12
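Optionally inspect the PKCS#12 bundle before uploading it:
openssl pkcs12 -in lb.p12 -info -noout -passin pass:password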
sudo apt install python-barbicanclient
openstack secret store --name='tls_lb_secret' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < lb.p12)"
openstack secret list
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1
openstack loadbalancer member list pool1
openstack loadbalancer member delete pool1 <member>
openstack loadbalancer pool delete pool1
openstack loadbalancer listener delete listener1
openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_lb_secret / {print $2}') lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.10 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.12 --protocol-port 80 pool1
But it failed:
$ openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_lb_secret / {print $2}') lb1
Could not retrieve certificate: ['http://10.5.0.25:9312/v1/secrets/7c706fb2-4319-46fc-b78d-81f34393f581'] (HTTP 400) (Request-ID: req-c0c0e4d5-f395-424c-9aab-5c4c4e72fb3d)
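A common cause of this HTTP 400 (an assumption, not verified in this environment) is that the octavia service user cannot read the barbican secret; granting read access would look like:
openstack acl user add --user $(openstack user show octavia --domain service_domain -f value -c id) $(openstack secret list | awk '/ tls_lb_secret / {print $2}')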
Appendix - Neutron LBaaS v2
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
neutron lbaas-loadbalancer-create --name test-lb private_subnet
neutron lbaas-listener-create --name test-lb-https --loadbalancer test-lb --protocol HTTPS --protocol-port 443
neutron lbaas-pool-create --name test-lb-pool-https --lb-algorithm LEAST_CONNECTIONS --listener test-lb-https --protocol HTTPS
neutron lbaas-member-create --subnet private_subnet --address 192.168.21.13 --protocol-port 443 test-lb-pool-https
neutron lbaas-member-create --subnet private_subnet --address 192.168.21.8 --protocol-port 443 test-lb-pool-https
neutron lbaas-healthmonitor-create --delay 5 --max-retries 2 --timeout 10 --type HTTPS --pool test-lb-pool-https --name monitor1
# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl -k https://192.168.21.14
test1
# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl -k https://192.168.21.14
test2
# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl --cacert /home/ubuntu/lb.pem https://192.168.21.14
curl: (51) SSL: certificate subject name (www.quqi.com) does not match target host name '192.168.21.14'
# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl --cacert /home/ubuntu/lb.pem --resolve www.quqi.com:443:192.168.21.14 https://www.quqi.com
test1
# echo 'show stat;show table' | socat stdio /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy_stats.sock
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
c2a42906-e160-44dd-8590-968af2077b4a,FRONTEND,,,0,0,2000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
52112201-05ce-4f4d-b5a8-9e67de2a895a,37a1f5a8-ec7e-4208-9c96-27d2783a594f,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,1,1,0,,,,,,1,3,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
52112201-05ce-4f4d-b5a8-9e67de2a895a,8e722b4b-08b8-4089-bba5-8fa5dd26a87f,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,1,1,0,,,,,,1,3,2,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
52112201-05ce-4f4d-b5a8-9e67de2a895a,BACKEND,0,0,0,0,200,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,117,0,,1,3,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
# cat /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy.conf
# Configuration for test-lb
global
daemon
user nobody
group nogroup
log /dev/log local0
log /dev/log local1 notice
maxconn 2000
stats socket /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy_stats.sock mode 0666 level user
defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 50000
timeout server 50000
frontend c2a42906-e160-44dd-8590-968af2077b4a
option tcplog
bind 192.168.21.14:443
mode tcp
default_backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
mode tcp
balance leastconn
timeout check 10s
option httpchk GET /
http-check expect rstatus 200
option ssl-hello-chk
server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2
# TCP monitor
neutron lbaas-healthmonitor-delete monitor1
neutron lbaas-healthmonitor-create --delay 5 --max-retries 2 --timeout 10 --type TCP --pool test-lb-pool-https --name monitor1 --url-path /
backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
mode tcp
balance leastconn
timeout check 10s
server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2