Troubleshooting

P: How to fix ESXi error - Module 'MonitorLoop' power on failed S:

This power-on failure is caused by a disk space shortage on the datastore.

  • either decrease the VM's RAM or add VMFS storage
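Why decreasing RAM helps: at power-on, ESXi must create a VM swap file on the datastore of roughly (configured RAM minus memory reservation), and MonitorLoop fails when there is no room for it. A minimal sketch of the arithmetic (the GiB figures are made-up example numbers, not from any real VM):

```shell
# ESXi needs datastore space at power-on for a swap file of roughly
# (configured RAM - memory reservation). Example numbers only.
vm_ram_gib=8
mem_reservation_gib=2
swap_file_gib=$((vm_ram_gib - mem_reservation_gib))
echo "swap file needed: ${swap_file_gib} GiB"
```

So shrinking the VM's RAM shrinks the swap file ESXi must allocate, which is why it is an alternative to adding VMFS storage.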

P: Can't ping floating or instance ip S:

After spending several days trying to figure this out, I found several problems; this is the list of items that corrected it. My concern over multiple bridges (one supporting libvirt-based VMs) was unfounded, as that setup was left unaltered. Two public IP addresses on the same network on two different bridges did not matter either.

The installation guide (Kilo) does briefly mention (sort of) the necessary interface/bridge configuration of the system you're installing on, but it doesn't show exactly what is needed. In the end, mine looked like this:

auto eth1.1234
auto eth1

iface eth1.1234 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
    address 10.20.82.150
    netmask 255.255.252.0
    gateway 10.20.83.254
    bridge_stp on
    bridge_fd 0
    bridge_maxwait 0
    dns-nameservers 10.20.0.2 10.20.0.3

More on this later...

I needed to add the security group rules shown above using the tenant project, NOT the admin project. This was a key omission that took many hours to figure out. I needed to source the tenant project file (demo-openrc.sh in the instructions), then do the adds using either the neutron CLI (above) or the nova CLI (I tried this as well). I needed to do this because I'm NOT using the Noop driver for the firewall. In ml2_conf.ini, in the [securitygroup] section, mine is set to

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
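For reference, if you were using the Noop driver instead (in which case the agent does not enforce security group rules and this step would not apply), the same [securitygroup] section would read:

```
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
```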

Still broken: I couldn't ping my floating IP, nor the other way around from the two namespaces.

The routing table on the system hosting the OpenStack installation was missing routes to the public (floating) IP addresses. Since my floating IPs are not a range that can be defined with a netmask, I had to add static routes, i.e.

sudo ip route add 10.20.82.153/32 dev br-ex

I did this for each public IP. This is not permanent: I noticed that when I cycled (ifdown/ifup) br-ex I would lose the routes. The next step will be to put these in the /etc/network/interfaces file.
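That interfaces-file change would presumably look something like this, with one 'up' line per floating IP added to the existing br-ex stanza so the route is re-added each time the bridge comes up (an untested sketch, showing only the 10.20.82.153 route from above):

```
iface br-ex inet static
    ...
    up ip route add 10.20.82.153/32 dev br-ex
```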

Once I did this, I could ping from the two namespaces to anything on my local system. Progress, but still broken, as I couldn't ping my external gateway yet.

Many similar questions suggested that you need to modify the iptables NAT table to MASQUERADE the traffic. I added this rule to the NAT table on the openstack system itself.

sudo iptables -t nat -A POSTROUTING -o br-ex -j MASQUERADE

No change. I then changed it to eth1 (I'm using eth1 as my interface, not eth2 as in the installation guide).

sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

To be honest, I'm not sure I need this, as I really don't want to NAT the floating IP addresses; however, I probably do want to NAT addresses on the tenant network (192.168.1.x). My use case does not need this yet, so I may still need to play with this. A quick check shows my only instance can ping my external gateway, so I'm probably good here. Actually, I think the rule may be incorrect, given other changes that I've made. But things work.
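If NAT turns out to be needed only for the tenant network, a narrower rule would restrict MASQUERADE to the 192.168.1.x source range and leave the floating IPs un-NATed. A sketch that just builds the command string (apply it with sudo on the OpenStack host; the /24 mask on the tenant network is an assumption):

```shell
# Build (but don't apply) a MASQUERADE rule scoped to the tenant network.
# The /24 prefix is an assumed mask for 192.168.1.x.
tenant_net="192.168.1.0/24"
uplink="eth1"
rule="iptables -t nat -A POSTROUTING -s ${tenant_net} -o ${uplink} -j MASQUERADE"
echo "sudo ${rule}"
```

The -s match means only packets sourced from the tenant subnet are masqueraded, so replies to floating IPs keep their real addresses.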

A few things to note. My datacenter uses VLANs, and by default anything on the datacenter's private network (note the 10.x addresses on the floating IPs) cannot ping out of the network. So the verify-connectivity step on page 90 will never work for me, as I can't ping out of this network.

This was a BIG gotcha for me, as my original interface definition was missing the VLAN ID (note eth1.1234). Once the interfaces file was changed, 'poof', everything worked: I could now ping my external gateway from the two namespaces. I could already ping without using namespaces, since I have another interface connected to the network and the two bridges are not connected; this only added to the confusion, as that path worked while the path through br-ex did not.

One thing to note if others have a similar existing private external network: connectivity to the 'real' world is only allowed through proxies. This allows me to update the systems, add software, etc., basically http/https. Take special care to look at your no_proxy environment variable and be sure your local system is in the exclusion list. Before I did this, I was getting authentication errors when doing certain operations in the dashboard. As an example, I would get a 500 error when clicking on an instance to see its details. Launch Instance worked, so this is another odd behavior.
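A quick way to sanity-check the exclusion list: no_proxy is conventionally a comma-separated list, so a shell test like this shows whether a given host is covered (the values below are examples, not real settings):

```shell
# Example no_proxy value; substitute your real list and local address.
no_proxy="localhost,127.0.0.1,10.20.82.150"
host="10.20.82.150"
# Wrap both sides in commas so the match is against a whole list entry.
case ",${no_proxy}," in
  *",${host},"*) result="${host} is excluded from the proxy" ;;
  *)             result="WARNING: ${host} will go through the proxy" ;;
esac
echo "$result"
```

Note that some tools also honor domain-suffix entries in no_proxy, so an exact-entry check like this is a starting point, not a complete audit.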

Hope this helps someone out there!

P: Invalid input for operation: network_type value 'local' not supported S: http://stackopen.blogspot.com/2016/08/invalid-input-for-operation-networktype.html

sudo cat -n /etc/neutron/plugins/ml2/ml2_conf.ini | grep type_dr

Uncomment this line.

type_drivers = local,flat,vlan,gre,vxlan,geneve
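The uncomment itself can be scripted with sed; here's a sketch run against a throw-away copy of the file rather than the real /etc/neutron path:

```shell
# Demonstrate the edit on a scratch copy, not the real ml2_conf.ini.
tmpconf=/tmp/ml2_conf_demo.ini
printf '[ml2]\n#type_drivers = local,flat,vlan,gre,vxlan,geneve\n' > "$tmpconf"
# Strip the leading '#' from the type_drivers line.
sed -i 's/^#type_drivers/type_drivers/' "$tmpconf"
grep '^type_drivers' "$tmpconf"
```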

sudo systemctl restart neutron-server

neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared

P: Unable to connect rabbit@domain node S:

Edit /etc/hosts.

127.0.0.1 yourdomain
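To confirm the entry takes effect, a check like this verifies that the RabbitMQ node's hostname maps to loopback (shown against a scratch file; point it at /etc/hosts on the real system, and 'yourdomain' stands in for your actual hostname):

```shell
# Scratch copy; use /etc/hosts on the real system.
hosts=/tmp/hosts_demo
printf '127.0.0.1 localhost\n127.0.0.1 yourdomain\n' > "$hosts"
if awk '$1 == "127.0.0.1" && $2 == "yourdomain" { found = 1 } END { exit !found }' "$hosts"; then
  echo "rabbit node name resolves to loopback"
fi
```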
