I came across an interesting issue with Puppet today. I'm using the Amazon Linux AMI, and I discovered that it ships with an older version of Puppet. I did not think much of it, so I installed the Puppet Labs yum repo and installed the shiny new 3.7.x version of Puppet. When I did my test run, I received lots of errors, which sent me off googling. I finally got everything working, and I'm posting what I did here to save someone else some time and hair.
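For reference, the repo install itself is only a couple of commands. This is a minimal sketch assuming an EL6-compatible Amazon Linux AMI; the puppetlabs-release RPM URL is the one Puppet Labs publishes for EL6 and may differ for your setup, and the hostname in the prompt is just a placeholder.
# Add the Puppet Labs yum repo, then install Puppet from it
[root@amazonlinux ~]% rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
[root@amazonlinux ~]% yum install -y puppet
# Confirm you are now on the 3.7.x series
[root@amazonlinux ~]% puppet --version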
This blog post will show you how to delete the VM of your choosing, and its primary disk, from a Citrix Xen Server. A few things to keep in mind: some disks will show up as xvda instead of hda, depending on how you have configured your server.
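Here is a minimal sketch of the xe commands involved, assuming the VM can be shut down cleanly; the angle-bracket values are placeholders you fill in from the previous command's output.
# Find the VM's uuid
[root@xenserver ~]% xe vm-list name-label=<your vm name here> params=uuid
# Shut the VM down before touching its disk
[root@xenserver ~]% xe vm-shutdown uuid=<vm uuid>
# List the VM's block devices to find the primary disk's VDI (the device will show as xvda or hda)
[root@xenserver ~]% xe vbd-list vm-uuid=<vm uuid> params=device,vdi-uuid
# Destroy the VM record, then destroy the disk image itself
[root@xenserver ~]% xe vm-destroy uuid=<vm uuid>
[root@xenserver ~]% xe vdi-destroy uuid=<vdi uuid>
Note that xe vm-uninstall can remove the VM and its attached read/write disks in one step, if you prefer that over destroying them individually.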
Virtual Block Device (VBD): A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk. The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on), while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on).
Virtual Disk Image (VDI): A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on).
Xen Server version 6.2.0 documentation (cli xe command vbd). Retrieved from cli xe command reference
Not all companies will hand you a shiny new (or old) Windows laptop. Not all companies will purchase a Windows license to run as a VM. This will force you to use the CLI to get your job done. This blog post will show you how to find the MAC address of a VM running on Citrix Xen Server.
[root@xenserver ~]% xe vm-list name-label=<your vm name here> params=uuid
uuid ( RO) : 3df485ee-0e99-2851-cf6c-e0c7517e68fd
[root@xenserver ~]% xe vif-list vm-uuid=3df485ee-0e99-2851-cf6c-e0c7517e68fd params=MAC
MAC ( RO) : 3a:c3:6f:ee:ab:c8
References
Virtual Interface (VIF): A VIF represents a virtual NIC on a virtual machine. VIF objects have a name and description, a globally unique UUID, and the network and VM they are connected to.
Xen Server version 6.2.0 documentation (cli xe command vif). Retrieved from cli xe command reference
Recently at work, I received an email from my networking team asking why they were seeing the following error.
%SW_MATM-4-MACFLAP_NOTIF: Host dddd.dddd.dddd in vlan 100 is flapping between port Te1/0/1 and port Gi2/0/22.
I tracked down the MAC address from the message above (MAC changed for security). I logged in to the box and discovered that the NIC bond was running in mode 0 (balance-rr) instead of mode 1 (active-backup), which it was configured for. The root cause was that Red Hat changed the way parameters for the bonding kernel module are loaded. Starting in RHEL 6.0, you need to add the bonding options to the bonded interface file (ifcfg-bondX) instead of /etc/modprobe.conf.
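The fix itself is a one-line change to the bond's interface config. Here is a minimal sketch of ifcfg-bond0 with the options moved into BONDING_OPTS; the addressing is a placeholder for illustration only.
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
# RHEL 6 and later: bonding module options go here instead of /etc/modprobe.conf
BONDING_OPTS="mode=1 miimon=100"
After making the change, restart networking (service network restart) or bounce the bond so the options take effect.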
Here are the commands I ran to verify everything was working after the change.
[root@host ~]% ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=255 time=1.89 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=255 time=0.872 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=255 time=2.25 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=255 time=0.880 ms
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.872/1.476/2.253/0.612 ms
[root@host ~]% host www.google.com
www.google.com has address 74.125.224.177
www.google.com has address 74.125.224.178
www.google.com has address 74.125.224.179
www.google.com has address 74.125.224.180
www.google.com has address 74.125.224.176
www.google.com has IPv6 address 2607:f8b0:4007:800::1010
[root@host ~]% cat /proc/net/bonding/bond0 | grep -i mode
Bonding Mode: fault-tolerance (active-backup)
[root@host ~]% cat /sys/class/net/bond0/bonding/mode
active-backup 1
[root@host ~]% ifdown eth0
From another host, ping your server; if the ping is good, SSH to it. If you can reach your server via ping and SSH, then the NIC bonding is working as it should.
[root@host ~]% ifup eth0