Keith Hurlic

Sr. Systems Engineer by day. Forever a Skateboarder at heart.


Install Puppet 3.x on Amazon Linux AMI


I came across an interesting issue with Puppet today. I'm using the Amazon Linux AMI, and I discovered that it ships with an older version of Puppet. I did not think much of it, so I installed the Puppet Labs yum repo and installed the shiny new 3.7.x version of Puppet. When I did my test run, I received lots of errors, which sent me off googling. I finally got everything working, and I'm posting what I did here to save someone else some time and hair.

[user@host ~]% sudo yum erase puppet
[user@host ~]% sudo rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm
[user@host ~]% sudo sed -i'' -e '/\[main\].*/ {N; s/enabled = 1/enabled = 0/g}' /etc/yum/pluginconf.d/priorities.conf
[user@host ~]% sudo yum install puppet
[user@host ~]% sudo yum install rubygem18-json.x86_64
[user@host ~]% sudo alternatives --set ruby /usr/bin/ruby1.8
[user@host ~]% sudo puppet --version
3.7.4
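The sed one-liner above edits the yum priorities config in place, so it is worth dry-running it on a scratch copy first. A minimal sketch, assuming a hypothetical priorities.conf with the plugin enabled; the file contents here are an illustration, not the real Amazon Linux defaults:

```shell
# Hypothetical priorities.conf contents for illustration only.
tmpconf=$(mktemp)
printf '[main]\nenabled = 1\n' > "$tmpconf"

# Disable the yum priorities plugin so the puppetlabs repo
# is not deprioritized in favor of the stock amzn packages.
sed -i -e 's/enabled = 1/enabled = 0/' "$tmpconf"

cat "$tmpconf"
rm -f "$tmpconf"
```

Once the output looks right, point the same substitution at the real /etc/yum/pluginconf.d/priorities.conf.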

References

Puppet 3.x is now broken on Amazon AWS due to Ruby 2.0 being the default. Retrieved from the Puppet JIRA ticket.

Xen Delete My Vm

posted in linux, tech, xen

This blog post will show you how to delete the VM of your choosing, and its primary disk, from a Citrix XenServer. A few things to keep in mind: some of the disks will show up as xvda instead of hda, depending on how you have configured your server.

[root@xenserver ~]% xe vm-list name-label=testserver params=uuid
uuid ( RO)    : 3df485ee-0e99-2851-cf6c-e0c7517e68fd

[root@xenserver ~]% xe vm-shutdown uuid=3df485ee-0e99-2851-cf6c-e0c7517e68fd

[root@xenserver ~]% xe vbd-list vm-uuid=3df485ee-0e99-2851-cf6c-e0c7517e68fd device=hda params=uuid
uuid ( RO)    : bfdb0ba5-b397-e6f5-3ba1-5aa9df6dd0ce

[root@xenserver ~]% xe vdi-list vbd-uuids=bfdb0ba5-b397-e6f5-3ba1-5aa9df6dd0ce params=uuid
uuid ( RO)    : 07d5c7f8-40d1-4276-ba0c-5fde960ab527

[root@xenserver ~]% xe vdi-destroy uuid=07d5c7f8-40d1-4276-ba0c-5fde960ab527

[root@xenserver ~]% xe vm-destroy uuid=3df485ee-0e99-2851-cf6c-e0c7517e68fd
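The steps above can be chained into a small script. Since xe prints each field as `name ( RO)    : value`, the value can be peeled off with awk. This sketch parses a captured line (the UUID is just the example value from the output above; in practice you would pipe the live `xe vm-list ... params=uuid` output through the same function):

```shell
# Extract the value after the colon from an xe-style "uuid ( RO) : value" line.
xe_value() {
  awk -F': ' '{ print $2 }'
}

# Demo with the example output captured above.
echo 'uuid ( RO)    : 3df485ee-0e99-2851-cf6c-e0c7517e68fd' | xe_value
```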

References

Virtual Block Device (VBD): A VBD is a software object that connects a VM to a VDI. The VBD has the attributes which tie the VDI to the VM (whether it is bootable, its read/write metrics, and so on).

Virtual Disk Image (VDI): A VDI is a software object that represents the contents of the virtual disk seen by a VM. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read-only, and so on).

XenServer version 6.2.0 documentation (CLI xe command vbd). Retrieved from the cli xe command reference.

Xen Find My Mac

posted in linux, tech, xen

Not all companies will hand you a shiny new (or old) Windows laptop, and not all companies will purchase a Windows license to run as a VM. This will force you to use the CLI to get your job done. This blog post will show you how to find the MAC address of a VM running on Citrix XenServer.

[root@xenserver ~]% xe vm-list name-label=<your vm name here> params=uuid
uuid ( RO)    : 3df485ee-0e99-2851-cf6c-e0c7517e68fd

[root@xenserver ~]% xe vif-list vm-uuid=3df485ee-0e99-2851-cf6c-e0c7517e68fd params=MAC
MAC ( RO)    : 3a:c3:6f:ee:ab:c8
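If you only need the bare address for scripting, the MAC can be pulled out of the line above with a grep regex. A minimal sketch using the example output (the address shown is just the one from the listing above):

```shell
# Pull a colon-separated MAC address out of xe's "MAC ( RO) : aa:bb:..." output.
echo 'MAC ( RO)    : 3a:c3:6f:ee:ab:c8' \
  | grep -oE '([0-9a-f]{2}:){5}[0-9a-f]{2}'
```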

References

Virtual Interface (VIF): A VIF represents a virtual NIC on a virtual machine. VIF objects have a name and description, a globally unique UUID, and the network and VM they are connected to.

XenServer version 6.2.0 documentation (CLI xe command vif). Retrieved from the cli xe command reference.

Channel Bonding Interfaces the RedHat 6.x Way

posted in linux, tech

Recently at work, I received an email from my networking team asking why they were seeing the following error:

%SW_MATM-4-MACFLAP_NOTIF: Host dddd.dddd.dddd in vlan 100 is flapping between port Te1/0/1 and port Gi2/0/22.

I tracked down the MAC from the message above (MAC changed for security), logged in to the box, and discovered that the NIC bond was running in mode 0 instead of the mode 1 it was configured for. The root cause: Red Hat changed the way parameters for the bonding kernel module are loaded. Starting in RHEL 6.0 you need to add the bonding options to the bonded interface file (ifcfg-bondX).

Here are the steps I took to correct the issue.

Verify bonding mode

[root@server ~]% cat /sys/class/net/bond0/bonding/mode
balance-rr 0

[root@server ~]% cat /proc/net/bonding/bond0 | grep -i mode
Bonding Mode: load balancing (round-robin)

[root@server ~]% cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=1 miimon=100

Make changes

[root@server ~]% sed -i'.bak' -e '/options.*/d' /etc/modprobe.d/bonding.conf
[root@server ~]% echo 'BONDING_OPTS="mode=1 miimon=100"' >> /etc/sysconfig/network-scripts/ifcfg-bond0

Restart Network

[root@server ~]% service network restart

Verify bonding mode

[root@server ~]% cat /sys/class/net/bond0/bonding/mode
active-backup 1

[root@server ~]% cat /proc/net/bonding/bond0 | grep -i mode
Bonding Mode: fault-tolerance (active-backup)

[root@server ~]% cat /etc/modprobe.d/bonding.conf
alias bond0 bonding

Setting up nic bonding from scratch


Change all of the example IP addresses from 1.1.1.x to the IP addresses that work in your environment.

1.) Edit the /etc/modprobe.d/bonding.conf

[root@host ~]% vi /etc/modprobe.d/bonding.conf
alias bond0 bonding

2.) Edit the /etc/sysconfig/network

[root@host ~]% vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=your.hostname.here
GATEWAY=1.1.1.1

3.) Edit the /etc/resolv.conf

[root@host ~]% vi /etc/resolv.conf
search example.com
nameserver 1.1.1.2
nameserver 1.1.1.3

4.) Edit the /etc/sysconfig/network-scripts/ifcfg-bond0

[root@host ~]% vi  /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=1.1.1.100
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"

5.) Edit the /etc/sysconfig/network-scripts/ifcfg-eth0

[root@host ~]% vi  /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
SLAVE="yes"
ONBOOT="yes"
MASTER="bond0"
USERCTL="no"
BOOTPROTO=none
NM_CONTROLLED="no"

6.) Edit the /etc/sysconfig/network-scripts/ifcfg-eth1

[root@host ~]% vi  /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
SLAVE="yes"
ONBOOT="yes"
MASTER="bond0"
USERCTL="no"
BOOTPROTO=none
NM_CONTROLLED="no"
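Since the slave files in steps 5 and 6 differ only in DEVICE, they can be generated in a loop instead of edited by hand. A sketch writing to a scratch directory; on a real host you would point outdir at /etc/sysconfig/network-scripts:

```shell
# Generate identical slave configs for each bonded NIC; only DEVICE differs.
outdir=$(mktemp -d)   # stand-in for /etc/sysconfig/network-scripts
for nic in eth0 eth1; do
  cat > "$outdir/ifcfg-$nic" <<EOF
DEVICE="$nic"
SLAVE="yes"
ONBOOT="yes"
MASTER="bond0"
USERCTL="no"
BOOTPROTO=none
NM_CONTROLLED="no"
EOF
done

cat "$outdir/ifcfg-eth1"
```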

7.) Load the bonding kernel module

[root@host ~]% modprobe bonding

8.) Restart the network

[root@host ~]% service network restart

9.) Verify changes

[root@host ~]% ping 1.1.1.1

PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=255 time=1.89 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=255 time=0.872 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=255 time=2.25 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=255 time=0.880 ms

--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.872/1.476/2.253/0.612 ms

[root@host ~]% host www.google.com
www.google.com has address 74.125.224.177
www.google.com has address 74.125.224.178
www.google.com has address 74.125.224.179
www.google.com has address 74.125.224.180
www.google.com has address 74.125.224.176
www.google.com has IPv6 address 2607:f8b0:4007:800::1010

[root@host ~]% cat /proc/net/bonding/bond0 | grep -i mode
Bonding Mode: fault-tolerance (active-backup)

[root@host ~]% cat /sys/class/net/bond0/bonding/mode
active-backup 1

[root@host ~]% ifdown eth0
From another host, ping your server; if the ping is good, then ssh to it. If you can reach your server via both ping and ssh, then the NIC bonding is working as it should.

[root@host ~]% ifup eth0
References

Red Hat Enterprise Linux 6 Deployment Guide. Retrieved from the Red Hat docs for bonding.