
239 - Channel bonding for multiple Ethernet cards

Red Hat Enterprise Linux allows administrators to bind NICs together into a single channel using the bonding kernel module and a special network interface,
called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth
and providing redundancy. Bonding is configured at the kernel level.

1. Edit /etc/modprobe.conf (or /etc/modules.conf) to configure bonding

* The file /etc/modules.conf is the configuration file for loading kernel modules.

* On RedHat 6.x / CentOS 6.x, edit a file under /etc/modprobe.d/ instead:

# vi /etc/modprobe.d/bonding.conf

  * Load the module into the kernel:

# modprobe bonding
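To confirm that the module is loaded, you can check with lsmod and modinfo (a quick sanity check; the version string will vary by kernel):

# lsmod | grep bonding
# modinfo bonding | grep -i version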

To enhance performance, adjust the available module options to determine which combination works best.
Pay particular attention to the miimon parameter and the ARP parameters (arp_interval and arp_ip_target).

Ex. (a generic template, followed by a concrete active-backup example using ARP monitoring):

alias bond0 bonding
options bonding mode=x miimon=x

alias bond0 bonding
options bonding mode=1 miimon=0 arp_interval=10000 arp_ip_target=192.168.17.1 primary=eth0

* If you are using CentOS 6.x or RedHat 6.x:

You do not need to put the options in bonding.conf.

"Parameters for the bonding kernel module must be specified as a space-separated list
in the BONDING_OPTS="" directive in the ifcfg-bond interface file. Do not specify options
for the bonding device in /etc/modprobe.d/bonding.conf, or in the deprecated /etc/modprobe.conf file."
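As an illustration, the ARP-monitoring example given above for /etc/modprobe.conf would instead be written on 6.x as a single BONDING_OPTS line in the bond's interface file (a sketch reusing this article's addresses; adapt the mode and targets to your network):

BONDING_OPTS="mode=1 arp_interval=10000 arp_ip_target=192.168.17.1 primary=eth0"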

< mode >

0 - Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded
slave interface beginning with the first one available.

1 - Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface.
Another bonded slave interface is only used if the active bonded slave interface fails.

2 - Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's
MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially beginning
with the first available interface.

3 - Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.

4 - Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings.
Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.

5 - Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to
the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave
takes over the MAC address of the failed slave.

6 - Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for
IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
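Once the bond is up, the configured mode can be read back from sysfs (assuming the bond is named bond0, as in the steps below; the output shown is for mode 0):

# cat /sys/class/net/bond0/bonding/mode
balance-rr 0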

miimon= - Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports MII, type the following command as root:

# ethtool eth0 | grep "Link detected:"

arp_interval= - Specifies (in milliseconds) how often ARP monitoring occurs.

arp_ip_target= - Specifies the target IP address of ARP requests when the arp_interval parameter is enabled. Up to 16 IP addresses can be specified in a comma separated list.

primary= - Specifies the interface name, such as eth0, of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load.

downdelay= - Specifies (in milliseconds) how long to wait after a link failure has been detected before disabling the link. Must be a multiple of the miimon value.
The default value is zero.

updelay= - Specifies (in milliseconds) how long to wait after "link up" status has been detected before enabling the link. Must be a multiple of the miimon value.
The default value is zero.
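As a sketch, a 6.x active-backup bond that combines MII monitoring with the delay and primary parameters described above might look like this (the values are illustrative; note that both delays are multiples of miimon):

BONDING_OPTS="mode=1 miimon=100 downdelay=200 updelay=200 primary=eth0"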

< MII Status & High Availability >

High availability is achieved by using MII status reporting. You need to verify that all your interfaces support MII link status reporting. As of Linux kernel 2.2.17,
all the 100 Mbps capable drivers and the yellowfin gigabit driver support it. If your system has an interface that does not support MII status reporting,
the failure of its link will not be detected.

The bonding driver can regularly check all its slave links by reading the MII status registers. The check interval is specified by the module argument
"miimon" (MII monitoring). It takes an integer that represents the checking interval in milliseconds. It should not come too close to 1000/HZ (10 ms on i386),
because it may then reduce system interactivity. 100 ms seems to be a good value: it means that a dead link will be detected at most 100 ms
after it goes down.
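To keep an eye on the MII status of the bond and its slaves while testing, the /proc entry can be polled (assuming the bond is named bond0; interrupt with Ctrl+C):

# watch -n 1 'grep "MII Status" /proc/net/bonding/bond0'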

# ethtool eth0

..........

        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 2
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes
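To check the link status of every slave in one go, a small shell loop works (assuming the slaves are eth0 and eth1):

# for nic in eth0 eth1; do echo -n "$nic: "; ethtool $nic | grep "Link detected:"; done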

 

2. bond0 (Master)

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETMASK=255.255.255.0
IPADDR=192.168.17.13
GATEWAY=192.168.17.1
TYPE=Ethernet
USERCTL=no

# If CentOS 6.x or RedHat 6.x, add the bonding options here:
BONDING_OPTS="mode=0 miimon=100"

3. eth0 (Slave)

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

4. eth1 (Slave)

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
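Before restarting the network, it is worth verifying that each slave file points at the right master (a quick sanity check against the files from steps 3 and 4):

# grep -E "MASTER|SLAVE" /etc/sysconfig/network-scripts/ifcfg-eth[01]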

5. Restart the network service

# service network restart

------------------------------------------------------------------------------------------

Alternatively, bring the bonding interface up directly:

# ifconfig bond0 up

or

# ifup bond0

6. Check the NIC status

# ifconfig

All MAC addresses are the same:

bond0     Link encap:Ethernet  HWaddr 00:24:E8:59:6B:87
          inet addr:192.168.17.13  Bcast:192.168.17.255  Mask:255.255.255.0
          inet6 addr: fe80::224:e8ff:fe59:6b87/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:26004 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5080691 (4.8 MiB)  TX bytes:229318 (223.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:24:E8:59:6B:87
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:26004 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5080691 (4.8 MiB)  TX bytes:229318 (223.9 KiB)
          Interrupt:122 Memory:da000000-da012800

eth1      Link encap:Ethernet  HWaddr 00:24:E8:59:6B:87
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:130 Memory:dc000000-dc012800

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:24:e8:59:6b:87

Slave Interface: eth1
MII Status: down
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:24:e8:59:6b:89
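In active-backup mode (mode=1), the slave currently carrying the traffic can also be read directly from sysfs (assuming bond0; the output below is illustrative):

# cat /sys/class/net/bond0/bonding/active_slave
eth0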

7. Redundancy test

Pull one of the cables out of the server and check that the network stays up.
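One way to run the test: start a continuous ping to the gateway from step 2 in one terminal, pull a cable, and confirm the ping keeps running while the failure count increments (192.168.17.1 is the gateway used above):

# ping 192.168.17.1
# grep "Link Failure Count" /proc/net/bonding/bond0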

8. If it is not working on CentOS 6.x or RedHat 6.x

Try disabling NetworkManager, then restart the network service or reboot the server.

# chkconfig --level 2345 NetworkManager off
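On a running system you can also stop the service immediately and then restart networking (standard SysV service commands on 6.x):

# service NetworkManager stop
# service network restart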
