Wednesday, October 27, 2010

Dynamic Bonding of Ethernet Interfaces

ifenslave - Attach and detach slave network devices to a bonding device.

Synopsis

ifenslave [-acdfhuvV] [--all-interfaces] [--change-active] [--detach] [--force] [--help] [--usage] [--verbose] [--version] master slave ...

Description

ifenslave is a tool to attach and detach slave network devices to a bonding device. A bonding device will act like a normal Ethernet network device to the kernel, but will send out the packets via the slave devices using a simple round-robin scheduler. This allows for simple load balancing, similar to the "channel bonding" or "trunking" techniques used in switches.

The kernel must have support for bonding devices for ifenslave to be useful.

Options

-a, --all-interfaces
Show information about all interfaces.

-c, --change-active
Change active slave.

-d, --detach
Remove slave interfaces from the bonding device.

-f, --force
Force actions to be taken if one of the specified interfaces appears not to belong to an Ethernet
device.

-h, --help
Display a help message and exit.

-u, --usage
Show usage information and exit.

-v, --verbose
Print warning and debug messages.

-V, --version
Show version information and exit.

If no options are given, the default action is to enslave the listed interfaces to the master.

Example

The following example shows how to setup a bonding device and enslave two real
Ethernet devices to it:

# modprobe bonding
# ifconfig bond0 192.168.0.1 netmask 255.255.0.0
# ifenslave bond0 eth0 eth1
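
Once the slaves are attached, the bonding driver reports its status through procfs. A quick way to confirm the mode in use and that both slaves are up (assuming the bond0 device created above) is:

# cat /proc/net/bonding/bond0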

Tuesday, October 26, 2010

Scalr: The Auto-Scaling Open-Source Amazon EC2 Effort





Scalr is a recently open-sourced framework for managing the massive serving power of Amazon’s Elastic Compute Cloud (EC2) service. While web services have been using EC2 for increased capacity since Fall 2006, it has never been fully “elastic” (scaling still requires adding and configuring more machines when the need arises). What Scalr promises is compelling: a “redundant, self-curing, and self-scaling” network, or a nearly self-sustainable site that could do normal traffic in the morning, and then get Buzz’d in the afternoon.

The Scalr framework is a series of server images, known dully in Amazon-land as Amazon Machine Images (AMIs), for each of the basic website needs: an app server, a load balancer, and a database server. These AMIs come pre-built with a management suite that monitors the load and operating status of the various servers on the cloud. Scalr can increase or decrease capacity as demand fluctuates, as well as detect and rebuild improperly functioning instances. Scalr is also smart enough to know what type of scaling is necessary, but how well it will scale is still a fair question.

Those behind Scalr believe open-sourcing their pet project will help disrupt the established, for-pay players in the AWS management game, RightScale and WeoCeo. Intridea, a Ruby on Rails development firm, originally developed Scalr for MediaPlug, a yet-to-launch “white label YouTube” with potentially huge (and variable) media transcoding needs. Scalr was recently featured on Amazon Web Service’s blog.

I’d argue that Scalr makes Amazon EC2 significantly more interesting from a developer’s standpoint. EC2 is still largely used for batch-style, asynchronous jobs such as crunching large statistics or encoding video (although increasingly more are using it for their full web server setup). Amazon for their part is delivering on the ridiculously hard cloud features, last week announcing that their EC2 instances can have static IPs and can be chosen from certain data centers (should really improve the latency). But for now, monitoring and scaling an EC2 cluster is a real chore for AWS developers, so it’s good to see some abstraction.

Ubuntu Enterprise Cloud Architecture

Interesting white paper on Ubuntu's approach to cloud computing architectures.

Overview

Ubuntu Enterprise Cloud (UEC) brings Amazon EC2-like infrastructure capabilities inside the firewall. The UEC is powered by Eucalyptus, an open source implementation for the emerging standard of the EC2 API. This solution is designed to simplify the process of building and managing an internal cloud for businesses of any size, thereby enabling companies to create their own self-service infrastructure.

As the technology is open source, it requires no licence fees or subscription for use, experimentation or deployment. Updates are provided freely and support contracts can be purchased directly from Canonical.

UEC is specifically targeted at those companies wishing to gain the benefits of self-service IT within the confines of the corporate data centre whilst avoiding lock-in to a specific vendor, expensive recurring product licences or the use of non-standard tools that aren't supported by public providers.

This white paper tries to provide an understanding of the UEC internal architecture and possibilities offered by it in terms of security, networking and scalability.

http://www.ubuntu.com/system/files/UbuntuEnterpriseCloudWP-Architecture-20090820.pdf

Wednesday, October 20, 2010

Top 10 Virtualization Technology Companies

Any discussion of virtualization typically ends in clinking glasses and high fives, heated discussions or slap fights, but it almost always begins with VMware, as does this list. These top 10 virtualization vendors deliver the best virtualization software solutions on the market today.

You might not require every bit and byte of programming they're composed of, but you'll rejoice at the components of their feature sets when you need them. These solutions scale from a few virtual machines that host a handful of Web sites, virtual desktops or intranet services all the way up to tens of thousands of virtual machines serving millions of Internet users. Virtualization and related cloud services account for an estimated 40 percent of all hosted services. If you don't know all the names on this list, it's time for an introduction.


1. VMware
Find a major data center anywhere in the world that doesn't use VMware, and then pat yourself on the back because you've found one of the few. VMware dominates the server virtualization market. Its domination doesn't stop with its commercial product, vSphere. VMware also dominates the desktop-level virtualization market and perhaps even the free server virtualization market with its VMware Server product. VMware remains in the dominant spot due to its innovations, strategic partnerships and rock-solid products.


2. Citrix
Citrix was once the lone wolf of application virtualization, but now it also owns the world's most-used cloud vendor software: Xen (the basis for its commercial XenServer). Amazon uses Xen for its Elastic Compute Cloud (EC2) services. So do Rackspace, Carpathia, SoftLayer and 1and1 for their cloud offerings. On the corporate side, you're in good company with Bechtel, SAP and TESCO.


3. Oracle
If Oracle's world domination of the enterprise database server market doesn't impress you, its acquisition of Sun Microsystems now makes it an impressive virtualization player. Additionally, Oracle owns an operating system (Sun Solaris), multiple virtualization software solutions (Solaris Zones, LDoms and xVM) and server hardware (SPARC). What happens when you pit an unstoppable force (Oracle) against an immovable object (the Data Center)? You get the Oracle-centered Data Center.


4. Microsoft
Microsoft came up with the only non-Linux hypervisor, Hyper-V, to compete in a tight server virtualization market that VMware currently dominates. Not easily outdone in the data center space, Microsoft offers attractive licensing for its Hyper-V product and the operating systems that live on it. For all Microsoft shops, Hyper-V is a competitive solution. And, for those who have used Microsoft's Virtual PC product, virtual machines migrate to Hyper-V quite nicely.


5. Red Hat
For the past 15 years, everyone has recognized Red Hat as an industry leader and open source champion. Hailed as the most successful open source company, Red Hat entered the world of virtualization in 2008 when it purchased Qumranet and with it, its own virtual solution: KVM and SPICE (Simple Protocol for Independent Computing Environment). Red Hat released the SPICE protocol as open source in December 2009.


6. Amazon
Amazon's Elastic Compute Cloud (EC2) is the industry standard virtualization platform. Ubuntu's Cloud Server supports seamless integration with Amazon's EC2 services. EngineYard's Ruby application services leverage Amazon's cloud as well.


7. Google
When you think of Google, virtualization might not make the top of the list of things that come to mind, but its Google Apps, AppEngine and extensive Business Services list demonstrate how it has embraced cloud-oriented services.


8. Virtual Bridges
Virtual Bridges is the company that invented what's now known as virtual desktop infrastructure or VDI. Its VERDE product allows companies to deploy Windows and Linux Desktops from any 32-bit or 64-bit Linux server infrastructure running kernel 2.6 or above. To learn more about this Desktop-as-a-Managed Service, download the VERDE whitepaper.


9. Proxmox
Proxmox is a free, open source server virtualization product with a unique twist: it provides two virtualization solutions, full virtualization with Kernel-based Virtual Machine (KVM) and a container-based solution, OpenVZ.


10. Parallels
Parallels uses its open source OpenVZ project, mentioned above, for its commercial hosting product for Linux virtual private servers. High density and low cost are the two keywords you'll hear when experiencing a Parallels-based hosting solution. These are the two main reasons why the world's largest hosting companies choose Parallels. But, the innovation doesn't stop at Linux containerized virtual hosting. Parallels has also developed a containerized Windows platform to maximize the number of Windows hosts for a given amount of hardware.


Tuesday, October 19, 2010

Difference Between Ext3 and Ext4

Linux is a very flexible operating system with a long history of interoperability with other systems on different hardware platforms. Linux can read and write several file systems that originated with operating systems quite different from Linux.
One main reason Linux supports so many file systems is the VFS (Virtual File System) layer, a data abstraction layer between the kernel and the programs in userspace that issue file system commands. (Programs that run inside the kernel are in kernelspace; programs that do not run inside the kernel are in userspace.) The VFS layer avoids duplication of common code between file systems and provides a fairly universal, backward-compatible method for programs to access data from almost every type of file system.

EXT3 : Ext3 (Extended 3 file system) provides all the features of ext2, and also adds journaling and backward compatibility with ext2. The backward compatibility enables you to run kernels that are only ext2-aware with ext3 partitions. You can also use all of the ext2 file system tuning, repair and recovery tools with ext3, and you can upgrade an ext2 file system to an ext3 file system without losing any of your data.
Ext3's journaling feature reduces the time it takes to bring the file system back to a sane state if it has not been cleanly unmounted (that is, in the event of a power outage or a system crash). Under ext2, when a file system is uncleanly unmounted, the whole file system must be checked. This takes a long time on large file systems. On an ext3 system, the system keeps a record of uncommitted file transactions and applies only those transactions when the system is brought back up, so a complete file system check is not required and the system comes back up much faster.
A cleanly unmounted ext3 file system can be mounted and used as an ext2 file system; this capability can come in handy if you need to revert to an older kernel that is not aware of ext3. The kernel simply sees the ext3 file system as an ext2 file system.
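
For example, an existing ext2 partition can be upgraded to ext3 in place simply by adding a journal with tune2fs and then mounting it as ext3 (the device name below is only an illustration; remember to change the file system type in /etc/fstab as well):

tune2fs -j /dev/sda3

mount -t ext3 /dev/sda3 /mnt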

Ext4 : Ext4 has been part of the Linux kernel since version 2.6.28. Ext4 is the evolution of the most used Linux file system, Ext3. In many ways, Ext4 is a deeper improvement over Ext3 than Ext3 was over Ext2. Ext3 was mostly about adding journaling to Ext2, but Ext4 modifies important data structures of the file system, such as the ones destined to store the file data. The result is a file system with an improved design, better performance, reliability and features.
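
On a kernel and e2fsprogs recent enough to include ext4 support, creating and mounting a new ext4 file system takes one command each (device and mount point are examples only):

mkfs.ext4 /dev/sdb1

mount -t ext4 /dev/sdb1 /data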

Sunday, October 3, 2010

OCFS Cluster File System

OCFS stands for Oracle Cluster File System. It is a shared disk file system developed by Oracle Corporation and released under the GNU General Public License.

The first version of OCFS was developed with the main focus of accommodating Oracle database files for clustered databases. Because of that it was not a POSIX-compliant file system. With version 2 the POSIX features were included.

OCFS2 (version 2) was integrated into version 2.6.16 of the Linux kernel. Initially, it was marked as “experimental” (Alpha-test) code. This restriction was removed in Linux version 2.6.19. With kernel version 2.6.29 more features were included in OCFS2, especially access control lists and quotas.[2]

OCFS2 uses a distributed lock manager which resembles the OpenVMS DLM but is much simpler.

Hardware Requirement

* Shared Storage, accessible over SAN between cluster nodes.

* HBA for Fiber SAN on each node.

* Network connectivity for heartbeat between servers.

OS Installation

Regular installation of RHEL 5.x 64-bit with the following configuration.

1. SELinux must be disabled.

2. Time Zone - Saudi Time (GMT +3).

Packages

1. Gnome for Graphics desktop.

2. Development libraries

3. Internet tools - GUI and text based

4. Editors - GUI and text based

Partitioning the HDD

1. 200 MB for /boot partition

2. 5 GB for /var partition

3. 5 GB for /tmp partition

4. Rest of the space to / partition.

The server should be up to date with the latest patches from RedHat Network.

Installation of OCFS2 Kernel Module and Tools

OCFS2 kernel modules and tools can be downloaded from the Oracle web site.

OCFS2 Kernel Module:

http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/x86_64/1.4.1-1/2.6.18-128.1.1.el5

http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/x86_64/1.4.1-1/2.6.18-128.1.10.el5/

Note that 2.6.18-128.1.1.el5 should match the current running kernel on the server. A new OCFS2 Kernel package should be downloaded and installed each time the kernel is updated to a new version.
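
The running kernel version can be checked with uname, and its output (for example 2.6.18-128.1.1.el5) must appear in the name of the OCFS2 kernel module RPM you download:

uname -r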

OCFS2 Tools

http://oss.oracle.com/projects/ocfs2-tools/dist/files/RedHat/RHEL5/x86_64/1.4.1-1/ocfs2-tools-1.4.1-1.el5.x86_64.rpm

OCFS2 Console

http://oss.oracle.com/projects/ocfs2-tools/dist/files/RedHat/RHEL5/x86_64/1.4.1-1/ocfs2console-1.4.1-1.el5.x86_64.rpm

OCFS2 Tools and Console depend on several other packages which are normally available on a default RedHat Linux installation, except for the VTE (a terminal emulator) package.

So in order to satisfy the dependencies of OCFS2 you have to install the vte package using

yum install vte

After completing the VTE installation start the OCFS2 installation using regular RPM installation procedure.

rpm -ivh ocfs2-2.6.18-128.1.1.el5-1.4.1-1.el5.x86_64.rpm

rpm -ivh ocfs2console-1.4.1-1.el5.x86_64.rpm

rpm -ivh ocfs2-tools-1.4.1-1.el5.x86_64.rpm

This will copy the necessary files to their corresponding locations.
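
As a quick sanity check that all three packages are in place (the exact versions in the output will vary), you can query the RPM database:

rpm -qa | grep -i ocfs2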

Following are the important tools and files that are used frequently:

/etc/init.d/o2cb

/sbin/mkfs.ocfs2

/etc/ocfs2/cluster.conf (this folder and file need to be created manually)

OCFS2 Configuration

It is assumed that the shared SAN storage is connected to the cluster nodes and is available as /dev/sdb. This document covers installation of only a two-node (node1 and node2) OCFS2 cluster.

Following are the steps required to configure the cluster nodes.

Create the folder /etc/ocfs2

mkdir /etc/ocfs2

Create the cluster configuration file /etc/ocfs2/cluster.conf and add the following contents:

cluster:
        node_count = 2
        name = vmsanstorage

node:
        ip_port = 7777
        ip_address = 172.x.x.x
        number = 1
        name = mc1.ocfs
        cluster = vmsanstorage

node:
        ip_port = 7777
        ip_address = 172.x.x.x
        number = 2
        name = mc2.ocfs
        cluster = vmsanstorage

Note that the:

* Node name should match the “hostname” of corresponding server.

* Node number should be unique for each member.

* Cluster name for each node should match the “name” field in “cluster:” section.

* “node_count” field in “cluster:” section should match the number of nodes.
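
The cluster.conf file must be identical on every node of the cluster. One simple way to achieve this (using the node name from the example configuration above) is to copy the whole directory to the second node:

scp -r /etc/ocfs2 root@mc2.ocfs:/etc/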

O2CB cluster service configuration

The o2cb cluster service can be configured using:

/etc/init.d/o2cb configure (This command will show the following dialogs)

Configuring the O2CB driver

This will configure the on-boot properties of the O2CB driver.

The following questions will determine whether the driver is loaded on boot.

The current values will be shown in brackets ('[]').

Hitting <ENTER> without typing an answer will keep that current value.

Ctrl-C will abort.

Load O2CB driver on boot (y/n) [n]: y

Cluster stack backing O2CB [o2cb]:

Cluster to start on boot (Enter “none” to clear) [ocfs2]: vmsanstorage

Specify heartbeat dead threshold (>=7) [31]:

Specify network idle timeout in ms (>=5000) [30000]:

Specify network keepalive delay in ms (>=1000) [2000]:

Specify network reconnect delay in ms (>=2000) [2000]:

Writing O2CB configuration: OK

Loading filesystem “ocfs2_dlmfs”: OK

Mounting ocfs2_dlmfs filesystem at /dlm: OK

Starting O2CB cluster vmsanstorage: OK

Note that the driver should be loaded while booting and the “Cluster to start” should match the cluster name defined in /etc/ocfs2/cluster.conf, in our case “vmsanstorage”.

As a best practice it is advised to reboot the server after successfully completing the above configuration.
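
After the reboot, the state of the cluster stack can be verified with the status action of the init script; it should report that the driver is loaded and that the cluster is online:

/etc/init.d/o2cb status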

Formatting and Mounting the shared file system

Before we can start using the shared filesystem, we have to format the shared device with the OCFS2 filesystem.

The following command will format the device with OCFS2 and set some additional features.

mkfs.ocfs2 -T mail -L ocfs-mnt --fs-features=backup-super,sparse,unwritten -M cluster /dev/sdb

Where :

-T mail

Specifies how the filesystem is going to be used, so that mkfs.ocfs2 can choose optimal filesystem parameters for that use.

The “mail” option is appropriate for file systems which will have many metadata updates; it creates a larger journal.

-L ocfs-mnt

Sets the volume label for the filesystem. It will be used instead of the device name to identify the block device in /etc/fstab.

--fs-features=backup-super,sparse,unwritten

Turn specific file system features on or off.

backup-super

Create backup super blocks for this volume

sparse

Enable support for sparse files. With this, OCFS2 can avoid allocating (and zeroing) data to fill holes

unwritten

Enable unwritten extents support. With this turned on, an application can request that a range of clusters be preallocated within a file.

-M cluster

Defines whether the filesystem is local or clustered. Cluster is the default.

/dev/sdb

Block device that needs to be formatted.

Note: The default mkfs.ocfs2 options cover only a 4-node cluster. If you have more nodes, you have to specify the number of node slots using -N number-of-node-slots.
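
As an illustration, the same format command sized for an eight-node cluster would simply add the -N option (all other options as above):

mkfs.ocfs2 -T mail -L ocfs-mnt --fs-features=backup-super,sparse,unwritten -M cluster -N 8 /dev/sdb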

We are ready to mount the new filesystem once the format operation is completed successfully.

You may mount the new filesystem using the following command. It is assumed that the mount point (/mnt) exists already.

mount /dev/sdb /mnt

If the mount operation was successfully completed you can add the following entry to /etc/fstab for automatic mounting during the bootup process.

LABEL=ocfs-mnt /mnt ocfs2 rw,_netdev,heartbeat=local 0 0

Test the newly added fstab entry by rebooting the server. The server should mount /dev/sdb automatically to /mnt. You can verify this using the "df" command after reboot.
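
In addition to df, the ocfs2-tools package includes the mounted.ocfs2 utility, which lists the cluster nodes that currently have an OCFS2 volume mounted; running it on both nodes is a convenient final check (device name as in the example above):

mounted.ocfs2 -f /dev/sdb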