Wednesday, January 12, 2011

Micro-Cloud Anyone? Supermicro Unleashes a (tiny) Monster

Supermicro has been holding the AS-2021TM 4-node-in-2U platform back for several weeks, but it’s finally out from behind proprietary OEMs. We’re talking about the Supermicro 2021TM-B “mini cluster,” of course, and we’ve been watching this platform for some time.

Why is this a great platform for right now? The H8DMT-F motherboard, supporting up to 64GB of DDR2/800 memory, also supports HT3.0 links to enable the slightly higher HT bandwidth of the upcoming Istanbul 6-core processors. The on-board IPMI 2.0 management (WPCM450, supporting KVM/IP and serial-over-LAN) with a dedicated LAN port, plus two inboard USB ports (supporting boot flash), makes this an ideal platform for “cloud computing” operations with high-density needs and limited budgets.
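Since the BMC speaks standard IPMI 2.0, any stock client can drive it once the dedicated LAN port is configured. As a quick sketch with ipmitool (the address and ADMIN credentials below are placeholders for your own):

# ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P ADMIN sdr list
# ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P ADMIN sol activate

The first command dumps the node’s sensor readings over the LAN interface; the second opens the serial-over-LAN console.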

The inclusion of the on-board Intel Zoar (82575) dual-port Gigabit Ethernet controller means VMDq support for 4 receive queues per port using the “igb” driver, as we’ve reported in a previous post on “cheap” IOV. An nVidia MCP55-Pro provides Southbridge functions: 6x USB 2.0, 4x SATA and 1x PCI-express x16 (low-profile) slot. This is a VMware “vSphere ready” configuration.
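On a Linux host, one rough way to confirm those receive queues are actually in play (interface name assumed to be eth0 here) is to check the MSI-X vectors and per-queue counters the igb driver exposes:

# grep eth0 /proc/interrupts
# ethtool -S eth0 | grep rx_queue

Each active receive queue should show up as its own interrupt vector and its own rx_queue_N statistics line.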

Each motherboard is installed on a removable tray, allowing for hot-swapping of nodes (similar to blade architectures). The available x16 PCI-express slot allows a single dual-port 10GE card to drive higher network densities per node. An optional 20Gbps Mellanox Infiniband controller (MT25408A0-FCC-DI) is available on-board (PCI-express x8 connected) for HPC applications.
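Both the on-board Infiniband controller and any card in the x16 slot enumerate as ordinary PCI-express devices, so a quick sanity check from a booted Linux node is simply:

# lspci | grep -i mellanox
# lspci | grep -i 82575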

Each node is connected to a bank of three SATA hot-swap drive bays supporting RAID 0, 1 or 5 modes of operation (MCP55 Pro NVRAID). This makes the 2021TM a good choice for dense Terminal Services applications, HPC cluster nodes or VMware ESX/ESXi service nodes.

Key Factors:

  • Redundant power supply with Gold Level 93% efficiency rating
  • Up to 64GB DDR2/800 per node (256GB/2U) – Istanbul’s sweet-spot is 32-48GB
  • HT3.0 for best Socket-F I/O and memory performance
  • Modern i82575 1Gbps (dual-port) with IOV
  • Inboard USB flash for boot-from-flash (ESXi) – see the sketch after this list
  • Low-profile PCI-express x16 (support for dual-port 10GE & CNAs)
  • Hot-swap motherboard trays for easy maintenance
  • Full KVM/IP with Media/IP per node (dedicated LAN port)
  • Available with on-board Mellanox Infiniband (AS-2021TM-BiBTRF) or without (AS-2021TM-BTRF)
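On the boot-from-flash point, here is a minimal sketch of imaging the inboard USB stick from a Linux workstation. The image filename is a placeholder for whatever bootable ESXi image you have prepared, and /dev/sdX must be replaced with the actual flash device (dd will destroy whatever is on the target, so verify it first):

# dd if=esxi-boot-image.dd of=/dev/sdX bs=1M
# sync

The node can then boot ESXi straight from the inboard USB port, leaving all drive bays free for data.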


Monday, January 10, 2011

Using parted and LVM2 for large partitions

I wanted to spread a partition across two RAID cards, with one drive partition on each card. Here is the server's physical drive configuration:

|-RAID0-|-RAID5---------|
| 00 01 | 02 03 04 05 06|

|-RAID5------------------------------------|
| 00 01 02 03 04 05 06 07 08 09 10 11 12 13|
The second RAID5 is a few TB, and fdisk won't work on partitions larger than 2TB (that's the ceiling of the MBR partition table it writes), so I use parted to create a GPT partition that fills the free space.
# parted /dev/sdc
GNU Parted 1.8.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print

Model: DELL PERC 5/E Adapter (scsi)
Disk /dev/sdc: 3893GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

(parted) mklabel gpt

(parted) mkpart primary 0 3893G
(parted) print

Model: DELL PERC 5/E Adapter (scsi)
Disk /dev/sdc: 3893GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 3893GB 3893GB primary

(parted) quit
Then I use LVM2 to combine the two RAIDs, sdb1 and sdc1, into one logical volume.
# pvcreate /dev/sdb1 /dev/sdc1
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
# vgcreate nsm_vg /dev/sdb1 /dev/sdc1
Volume group "nsm_vg" successfully created
# pvscan
PV /dev/sdb1 VG nsm_vg lvm2 [272.25 GB / 632.00 GB free]
PV /dev/sdc1 VG nsm_vg lvm2 [3.54 TB / 3.54 TB free]
Total: 2 [1.94 TB] / in use: 2 [1.94 TB] / in no VG: 0 [0 ]

# lvcreate -L 3897G -n nsm_lv nsm_vg
Logical volume "nsm_lv" created

# mkfs.ext3 -m 1 /dev/nsm_vg/nsm_lv
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
528482304 inodes, 1056964608 blocks
10569646 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
32256 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
78675968, 102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
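A periodic fsck on a 3.8TB ext3 filesystem can take hours, so on a box that needs to come back up quickly you may want to disable the automatic checks and schedule them yourself, as that last line of output hints:

# tune2fs -c 0 -i 0 /dev/nsm_vg/nsm_lv

Here -c 0 disables the mount-count check and -i 0 disables the time-based check.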

# mount /dev/mapper/nsm_vg-nsm_lv /nsm

$ df -h
Filesystem Size Used Avail Use% Mounted on
---snip---
/dev/mapper/nsm_vg-nsm_lv
3.8T 196M 3.8T 1% /nsm
Now I can put the entry for mounting /nsm into /etc/fstab.

NOTE:
I suggest mounting the partition with "defaults,async,noatime", which helps make things more efficient when you're doing a constant stream of writes, as sguil does with PCAP files.
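With those options, the /etc/fstab entry would look something like this (device path as created above):

/dev/mapper/nsm_vg-nsm_lv  /nsm  ext3  defaults,async,noatime  1  2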