New Cisco Aironet 1700 Series & WLC Software Compatibility Matrix

Cisco recently released the Aironet 1700 series access points, which support high-speed IEEE 802.11ac WiFi. One key specification of the 1700 series: the APs "only" work with Cisco Unified Wireless Network Software Release 8.0 or later.

It's now time to upgrade your WLC to version 8.x to get these APs working for you.

Here is the latest AP & WLC software compatibility matrix (as of December 15, 2014) –

cisco-aironet-wlc-software

Details @ http://www.cisco.com/c/en/us/td/docs/wireless/compatibility/matrix/compatibility-matrix.html


Cisco AIP SSM Email Alert – Cisco IPS Manager Express (IME)

Cisco AIP-SSMs are pluggable hardware modules that add advanced intrusion prevention security services (IDS/IPS) to Cisco ASA 5500 series firewalls.

Although many AIP-SSM configuration parameters can be set via ASDM > IDM (Cisco IPS Device Manager) or via the CLI, neither IDM nor the CLI provides a way to send security events or security reports via email.

So, how do I know what is happening on the IDS/IPS? Is the IPS device detecting threats? Is the IPS blocking attacker IP addresses?

The answer: there is a separate piece of software called Cisco IPS Manager Express (IME) that can manage, configure and send email alert notifications for AIP-SSM modules. It needs to be installed on a Windows machine. As of today the latest version is 7.2.7. Supported Windows platforms are Windows Vista Business+/XP Pro/Windows 8+/Windows 2003 R2/Windows 2008 and above.

Apart from all ASA AIP-SSM modules, IME also supports the following Cisco IPS hardware platforms – 4240, 4255, 4260, 4270-20, 4345, 4360, 4510 and 4520.

Here is the download URL (you need a valid Cisco login),

https://software.cisco.com/download/type.html?mdfid=282052550&catid=null

Installation is very straightforward; start the installer, then follow next, next and finish.

Once the IME installation is finished, add all of your AIP-SSMs or IPS devices to the IME console via IP address. Make sure the IME Windows machine can communicate with the AIP-SSM or IPS device's management interface IP address. You can have a bunch of IPS devices under one IME.

Setting Up Email Notification

This is a very easy task. Open the IME console > click "Tools" > click "Preferences"; enter your SMTP server details under the "Email Setup" tab; screenshot –

IME-EMailSetup

Send a test email to confirm that IME can send email successfully.

Click the next tab, "Notifications", for IDS/IPS security events – configure your preferred notification parameters here; screenshot –

IME-Notifications

Lastly, you might want to see consolidated security events in a report – such as what happened in the last 24 hours, last 7 days or last 30 days. Go to the next tab called "Reports" – all the report parameters are here; IME will send a PDF report with a colourful presentation of the data in graphs and charts; screenshot –

IME-Reports

Questions again –

i. You have configured all the email notification parameters; do you need to keep IME running on the desktop, or can you close the IME console and log off?

Answer: Yes – you can close the IME console and log off from the Windows computer; IME keeps running in the background as a Windows service.

ii. You have added 4 IPS devices to your IME – is email alert notification working on ALL of them?

Answer: Yes – email notification is a global setting within IME that applies to ALL IPS devices added to the console. There is no option to configure email notifications for an individual IPS device within the same IME console.

VMware ESXi Host Memory Management, Monitoring, Alert Notification – Part 2

I described the memory monitoring and alert notification gauge in the previous article (Part 1) – let's do the configuration.

There are many ways to monitor and get alert notifications about VMware ESXi host memory usage – most well-known monitoring solutions come with VMware monitoring plugins pre-installed, and vCenter Server can also send alerts based on given conditions.

Here I will discuss how to configure Nagios Core for memory usage monitoring and alert notification (NagiosXI, the commercial edition, has a nice built-in web UI to do the same). Before moving forward, make sure the Nagios server is up and running – we need to install the following software/tools on the Nagios server –

i. VMware vSphere Perl SDK; the version should match the vCenter/ESXi host version – version 5.5 can be downloaded at https://developercenter.vmware.com/web/sdk/55/vsphere-perl
ii. Download and install check_vmware_esx.pl (this is a fork of check_vmware_api.pl) from https://www.monitoringexchange.org/inventory/Check-Plugins/Virtualization/VMWare-%2528ESX%2529/check_vmware_esx.pl—a-fork-of-check_vmware_api.pl-%2528check_esx3-pl%2529 or from https://github.com/BaldMansMojo/check_vmware_esx/blob/master/check_vmware_esx.pl
iii. Install the required Perl modules.

(Step 1 – install VMware vSphere Perl-SDK)

#tar zxvf  VMware-vSphere-Perl-SDK-5.5.0-1384587.x86_64.tar.gz
#cd vmware-vsphere-cli-distrib
#./vmware-install.pl

Accept the license agreement and install with default settings.

If the installer detects missing or old Perl modules, install them; the easiest way is via CPAN.
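For example, a hypothetical CPAN session (the module name below is only illustrative – install whichever modules the installer actually reports as missing):

#cpan
cpan> install XML::LibXML
cpan> exit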

(Step 2 – install & configure check_vmware_esx.pl Nagios check script)

Download the script from the above-mentioned web sites. Copy the "check_vmware_esx.pl" script to the Nagios libexec directory "/usr/local/nagios/libexec/"; make sure it is owned by the "nagios" user/group with executable permission.

If you downloaded the "check_vmware_esx_0.9.19.tgz" file, the installation process is as follows –

#tar zxvf check_vmware_esx_0.9.19.tgz
#cd check_vmware_esx_0.9.19
#cp check_vmware_esx.pl /usr/local/nagios/libexec
#chown nagios.nagios check_vmware_esx.pl
#chmod 751 check_vmware_esx.pl

Copy the Perl modules within "check_vmware_esx_0.9.19/modules" to a directory – this can be inside the "/usr/local/nagios/libexec" directory –

#mkdir /usr/local/nagios/libexec/vmware_modules
#cp -R /tmp/check_vmware_esx_0.9.19/modules /usr/local/nagios/libexec/vmware_modules/
#chown -R nagios.nagios /usr/local/nagios/libexec/vmware_modules

Also change the following parameter in the check_vmware_esx.pl file –

use lib "modules";
to
use lib "/usr/local/nagios/libexec/vmware_modules/modules";

Again, if the script execution complains about missing Perl modules, install them via CPAN.

You should use a "session lock file" to minimize auth log entries on the vCenter or ESXi host; every time Nagios executes a service check with this script it creates auth log entries on the vCenter/ESXi host, and that adds up quickly! By default the script expects the session lock file in the "/var/nagios_plugin_cache/" directory – create this directory and make sure it is owned by the nagios user.

#mkdir /var/nagios_plugin_cache
#chown -R nagios.nagios /var/nagios_plugin_cache

You need to create a user account for this Nagios script on your vCenter or on the ESXi hosts you want to monitor. You should use an "authfile"; this file contains the Nagios monitoring user account/password created on the vCenter or ESXi host.

#vi /usr/local/nagios/libexec/vmware_plugin/authfile

Enter the following –

username=nagios_userName_on_esxi
password=password_nagios

#chown nagios.nagios /usr/local/nagios/libexec/vmware_plugin/authfile

At this stage the script should be ready to execute! If not, there must be missing Perl modules :(.

(Step 3 – configure Nagios commands and service check)

This script is capable of monitoring lots of other vCenter objects such as CPU, network, datastore, virtual machines etc. Follow the standard Nagios guidelines to create your check commands and service checks; a sketch follows.
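For example, a minimal sketch of a command and service definition for the memory usage check (the host name, authfile location and thresholds are illustrative – adapt them to your own Nagios object layout):

define command{
        command_name    check_esxi_mem_usage
        command_line    $USER1$/check_vmware_esx.pl -H $HOSTADDRESS$ -f /usr/local/nagios/libexec/vmware_plugin/authfile -S mem -s usage -w $ARG1$ -c $ARG2$
        }

define service{
        use                     generic-service
        host_name               esxi-host-01
        service_description     ESXi Memory Usage
        check_command           check_esxi_mem_usage!80%!90%
        }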

Usage:

To see all memory parameters of an ESXi host –
./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem

mem usage=42.73% - consumed memory=24501.48 MB - swap used=35.87 MB - overhead=650.41 MB - memctl=0.00 MB: |'mem_usage'=42.73%;;;; 'consumed_memory'=24501.48MB;;;; 'mem_swap'=35.87MB;;;; 'mem_overhead'=650.41MB;;;; 'mem_memctl'=0.00MB;;;;

Set alert notification based on % of memory usage of an ESXi host –
./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem -s usage

mem usage=42.73%|'mem_usage'=42.73%;;;;

./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem -s usage -w 40% -c 60%

Warning! mem usage=42.69%|'mem_usage'=42.69%;40;60;;

Set alert notification based on MB of total memory usage of an ESXi host –
./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem -s consumed

consumed memory=24501.29 MB|'consumed_memory'=24501.29MB;;;;

./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem -s consumed -w 24000 -c 26000

Warning! consumed memory=24475.05 MB|'consumed_memory'=24475.05MB;24000;28000;;

To see swap memory usage only of an ESXi host –
./check_vmware_esx.pl -H 192.168.1.1 -f /location/of/authfile -S mem -s swapused

swap used=35.87 MB|'mem_swap'=35.87MB;;;;

Screenshot of mem usage on the Nagios web UI –

nagios-esxi-memcheck

This script also generates Nagios perfdata, which is useful for graphing; if you have pnp4nagios installed you should be able to get a graph like the following –

nagios-mem-graph

Ruby Program as a Windows Service – the Windows Way

Recently I came across Ruby for the first time while installing a few cloud-based network monitoring applications on Windows Servers; these are Ruby applications packaged as Ruby "gems".

After finishing the installation and configuration, I found I was able to run the Ruby gem without any problem on the command line (very easy – open CMD > go to the Ruby bin directory > execute >application_name run). But if I close CMD, the application immediately stops working. I need this application running as a Windows service.

There are a couple of ways to make a Ruby Windows service –
i. the Ruby way – there are a couple of Ruby utilities & gems already available; one of them is the "win32-service" gem.
ii. the Windows way – using the OLD "sc.exe" & "srvany.exe"; this works OK on Windows 2008 & Windows 2012. This is the easiest one!

I am no Ruby expert, so I will describe here how to create a Ruby Windows service using SC & SrvAny.

The configuration details are as follows –

i. Get "srvany.exe" (from the Windows Server 2003 Resource Kit Tools) and place it in a directory; this can even be inside the Ruby directory, e.g. "C:\Ruby21\mywinservice\srvany.exe".

ii. Open CMD with admin privileges; execute the following sc command to create a Windows service –
>sc create MyRubyService binPath= "C:\Ruby21\mywinservice\srvany.exe" DisplayName= "My Ruby Application"

This will create the Windows service "MyRubyService" and a registry key with the same name. The registry key & entries should look like the following –

RubySrvAny

iii. Open regedit and go to "HKLM\SYSTEM\CurrentControlSet\Services\MyRubyService". Create a new key named "Parameters". Enter the following entries (String values) under "HKLM\SYSTEM\CurrentControlSet\Services\MyRubyService\Parameters" (the equivalent reg add commands are sketched after the screenshot below) –

AppDirectory     - the Ruby bin directory
Application      - the location of the "ruby.exe" file
AppParameters    - the Ruby gem application's "run" command

RubyAppSrv
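If you prefer the command line over regedit, the same entries can be created with reg add from an elevated CMD (the Ruby path and the application name below are illustrative – use your own gem's location and run command):

>reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyRubyService\Parameters" /v AppDirectory /t REG_SZ /d "C:\Ruby21\bin"
>reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyRubyService\Parameters" /v Application /t REG_SZ /d "C:\Ruby21\bin\ruby.exe"
>reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyRubyService\Parameters" /v AppParameters /t REG_SZ /d "C:\Ruby21\bin\application_name run"

Once the entries are in place, start the service with >net start MyRubyService and confirm the Ruby application keeps running after you log off.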

You might need to stop or disable “Interactive Services Detection” on Windows 2008; by default this is not enabled on Windows 2012.

Cisco IOS Site-to-site IPSec VPN with VRF-lite

Just a few years ago, if there was a requirement to connect to destinations with the same IP network addresses, or a need for low-level network segregation, the solution was to buy separate network devices. These days the same can be done on a single hardware platform using VRF (VRF-lite).

On the server platform it's virtualization everywhere these days; why not VRF-lite on networking then! I have seen lots of routers that never use more than 50% of their capacity. This saves us the following –

i. Buying new router hardware
ii. Less power consumption, fewer power outlets
iii. Fewer switch ports required
iv. An overall gain in total cost of ownership

That's why I have started implementing VRF-lite in all my new implementations! Why "all"? Because if any new requirement comes into the picture I can still use the same device; no need to reconfigure the existing platform or buy new devices.

In my experience so far, Cisco IOS supports all IP features with VRF-lite, such as static routing, dynamic routing, BGP, site-to-site VPN, NAT and packet-filtering firewall. On the HP Comware5 platform (A-Series, 5xxx), VRF-lite doesn't support Layer-3 packet filtering; other than that it supports most IP services.

Let's talk about IPsec site-to-site VPN with VRF-lite. The following are the key configurable components of a site-to-site IPsec VPN –

  1. Remote peer with secret keys
  2. IKE Phase 1 security details
  3. IKE Phase 2 security details
  4. Crypto map
  5. NAT
  6. Access List

In a VRF environment, the whole VPN concept and the commands remain the same, except for the following items, where we specify network addresses –

1. Remote peer & keys – remote peer is reachable via which VRF domain; instead of global “key” we need to configure “keyring” with specific vrf domain name here.
5. NAT – internal source address belongs to which VRF domain; we need to specify vrf domain name in the NAT rules.
6. Although “access-list” contain of IP addresses – no VRF name need to be specify here.

The following are the command syntaxes for a remote peer with VRF details –

(config)#crypto keyring tunnelkey vrf my-vrf-A
  (config-keyring)#pre-shared-key address 10.100.200.1 key 6 mysecretkey
 
 (config)#crypto keyring tunnelkey vrf my-vrf-B
  (config-keyring)#pre-shared-key address 20.100.200.1 key 6 mysecretkey

Without VRF the syntax is (this is called a "global key") –
(config)#crypto isakmp key mysecretkey address 10.100.200.1
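For context, here is a minimal sketch of how the VRF attaches to the tunnel-facing interface; the interface, addressing and the crypto map name MYMAP are assumptions for illustration, not taken from the original configuration:

(config)#interface GigabitEthernet0/1
(config-if)#ip vrf forwarding my-vrf-A
(config-if)#ip address 10.100.200.2 255.255.255.252
(config-if)#crypto map MYMAP

Note that applying "ip vrf forwarding" clears any IP address already configured on the interface, so re-enter the address afterwards.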

The following are the command syntaxes for NAT rules with the VRF domain name –
(config)#ip nat inside source static my_src_ip  my_nat_inside_global_ip vrf my-vrf-A

(config)#ip nat inside source static 192.168.1.10 10.100.200.10 vrf my-vrf-A

The show commands to verify the above are –

#show crypto isakmp key
#show ip nat translations vrf my-vrf-A

If the above are not specified correctly, you might see errors like the following in the router log;

No pre-shared key with 10.x.x.x!
Encryption algorithm offered does not match policy!
atts are not acceptable. Next payload is 3
phase 1 SA policy not acceptable!
deleting SA reason “Phase1 SA policy proposal not accepted” state (R) MM_NO_STATE (peer 10.x.x.x)

 

VMware ESXi Host Memory Management, Monitoring, Alert Notification – Part 1

When it comes to VMware memory monitoring, there are two items to monitor: (i) ESXi host memory and (ii) VM memory. There are a bunch of memory-related terminologies and calculations in this space. I am discussing host memory monitoring here –

- understand physical memory usage monitoring
- what is the right memory counter for monitoring & alert notification on an ESXi host
- what is the right gauge for memory monitoring & alert notification on an ESXi host

We will also set up a Nagios check plugin to monitor the above, with performance data for graphing (Part 2).

Before moving forward, let's have a look at the Mem.MinFreePct setting. It controls how much host memory should be kept free and when the hypervisor should kick off advanced memory reclamation techniques such as ballooning, compression and swapping.

(Configuration > Advanced Settings > Mem)
memminfreepct

Based on free host memory & the reclamation technique in use, there are four different states of host memory utilization;

State Name | Mem Reclamation Technique | Good or Bad | Note
High | "Transparent Page Sharing" (TPS) is always running in this state; this is the default behaviour. | Good – this is normal | This state is defined by Mem.MinFreePct. Don't disable TPS – not recommended.
Soft | The host activates memory ballooning. | Not good enough | The threshold is 64% of Mem.MinFreePct. This means physical memory is close to maxing out. If the host cannot return to the previous state by itself, take action to free up more memory.
Hard | The host starts memory compression and hypervisor-level swapping. | Bad – memory under stress | The threshold is 32% of Mem.MinFreePct. Free up memory by migrating VMs to other hosts or upgrade memory.
Low | The host no longer serves any new pages to VMs. | Very bad – fix it ASAP | The threshold is 16% of Mem.MinFreePct. This protects the host VMkernel layer from a Purple Screen of Death.

Prior to ESXi 5.x this (the High state threshold) was set to 6% by default – meaning the host would always keep 6% of total physical memory free before activating advanced memory reclamation techniques; for example, an ESXi 4.x host with 64GB memory would require at least 3.84GB free to stay in the High state (normal).

Starting from ESXi 5.x this calculation is no longer a flat 6% – because high-memory servers (512GB/768GB) are becoming common these days, and 6% of 512GB is 30.72GB, which is a huge amount of free memory.

The new calculation is as follows –

Free Memory Threshold | Range | Calculation Note
6% | First 0GB to 4GB | 6% of 4GB
4% | From 4GB to 12GB (12-4=8) | 4% of 8GB
2% | From 12GB to 28GB (28-12=16) | 2% of 16GB
1% | Remaining memory above 28GB, i.e. 36GB if the total size is 64GB (64-28=36), or 68GB if the total size is 96GB (96-28=68) | 1% of the remaining memory

Based on the above, for a system with 128GB memory, the minimum free memory required to stay in the "High" state is calculated as follows –

i. 6% of the first 4GB – this is 245.76MB (the 0-4GB range)
ii. 4% of the next 8GB – this is 327.68MB (the 4-12GB range)
iii. 2% of the next 16GB – this is 327.68MB (the 12-28GB range)
iv. 1% of the remaining 100GB – this is 1024MB (the 28-128GB range)
v. Total is 1925.12MB (245.76+327.68+327.68+1024).

esxmemfree
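If you want to reproduce this calculation for any host size, here is a small shell sketch of the sliding-scale formula (the script and its name are illustrative; the 64/32/16% state thresholds follow the state table above):

#!/bin/sh
# minfree.sh - approximate ESXi "minimum free memory" thresholds (in MB) per host memory state
# usage: ./minfree.sh <total_host_memory_in_GB>
awk -v gb="$1" 'BEGIN {
  mb = gb * 1024
  high = 0.06 * (mb < 4096 ? mb : 4096)                               # 6% of the first 4GB
  if (mb > 4096)  high += 0.04 * ((mb < 12288 ? mb : 12288) - 4096)   # 4% of the 4-12GB range
  if (mb > 12288) high += 0.02 * ((mb < 28672 ? mb : 28672) - 12288)  # 2% of the 12-28GB range
  if (mb > 28672) high += 0.01 * (mb - 28672)                         # 1% of the remaining memory
  printf "High: %.2f MB\nSoft (64%%): %.2f MB\nHard (32%%): %.2f MB\nLow (16%%): %.2f MB\n",
         high, high * 0.64, high * 0.32, high * 0.16
}'

Running ./minfree.sh 128 prints 1925.12 MB for the High state, matching the manual calculation above.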

Based on the above, we can set up monitoring & alert notification for a 128GB host as follows –

Mem State | Min Free Mem | Monitoring Action | Calculation
High | 1925.12MB | No action required | Based on the above
Soft | 1232.0768MB | Warning alert | 64% of Mem.MinFreePct
Hard | 616.0384MB | Critical alert | 32% of Mem.MinFreePct
Low | 308.0192MB | Critical alert | 16% of Mem.MinFreePct

Also, in the "Hard" state the memory performance counter "Swap used" will be greater than 0. This condition should also trigger an alarm.

vmware-perf-mem

esxtop-mem
(esxtop – memory high state)

References:
http://blogs.vmware.com/vsphere/2012/05/memminfreepct-sliding-scale-function.html

 

Linux LVM Cookbook with Examples

I often manage LVM volumes on Linux servers. LVM is the best way to manage growing disk storage demands on Linux servers running on a virtual infrastructure platform; the beauty is that the whole disk administration process is ONLINE, without a server reboot or any outage to running services. Below is my LVM cookbook with examples –

Before working with LVM, make sure you are experienced with disk administration utilities such as "fdisk" and with Linux file systems.

Before moving forward, let's get familiar with the LVM terminology –

LVM Terminology | Description
Physical Volume (PV) | A PV is the actual physical disk or a physical disk partition to be used in LVM. The disk partition ID for an LVM PV is "8e", whereas the standard Linux partition ID is "83" (see the fdisk note after this table). Examples: /dev/sdc – the whole disk can be an LVM PV; /dev/sda6 – the logical partition sda6 can be an LVM PV.
Volume Group (VG) | A VG is a pool of Physical Volumes (PV); a VG can have a single PV only, or a bunch of PVs bundled together. Member PV disk sizes can be the same or different within a VG. VGs are identified by unique names within a Linux server. Examples: VolGrp00 – actual location is /dev/VolGrp00; VolGrp01 – actual location is /dev/VolGrp01.
Logical Volume (LV) | LVs are the logical/virtual partitions in a Linux system. Linux file system mount points are assigned against LVs. A single LV or multiple LVs can be created on a single Volume Group (VG). An LV name is unique within a VG. Examples: /dev/VolGrp00/LogicalVol00 can be mounted as /usr; /dev/VolGrp00/LogicalVol01 can be mounted as /var; /dev/VolGrp01/LogicalVol00 can be mounted as /mydb.
Physical Extents (PE) | PEs are the physical disk blocks in LVM. The PE size is usually in megabytes. A larger PE size allows a larger Volume Group (VG); e.g. a 32MB PE size supports a VG up to 2TB, a 64MB PE size supports a VG up to 4TB, and a 256MB PE size supports a VG up to 16TB. Once a VG is created the PE size cannot be changed; that's why it is important to decide the PE size before creating the LVM.
Logical Extents (LE) | LEs are the logical disk blocks in LVM. LEs have the same size as PEs.
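One practical note on the PV row above: if you plan to use a disk partition (rather than a whole disk) as a PV, set its partition type to "8e" first. A rough fdisk walk-through, with the disk and partition number as illustrative examples:

#fdisk /dev/sdc
# inside fdisk:
#   n  -> create the new partition (e.g. /dev/sdc3)
#   t  -> change the partition type; enter "8e" (Linux LVM)
#   w  -> write the partition table and exit
#partprobe /dev/sdc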

1. Let’s start with the “creation” of LVM-

The LVM creation process involves five tasks –

  • create a PV on a physical disk or on a free disk partition
  • then create a VG on the PV(s); specify the PE size if required
  • create LV(s) on the VG(s)
  • create a Linux file system (ext3 or ext4 or other) on the LV
  • mount the LV to a Linux directory/mount point.

The following table describes the Linux commands for the creation tasks (a combined end-to-end example follows the table) –

LVM Terminology | Linux Commands | Description
Physical Volume (PV) | #pvcreate /dev/sdb or #pvcreate /dev/sdc3 | The first command creates a PV on the whole disk /dev/sdb. The second command creates a PV on the disk partition /dev/sdc3. Other partitions on the same disk /dev/sdc may not be LVM PVs, and that's OK.
Volume Group (VG) | #vgcreate MyVolGrp00 /dev/sdb or #vgcreate MyVolGrp00 /dev/sdb -s 32M or #vgcreate VolGrp01 /dev/sdb /dev/sdc3 /dev/sda7 -s 64M | The first command creates a VG named MyVolGrp00 on PV /dev/sdb; the PE size is the system default if you don't specify one. The second command creates a VG named MyVolGrp00 on PV /dev/sdb with a PE size of 32MB; this VG can grow up to 2TB. The third command creates a VG named VolGrp01 on multiple PVs with a PE size of 64MB; this VG can grow up to 4TB.
Logical Volume (LV) | #lvcreate -l 100%FREE MyVolGrp01 -n LVol00 or #lvcreate -L 10G VolGrp00 -n LVol01 | The first command creates the LV LVol00 on the MyVolGrp01 VG and consumes the entire size of the VG. The second command creates the LV LVol01 on VolGrp00 with a size of 10GB; if any free space remains you can create more LVs on the same VG VolGrp00.
File System | #mkfs.ext3 /dev/VolGrp00/LVol01 or #mkfs.ext3 /dev/VolGrp02/LVol00 | These commands create an ext3 file system on LVol01 and LVol00 respectively.
Mount Point | #mount /dev/VolGrp00/LVol00 /mymountpoint | For automatic mounting at boot, add a matching entry to /etc/fstab.
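Putting the five creation tasks together, a minimal end-to-end sketch (the device name, VG/LV names and mount point are illustrative):

#pvcreate /dev/sdb
#vgcreate MyVolGrp00 /dev/sdb -s 32M
#lvcreate -L 10G MyVolGrp00 -n LVol00
#mkfs.ext3 /dev/MyVolGrp00/LVol00
#mkdir /mymountpoint
#mount /dev/MyVolGrp00/LVol00 /mymountpoint

For automatic mounting at boot, a matching line such as "/dev/MyVolGrp00/LVol00 /mymountpoint ext3 defaults 1 2" can be added to /etc/fstab.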

2. LVM – Display Details

The following are the Linux commands to display detailed information about LVM,

LVM Term | Linux Commands | Description
PV | #pvdisplay or #pvdisplay /dev/sdc | The first command displays all available PVs. The second command displays the PV on disk /dev/sdc.
VG | #vgdisplay or #vgdisplay VolGrp01 | The first command displays details of all available VGs (you can find the PE size here). The second command displays details of VolGrp01 only.
LV | #lvdisplay or #lvdisplay /dev/VolGrp00/LVol00 | The first command displays details of all available LVs. The second command displays details of /dev/VolGrp00/LVol00 only.

3. LVM – Disk Expansion

The LVM disk expansion process works as follows –

- you can expand a logical volume (LV) only if there is free space available on the underlying volume group (VG).

- if no free space is available on the volume group (VG), then (i) add a new physical disk, convert it to a PV and add it to the VG, or (ii) extend the existing PV's physical disk; this is very common in a virtual environment.

- finally, tell the Linux file system (ext3/ext4) to recognize and work with the new size.

The whole operation can be done ONLINE. The following table describes the Linux commands for the LVM disk expansion operation (a combined example follows the table) –

LVM Terminology | Linux Commands | Description
Physical Volume (PV) | #pvresize /dev/sdc | This command grows the PV to all available free space on the /dev/sdc physical disk – useful when we expand an existing hard disk on a virtual machine. After "pvresize" nothing needs to be done on the VG; the VG automatically recognizes the new disk size.
Physical Volume (PV) | #pvcreate /dev/sda6 or #pvcreate /dev/sdd | These commands create a new PV to be added to an existing VG; they are the same commands as creating a new PV for a new VG. The new PV then needs to be added to the VG as part of a VG extend.
Volume Group (VG) | #vgextend VolGrp02 /dev/sde or #vgextend VolGrp02 /dev/sde /dev/sdf1 | The first command extends the VG VolGrp02 onto the new PV /dev/sde. The second command extends the VG VolGrp02 onto multiple new PVs /dev/sde & /dev/sdf1.
Logical Volume (LV) | #lvextend -l +100%FREE /dev/VolGrp02/Vol00 or #lvextend --size +20G /dev/VolGrp02/Vol00 or #lvextend --size 800G /dev/VolGrp02/Vol00 | The first command extends the LV into all available free space on the VG /dev/VolGrp02. The second command adds an additional 20GB to the LV /dev/VolGrp02/Vol00 – make sure you have 20GB of free space on the VG. The third command makes the LV /dev/VolGrp02/Vol00 exactly 800GB (the new final size) – make sure the VG has enough space for an 800GB LV.
File System | #resize2fs /dev/VolGrp02/Vol00 | You need to tell the Linux file system to recognize the new size ONLINE – this command does that for the LV Vol00. No system reboot or file system re-mount is required.
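As a combined example, the typical "the VM's virtual disk was grown in the hypervisor" case can be handled fully online with three commands (device and VG/LV names are illustrative):

#pvresize /dev/sdc
#lvextend -l +100%FREE /dev/VolGrp02/Vol00
#resize2fs /dev/VolGrp02/Vol00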

4. LVM – Delete Operations

Before deleting a logical volume (LV), make sure it is not mounted. Also, a delete operation destroys all the data, so be careful before deleting.

Command examples are as follows –

#lvremove /dev/VolGrp03/Vol01 ; this will remove LV “/dev/VolGrp03/Vol01”

#vgremove VolGrp03 ; this will remove VG “VolGrp03”

#pvremove /dev/sdc ; this will remove PV “/dev/sdc”

5. LVM – Disk Shrink

An LV can be shrunk without deleting & recreating it, but you might experience data loss while shrinking if it is not planned well (I have experienced data loss and had to restore from backup). Make sure you have a data backup and that the LV is not mounted before you move ahead. The new shrunk size must not be smaller than the volume's used size. Command examples are as follows –

Unmount the LV first,

#umount /dev/VolGrp02/Vol00   ; this will unmount “/dev/VolGrp02/Vol00”

Run a file system check and make sure the file system is OK before moving ahead,

#e2fsck -f /dev/VolGrp02/Vol00

Reduce the file system size first (without losing data),

#resize2fs /dev/VolGrp02/Vol00 30G   ; this will reduce the current file system size to 30GB (let's say the LV size is 50GB and it has less than 30GB of data) – make sure volume usage is less than 30GB, otherwise you will lose data.

Reduce the Logical Volume now,

#lvreduce -L 35G /dev/VolGrp02/Vol00   ; this will shrink the LV to 35GB (around 30GB of data + 5GB free space)

Finally resize the file system again to 35GB,

#resize2fs /dev/VolGrp02/Vol00

In the above example, our target is to reduce an LV from 50GB to 35GB. With the first "resize2fs" we shrink the file system from 50GB to 30GB (where data usage is less than 30GB out of the total 50GB), then we reduce the Logical Volume to 35GB, and finally grow the file system back to fill the 35GB LV. This safety margin helps eliminate data loss.

6. LVM – “scan” Commands

There are three scan commands – #pvscan, #vgscan & #lvscan.

The LVM scan commands scan all the available PVs/VGs/LVs in the system and build the LVM cache file "/etc/lvm/.cache" to maintain a listing of current LVM devices.

A Linux system automatically executes the "scan" commands every time the server is restarted or powered on, and also during LVM create/expand/resize/reduce operations.