Tuesday, January 15, 2013

Increasing disk space on Cisco Prime Infrastructure NCS 1.2 "the hard way?"

Sometime last year PI NCS 1.2 was constantly complaining about disk-full errors, although the disk was only 62% used.

Apparently the complaining starts above 60% because the system needs room to make a temporary backup on the same disk that holds the running database.

Our datacenter guy grew the existing 200GB disk in VMware by another 100GB.

I wrongly assumed this would fix the problem and that the system would resize everything automatically.

I was wrong.



So after running root_enable it was possible to log in as root (underneath it's just running Linux).
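
For reference, the rough sequence from the appliance admin CLI, as far as I remember it (the ncs/admin prompt below is just a placeholder and exact prompts vary per version): root_enable sets a root password, and the root command then drops you into the underlying bash shell.

ncs/admin# root_enable
ncs/admin# root
#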

Below you can see the layout of the system, with 62% in use on /opt, where the database and temporary backups live.

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/mapper/smosvg-rootvol
                       1967952    477860   1388512  26% /
/dev/mapper/smosvg-tmpvol
                       1967952     36460   1829912   2% /tmp
/dev/sda3               988116     17764    919348   2% /storedconfig
/dev/mapper/smosvg-recvol
                         95195      5664     84616   7% /recovery
/dev/mapper/smosvg-home
                         95195      5668     84612   7% /home
/dev/mapper/smosvg-optvol
                     147630356  86745492  53264668  62% /opt
/dev/mapper/smosvg-usrvol
                       5935604    904020   4725204  17% /usr
/dev/mapper/smosvg-varvol
                       1967952     90372   1776000   5% /var
/dev/mapper/smosvg-storeddatavol
                       3967680     74320   3688560   2% /storeddata
/dev/mapper/smosvg-altrootvol
                         95195      5664     84616   7% /altroot
/dev/mapper/smosvg-localdiskvol
                      29583412    204132  27852292   1% /localdisk
/dev/sda1               101086     12713     83154  14% /boot
tmpfs                  4021728   2044912   1976816  51% /dev/shm
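
If you only care about the filesystem NCS is complaining about, a quick check of /opt on its own (with -h for human-readable sizes) does the job:

# df -h /opt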

How to fix this manually.
So we run fdisk and see that the extra 100GB is detected (/dev/sda is now 322.1 GB).
We then add a fourth primary partition consisting of the missing 100GB.

# fdisk /dev/sda

The number of cylinders for this disk is set to 39162.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       25368   203664037+  8e  Linux LVM
/dev/sda3           25369       25495     1020127+  83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Selected partition 4
First sector (409577175-629145599, default 409577175):
Using default value 409577175
Last sector or +size or +sizeM or +sizeK (409577175-629145599, default 629145599):
Using default value 629145599

Command (m for help): p

Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders, total 629145600 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      208844      104391   83  Linux
/dev/sda2          208845   407536919   203664037+  8e  Linux LVM
/dev/sda3       407536920   409577174     1020127+  83  Linux
/dev/sda4       409577175   629145599   109784212+  83  Linux

Also tag it with type 8e so it becomes a Linux LVM partition.
And write the partition table <scary>

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders, total 629145600 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      208844      104391   83  Linux
/dev/sda2          208845   407536919   203664037+  8e  Linux LVM
/dev/sda3       407536920   409577174     1020127+  83  Linux
/dev/sda4       409577175   629145599   109784212+  8e  Linux LVM

Command (m for help): v
62 unallocated sectors

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
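
As an aside: partprobe (from the parted package) can sometimes get the kernel to pick up the new table without rebooting, but on a disk whose partitions are all in use (as / and /opt are here) it tends to fail with the same "busy" error, so a reboot is the safer route.

# partprobe /dev/sda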

Now reboot the system.
When it boots up, the partition can be used.
Use pvcreate to create the new physical volume.
vgdisplay will still show the old 194.22 GB because the new physical volume hasn't been added to smosvg yet.

# pvcreate /dev/sda4
  Physical volume "/dev/sda4" successfully created

# vgdisplay
  --- Volume group ---
  VG Name               smosvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               11
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               194.22 GB
  PE Size               32.00 MB
  Total PE              6215
  Alloc PE / Size       6215 / 194.22 GB
  Free  PE / Size       0 / 0
  VG UUID               vJcFNu-qPIP-uoKY-KiIu-wKCd-Jj7l-e8cLaV

pvdisplay will show both physical volumes.

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               smosvg
  PV Size               194.23 GB / not usable 10.66 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              6215
  Free PE               0
  Allocated PE          6215
  PV UUID               ugQzvR-HsdX-cK8u-Sylm-NYAm-FXFX-YXkb6R

  --- Physical volume ---
  PV Name               /dev/sda4
  VG Name               smosvg
  PV Size               104.70 GB / not usable 11.15 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              3350
  Free PE               3350
  Allocated PE          0
  PV UUID               pEPFVV-AI3e-HLfo-fy5i-TyEN-eRCp-BJiXPS

Now add this volume to /opt, extend it by 50GB, and use resize2fs so that the kernel knows the size changed.

# lvextend /dev/mapper/smosvg-optvol /dev/sda4
# lvextend -L +50G /dev/mapper/smosvg-optvol
  Extending logical volume optvol to 195.34 GB
  Logical volume optvol successfully resized

# resize2fs /dev/mapper/smosvg-optvol
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/smosvg-optvol is mounted on /opt; on-line resizing required
Performing an on-line resize of /dev/mapper/smosvg-optvol to 51208192 (4k) blocks.
The filesystem on /dev/mapper/smosvg-optvol is now 51208192 blocks long.

A df will now show 47% in use for /opt.
The annoying popups are now gone.

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/smosvg-rootvol
                       1967952    477876   1388496  26% /
/dev/mapper/smosvg-tmpvol
                       1967952     36460   1829912   2% /tmp
/dev/sda3               988116     17764    919348   2% /storedconfig
/dev/mapper/smosvg-recvol
                         95195      5664     84616   7% /recovery
/dev/mapper/smosvg-home
                         95195      5668     84612   7% /home
/dev/mapper/smosvg-optvol
                     198417376  88100224 100077796  47% /opt
/dev/mapper/smosvg-usrvol
                       5935604    904020   4725204  17% /usr
/dev/mapper/smosvg-varvol
                       1967952     88744   1777628   5% /var
/dev/mapper/smosvg-storeddatavol
                       3967680     74320   3688560   2% /storeddata
/dev/mapper/smosvg-altrootvol
                         95195      5664     84616   7% /altroot
/dev/mapper/smosvg-localdiskvol
                      29583412    204132  27852292   1% /localdisk
/dev/sda1               101086     12713     83154  14% /boot
tmpfs                  4021728   2044888   1976840  51% /dev/shm
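
To recap the LVM side for anyone repeating this, the rough order of operations is below. Note the vgextend step: several commenters below report that lvextend fails when the new physical volume hasn't been added to the smosvg volume group first (the +50G is simply the amount I used here).

# pvcreate /dev/sda4
# vgextend smosvg /dev/sda4
# lvextend -L +50G /dev/mapper/smosvg-optvol
# resize2fs /dev/mapper/smosvg-optvol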

It seemed like I had fixed everything and was in the clear.

And I was, for a couple of weeks, until ... to be continued in a later post.



15 comments:

  1. Interesting post. I am running NCS 1.1.x and also running out of space on /opt. The official recommendation from TAC is to add an additional 100GB virtual disk, which NCS will supposedly dynamically allocate upon restart of the OS. I am going to give this a try in the lab soon.

    I am curious what issues you ran into next?

  2. It seems like these products are not actually QA'd against large deployment scenarios. Would you recommend holding off on upgrading to PI 1.2?

  3. If you have no need to upgrade (e.g. no specific version required for a specific switch/AP), then my advice atm is: don't upgrade :)

    That said, they have fixed some (critical) issues in the 1.2 patches; see http://www.cisco.com/en/US/ts/fn/635/fn63595.html. The ORA-19809 issue I mentioned in my last post is also in there.

    But PI 1.3 is just around the corner and should be released any day now, so maybe wait for that release. You'll have to upgrade to PI 1.3 anyway if you want to manage 1600 APs.

  4. lvextend gave me an error about not being in the same volume group, so I used vgextend and then resize2fs.

    Thanks 42wim - EXTREMELY helpful posts (including the Oracle one).

  5. I ran into this issue during an upgrade from 1.3 to 2.0. I also had to use 'vgextend' before the 'lvextend'.

    Thanks 42wim!

  6. Very good post! This helped me a lot and it really worked in the end! Thank you very much.

  7. Awesome stuff! Thank you for enlightening the rest of us!

  8. Instead of increasing the size of the existing disk, you should have just created a new disk. Upon restart, Cisco Prime identifies the new disk and automatically increases the partitions (not just /opt).

  9. I can confirm Artur's post is accurate for PI 1.3 with update 4.16 applied. I had to add an additional 150GB disk to get the /opt partition to exceed 200 GB (202 GB to be specific). Adding an additional 100GB disk only expanded the /opt partition to 183 GB.

  10. If you're getting this error:
    "/dev/sda4" is a new physical volume of "99.99 GB"
    --- NEW Physical volume ---
    PV Name /dev/sda4
    VG Name
    PV Size 99.99 GB
    Allocatable NO
    PE Size (KByte) 0
    Total PE 0
    Free PE 0
    Allocated PE 0
    PV UUID U0RV9I-hyol-eaZ5-MWaw-uPjm-nAh8-FH6f0P

    Run the following command, then continue:

    vgextend smosvg /dev/sda4


  11. Thanks for the post 42wim (LOL @ exemplary use of the "one cannot simply" meme). I ran into this same stupid problem myself. I was going from WCS 7.0.240 ... to NCS 1.1.05.8 ... to NCS 1.2 ... to Prime Inf 1.3.0.20 ... to 2.1! It was when I attempted to patch 1.3.0.20 to 2.1 that it said "oh, hey, there is not enough space here to perform the upgrade". Which was just fantastic, because I started all this using a canned VM and migrating my database into it.

    To further make my life HELL ... the VMWare server is 4.0.x, and has no idea how to increase the block size to make use of a filesystem larger than 256GB. I tried to attach an NFS mounted file system and 'cheat the stupidity of VMWare', but that didn't work either.

    Following these instructions, I just added another Hard Drive to my VM in the amount of 100GB. When I got to your steps on adding the drive to the group, it was giving me errors that the physical disk was full or in use. To my utter disbelief (seriously) - the Cisco OS made the right choice of seeing that I had added an empty disk, automatically formatted it and added it to the volume group, and automatically expanded /opt. Wow! We must not have been the only suckers who ended up in this boat.

    So for others in the same boat: you may want to step up to Cisco Prime Infrastructure 1.3.0.20 first, and then let it handle your storage addition just by rebooting and waiting. ;-)

  12. Thanks 42wim

    It worked like a charm.

    Thanks as well to whoever gave the command "vgextend smosvg /dev/sda4"; I ran into that issue with 0 PE.

  13. FYI..... I had to run this command
    # vgextend smosvg /dev/sda4

    before
    # lvextend /dev/mapper/smosvg-optvol /dev/sda4
    # lvextend -L +50G /dev/mapper/smosvg-optvol
