Sometime last year, PI NCS 1.2 was constantly complaining about disk full errors, although the disk was only 62% used.
Apparently the complaining starts above 60%, because the system needs to be able to stage a temporary backup on the same disk that holds the database.
Our datacenter guy grew the 200 GB disk in VMware by another 100 GB.
I assumed this would fix the problem and that the system would resize everything automatically.
I was wrong.
So after running root_enable it was possible to log in as root (underneath it's just running Linux).
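For reference, getting to that root shell goes roughly like this from the appliance's admin CLI (prompts written down from memory, so treat it as a sketch; "ncs" is a placeholder hostname):

ncs/admin# root_enable
Enter root password :
Confirm root password :
ncs/admin# root
Enter root password :
#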
Below you can see the layout of the system, with 62% in use on /opt, where the database and the temporary backups live.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/smosvg-rootvol
1967952 477860 1388512 26% /
/dev/mapper/smosvg-tmpvol
1967952 36460 1829912 2% /tmp
/dev/sda3 988116 17764 919348 2% /storedconfig
/dev/mapper/smosvg-recvol
95195 5664 84616 7% /recovery
/dev/mapper/smosvg-home
95195 5668 84612 7% /home
/dev/mapper/smosvg-optvol
147630356 86745492 53264668 62% /opt
/dev/mapper/smosvg-usrvol
5935604 904020 4725204 17% /usr
/dev/mapper/smosvg-varvol
1967952 90372 1776000 5% /var
/dev/mapper/smosvg-storeddatavol
3967680 74320 3688560 2% /storeddata
/dev/mapper/smosvg-altrootvol
95195 5664 84616 7% /altroot
/dev/mapper/smosvg-localdiskvol
29583412 204132 27852292 1% /localdisk
/dev/sda1 101086 12713 83154 14% /boot
tmpfs 4021728 2044912 1976816 51% /dev/shm
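If you only care about that one filesystem, df can also be pointed straight at the mount point:

# df -h /opt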
How to fix this manually:
So we run fdisk and see that the extra 100 GB has been detected (/dev/sda is now 322.1 GB).
We then add a fourth primary partition covering the missing 100 GB:
# fdisk /dev/sda
The number of cylinders for this disk is set to 39162.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 25368 203664037+ 8e Linux LVM
/dev/sda3 25369 25495 1020127+ 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Selected partition 4
First sector (409577175-629145599, default 409577175):
Using default value 409577175
Last sector or +size or +sizeM or +sizeK (409577175-629145599, default 629145599):
Using default value 629145599
Command (m for help): p
Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders, total 629145600 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 63 208844 104391 83 Linux
/dev/sda2 208845 407536919 203664037+ 8e Linux LVM
/dev/sda3 407536920 409577174 1020127+ 83 Linux
/dev/sda4 409577175 629145599 109784212+ 83 Linux
Also tag it with type 8e so it becomes a Linux LVM partition,
and write the partition table <scary>:
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sda: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders, total 629145600 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 63 208844 104391 83 Linux
/dev/sda2 208845 407536919 203664037+ 8e Linux LVM
/dev/sda3 407536920 409577174 1020127+ 83 Linux
/dev/sda4 409577175 629145599 109784212+ 8e Linux LVM
Command (m for help): v
62 unallocated sectors
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
Now reboot the system.
When it comes back up, the new partition can be used.
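In theory you could skip the reboot: partprobe, from the parted package, asks the kernel to re-read the partition table in place. I haven't tried it on this appliance, so treat it as an untested shortcut:

# partprobe /dev/sda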
Use pvcreate to turn the new partition into an LVM physical volume.
vgdisplay will still show the old 194.22 GB, because the new volume hasn't been added to smosvg yet.
# pvcreate /dev/sda4
Physical volume "/dev/sda4" successfully created
# vgdisplay
--- Volume group ---
VG Name smosvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 11
Open LV 11
Max PV 0
Cur PV 1
Act PV 1
VG Size 194.22 GB
PE Size 32.00 MB
Total PE 6215
Alloc PE / Size 6215 / 194.22 GB
Free PE / Size 0 / 0
VG UUID vJcFNu-qPIP-uoKY-KiIu-wKCd-Jj7l-e8cLaV
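One command is missing from the captured output here: before the new physical volume shows up inside smosvg, it has to be added to the volume group with vgextend (reconstructed, as it wasn't in my log):

# vgextend smosvg /dev/sda4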
After that, pvdisplay will show both physical volumes.
# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name smosvg
PV Size 194.23 GB / not usable 10.66 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 6215
Free PE 0
Allocated PE 6215
PV UUID ugQzvR-HsdX-cK8u-Sylm-NYAm-FXFX-YXkb6R
--- Physical volume ---
PV Name /dev/sda4
VG Name smosvg
PV Size 104.70 GB / not usable 11.15 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 3350
Free PE 3350
Allocated PE 0
PV UUID pEPFVV-AI3e-HLfo-fy5i-TyEN-eRCp-BJiXPS
Now extend the optvol logical volume (mounted on /opt) by 50 GB, drawing the new extents from /dev/sda4, and run resize2fs so the filesystem itself grows into the new space:
# lvextend -L +50G /dev/mapper/smosvg-optvol /dev/sda4
Extending logical volume optvol to 195.34 GB
Logical volume optvol successfully resized
# resize2fs /dev/mapper/smosvg-optvol
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/smosvg-optvol is mounted on /opt; on-line resizing required
Performing an on-line resize of /dev/mapper/smosvg-optvol to 51208192 (4k) blocks.
The filesystem on /dev/mapper/smosvg-optvol is now 51208192 blocks long.
A df will now show 47% in use for /opt.
The annoying popups are gone.
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/smosvg-rootvol
1967952 477876 1388496 26% /
/dev/mapper/smosvg-tmpvol
1967952 36460 1829912 2% /tmp
/dev/sda3 988116 17764 919348 2% /storedconfig
/dev/mapper/smosvg-recvol
95195 5664 84616 7% /recovery
/dev/mapper/smosvg-home
95195 5668 84612 7% /home
/dev/mapper/smosvg-optvol
198417376 88100224 100077796 47% /opt
/dev/mapper/smosvg-usrvol
5935604 904020 4725204 17% /usr
/dev/mapper/smosvg-varvol
1967952 88744 1777628 5% /var
/dev/mapper/smosvg-storeddatavol
3967680 74320 3688560 2% /storeddata
/dev/mapper/smosvg-altrootvol
95195 5664 84616 7% /altroot
/dev/mapper/smosvg-localdiskvol
29583412 204132 27852292 1% /localdisk
/dev/sda1 101086 12713 83154 14% /boot
tmpfs 4021728 2044888 1976840 51% /dev/shm
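For future reference, the whole procedure condensed to its bare commands (device and volume names as on this particular system; the vgextend step is the reconstruction noted above):

# fdisk /dev/sda          (create primary partition 4, type 8e, write, then reboot)
# pvcreate /dev/sda4
# vgextend smosvg /dev/sda4
# lvextend -L +50G /dev/mapper/smosvg-optvol /dev/sda4
# resize2fs /dev/mapper/smosvg-optvol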
It seemed like I had fixed everything and was in the clear.
And I was, for a couple of weeks, until ... to be continued in a later post.