Tuesday, 23 June 2015

Ouch, where's my disk space gone?

I've got a development VM that hosts a whole bunch of software including: -

IBM WebSphere MQ
IBM Integration Bus
IBM Integration Designer
IBM Integration Toolkit
IBM WebSphere MQ Explorer
IBM Business Process Manager Advanced
IBM DB2

The VM has an 80 GB disk, and yet .... quelle horreur ... I saw an exception relating to insufficient disk space: -

An exception occurred while writing to the platform log:
java.io.IOException: No space left on device


Firstly, I checked using df ( disk free ): -

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhel66-lv_root
                       75G   70G  589M 100% /
tmpfs                 3.9G   20K  3.9G   1% /dev/shm
/dev/sda1             477M   34M  419M   8% /boot


which doesn't look good.
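As an aside, "No space left on device" doesn't always mean a shortage of bytes; a filesystem that has exhausted its inodes reports the same error, and df -i is the quick way to rule that out: -

df -i

Here, though, the 100% usage above tells the real story, so it's a hunt for big files.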

Then, following this post: -


I tried this: -

sudo du -h / | grep -P '^[0-9\.]+G'

which returned: -

2.8G /opt/IBM/WebSphere/AppServer/cognos
1.1G /opt/IBM/WebSphere/AppServer/BPM/Lombardi
1.3G /opt/IBM/WebSphere/AppServer/BPM
7.9G /opt/IBM/WebSphere/AppServer
8.1G /opt/IBM/WebSphere
2.2G /opt/IBM/IMShared/plugins
1.7G /opt/IBM/IMShared/files
4.7G /opt/IBM/IMShared
15G /opt/IBM
1.6G /opt/mqm
1.1G /opt/ibm/db2/V10.5
1.1G /opt/ibm/db2
2.1G /opt/ibm
19G /opt
1.1G /home/wasadmin
2.2G /home/db2inst1/db2inst1/NODE0000/BPMDB/T0000002
2.5G /home/db2inst1/db2inst1/NODE0000/BPMDB
1.5G /home/db2inst1/db2inst1/NODE0000/CMNDB/T0000002
1.7G /home/db2inst1/db2inst1/NODE0000/CMNDB
7.2G /home/db2inst1/db2inst1/NODE0000
7.2G /home/db2inst1/db2inst1
7.4G /home/db2inst1
8.5G /home
du: cannot read directory `/mnt/hgfs/Software': No such file or directory
du: cannot access `/proc/36184/task/36184/fd/4': No such file or directory
du: cannot access `/proc/36184/task/36184/fdinfo/4': No such file or directory
du: cannot access `/proc/36184/fd/4': No such file or directory
du: cannot access `/proc/36184/fdinfo/4': No such file or directory
1.9G /usr
3.3G /var/repo/rhel66
3.3G /var/repo
38G /var/mqm/trace
38G /var/mqm
41G /var

70G /

which was very revealing.
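As a variation on that theme ( not what I ran above, and it assumes GNU sort with -h support, which RHEL 6 has ), piping du through sort brings the biggest directories to the top, and 2>/dev/null hides the /proc noise seen above: -

sudo du -xh / 2>/dev/null | sort -rh | head -20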

More than half of my available disk was being used by ... WebSphere MQ.

When I checked more closely, I could definitely see a few likely candidates in /var/mqm/trace including: -

-rw-rw-r--   1 mqm mqm 266M Jun 23 12:08 mqjms_25962.trc
-rw-rw-r--   1 mqm mqm 8.2G Jun 23 14:15 mqjms_27216.trc
-rw-rw-r--   1 mqm mqm  21G Jun 21 21:02 mqjms_32695.trc
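Before deleting anything, it's worth checking whether a process still has these files open, because the space used by a deleted-but-still-open file isn't handed back until the owning process lets go of it. Assuming lsof is installed ( it isn't part of MQ ), something like this shows who is holding files under the trace directory: -

lsof +D /var/mqm/trace

which is why the Queue Manager gets stopped first.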


I shut down my Queue Manager: -

endmqm IB9QMGR -c

Quiesce request accepted. The queue manager will stop when all outstanding work
is complete.


and checked that it was stopped: -

dspmq

QMNAME(IB9QMGR)                                           STATUS(Ended immediately)

and .... deleted the trace files: -

rm -Rf /var/mqm/trace

which brings things back to normal: -

df -kmh

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhel66-lv_root
                       75G   33G   38G  47% /
tmpfs                 3.9G   20K  3.9G   1% /dev/shm
/dev/sda1             477M   34M  419M   8% /boot
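With hindsight, a slightly gentler approach - purely a suggestion rather than what I did above - would have been to delete just the files and leave the trace directory itself in place, since /var/mqm/trace is part of the standard MQ directory layout and will be wanted again the next time tracing is enabled: -

rm -f /var/mqm/trace/*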


Phew :-)
