Wednesday, 8 February 2012

WebSphere and VMware Together - The Balloon Goes POP!!

I'm working on a couple of projects where our client is running various WebSphere Application Server-based products, including IBM Connections, IBM WebSphere Portal and IBM Web Content Manager, on VMware ESX.

One specific item of interest is the wonderfully named "Memory Ballooning"  …..

Memory Balloon Driver

The balloon driver, also known as the vmmemctl driver, collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. It essentially acts like a native program in the operating system that requires more and more memory. The driver uses a proprietary ballooning technique that provides predictable performance that closely matches the behavior of a native system under similar memory constraints. This technique effectively increases or decreases memory pressure on the guest operating system, causing the guest to invoke its own native memory management algorithms. When memory is tight, the guest operating system decides which particular pages to reclaim and, if necessary, swaps them to its own virtual disk.

You need to be sure your guest operating systems have sufficient swap space. This swap space must be greater than or equal to the difference between the virtual machine's configured memory size and its reservation.
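To make that sizing rule concrete, here's a tiny worked example in Python; the figures are made up for illustration, not taken from our environment:

# Quick arithmetic sketch of the guest swap rule quoted above:
# guest swap >= configured memory - reservation. All figures are made-up examples.
def minimum_guest_swap_mb(configured_mb: int, reservation_mb: int) -> int:
    """Smallest guest swap size (in MB) that satisfies the rule above."""
    return max(configured_mb - reservation_mb, 0)

# e.g. an 8 GB VM with only 2 GB reserved needs at least 6 GB of guest swap
print(minimum_guest_swap_mb(8192, 2048))   # -> 6144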



For us, this caused some performance problems, with symptoms such as hung JVMs, hung threads, apparent 100% CPU utilisation etc., both on the VMs running real end-user workloads, e.g. IBM Connections, which we have clustered across two nodes, each with oodles of memory and oodles of CPU cores. We also saw the same symptoms on our Deployment Manager servers, which aren't exactly what I'd call heavyweight workloads.
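If you want to confirm whether the balloon is actually inflated inside a given guest, VMware Tools exposes this via vmware-toolbox-cmd. Here's a minimal sketch (wrapped in Python only for consistency with the other snippets) that assumes VMware Tools is installed and vmware-toolbox-cmd is on the guest's PATH:

# Hedged sketch: check from inside the guest how much memory the balloon driver
# has currently reclaimed. Assumes VMware Tools is installed; output is typically
# something like "512 MB" (or "0 MB" when the balloon is deflated).
import subprocess

ballooned = subprocess.run(
    ['vmware-toolbox-cmd', 'stat', 'balloon'],
    capture_output=True, text=True, check=True
).stdout.strip()
print(f'Memory currently reclaimed by the balloon driver: {ballooned}')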

We've now disabled this behaviour, and things appear to be a little better.
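For reference, one documented way to disable ballooning for an individual VM is to cap the balloon size at zero via the sched.mem.maxmemctl advanced setting. The sketch below does this with pyVmomi; the vCenter host, credentials and the VM name connections-node1 are placeholders, not details from our estate:

# Hedged sketch: disable ballooning for one VM by capping the balloon size at 0 MB
# via the sched.mem.maxmemctl advanced setting.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab-only: skips certificate checks
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='changeme',
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'connections-node1')
    view.Destroy()

    # sched.mem.maxmemctl is the per-VM cap (in MB) on memory the balloon driver
    # may reclaim; setting it to 0 effectively disables ballooning for this VM.
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key='sched.mem.maxmemctl', value='0')
    ])
    task = vm.ReconfigVM_Task(spec=spec)
    print('Reconfigure task started:', task.info.key)
finally:
    Disconnect(si)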

These two VMware-sourced documents were of considerable use: -


the latter of which says: -

It is recommended that you do not overcommit memory because the JVM memory is an active space where objects are constantly being created and garbage collected. Such an active memory space requires its memory to be available all the time. If you overcommit, memory ballooning or swapping may occur and impede performance.
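As a rough illustration of why overcommitment hurts here (my numbers and split, not VMware's): the memory you give the VM needs to cover the guest's whole active footprint, not just the Java heap:

# Rough, hedged sizing sketch: if the JVM heap must never be ballooned or swapped,
# the VM needs enough memory for the guest OS, the heap and the JVM's native
# overhead. The split below is a common rule of thumb, not a figure from this post.
def suggested_vm_memory_mb(guest_os_mb: int, jvm_heap_mb: int,
                           jvm_overhead_mb: int) -> int:
    return guest_os_mb + jvm_heap_mb + jvm_overhead_mb

# e.g. 1 GB for the OS, a 4 GB WebSphere heap and ~1 GB of native JVM overhead
print(suggested_vm_memory_mb(1024, 4096, 1024))   # -> 6144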

An ESX/ESXi host employs two distinct techniques for dynamically expanding or contracting the amount of memory allocated to virtual machines. The first method is known as the memory balloon driver (vmmemctl), which is loaded from the VMware Tools package into the guest operating system running in a virtual machine. The second method involves paging from a virtual machine to a server swap file without any involvement by the guest operating system.

In the page swapping method, when you power on a virtual machine, a corresponding swap file is created and placed in the same location as the virtual machine configuration file (VMX file). The virtual machine can power on only when the swap file is available. ESX/ESXi hosts use swapping to forcibly reclaim memory from a virtual machine when no balloon driver is available. The balloon driver might be unavailable either because VMware Tools is not installed or because the driver is disabled or not running. For optimal performance, ESX/ESXi uses the balloon approach whenever possible.

However, swapping is used when the driver is temporarily unable to reclaim memory quickly enough to satisfy current system demands. Because the memory is being swapped out to disk, there is a significant performance penalty when the swapping technique is used. Therefore, it is recommended that the balloon driver is always enabled, but monitored to verify that it is not being invoked, as that indicates memory is overcommitted.
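One way to keep an eye on this from the host side is to read each VM's balloonedMemory quick-stat; anything non-zero means the host is reclaiming guest memory via the balloon. A pyVmomi sketch, again with placeholder connection details:

# Hedged monitoring sketch: list VMs whose balloon is currently inflated, by
# reading summary.quickStats.balloonedMemory (in MB) from vCenter/ESXi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        ballooned_mb = vm.summary.quickStats.balloonedMemory or 0
        if ballooned_mb > 0:
            # A non-zero figure means the host is reclaiming memory from this
            # guest, i.e. memory is overcommitted somewhere on the host.
            print(f'{vm.name}: {ballooned_mb} MB ballooned')
    view.Destroy()
finally:
    Disconnect(si)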

Both ballooning and swapping should be prevented for Java applications. To prevent ballooning and swapping, refer to BP5 – Set memory reservation for virtual machine needs.
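And a short sketch of what BP5 amounts to in practice: reserving the VM's full configured memory so that neither ballooning nor host swapping can touch it. This assumes vm is a vim.VirtualMachine object already retrieved with pyVmomi (e.g. via the container-view lookup in the earlier sketches):

# Hedged sketch: set the VM's memory reservation equal to its configured memory
# size, so the host can never reclaim it via ballooning or swapping.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(
    memoryAllocation=vim.ResourceAllocationInfo(
        reservation=vm.config.hardware.memoryMB))
task = vm.ReconfigVM_Task(spec=spec)
print('Reconfigure task started:', task.info.key)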

Bottom line, ballooning and WebSphere aren't a good fit ….
