

Showing posts from 2013

How Windows performance counters are affected by running under VMware ESX

This post is a prequel to a recent one on correcting the Process(*)\% Processor Time counters on a Windows guest machine. To assess the overall impact of the VMware virtualization environment on the accuracy of the performance measurements available for Windows guest machines, it is necessary to first understand how VMware affects the clocks and timers that are available on the guest machine. Essentially, VMware virtualizes all calls the guest OS makes to hardware-based clock and timer services on the VMware Host. A VMware white paper entitled “Timekeeping in VMware Virtual Machines” contains an extended discussion of the clock and timer distortions that occur in Windows guest machines when there are virtual machine scheduling delays. These clock and timer distortions, in turn, distort a considerable number of Windows performance counters, with the effect varying by the specific type of counter. (The different types of performance counters are described…
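To see why timer distortion propagates into the counters, it helps to recall that % Processor Time counters are derived from two successive samples of a raw CPU-time accumulator divided by elapsed time. Here is a minimal sketch of that delta computation; the field names are hypothetical illustrations, not the actual Windows perflib record layout:

```python
# Minimal sketch: how a % Processor Time value is derived from two successive
# samples of a raw CPU-time accumulator. Names are hypothetical, not the
# actual Windows perflib layout.

def percent_processor_time(cpu_time_t0, cpu_time_t1, clock_t0, clock_t1):
    """Return % Processor Time over the interval [t0, t1].

    cpu_time_*: accumulated CPU time charged to the process (e.g., 100 ns units)
    clock_*:    interval timestamps from the (virtualized) clock, same units
    """
    busy = cpu_time_t1 - cpu_time_t0
    elapsed = clock_t1 - clock_t0
    return 100.0 * busy / elapsed

# Under VMware, both the accumulator and the clock are virtualized: whenever
# the guest is delayed in the ESX dispatcher, guest CPU time accrues more
# slowly than wall-clock time, so the ratio no longer reflects physical usage.
```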

Correcting the Process level measurements of CPU time for Windows guest machines running under VMware ESX

Recently, I have been writing about how Windows guest machine performance counters are affected by running in a virtual environment, including publishing two recent, longish papers on the subject: one about processor utilization metrics and another about memory management. The processor utilization paper (available here) makes it evident that, when running under VMware, the Windows performance counters that measure processor utilization are significantly distorted. At a system level, this distortion is not problematic so long as one has recourse to the VMware measurements of actual physical CPU usage by each guest machine. A key question, one that I had failed to address properly heretofore, is whether it is possible to correct for that distortion in the measurements of processor utilization taken at the process level inside the guest OS. The short answer, for Windows at least, is “Yes.” The % Processor Time performance counters in Windows that are available at the process level…
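As a rough illustration of the kind of correction the post describes, one can rescale each process’s guest-measured % Processor Time by the ratio of the guest’s actual physical CPU usage (reported by VMware) to the total CPU usage the guest believes it consumed. This is a hedged sketch of that idea, not the exact method from the paper:

```python
def corrected_process_cpu(process_pct_guest, guest_total_pct_guest,
                          guest_pct_physical):
    """Rescale a guest-measured Process(*)\\% Processor Time value.

    process_pct_guest:     % Processor Time for one process, measured in-guest
    guest_total_pct_guest: total % Processor Time the guest OS believes it used
    guest_pct_physical:    the guest's actual physical CPU usage, per VMware

    Assumes dispatcher delays distort every process's CPU accounting
    proportionally; an illustrative sketch, not the paper's exact formula.
    """
    if guest_total_pct_guest == 0:
        return 0.0
    correction_factor = guest_pct_physical / guest_total_pct_guest
    return process_pct_guest * correction_factor

# Example: a process that appears to use 40% inside a guest that believes it
# used 80% overall, while VMware reports 60% physical usage for that guest:
print(corrected_process_cpu(40.0, 80.0, 60.0))  # -> 30.0 (% of a physical CPU)
```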

A comment on “Memory Overcommitment in the ESX Server”

The VMware Technical Journal recently published a paper entitled “Memory Overcommitment in the ESX Server.” It traverses similar ground to my recent blog entries on VMware memory management, and similarly illustrates the large potential impact paging can have on the performance of applications running under virtualization. Using synthetic benchmarks, the VMware study replicates the major findings from the VMware benchmark data that I recently reported on, beginning here. That VMware is willing to air publicly performance results that are less than flattering is a very positive sign. Unfortunately, it is all too easy for VMware customers to configure machines where memory is overcommitted and therefore subject to severe performance problems. VMware customers need frank guidance from their vendor to help them recognize when this happens and to understand what steps to take to keep these problems from arising in the future. The publication of this article in the VMTJ is a solid first step…

Virtual memory management in VMware: Final thoughts

This is the final blog post in a series on VMware memory management. The previous post in the series is here.

Final Thoughts

I have been discussing in some detail a case study we constructed in which VMware memory over-commitment led to guest machine memory ballooning and swapping, which, in turn, had a substantial impact on the performance of the applications that were running. When memory contention was present, the benchmark application took three times longer to execute to completion than the same application run standalone. The difference was entirely due to memory management “overhead,” the cost of demand paging when the supply of machine memory was insufficient to the task. Analysis of the case study results unequivocally shows that the cost equation associated with aggressive server consolidation using VMware needs to be adjusted for the performance risks that can arise when memory is over-committed. When configuring the memory on a VMware Host machine, for optimal performance…

Virtual memory management in VMware: Swapping

This is a continuation of a series of blog posts on VMware memory management. The previous post in the series is here.

Swapping

To relieve a serious shortage of machine memory, VMware has recourse to stealing physical memory pages granted to a guest OS at random, a mechanism VMware terms swapping. Swapping is triggered when free machine memory drops below a 4% threshold. During the case study, VMware resorted to swapping beginning around 9:10 AM, when the Memory State variable reported a transition to the “Hard” memory state, as shown in Figure 19. Initially, VMware swapped out almost 600 MB of machine memory granted to the four guest machines. Note also that swapping is far from uniform across guests: the ESXAS12B guest machine was barely touched, while at one point 400 MB of machine memory from the ESXAS12E machine was swapped out.

Figure 19. VMware resorted to random page replacement, or swapping, to relieve a critical shortage of machine memory when usage of machine memory exceeded…
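For reference, here is a small sketch of the free-memory thresholds these posts mention: below 6% free, ballooning begins (the “Soft” state); below 4%, swapping begins (the “Hard” state). The state names follow the Memory State counter discussed above; the boundary below the 4% threshold is an assumption for illustration only:

```python
def memory_state(free_pct):
    """Map free machine memory (%) to a VMware memory state.

    The 6% (ballooning) and 4% (swapping) thresholds come from these posts;
    the boundary of the lowest state is an assumed value for illustration.
    """
    if free_pct >= 6.0:
        return "High"   # no memory reclamation needed
    elif free_pct >= 4.0:
        return "Soft"   # balloon drivers in the guests inflate
    elif free_pct >= 1.0:  # assumed boundary, for illustration
        return "Hard"   # ESX swaps guest machine pages at random
    else:
        return "Low"    # most aggressive reclamation
```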

Virtual memory management in VMware: memory ballooning

This is a continuation of a series of blog posts on VMware memory management. The previous post in the series is here.

Ballooning

Ballooning is a complicated topic, so bear with me if this post runs much longer than the previous ones in this series. As described earlier, VMware installs a balloon driver inside the guest OS and signals the driver to begin to “inflate” when the host starts to encounter contention for machine memory, defined as the amount of free machine memory available for new guest machine allocation requests dropping below 6%. In the benchmark example I am discussing here, the Memory Usage counter rose to a 98% allocation level and remained there for the duration of the test while all four virtual guest machines were active. Figure 7, which shows the guest machine Memory Granted counter for each guest, with an overlay showing the value of the Memory State counter reported at the end of each one-minute measurement interval, should help to clarify the state of VMware…
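Conceptually, the balloon driver allocates and pins pages inside the guest when told to inflate, so that the guest OS’s own memory manager decides which of its pages to give up, and ESX can then reclaim the machine pages backing the balloon. A toy sketch of that control loop follows; every interface name here is a hypothetical stand-in, not the real VMware driver/hypervisor channel:

```python
# Toy model of the ballooning mechanism described above. All names are
# hypothetical illustrations, not the actual VMware balloon driver interface.

def balloon_inflate(target_mb, alloc_and_pin_page, notify_hypervisor,
                    page_kb=4):
    """Inflate the balloon until roughly target_mb of guest memory is pinned.

    alloc_and_pin_page(): asks the guest OS for one page and pins it, forcing
                          the guest to page out its own least-valuable memory.
    notify_hypervisor(p): tells ESX the machine page backing p can be
                          reclaimed and granted to another guest machine.
    """
    pinned_kb = 0
    while pinned_kb < target_mb * 1024:
        page = alloc_and_pin_page()   # guest OS chooses what to surrender
        notify_hypervisor(page)       # ESX reclaims the backing machine page
        pinned_kb += page_kb
```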