Memory Mapping on Windows (including Benchmark)

A tweet sparked my interest in investigating the different ways of mapping and unmapping memory on Windows, and in trying to find the "best" one.

If you're familiar with the memory mapping techniques on Windows, you may skip to the benchmark and conclusion.

Mapping Memory

As a quick recap, Windows has two main ways of mapping (allocating) virtual memory from the operating system: VirtualAlloc and CreateFileMapping.

VirtualAlloc is pretty straightforward to use: you pass in the desired address (or NULL if you're happy to let the OS decide), the size, and the allocation and protection flags, and you get back the address of the newly allocated memory region. Reserved base addresses are aligned to dwAllocationGranularity, and the size is rounded up to a multiple of the page size.
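A minimal sketch (the 1 MiB size and the flags here are just an example):

#include <windows.h>

/* Reserve and commit 1 MiB of zero-initialized, read/write memory,
 * letting the OS pick the address. */
void *mem = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
if (mem == NULL) {
    /* allocation failed, e.g. out of commit charge */
}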

CreateFileMapping is a little more complicated, since its actual purpose is to map files on disk into memory. However, if you pass in a size and INVALID_HANDLE_VALUE instead of a valid file handle, you get a file mapping that is backed by the system's page file. The resulting mapping handle then has to be mapped into your address space with MapViewOfFile.
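A minimal sketch of such a page-file-backed mapping (error handling omitted for brevity):

#include <windows.h>

/* Create a 1 MiB mapping backed by the system's page file rather than a file on disk. */
HANDLE hMapping = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                    PAGE_READWRITE, 0, 1 << 20, NULL);

/* Map a view of it into our address space; a size of 0 maps the whole mapping. */
void *mem = MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);

/* ... use mem ... */

UnmapViewOfFile(mem);
CloseHandle(hMapping);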

Unlike Linux, Windows does not support overcommitting: all allocated memory, regardless of the method used, must always be backed by either the swap file[1] or physical memory.

Unmapping Memory

Now that we know how to map memory on Windows, let's look into unmapping it. Contrary to what one might expect, there are four ways of unmapping, not just two:

  • VirtualFree, with MEM_RELEASE
  • VirtualFree, with MEM_DECOMMIT
  • VirtualAlloc, using MEM_RESET & MEM_RESET_UNDO
  • UnmapViewOfFile

UnmapViewOfFile is the only legal way to unmap a region mapped with MapViewOfFile; the first three are legal for any memory allocated with VirtualAlloc.
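To make the differences concrete, here's a rough sketch of all four calls, each shown in isolation (mem and size are assumed to come from a prior VirtualAlloc, view from MapViewOfFile):

/* 1. Release: give the pages and the address range back to the OS.
 *    The size must be 0 when using MEM_RELEASE. */
VirtualFree(mem, 0, MEM_RELEASE);

/* 2. Decommit: drop the pages, but keep the address range reserved. */
VirtualFree(mem, size, MEM_DECOMMIT);

/* 3. Reset: mark the pages as unused; Windows may discard their contents
 *    under memory pressure (the protection value is ignored here). */
VirtualAlloc(mem, size, MEM_RESET, PAGE_READWRITE);
/* ... later, declare interest in the contents again; if this call fails,
 *    some of the data may already have been discarded. */
VirtualAlloc(mem, size, MEM_RESET_UNDO, PAGE_READWRITE);

/* 4. Unmap a view that was created with MapViewOfFile. */
UnmapViewOfFile(view);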

Benchmark

One might think that all of these functions behave roughly the same: each of them should, technically, only flip a couple of bits in some kernel data structure. As it turns out, this is not the case.

Every benchmark consists of mapping the region, overwriting it (using either memset or by touching only the first byte of every 64k chunk, more on that later) to make sure all of the memory is actually committed to physical RAM, and then unmapping the region again.
The cost of the pure memset operation is also benchmarked and subtracted from all results.
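For reference, a rough sketch of what one of the measured variants (VirtualAlloc plus VirtualFree with MEM_RELEASE) looks like; the function and constant names are mine, but the sizes mirror the region described just below:

#include <windows.h>
#include <string.h>

#define CHUNK_SIZE  (64 * 1024)    /* 64 KiB */
#define NUM_CHUNKS  4800           /* 4800 * 64 KiB = 300 MiB */

static void bench_virtualalloc(int touch_only)
{
    size_t size = (size_t)CHUNK_SIZE * NUM_CHUNKS;
    char *mem = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

    if (touch_only) {
        /* Touch only the first byte of every 64 KiB chunk. */
        for (size_t i = 0; i < NUM_CHUNKS; ++i)
            mem[i * CHUNK_SIZE] = 1;
    } else {
        /* Overwrite the whole region, forcing every page into physical RAM. */
        memset(mem, 1, size);
    }

    VirtualFree(mem, 0, MEM_RELEASE);
}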

Benchmarks were run on my desktop machine at home (i7 5960X, 64 GiB DDR4-2134 RAM, Samsung 960 Pro). The size of the memory region used was 300 MiB (64 KiB * 4800).

Memsetting the whole allocated region

Benchmark                    Time
memset                       21.9 ms
VirtualAlloc/VirtualFree     64.4 ms
MEM_DECOMMIT                 116.8 ms
MEM_RESET                    77.3 ms
UnmapViewOfFile              103.6 ms

[Chart: results after memsetting the whole allocated region]
(the "unnormalized" bar includes the cost of the memset)

Only touching the first byte of every 64k region

Benchmark                    Time
"memset"                     0.1 ms
VirtualAlloc/VirtualFree     4.4 ms
MEM_DECOMMIT                 4.6 ms
MEM_RESET                    4.4 ms
UnmapViewOfFile              5.6 ms

[Chart: results when only touching the first byte of every 64k region]
(the "unnormalized" bar includes the cost of the memset)

The code for the benchmark can be found here.

Further Investigation

Further investigation revealed that MEM_RESET unmaps the pages lazily[2], dropping them only under memory pressure, while the other methods actively unmap (and probably zero) the memory. This would explain the difference in perceived performance.
Releasing the memory will try to "hide" the cost of zeroing, as explained in this fantastic blog post by Bruce Dawson.

Conclusion

If the intention is to reuse the pages in the near future, prefer marking them as unused with MEM_RESET. Otherwise, simply releasing the pages is best, and gives Windows a better opportunity to reuse them.
In general, though, I'd advise against relying on any of these methods, since their performance characteristics are not suited for anything close to (soft) realtime.

I've yet to figure out a fast way; any ideas?
Tweet me at @ArvidGerstmann.


  1. Page file in Windows jargon. ↩︎

  2. As revealed by a developer on jemalloc: https://github.com/jemalloc/jemalloc/issues/255#issuecomment-130380103 ↩︎


Announcing the C++ Tour

C++ Tour Logo

I'm proud to officially announce the C++ Tour.[1]

The tour can be best explained by quoting our mission statement:

The goal of the C++ Tour project is to create a new way of teaching C++. First and foremost, we want to target those who already have some experience in programming, but are new to C++ or are returning after a longer absence.

We want to guide readers through the features of the language and standard library, showing pitfalls and best practices. The tour will be split into chapters, each of which contains lessons teaching a single concept or language feature. Every lesson will be accompanied by an interactive example, demonstrating the concept and allowing for experimentation.

It'll be available from cpp-tour.com early next year (current content is a placeholder).

We are looking for help!

The tour is currently being built at github.com/leandros/cpp-tour, where we have a couple of tickets open looking for feedback.

Please give us a star and share the blog post!

Feel free to just chime in. We're looking for any help we can get to make the C++ Tour a reality in a timely manner.

It's best to reach us over the Slack channel #cpp-tour on the CppLang slack (click here to join).


  1. The official announcement was done on CppCast and can be heard in Episode 129. ↩︎


Using clang on Windows

Update 1: Visual Studio 2017 works. Thanks to STL.


Disclaimer: This isn't about clang/C2; clang/C2 is Microsoft's own fork of clang, made to work with their backend. This is about using clang + LLVM.

tl;dr: All the source is in this repository: https://github.com/Leandros/ClangOnWindows


Recently, Chrome decided to switch its Windows builds to use clang exclusively. That got me intrigued enough to try it again, since my previous experience with clang on Windows was rather mixed. However, if it's good enough for Chrome, it surely must have improved!

Unfortunately, getting clang to compile MSVC-based projects isn't as easy as just dropping in clang and changing a few flags. Let's get started.

Requirements

You'll need:

  • LLVM/clang for Windows
  • Visual Studio 2017 (for its C++ headers, libraries and the Universal CRT)
  • A recent Windows 10 SDK (Windows Kit)

Building

Since I want to keep this build-system independent, I've set up a .bat script with all the required steps to compile a simple example. You can grab it here: github.com/Leandros/ClangOnWindows.

Open the build.bat and let's walk through it:

  • Set LLVMPath, VSPath and WinSDKPath to the installation paths of LLVM, VS 2017 and the current Windows Kit.
  • OUTPUT defines the name of the final .exe.
  • CFLAGS contains all your usual clang compiler flags, for our example I've kept them simple.
  • CPPFLAGS defines the include directories of the Universal CRT, C++ Standard Library and Windows SDK.
  • LDLIBS defines the library import paths for the Universal CRT, C++ Standard Library and Windows SDK.
  • MSEXT contains the flags required to make clang act more like CL. These are not required anymore; Visual Studio 2017 works without them.

The rest of the file is dedicated to compiling all .cc files in the current directory and linking them into an executable.

This example makes use of lld, LLVM's linker. It comes with a caveat: it's not yet able to fully emit PDBs, so you might want to keep using LINK.EXE until lld is fully ready. You can use your normal linking process; the output of clang is fully compatible.

Questions? @ArvidGerstmann on Twitter.


Is my output going to bash.exe or cmd.exe?

If you want to color the output of your terminal program on Windows, you might have noticed that it behaves differently depending on the shell it is run from.

cmd.exe does not recognize the ANSI escape sequences used to change the foreground/background color, while bash.exe does not recognize Windows' SetConsoleTextAttribute. This poses a problem: a way to detect whether the output is going to bash.exe or cmd.exe is required.

Fortunately, an old mail on the Cygwin mailing list[1] hinted at the fact that GetFileType returns a different value for the console handle obtained from GetStdHandle. And after a little testing, it in fact does! Equipped with this information, we can distinguish between our output terminals:

#include <windows.h>

HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD dwFiletype = GetFileType(hConsole);
if (dwFiletype == FILE_TYPE_PIPE) {         /* 0x3: stdout is a pipe */
    /* We're running in bash.exe */
} else if (dwFiletype == FILE_TYPE_CHAR) {  /* 0x2: stdout is a character device */
    /* We're running in cmd.exe */
}
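With that in place, a small sketch of how one might pick the right coloring mechanism; the print_error helper is just an example name:

#include <stdio.h>
#include <windows.h>

/* Print "error" in red, using whichever mechanism the detected shell understands. */
static void print_error(HANDLE hConsole, DWORD dwFiletype)
{
    if (dwFiletype == FILE_TYPE_PIPE) {
        /* bash.exe: use ANSI escape sequences. */
        printf("\x1b[31merror\x1b[0m\n");
    } else if (dwFiletype == FILE_TYPE_CHAR) {
        /* cmd.exe: use the console API, restoring the old attributes afterwards. */
        CONSOLE_SCREEN_BUFFER_INFO info;
        GetConsoleScreenBufferInfo(hConsole, &info);
        SetConsoleTextAttribute(hConsole, FOREGROUND_RED | FOREGROUND_INTENSITY);
        printf("error\n");
        SetConsoleTextAttribute(hConsole, info.wAttributes);
    }
}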

Questions? Criticism? Wanna talk? I'm @ArvidGerstmann on Twitter.


  1. Despite the author saying it's a bug, this isn't the case, as later emails in the thread confirm. ↩︎


Stop using #ifdef for configuration

Using #ifdefs to configure conditional compilation is very error prone.

I believe we've all had the case where something was compiled when it shouldn't have been, because a #define of the same name was accidentally created or included, or because a macro was defined to 0 but checked with #ifdef.
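A tiny illustration of the trap:

#define DEBUG 0                 /* intent: debug code disabled */

#ifdef DEBUG                    /* but #ifdef only asks "is it defined at all?" */
    /* this block still gets compiled, despite DEBUG being 0 */
#endif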

While I won't give you a solution that fixes the flawed model of using the preprocessor for conditional inclusion, I'll give you one that makes it less error prone:

#if USING(DEBUG)
  /* Do something in debug. */
#else
  /* Do something in production. */
#endif

The USING macro requires each configuration macro to be explicitly defined to the special ON or OFF values; otherwise you'll get an error[1].

By simply defining USING as a mathematical expression and ON / OFF as the operators, we get an error whenever an undefined (or otherwise defined) macro is used as an argument:

#define USING(x)        ((1 x 1) == 2)
#define ON              +
#define OFF             -
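A minimal usage sketch building on the definitions above (the DEBUG, RELEASE and LOGGING names are just examples):

#define DEBUG   ON
#define RELEASE OFF

#if USING(DEBUG)
    /* compiled: DEBUG is explicitly ON */
#endif

#if USING(RELEASE)
    /* not compiled: RELEASE is explicitly OFF */
#endif

/* #if USING(LOGGING) with LOGGING undefined (or defined to 0 or 1)
 * expands to an invalid expression and the preprocessor reports an error,
 * instead of the check silently evaluating to false. */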

Comments, criticism? Drop me a tweet @ArvidGerstmann.


  1. Not entirely correct, but good enough for our case. ↩︎
