
Recent questions in the TechNet Forums reminded me of an issue I faced when building large file servers running Windows Server 2008 R2. By large I mean serving a lot of files, from thousands to millions or more.
To improve performance, Windows Server makes intensive use of the file system cache. For files located on an NTFS-formatted partition, this also means caching the additional information associated with each file or folder, such as attributes, permissions and so on. Since with NTFS everything is a file, this information is stored in the form of metafiles, which are hidden from the user. For each file or folder, the matching metafile entry has a memory footprint of at least 1 KB. Multiplied by the number of files cached, this adds up quickly on larger file servers. Thanks to Sysinternals' RAMMap utility, you can observe this behavior by looking at the Metafile line on the Use Counts tab:
There is very little you can do to work around this issue except adding more RAM to the server. Since the amount of memory used depends on the size of the files served and, for the metafile cache, on the number of files (metadata), a rough estimate of the RAM needed can be calculated relatively easily.
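As a back-of-the-envelope illustration (not an exact sizing method), here is a small Python sketch that applies the roughly 1 KB-per-entry figure mentioned above to a few hypothetical file counts; the counts are assumptions chosen purely for the example:

```python
# Rough lower-bound estimate of the NTFS metafile cache footprint,
# assuming ~1 KB of cached metadata per file or folder (see above).

KB = 1024

def estimate_metafile_cache_bytes(file_count: int, bytes_per_entry: int = 1 * KB) -> int:
    """Return a rough lower bound, in bytes, for the metafile cache footprint."""
    return file_count * bytes_per_entry

if __name__ == "__main__":
    for files in (100_000, 1_000_000, 10_000_000):  # hypothetical file counts
        gib = estimate_metafile_cache_bytes(files) / (1024 ** 3)
        print(f"{files:>12,} files -> at least ~{gib:.2f} GiB of metafile cache")
```

For instance, a server hosting 10 million files can be expected to need on the order of 10 GB of RAM for the metafile cache alone, on top of the data cache and the workload itself.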
While you can control the amount of memory used by the file system cache, you can’t prevent the metafiles from being cached.
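For reference, the file system cache limits mentioned in the previous sentence are exposed through the Win32 GetSystemFileCacheSize and SetSystemFileCacheSize APIs. The sketch below, which assumes a Windows host and Python with ctypes, only queries the current limits; even with a hard maximum in place, the metafile entries themselves still end up in the cache:

```python
# Query the current system file cache limits via the Win32 API.
# Windows only; run from an elevated prompt if access is denied.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

min_size = ctypes.c_size_t()
max_size = ctypes.c_size_t()
flags = wintypes.DWORD()

# GetSystemFileCacheSize returns the configured min/max working set sizes
# of the system file cache and the hard-limit flags currently in effect.
if not kernel32.GetSystemFileCacheSize(ctypes.byref(min_size),
                                       ctypes.byref(max_size),
                                       ctypes.byref(flags)):
    raise ctypes.WinError(ctypes.get_last_error())

print(f"Minimum cache size: {min_size.value} bytes")
print(f"Maximum cache size: {max_size.value} bytes")
print(f"Flags: {flags.value:#x}")
```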
Finally, a safe way to avoid being caught off guard by this behavior once your file server is running in production is to benchmark it beforehand using the File Server Capacity Tool (FSCT).
[UPDATE] While file servers are the most likely to be affected by this issue, web servers serving a large number of files or workstations used for large development projects might be affected too…
More Information