Wednesday, November 18, 2015

Windows Sandbox Attack Surface Analysis

Posted by James Forshaw, Quartermaster of Tools

Analysing the attack surface of user-mode sandboxed applications is a good way to hunt for elevation of privilege vulnerabilities. Much of the task of enumerating the attack surface could be done manually, but that’s a very tedious and error-prone procedure. Obviously, automating that process as much as possible is important, both for initial analysis and for detecting potential regressions.

TL;DR: I’ve released the tools I use internally to test out sandboxed code and determine the likely attack surface exposed to an attacker if a sandboxed process is compromised. You can get the source code from the project’s GitHub repository. This blog post will describe a few common use cases so that you can use the tools to do your own sandbox analysis.


Writing a user-mode sandbox is a difficult challenge for various reasons (see the Shmoocon/Nullcon presentation I did this year for some examples; in fact I was planning on releasing the tools for Shmoocon but it didn’t happen in time). However, in most user-mode implementations, such as Chrome, IE or Edge, the sandboxing is done by assigning a restrictive process token so that only a very small number of securable resources can be accessed; ideally no resources at all should be accessible.

An obvious example of a securable resource is the file system. We’d like to know, for example, which locations a sandboxed process can access for read and/or write. A well known tool which comes to mind is AccessChk from Sysinternals (part of Microsoft these days). AccessChk allows us to enumerate the security of the file system (as well as many other secured resources such as the registry or object manager) but only tells you whether you could write to a resource based on a user or group account. For example, running the command ‘accesschk.exe -w users c:\windows’ will show you what files or directories a process that runs with the BUILTIN\Users group can access. However, that doesn’t really help us when it comes to a sandboxed application, which might have a restrictive token that results in a more complex access checking model.

For example, Chrome and Adobe Reader use Restricted Tokens to limit what resources the sandboxed process can access; this changes how the normal kernel access check works. And then there are Mandatory Integrity Labels, which also change what resources you can write to. You can summarise the access check for a restricted token in the below diagram.
[Diagram: access check for a restricted token]

And let’s not forget the introduction of LowBox tokens in Windows 8, which have a similar, but different, access check. And what if you mix both LowBox and restricted tokens? In general this is too complex to replicate accurately; fortunately, Windows provides a means of calculating the granted access to a resource, which allows us to automate a lot of the analysis of various different resources. For that reason I developed my own set of tools to do this, which I’ve released as open-source under an Apache v2 license. In the rest of this blog post I’ll describe some of the tools, giving simple examples of use and why you might want to use them.

The Check* Tools

The core of the suite is the Check* tools. Their purpose is to determine whether the process token for a particular sandboxed application can be used to get access to a specific secured resource. For example CheckFileAccess will scan a given location on the file system comparing the Security Descriptor of a file or directory against the process token and determine whether the process would have read and/or write access.

The core of the operation of the tools is the AccessCheck function exposed by the Win32 APIs. This is actually a kernel system call, NtAccessCheck, under the hood, and it uses the same algorithms as a normal access check performed during the opening of an existing resource. Therefore we can be reasonably confident that the result of the operation will match what we’d actually be able to get access to. The AccessCheck function takes an impersonation token; in this case we’ll use the primary token of a specified process (ideally sandboxed), convert it to an impersonation token, and pass the Security Descriptor for the resource we are interested in. We can then request that the kernel determines the maximum allowed permissions for that token.
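
To make that concrete, here’s a minimal sketch of the approach (this is not the released tool code; the function name is made up, error handling is omitted, and the security descriptor is assumed to contain an owner, group and DACL, for example as returned by GetSecurityInfo):

#include <windows.h>

DWORD GetMaximumAccess(DWORD pid, PSECURITY_DESCRIPTOR sd, PGENERIC_MAPPING mapping)
{
    HANDLE process = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);
    HANDLE primary = NULL, impersonation = NULL;
    OpenProcessToken(process, TOKEN_DUPLICATE | TOKEN_QUERY, &primary);

    // AccessCheck requires an impersonation token rather than a primary token.
    DuplicateToken(primary, SecurityIdentification, &impersonation);

    PRIVILEGE_SET privileges = { 0 };
    DWORD privilegesLength = sizeof(privileges);
    DWORD grantedAccess = 0;
    BOOL accessStatus = FALSE;

    // MAXIMUM_ALLOWED asks the kernel to compute the maximum access this token
    // would be granted by the security descriptor, using the normal access
    // check algorithm (restricted SIDs, integrity level and so on).
    AccessCheck(sd, impersonation, MAXIMUM_ALLOWED, mapping,
                &privileges, &privilegesLength, &grantedAccess, &accessStatus);

    CloseHandle(impersonation);
    CloseHandle(primary);
    CloseHandle(process);
    return accessStatus ? grantedAccess : 0;
}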

The following table lists the available tools for analysing different types of resources. They all take a common --pid parameter (-p for short), which specifies the PID of the process to base the security check on.

CheckDeviceAccess - Checks allowed access to Device Objects such as \Device\HarddiskVolume1.
CheckFileAccess - Checks allowed access to the file system.
CheckNetworkAccess - Checks allowed access to connecting or binding network sockets. This is for testing AppContainer lockdown.
CheckObjectManagerAccess - Checks allowed access to resources and directories in the object manager namespace.
CheckProcessAccess - Checks allowed access to processes.
CheckRegistryAccess - Checks allowed access to the registry.

For example if you want to check what files and directories a sandboxed process can write to on the C: drive you can run the following command:
CheckFileAccess -w -r -p <PID> C:\

The -w parameter specifies that only files or directories with at least one Write permission available should be displayed (for example Write File, or Add File for directories, or a standard right such as Write DACL). The -r parameter performs the check recursively, and -p specifies the PID of the sandboxed process to test. It’s recommended to run the tool as an administrator, as that ensures the tool can recurse into as many directories as possible. If we do this for the Chrome GPU process we find some interesting entries, such as being able to write to c:\ProgramData\Microsoft\Windows\DRM.

The CheckDeviceAccess tool deviates from most of the others as it has to actually attempt to open a device node while impersonating the sandboxed token. This is because while the device object itself might have a Security Descriptor, Windows devices by default are considered to be file systems. This means that if you have a device object with the name \Device\Harddisk1 you can also try and access \Device\Harddisk1\SomeName and depending on how the device was registered it might be up to the driver itself to enforce security when accessing SomeName. The only reliable way of determining whether this is the case for a particular device object is to just open the path and see if it works.


A simple example is just to recursively check all Device objects in the object manager namespace using the command:
CheckDeviceAccess -r -l -p <PID> \

The -l parameter will try and map the device name to a symbolic link; this is quite useful for automatically named devices (which look like \Device\00000abc) as the symbolic link is generally more descriptive. For the Chrome renderer sandbox this simple command shows we can access devices such as the NTFS file system driver and AFD (which is the socket driver) but admittedly only if you access it through the namespace. Code running within the Chrome renderer sandbox cannot open any Device object itself.

Of course not all Devices can be tested in this manner; by default, the tool tries to open DeviceName\Dummy but some drivers require a specific path name otherwise they won’t open (you can change Dummy using the --suffix parameter). Still it gives you a quick list of drivers to go hunting for sandbox escape vulnerabilities.

And the Best of the Rest

Not all the tools in the suite are for direct access checking; I’ll summarise a few of the others which you might find useful.


DumpProcessMitigations dumps a list of the process mitigations which have been applied through the SetProcessMitigationPolicy API. This only works on Windows 8 and above. Examples of mitigations that could be enabled include Win32k Syscall Disable, Forced ASLR and Custom Font Disable. For example, to dump all processes with the Win32k System Call Disable Policy enabled, run the following command as an Administrator:
DumpProcessMitigations -t DisallowWin32kSystemCalls


This is a GUI tool which allows you to view the contents of a shared memory section, modify it in a hex editor, and apply a couple of ways of corrupting the section to test for trivial security issues. A section can be opened through its object name or by extracting handles from a running process. I developed this tool for investigating the Chrome section issue I documented in a previous blog post.



GetHandles is a generic command line tool to dump the open handles in all processes on the system. While that in itself wouldn’t differentiate it from other similar tools already available (such as the Sysinternals Handle utility), it does have one interesting feature: you can group handles by certain properties, such as the address of the kernel-mode object. This allows you to find instances where an object is shared between two processes at different privilege levels (say between a browser process and its sandboxed tabs), which might allow for privilege escalation attacks. For example, running the following command as an Administrator will dump the section objects shared between different Chrome processes.

GetHandles.exe -n chrome -t Section -g object -s partial

This will produce output similar to the following, which shows two section objects shared between different processes:
Object: FFFFC00128086060
11020/0x2B0C/chrome 4/0x4:              Section 00000006 (unknown)
10264/0x2818/chrome 15636/0x3D14:       Section 000F0007 (unknown)

Object: FFFFC00135F82A00
13644/0x354C/chrome 4/0x4:              Section 00000006 (unknown)
10264/0x2818/chrome 11956/0x2EB4:       Section 000F0007 (unknown)
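
If you’re curious how this kind of grouping can be implemented, the rough sketch below (not the GetHandles source) enumerates every handle on the system using the undocumented SystemHandleInformation class (16) and groups them by kernel object address. The structure layout is the commonly assumed one rather than a documented interface, so treat it as an assumption; run it elevated to see handles from other users’ processes.

#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>

#define SystemHandleInformationClass 16
#define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS)0xC0000004L)

// Commonly used (undocumented) layout of a handle table entry.
typedef struct _SYSTEM_HANDLE_ENTRY {
    USHORT UniqueProcessId;
    USHORT CreatorBackTraceIndex;
    UCHAR  ObjectTypeIndex;
    UCHAR  HandleAttributes;
    USHORT HandleValue;
    PVOID  Object;          // Kernel address of the object backing the handle
    ULONG  GrantedAccess;
} SYSTEM_HANDLE_ENTRY;

typedef struct _SYSTEM_HANDLE_INFO {
    ULONG NumberOfHandles;
    SYSTEM_HANDLE_ENTRY Handles[1];
} SYSTEM_HANDLE_INFO;

typedef NTSTATUS (NTAPI *QuerySystemInformation_t)(ULONG, PVOID, ULONG, PULONG);

static int CompareByObject(const void *a, const void *b)
{
    const SYSTEM_HANDLE_ENTRY *ha = a, *hb = b;
    if (ha->Object != hb->Object)
        return ha->Object < hb->Object ? -1 : 1;
    return (int)ha->UniqueProcessId - (int)hb->UniqueProcessId;
}

int wmain(void)
{
    QuerySystemInformation_t QuerySystemInformation = (QuerySystemInformation_t)
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation");
    SYSTEM_HANDLE_INFO *info = NULL;
    ULONG size = 0x100000, returned = 0, i, j, k;
    NTSTATUS status;

    // The required buffer size isn't known up front, so grow it until the call succeeds.
    do {
        free(info);
        info = (SYSTEM_HANDLE_INFO *)malloc(size);
        if (!info)
            return 1;
        status = QuerySystemInformation(SystemHandleInformationClass, info, size, &returned);
        size *= 2;
    } while (status == STATUS_INFO_LENGTH_MISMATCH);

    if (status < 0) {
        free(info);
        return 1;
    }

    // Sort by object address so handles to the same kernel object become adjacent.
    qsort(info->Handles, info->NumberOfHandles, sizeof(SYSTEM_HANDLE_ENTRY), CompareByObject);

    for (i = 0; i < info->NumberOfHandles; i = j) {
        for (j = i; j < info->NumberOfHandles && info->Handles[j].Object == info->Handles[i].Object; j++)
            ;
        // Only report objects whose handles span more than one process.
        if (info->Handles[i].UniqueProcessId != info->Handles[j - 1].UniqueProcessId) {
            printf("Object %p shared by PIDs:", info->Handles[i].Object);
            for (k = i; k < j; k++)
                printf(" %u", info->Handles[k].UniqueProcessId);
            printf("\n");
        }
    }
    free(info);
    return 0;
}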

There’s also the CommonObjects tool, which does a similar job but doesn’t have as many other features.


This GUI tool allows you to inspect and manipulate access tokens as well as perform some basic tests of what you can do with a token (such as opening files). You can either look at the token for a specific process (or even open token handles inside those processes) or you can create new tokens using common APIs.



Hopefully these tools will be useful for your own investigations into Windows sandboxes and for finding exploitable attack surface. The suite is open-source under a permissive license, so I hope it benefits the security community and the wider user base. If you have any ideas or changes, please consider contributing back to the original project.

Monday, November 2, 2015

Hack The Galaxy: Hunting Bugs in the Samsung Galaxy S6 Edge

Posted by Natalie Silvanovich, Planner of Bug Bashes

Recently, Project Zero researched a popular Android phone, the Samsung Galaxy S6 Edge. We discovered and reported 11 high-impact security issues as a result. This post discusses our motivations behind the research, our approach in looking for vulnerabilities on the device and what we learned by investigating it.

The majority of Android devices are not made by Google, but by external companies known as Original Equipment Manufacturers, or OEMs, which use the Android Open Source Project (AOSP) as the basis for the mobile devices they manufacture. OEMs are an important area for Android security research, as they introduce additional (and possibly vulnerable) code into Android devices at all privilege levels, and they decide the frequency of the security updates that they provide to carriers for their devices.

Having done some previous research on Google-made Nexus devices running AOSP, we wanted to see how different attacking an OEM device would be. In particular, we wanted to see how difficult finding bugs would be, what type of bugs we would find and whether mitigations in AOSP would make finding or exploiting bugs more difficult. We also wanted to see how quickly bugs would be resolved when we reported them. We chose the Samsung Galaxy S6 Edge, as it is a recent high-end device with a large number of users.

We decided to work together on a single problem for a week, and see how much progress we could make on the Samsung device. To get our competitive spirits going, we decided to have a contest between the North American and European members of Project Zero, with a few extra participants from other Google security teams to make the teams even, giving a total of five participants on each side.

Each team worked on three challenges, which we feel are representative of the security boundaries of Android that are typically attacked. They could also be considered components of an exploit chain that escalates to kernel privileges from a remote or local starting point.

  1. Gain remote access to contacts, photos and messages. More points were given for attacks that don’t require user interaction and that require fewer device identifiers.
  2. Gain access to contacts, photos, geolocation, etc. from an application installed from Play with no permissions.
  3. Persist code execution across a device wipe, using the access gained in part 1 or 2.

A week later, we had the results! A total of 11 issues were found in the Samsung device.

Samsung WifiHs20UtilityService path traversal

Perhaps the most interesting issue found was CVE-2015-7888, discovered by Mark Brand. It is a directory traversal bug that allows a file to be written as system. There is a process running as system on the device that scans for a zip file in /sdcard/Download/ and unzips the file. Unfortunately, the API used to unzip the file does not verify the file path, so files can be written to unexpected locations. On the version of the device we tested, this was trivially exploitable via the Dalvik cache, using a technique that has been used to exploit other directory traversal bugs, though an SELinux policy that prevents this specific exploitation technique has since been pushed to the device.

Samsung SecEmailComposer QUICK_REPLY_BACKGROUND permissions weakness

Another interesting and easy-to-exploit bug, CVE-2015-7889, was found in the Samsung Email client by James Forshaw. It is a lack of authentication in one of the client’s intent handlers. An unprivileged application can send a series of intents that causes the user’s emails to be forwarded to another account. It is a very noisy attack, as the forwarded emails show up in the user’s sent folder, but it is still easy access to data that not even a privileged app should be able to access.

Samsung SecEmailUI script injection

James Forshaw and Matt Tait also found a script injection issue in the Samsung email client, CVE-2015-7893. This issue allows JavaScript embedded in a message to be executed in the email client. It is somewhat unclear what the worst-case impact of this issue is, but it certainly increases the attack surface of the email client, as it would make JavaScript vulnerabilities in the Android WebView reachable remotely via email.

Driver Issues

There were three issues found in drivers on the device. CVE-2015-7890, found by Ian Beer, and CVE-2015-7892, found by Ben Hawkes, are buffer overflows in drivers that are accessible by processes that run as media. These could be used by bugs in media processing, such as libstagefright bugs, to escalate to kernel privileges. CVE-2015-7891, found by Lee Campbell of the Chrome Security Team, is a concurrency issue leading to memory corruption in a driver, which could be used to escalate from code execution in any unprivileged application to the kernel.

Image Parsing Issues

Five memory corruption issues were found on the device in Samsung-specific image processing by myself, Natalie Silvanovich. Two of these issues, CVE-2015-7895 and CVE-2015-7898, occur when an image is opened in Samsung Gallery, but the three others, CVE-2015-7894, CVE-2015-7896 and CVE-2015-7897, occur during media scanning, which means that an image only needs to be downloaded to trigger them. They allow escalation to the privileges of the Samsung Gallery app or the media scanning process.

Severity and Mitigations

Overall, we found a substantial number of high-severity issues, though there were some effective security measures on the device which slowed us down. The weak areas seemed to be device drivers and media processing; we found issues very quickly in these areas through fuzzing and code review. It was also surprising that we found three logic issues that are trivial to exploit. These types of issues are especially concerning, as the time to find, exploit and use them is very short.

SELinux made it more difficult to attack the device. In particular, it made it harder to investigate certain bugs and to determine the device’s attack surface. Having the setenforce command disabled on the device made this even more difficult. That said, we found three bugs that would allow an exploit to disable SELinux, so it’s not an effective mitigation against every bug.

Reporting the Issues

We reported these issues to Samsung soon after we discovered them. They responded recently, stating that they had fixed eight of the issues in their October Maintenance Release, and the remaining issues would be fixed in November. We greatly appreciate their efforts in patching these issues.

Testing for the vulnerabilities on the same device we found them on, with the most recent security update (G925VVRU4B0G9) applied, confirmed this.


The majority of the issues are fixed; however, three will not be patched until November. Fortunately, these appear to be lower-severity issues. CVE-2015-7898 and CVE-2015-7895 require an image to be opened in Samsung Gallery, which does not have especially high privileges and is not used by default to open images received remotely via email or SMS (so an exploit would require the user to manually download the image and open it in Gallery). The other unfixed issue, CVE-2015-7893, allows an attacker to execute JavaScript embedded in emails, which increases the attack surface of the email client, but otherwise has unclear impact.


A week of investigation showed that there are a number of weak points in the Samsung Galaxy S6 Edge. Over the course of that week, we found a total of 11 issues with a serious security impact. Several issues were found in device drivers and image processing, and there were also some logic issues in the device that were high-impact and easy to exploit.

The majority of these issues were fixed on the device we tested via an OTA update within 90 days, though three lower-severity issues remain unfixed. It is promising that the highest severity issues were fixed and updated on-device in a reasonable time frame.

Thursday, October 15, 2015

Windows Drivers are True’ly Tricky

Posted by James Forshaw, Driving for Bugs

Auditing a product for security vulnerabilities can be a difficult challenge, and there’s no guarantee you’ll catch all the vulnerabilities even when you do audit. This post describes an issue I identified in the Windows driver code for Truecrypt, which had already gone through a security audit. The issue allows an application running as a normal user, or within a low-integrity sandbox, to remap the main system drive and elevate privileges to SYSTEM or even the kernel. I hope to show why the bug in question might have been missed. I don’t provide any guarantees that there are no more bugs left to find.

It’s worth noting that this vulnerability didn’t have a direct impact on the security of the encrypted drive volumes at rest. Before I delve into the details let’s take a look at an aspect of the Windows NT operating system that’ll be very important later.

The History of DosDevices

Under MS-DOS and the versions of Windows that ran on top of it, drive letters were generally assigned in a specific order based on the device and disk partition type. In Windows NT this isn’t the case. As I mentioned in my previous post on symbolic links, the drive letters you see in Windows Explorer are really symbolic links under the hood which point the drive letter (say C:) to the mounted device object (say \Device\HarddiskVolume4). The OS is free to assign these drive letters in an arbitrary order.

The OS needs a known location to store these symbolic links, so in the original Windows NT 3.1 an object directory called DosDevices was added to the root of the object manager namespace. This directory stored all the drive and device symbolic links for the system. There was only a single directory for all users, but for the original versions of Windows NT this didn’t matter as you could only ever have one interactive user logged on at a time. When calling a Win32 API which takes a DOS path, the path is converted to an absolute drive path and the DosDevices prefix is prepended before being passed to the native NT system call.

Over subsequent versions of the OS the implementation of DosDevices changed. First, in NT 4 the name was changed from DosDevices to ??. This was presumably for performance reasons, as the kernel could quickly check for the prefix using two 32-bit integer comparisons for the 4 Unicode characters \??\. To ensure old code still worked, DosDevices became a symbolic link pointing to the new shorter path.

The biggest change, however, happened in Windows XP (well, technically with the introduction of Terminal Services, but XP was the first consumer OS with this support). XP shipped with Fast User Switching and remote desktop support, which allowed multiple interactive users to be logged in to the same machine at the same time. This required that DosDevices support per-user objects, because it would be annoying and potentially dangerous to allow the sharing of user-specific drive mappings. To achieve this a per-user object directory is created under \Sessions\0\DosDevices with a name which corresponds to the user’s logon ID.

So how does this per-user directory get referenced? By creating a fake DosDevices object directory. First the original ?? directory was renamed to GLOBAL?? and ?? became a virtual directory. When reading from the directory, say resolving a drive letter, the per-user directory is checked. If the per-user directory doesn’t contain a corresponding entry the kernel falls back to checking the global directory; if no entry is found there then the kernel generates an appropriate error such as STATUS_OBJECT_NAME_NOT_FOUND. An interesting case is what happens when a process creates a new object in the virtual directory. Only the per-user directory is taken into account, so any new object creation in \?? will result in that object being added to the per-user directory. The access control on GLOBAL?? is set so that only administrators can modify objects within it; however, a normal user is free to modify their own per-user directory.
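
As a rough user-mode illustration of that behaviour, the sketch below uses the documented DefineDosDevice API to add a drive letter mapping. The drive letter X: and the target volume are arbitrary assumptions, and the per-user placement relies on DefineDosDevice creating local (per-logon-session) definitions when the caller isn’t running as LocalSystem, so treat this as a sketch rather than a definitive statement of the API’s behaviour:

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    // DDD_RAW_TARGET_PATH means the target string is used as-is as an NT object path.
    // For a normal user this definition should land in the per-logon-session
    // DosDevices directory, so only this user's processes see the new X: drive.
    if (DefineDosDeviceW(DDD_RAW_TARGET_PATH, L"X:", L"\\Device\\HarddiskVolume1")) {
        wprintf(L"X: now maps to \\Device\\HarddiskVolume1 for this logon session\n");

        // Remove the definition again; the flags and target must match the original call.
        DefineDosDeviceW(DDD_RAW_TARGET_PATH | DDD_REMOVE_DEFINITION |
                         DDD_EXACT_MATCH_ON_REMOVE,
                         L"X:", L"\\Device\\HarddiskVolume1");
    } else {
        wprintf(L"DefineDosDevice failed: %lu\n", GetLastError());
    }
    return 0;
}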

To add a further complication, Windows 2000 introduced the concept of a per-process DosDevices directory. This can be specified by calling the system call NtSetInformationProcess with the ProcessDeviceMap information class, passing a handle to a new object directory. On Windows 2000 this just replaces the ?? directory lookup entirely; however, on XP and above the fallback to GLOBAL?? still occurs. This is only used when the lookup is occurring within the same process; no other process on the system sees the new DosDevices map.

Here’s a condensed history of the major changes in how DosDevices works across NT operating systems:

  - Windows NT 3.1: a single \DosDevices directory, shared by all users.
  - Windows NT 4: the directory is renamed to \??; DosDevices becomes a symbolic link to it.
  - Windows 2000: a per-process DosDevices directory can be set via NtSetInformationProcess with the ProcessDeviceMap class.
  - Windows XP: \?? becomes a virtual directory; lookups check the per-user \Sessions\0\DosDevices directory first, then fall back to GLOBAL??.

Details of the Vulnerability

So with that bit of history out of the way, let’s look at the vulnerability itself. The issue I opened can be found in the Project Zero issue tracker. The Truecrypt driver exposes a number of different IOCTLs to a user-mode application to perform its various tasks, such as mounting and unmounting encrypted disk images and enumerating information. The vulnerability is due to bugs in the mounting and unmounting of Truecrypt volumes, corresponding to the IOCTLs TC_IOCTL_MOUNT_VOLUME and TC_IOCTL_DISMOUNT_VOLUME. All the vulnerable code is contained in the Driver\Ntdriver.c file in the Truecrypt source code.

When a drive is mounted in Windows there are a few ways that a drive letter can be assigned. The most common way is by registering the drive with the Mount Manager driver. This requires the caller to be an administrator, and all registration information goes into the registry. The alternative is that the symbolic link for the drive can be created manually using the IoCreateSymbolicLink API. Ultimately though, it must go into one of the DosDevices locations, otherwise a normal user-mode application would not be able to pick up the drive letter.

The Truecrypt driver supports both ways of mounting the drive: it can use the Mount Manager to mount the drive (if the bMountManager flag is set in the structure passed to the driver); however, just in case, it also manually creates the link as shown:

// We create symbolic link even if mount manager is notified of
// arriving volume as it apparently sometimes fails to create the link
CreateDriveLink (mount->nDosDriveNo);

From user-mode we can only specify a number from 0 to 25 as nDosDriveNo which represents the drive letters A through Z. Now let’s look at what CreateDriveLink is doing:

#define DOS_MOUNT_PREFIX L"\\DosDevices\\"

void TCGetDosNameFromNumber (LPWSTR dosname, int nDriveNo) {
   WCHAR tmp[3] = {0, ':', 0};
   int j = nDriveNo + (WCHAR) 'A';

   tmp[0] = (short) j;
   wcscpy (dosname, (LPWSTR) DOS_MOUNT_PREFIX);
   wcscat (dosname, tmp);
}

NTSTATUS CreateDriveLink (int nDosDriveNo) {
   WCHAR dev[128], link[128];
   UNICODE_STRING deviceName, symLink;
   NTSTATUS ntStatus;

   TCGetNTNameFromNumber (dev, nDosDriveNo);
   TCGetDosNameFromNumber (link, nDosDriveNo);

   RtlInitUnicodeString (&deviceName, dev);
   RtlInitUnicodeString (&symLink, link);
   // Create the \DosDevices\X: symbolic link pointing at the Truecrypt volume device
   ntStatus = IoCreateSymbolicLink (&symLink, &deviceName);
   return ntStatus;
}
Ignore the horrible looking string manipulation in TCGetDosNameFromNumber as it’s not relevant to the vulnerability. What the code is doing is building a path for the drive letter symbolic link to \DosDevices\X: where X is the drive letter determined simply by adding the drive number to the character ‘A’.

In theory we could redefine the C: drive; perhaps that could be used to elevate privileges? Sadly not: if you go back to my description of DosDevices on XP and later versions of Windows, you’ll notice that when writing to the DosDevices directory (which is really a symbolic link to the virtual ?? directory), the symbolic link for the drive is created in the per-user directory, which doesn’t really gain you much. You’re only overriding the current user’s view of the drive. This is useful to escape a sandbox (assuming you can access the Truecrypt device), but as a normal user you can already write to the per-user DosDevices directory. That seems like a dead end, so perhaps it’s worth taking a look at the unmount process instead.

When unmounting a Truecrypt volume you only need to pass the drive number. Unmounting an existing device will delete the original symbolic link using RemoveDriveLink.

NTSTATUS RemoveDriveLink (int nDosDriveNo) {
   WCHAR link[256];
   UNICODE_STRING symLink;
   NTSTATUS ntStatus;

   TCGetDosNameFromNumber (link, nDosDriveNo);
   RtlInitUnicodeString (&symLink, link);
   // Delete \DosDevices\X:
   ntStatus = IoDeleteSymbolicLink (&symLink);
   return ntStatus;
}

// We always remove symbolic link as mount manager might fail to do so
RemoveDriveLink (extension->nDosDriveNo);

Does this help us in anyway? Let’s see what IoDeleteSymbolicLink is doing under the hood:

NTSTATUS IoDeleteSymbolicLink(PUNICODE_STRING SymbolicLinkName) {
 NTSTATUS status;
 OBJECT_ATTRIBUTES ObjectAttributes;
 HANDLE Handle;

 InitializeObjectAttributes(&ObjectAttributes, SymbolicLinkName, ...);
 status = ZwOpenSymbolicLinkObject(&Handle, DELETE, &ObjectAttributes);
 if (NT_SUCCESS(status)) {
   // Drop the extra reference taken when the link was made permanent;
   // once the handle is closed the name disappears from the namespace.
   status = ZwMakeTemporaryObject(Handle);
   if (NT_SUCCESS(status))
     ZwClose(Handle);
 }
 return status;
}

We can see IoDeleteSymbolicLink opens the symbolic link object for DELETE access. It then calls ZwMakeTemporaryObject to drop the reference count of the object by 1 (the reference that was added when the symbolic link was created in the first place). As no other handles are open to the object, it gets deleted from the object namespace, which removes the name. Crucially though, this is a “read” operation: even though we’re asking for DELETE permissions, it’s only opening an existing object. This means that the virtual ?? directory will first try the per-user directory, then fall back to the global directory. The result is that if the drive letter can’t be found in the per-user directory, it will actually open the global symbolic link, then delete it.

So it seems like we’re getting somewhere: we’ve got a primitive to delete an existing drive letter in the global directory. However, we need to have already mounted a Truecrypt volume on the corresponding drive letter in order to delete it, and if we try to do this the mount process fails. What’s stopping us defining a new C: drive? During the mount process the function IsDriveLetterAvailable is called; if it returns TRUE then the letter is available for use. If the function returns FALSE then the driver refuses to mount the volume as the specified drive letter.

BOOL IsDriveLetterAvailable (int nDosDriveNo) {
   OBJECT_ATTRIBUTES objectAttributes;
   UNICODE_STRING objectName;
   WCHAR link[128];
   HANDLE handle;

   TCGetDosNameFromNumber (link, nDosDriveNo);
   RtlInitUnicodeString (&objectName, link);
   InitializeObjectAttributes (&objectAttributes, &objectName, ...);

   // Test opening \DosDevices\X:
   if (NT_SUCCESS (ZwOpenSymbolicLinkObject (&handle, GENERIC_READ,
                                             &objectAttributes))) {
       ZwClose (handle);
       return FALSE;
   }

   return TRUE;
}

All IsDriveLetterAvailable does is call ZwOpenSymbolicLinkObject to try and open any existing symbolic link with the drive letter name. As we saw in the IoDeleteSymbolicLink case, this is a read operation so the virtual directory fallback will occur. If the global drive letter entry exists we can’t mount the volume, and so we can’t use the unmount to delete the drive letter. It seems like we’re at an impasse unless we can bypass this check.

Notice that the result of ZwOpenSymbolicLinkObject is just checked for a successful status using the NT_SUCCESS macro. This makes the logic of the function incorrect. The intent of the function is to ask “Does no symbolic link object with this name exist?” However, because all error cases are treated the same, what it actually asks is “Does opening a symbolic link object with this name fail?” Those are subtly different questions, and obviously we can get it to return the answer we like.

When the symbolic link doesn’t exist the API returns STATUS_OBJECT_NAME_NOT_FOUND; that was the intent of the check. However, to bypass the check we can just find any other way of getting the API to fail: IsDriveLetterAvailable will return TRUE, and we can mount an arbitrary drive letter even if it already exists. A common trick in these cases is to change the access control on the object so that the function returns STATUS_ACCESS_DENIED; however, as the code is using the Zw variant of the system call, all access checks are bypassed. Instead, all we need to do is create a different object type with the same name in the virtual DosDevices directory. As ZwOpenSymbolicLinkObject verifies the object type, this results in the API returning the error status STATUS_OBJECT_TYPE_MISMATCH instead. We do have a race condition here, between when the drive letter check occurs and when the symbolic link is created, which we need to win by deleting the invalid object; however, that’s pretty easy to do through brute force or by abusing file oplocks.
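
As a rough user-mode illustration of that trick, the sketch below creates an event object named \??\X: (the drive letter X: is an arbitrary assumption), which lands in the per-user DosDevices directory; while it exists, the driver’s ZwOpenSymbolicLinkObject check on that name should fail with STATUS_OBJECT_TYPE_MISMATCH. NtCreateEvent is an undocumented ntdll export, so its prototype is declared by hand here and the whole thing should be read as a sketch rather than exploit code:

#include <windows.h>
#include <winternl.h>
#include <stdio.h>

typedef NTSTATUS (NTAPI *NtCreateEvent_t)(PHANDLE EventHandle, ACCESS_MASK DesiredAccess,
                                          POBJECT_ATTRIBUTES ObjectAttributes,
                                          ULONG EventType, BOOLEAN InitialState);

int wmain(void)
{
    NtCreateEvent_t NtCreateEvent = (NtCreateEvent_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtCreateEvent");

    static WCHAR linkName[] = L"\\??\\X:";
    UNICODE_STRING name;
    name.Buffer = linkName;
    name.Length = sizeof(linkName) - sizeof(WCHAR);
    name.MaximumLength = sizeof(linkName);

    OBJECT_ATTRIBUTES attr = { 0 };
    attr.Length = sizeof(attr);
    attr.ObjectName = &name;

    // Create a notification event (type 0) named \??\X:; as a normal user this
    // goes into the per-user DosDevices directory.
    HANDLE event = NULL;
    NTSTATUS status = NtCreateEvent(&event, EVENT_ALL_ACCESS, &attr, 0, FALSE);
    wprintf(L"NtCreateEvent returned %08lx\n", (unsigned long)status);

    // While this handle is open, opening \DosDevices\X: as a symbolic link should
    // fail with STATUS_OBJECT_TYPE_MISMATCH, so IsDriveLetterAvailable returns TRUE.
    // Closing the handle removes the object again.
    if (event) CloseHandle(event);
    return 0;
}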

So we can combine these small bugs into remounting the C: drive to an arbitrary Truecrypt volume, well almost. We have just one problem left: how to actually get the mount process to write to the GLOBAL?? directory. It turns out this is probably the easiest part of all. IoCreateSymbolicLink doesn’t perform any security checks when creating the link, so we can get it to write to an object directory we wouldn’t normally be able to control. When setting the per-process DosDevices directory using NtSetInformationProcess we only need a handle with DIRECTORY_TRAVERSE permission. As this is a read permission we can open GLOBAL?? from a low-privileged user, set it as the per-process DosDevices directory, then get the Truecrypt driver to write to it.
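
A sketch of what that per-process device map setup might look like from user mode is shown below. NtOpenDirectoryObject, NtSetInformationProcess, the ProcessDeviceMap class value (23) and the structure layout are undocumented, so the declarations here are hand-written assumptions rather than official definitions:

#include <windows.h>
#include <winternl.h>
#include <stdio.h>

#ifndef OBJ_CASE_INSENSITIVE
#define OBJ_CASE_INSENSITIVE 0x00000040
#endif
#define DIRECTORY_TRAVERSE 0x0002
#define ProcessDeviceMapClass 23   // Assumed PROCESSINFOCLASS value for ProcessDeviceMap

// Assumed input structure for setting the device map: just the directory handle.
typedef struct _PROCESS_DEVICEMAP_SET {
    HANDLE DirectoryHandle;
} PROCESS_DEVICEMAP_SET;

typedef NTSTATUS (NTAPI *NtOpenDirectoryObject_t)(PHANDLE, ACCESS_MASK, POBJECT_ATTRIBUTES);
typedef NTSTATUS (NTAPI *NtSetInformationProcess_t)(HANDLE, ULONG, PVOID, ULONG);

int wmain(void)
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    NtOpenDirectoryObject_t OpenDirectoryObject =
        (NtOpenDirectoryObject_t)GetProcAddress(ntdll, "NtOpenDirectoryObject");
    NtSetInformationProcess_t SetInformationProcess =
        (NtSetInformationProcess_t)GetProcAddress(ntdll, "NtSetInformationProcess");

    static WCHAR dirName[] = L"\\GLOBAL??";
    UNICODE_STRING name;
    name.Buffer = dirName;
    name.Length = sizeof(dirName) - sizeof(WCHAR);
    name.MaximumLength = sizeof(dirName);

    OBJECT_ATTRIBUTES attr = { 0 };
    attr.Length = sizeof(attr);
    attr.ObjectName = &name;
    attr.Attributes = OBJ_CASE_INSENSITIVE;

    // DIRECTORY_TRAVERSE is a read-style right, so a low-privileged user can
    // open GLOBAL?? with it.
    HANDLE dir = NULL;
    NTSTATUS status = OpenDirectoryObject(&dir, DIRECTORY_TRAVERSE, &attr);
    if (status < 0) {
        wprintf(L"NtOpenDirectoryObject failed: %08lx\n", (unsigned long)status);
        return 1;
    }

    // Make GLOBAL?? this process's DosDevices directory; other processes are unaffected.
    PROCESS_DEVICEMAP_SET map = { dir };
    status = SetInformationProcess(GetCurrentProcess(), ProcessDeviceMapClass,
                                   &map, sizeof(map));
    wprintf(L"NtSetInformationProcess returned %08lx\n", (unsigned long)status);
    return 0;
}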

So the final exploit chain is as follows:
  1. Create a new kernel object in the per-user DosDevices directory with the name of the drive letter to override.
  2. Mount a Truecrypt volume as that drive number then win the race between IsDriveLetterAvailable and CreateDriveLink.
  3. Manually delete the symbolic link in the per-user directory (it’s our directory so we can do this) then unmount the drive. This will cause IoDeleteSymbolicLink to delete the global drive letter.
  4. Assign GLOBAL?? as the per-process DosDevices directory, remount the volume as the letter to override.
  5. Exploit the remapped drive letter to elevate privileges such as starting a scheduled task or service.

I’ve summarised the operations in the diagram below.

Performing this sequence of operations results in the global C: drive being mapped to our arbitrary Truecrypt volume, and from that you can trivially elevate privileges, as any system service, or even the kernel, will treat the contents of our volume as the real system drive.


So why might this issue have been missed? Well, obviously the root cause was a lack of knowledge of the attack vector, but you could be forgiven for that, because typical, idiomatic Windows driver code isn’t usually exploitable this way. Take a look at pretty much any DriverEntry method for a Windows driver. When the driver starts it needs to create a new device object, typically with IoCreateDevice. Then the code will also create a symbolic link in the DosDevices directory using IoCreateSymbolicLink to make it easier for user-mode applications to access the device.
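
For reference, a minimal sketch of that idiomatic pattern is shown below; the device and link names are made up for illustration, and it’s a simplified skeleton rather than a complete driver:

#include <ntddk.h>

#define DEVICE_NAME   L"\\Device\\ExampleDevice"
#define DOS_LINK_NAME L"\\DosDevices\\ExampleDevice"

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNICODE_STRING deviceName, dosLink;
    PDEVICE_OBJECT deviceObject = NULL;
    NTSTATUS status;

    UNREFERENCED_PARAMETER(RegistryPath);

    // Create the device object that user mode will eventually talk to.
    RtlInitUnicodeString(&deviceName, DEVICE_NAME);
    status = IoCreateDevice(DriverObject, 0, &deviceName, FILE_DEVICE_UNKNOWN,
                            0, FALSE, &deviceObject);
    if (!NT_SUCCESS(status))
        return status;

    // DriverEntry runs in the System process, so this link lands in the global
    // DosDevices directory; the same call made later from an arbitrary user's
    // process context would be subject to the per-user/per-process lookup.
    RtlInitUnicodeString(&dosLink, DOS_LINK_NAME);
    status = IoCreateSymbolicLink(&dosLink, &deviceName);
    if (!NT_SUCCESS(status))
        IoDeleteDevice(deviceObject);

    return status;
}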

The Truecrypt implementation for this is in the function TCCreateRootDeviceObject, which creates a symbolic link called \DosDevices\TrueCrypt. Why isn’t this vulnerable to a similar issue? Whenever a driver starts, even if it was loaded by a user, the initial thread running DriverEntry runs within the System process, which is a special process on the NT operating system where system kernel threads execute. By running in the system process context it’s not possible, at least without Administrator privileges, to influence the DosDevices location either via the per-user directory or the per-process directory.

This might lead a developer, and even an auditor, to think that IoCreateSymbolicLink does something special to guard against this attack. As we’ve seen, it doesn’t. The issue wouldn’t have been exploitable if the symbolic link creation had occurred in a system thread, but that’s up to the developer. It also wouldn’t have been vulnerable prior to Windows 2000, but that’s hardly a consolation. When the behaviour of something so fundamental to the NT operating system, like how DOS-style drive letters are handled, isn’t well documented, and when things like the per-process device map trip up even Microsoft, it’s hard to blame developers and auditors when bugs sneak through.