
How to Overcome the Challenges of Developing a User Mode File System Driver

In one of our previous posts, we explained how to protect valuable user data with file encryption. Using the file system minifilter approach, we implemented a driver that can encrypt files on the fly and ensure per-process access restrictions.

While we provided a detailed description of the driver implementation process, there are still some challenges you may face when using minifilter drivers for file encryption. So in this article, we go over six challenges you may face when developing a user mode file system solution and provide some tips on how to overcome them.

This article will be helpful for developers who want to know more about ways to create a fake file system and the pitfalls of this process.

NTFS has a standard encryption mechanism — Encrypting File System (EFS) — that is used for encrypting separate parts of the logical drive. The main problem with EFS is that if a user has permission to decrypt protected data, then all applications running in the session started by this user get access to the decrypted data. Thus, sensitive data is left unprotected against malicious applications.

To solve this issue, we decided to develop a user mode file system that can transparently encrypt selected objects. Similar to the standard EFS, our fake file system can automatically encrypt and decrypt data. The difference is that our solution splits all applications in the same user session into two groups:

  • Permitted applications, which receive unencrypted data
  • Prohibited applications, which receive encrypted data

To learn more about this process, read our previous post on building an encryption driver with per-process access restrictions.


When implementing an encryption driver, you may face a number of challenges. In this article, we cover the six most common questions that may arise during user mode file system driver development:

  1. Can we avoid using the cache during the paging input/output (I/O) process?
  2. How can we avoid double ciphering?
  3. How can we solve problems with sharing modes?
  4. How can we address difficulties with driver callbacks?
  5. What architecture should we use?
  6. How can we ensure compatibility of the fake file system?

Let’s dig deeper and find answers to each of these questions!

1. Can we avoid using the cache during the paging I/O process?

File systems handle three common pairs of read/write operations: cached I/O, non-cached I/O, and paging I/O.

The Cache Manager typically uses paging I/O read/write operations to move data into the cache or write it back to a file. Decrypted data has to be kept in a separate cache and encrypted on the fly when it's written directly to disk — for instance, when the Cache Manager flushes dirty pages to the disk due to a lack of memory.

Here, you may face a challenge: since our driver isn’t a data repository, all flushed data will be redirected to the real file system. But we don’t know how the real file system will behave — it fully depends on the way we build our interaction with it.

So what you need to do is flush data to the real file system in a way that lets you control — or at least accurately predict — its caching. Otherwise, data flushed from our fake file system by the Cache Manager will come back to the Cache Manager again, this time treated as data received from a different file system.

Unfortunately, the Cache Manager can't distinguish cached data received from different file systems. Furthermore, its internal implementation takes a number of locks that prevent recursion. As a result, the process of flushing data goes as follows:

  • Cache Manager starts flushing decrypted pages and locks its internal structures
  • Cache Manager initiates writing data to our device
  • The device initiates writing data to the real file system
  • The real file system tries to move data to the locked cache
  • The system gets into a deadlock

There are several possible ways of solving this problem:

  • Calculate the amount of available cache space
  • Determine how much cache space the file system will need for writing data
  • Use end-to-end non-cached data writing to the final file system when other methods can’t be used

The best solution will fully depend on the particular use case. For instance, the use of non-cached write-through I/O on the final file system will harm system performance, especially when working with small files. On the other hand, cache calculations provide you with information that quickly becomes irrelevant if there are other consumers of cache space.
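To illustrate that last option, here's a minimal user mode sketch, assuming the service itself opens the backing file on the real file system; the helper name and the lack of error handling are simplifications for the example.

```c
/* A sketch of opening the backing file without involving the real file
   system's cache. FILE_FLAG_NO_BUFFERING keeps the data out of the Cache
   Manager entirely, and FILE_FLAG_WRITE_THROUGH forces writes to reach the
   disk. The trade-off: non-buffered I/O requires sector-aligned offsets,
   lengths, and buffers, and tends to hurt performance on small files. */
#include <windows.h>

HANDLE OpenBackingFileNonCached(const wchar_t *path)
{
    return CreateFileW(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ,
                       NULL,
                       OPEN_EXISTING,
                       FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                       NULL);
}
```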


2. How can we avoid double ciphering?

So far, we’ve managed to make our target application write all valuable data only to our fake file system. This allows us to transfer our crypto containers over the network without risking leaking any sensitive information. However, at this point, we face a problem of double ciphering. To avoid this, we need to make our application accept and save the sent crypto containers without encrypting the data twice. Let’s look closer at this process.

We compose larger individual blocks out of small network packets and encrypt the data with ciphering algorithms before saving it to a file. The problem is that the data initially transferred across the network has already been encrypted. So in the end, we get a file that’s been ciphered twice, and if we try to open this file through our file system, we’ll only get unreadable data.

What can be done to solve this problem?

1) Delay writing data

The first option is to delay the process of writing data until we have a fully aggregated buffer containing the whole potential header of a crypto container. Then we need to identify what type of data the header contains and write the data as is if it's already encrypted.

Usually, this trick works, but it requires injecting code at the user space level. Furthermore, we need a workaround for writing data directly, without encryption, when needed.

However, if we’re working with a torrent file, for instance, we’ll have to cache the entire file, since we don’t know when exactly we’ll receive the whole header of the crypto container.
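Below is a simplified sketch of the header check from the first option. The container magic value and its length are hypothetical; they stand in for whatever signature your real crypto container format starts with.

```c
/* If the aggregated buffer already carries our container header, the data
   is written as is; otherwise it is encrypted before being saved. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CONTAINER_MAGIC      "ENCFS1\0\0"   /* hypothetical 8-byte signature */
#define CONTAINER_MAGIC_LEN  8

static bool IsAlreadyEncrypted(const uint8_t *buffer, size_t length)
{
    /* Not enough data yet to tell: the caller should keep aggregating. */
    if (length < CONTAINER_MAGIC_LEN)
        return false;

    return memcmp(buffer, CONTAINER_MAGIC, CONTAINER_MAGIC_LEN) == 0;
}
```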

2) Decrypt fully downloaded data

The second option is to try to detect double ciphering after we download the whole file and perform one-time decryption. The main problem here is determining whether the file has been fully downloaded.

Some browsers intercept the downloading process to rename a temporary file and move it to the Downloads folder or any other place specified by the user. Only after that does the download resume.

The general approach here is to consider a crypto container invalid if the size declared in its header is less than the actual size of the downloaded file. However, if a part in the middle of the file wasn't fully downloaded and the file was temporarily closed, we can mistakenly consider the download finished and remove the extra layer of encryption. And if the download agent later adds the rest of the encrypted data with additional encryption, the file will become corrupted.

As you can see, neither of these methods will be helpful when it comes to partially downloading a range of bytes to a new file from the middle of an existing file. In this case, we’ll always end up with unreadable information.


3. How can we solve problems with sharing modes?

Now, let’s get back to our user mode file system on Windows. As you probably know, Windows supports sharing modes that allow you not only to open files exclusively but also to work with them collectively. Furthermore, you can specify what types of operations can be performed in a shared mode: read, write, or delete.

There are two standard scenarios for when a file is opened repeatedly, from the same or a different application, with a particular access mode (see the example after this list):

  • Access to the file is granted in accordance with the set permissions.
  • Access to the file is denied and the application receives a sharing violation error.
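Here's a small user mode example of the standard behavior we need to reproduce: the first handle is opened without FILE_SHARE_WRITE, so a second open that requests write access fails with a sharing violation. The file name is purely illustrative.

```c
#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    /* First open: write access, shared reads only. */
    HANDLE first = CreateFileW(L"test.dat", GENERIC_WRITE, FILE_SHARE_READ,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (first == INVALID_HANDLE_VALUE)
        return 1;

    /* Second open asks for write access, which the first open didn't share. */
    HANDLE second = CreateFileW(L"test.dat", GENERIC_WRITE,
                                FILE_SHARE_READ | FILE_SHARE_WRITE,
                                NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (second == INVALID_HANDLE_VALUE && GetLastError() == ERROR_SHARING_VIOLATION)
        wprintf(L"Second open failed with a sharing violation, as expected\n");
    else if (second != INVALID_HANDLE_VALUE)
        CloseHandle(second);

    CloseHandle(first);
    return 0;
}
```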

Of course, we need to imitate this behavior in our fake file system. And this is where we may face a number of challenges.

Usually, an application can open a file just to read its attributes, say, to determine the file's size. Reading attributes belongs to the class of operations that don't require additional permissions and can always be performed. However, we need to return not the current on-disk size but the size of the unencrypted data, and to calculate that, we need read access to the file.

Here’s one more example. The LastAccessTime attribute shows when a particular file was last accessed. But we may need to store extended access information, such as a LastUserAccessed value, in our crypto container. These two attributes have to stay in sync. Yet while LastAccessTime is updated automatically, even when the file is only read, LastUserAccessed can only be saved in the header of the crypto container. So when a file is opened with read access, we may need write access to save the LastUserAccessed value, and this will lead to a sharing violation error.

To solve this problem at the level of our fake file system, we’ll need to implement so-called superhandles.

A superhandle is a handle that can only be opened once all other handles are closed. So to use it, you need to take three steps (see the sketch after this list):

  1. Close all other handles opened for the current file.
  2. Perform the operation.
  3. Re-open the closed handles.
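Here's a minimal sketch of that sequence, assuming the service keeps a hypothetical FILE_CONTEXT with the handles it has opened on the real file. A real implementation would also have to restore file pointers, byte-range locks, and oplocks after reopening.

```c
#include <windows.h>

#define MAX_HANDLES 64

/* Hypothetical bookkeeping the service keeps for every file it has open
   on the real file system. */
typedef struct FILE_CONTEXT {
    WCHAR  path[MAX_PATH];
    HANDLE handles[MAX_HANDLES];   /* handles we opened on the real file   */
    DWORD  access[MAX_HANDLES];    /* desired access used for each handle  */
    DWORD  share[MAX_HANDLES];     /* share mode used for each handle      */
    int    count;
} FILE_CONTEXT;

typedef BOOL (*EXCLUSIVE_OP)(HANDLE superHandle, void *arg);

BOOL PerformWithSuperhandle(FILE_CONTEXT *ctx, EXCLUSIVE_OP op, void *arg)
{
    BOOL ok = FALSE;

    /* Step 1: close every handle we control for this file. */
    for (int i = 0; i < ctx->count; ++i) {
        CloseHandle(ctx->handles[i]);
        ctx->handles[i] = INVALID_HANDLE_VALUE;
    }

    /* Step 2: open the superhandle exclusively and perform the operation.
       If an untrusted application still holds the file, this open fails. */
    HANDLE super = CreateFileW(ctx->path, GENERIC_READ | GENERIC_WRITE,
                               0 /* no sharing */, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (super != INVALID_HANDLE_VALUE) {
        ok = op(super, arg);
        CloseHandle(super);
    }

    /* Step 3: reopen the handles we closed, with their original parameters. */
    for (int i = 0; i < ctx->count; ++i) {
        ctx->handles[i] = CreateFileW(ctx->path, ctx->access[i], ctx->share[i],
                                      NULL, OPEN_EXISTING,
                                      FILE_ATTRIBUTE_NORMAL, NULL);
    }
    return ok;
}
```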

Superhandles are helpful when dealing with simple situations similar to the one we’ve described. However, this approach may be useless in some cases. For instance, if a file is exclusively opened directly from an untrusted application, we have no control over its handle in our file system. So even if we close our handles, we won’t be able to perform an exclusive operation.

Such a problem may occur when working with antivirus applications or when there's inter-process communication (IPC) between a trusted and an untrusted application that both perform actions on the same file.


4. How can we address difficulties with driver callbacks?

Another common challenge you may face hides in the process of handling driver callbacks. When we create a user mode file system, we only leave the minimum necessary set of functions in the kernel for transferring requests to the service. This model works fine for dealing with fake file systems in the cloud and secured network storage, when we don’t care much about the behavior of the local machine once the request is sent further across the network.

However, when we’re talking about encrypting local files, the behavior of the local machine becomes critical. When processing a request in user mode, we may find ourselves needing to go back to the driver and perform an additional operation, such as flushing the cache.

This problem can be solved with the help of custom input/output control (IOCTL) calls to our driver. But since our driver is only capable of transferring requests to user mode, when we call it, the driver will send new requests with the updated kernel state back to user mode. And since the current thread is already busy waiting for the driver's response, these requests will have to be processed by other service threads.
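As a sketch, a custom IOCTL from the service back to the driver might look like the following. The control code, device handle, and buffer layout are illustrative assumptions, not part of any real driver interface.

```c
#include <windows.h>
#include <winioctl.h>
#include <wchar.h>

/* Hypothetical control code asking the driver to flush cached data
   for a particular file. */
#define IOCTL_FAKEFS_FLUSH_FILE_CACHE \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

BOOL RequestCacheFlush(HANDLE controlDevice, const wchar_t *filePath)
{
    DWORD bytesReturned = 0;

    /* While this call waits for the driver, the driver may send new
       requests to user mode; they arrive on other service threads. */
    return DeviceIoControl(controlDevice,
                           IOCTL_FAKEFS_FLUSH_FILE_CACHE,
                           (LPVOID)filePath,
                           (DWORD)((wcslen(filePath) + 1) * sizeof(wchar_t)),
                           NULL, 0,
                           &bytesReturned,
                           NULL);
}
```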

As a result, we have to support multithreading and account for all possible locks. And if you need to add an option for returning a request to the kernel, you'll have to check all possible execution paths and locks for deadlocks.

Also, when processing a secondary request in the service, you may need to return to the kernel once more. In this case, the code will become less and less comprehensible due to the increased number of required threads.

The only way to avoid such complexity is to implement our solution solely in the kernel.


5. What architecture should we use?

We offer the following architecture for a user mode file system solution:

  • One driver
  • One service
  • Multiple storage plugins

Such a solution can be used as an ultimate mechanism for encrypting selected local files or connecting to arbitrary storage systems, such as cloud or secured network storage. In this case, the driver won’t even need to know where the data is sent for storage, as all data processing will be performed in user mode.

We can further improve this principle by splitting the user mode logic into two parts:

  • General logic for working with all types of storage
  • Specialized plugins for working with specific types of storage

Surprisingly, a lot of logic can be implemented in the common service. It takes over communication with the driver and handles the common problems that arise when moving request processing from the kernel to user mode, including:

  • Impersonating requests — The user mode service should perform operations using the credentials of the user who sent the request to the driver.
  • Caching credentials — We should cache the credentials provided by the kernel when we open the file, as we’ll need them later for multiple operations.
  • Translating communications between the kernel and plugins — We need to translate requests and responses from a form the kernel can process to one optimized for the plugins (and back). Such translation is most often needed for error codes.

On the other hand, plugins can be fully isolated from communicating with the kernel. In this case, they’ll be developed, tested, and used separately from the service.
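A hypothetical plugin interface for such a split might look like the following sketch: plugins see only plain buffers, paths, and status codes, while the common service does all the talking to the kernel.

```c
#include <stddef.h>
#include <stdint.h>

typedef int   PLUGIN_STATUS;   /* 0 = success, negative values = errors */
typedef void *PLUGIN_FILE;     /* opaque per-file context owned by the plugin */

/* Each storage plugin exports one of these tables; the common service
   translates kernel requests into these calls and maps PLUGIN_STATUS
   values back to NTSTATUS codes for the driver. */
typedef struct STORAGE_PLUGIN {
    const char *name;          /* e.g. "local-crypto", "cloud" (illustrative) */
    PLUGIN_STATUS (*Open)(const wchar_t *path, PLUGIN_FILE *file);
    PLUGIN_STATUS (*Read)(PLUGIN_FILE file, uint64_t offset,
                          void *buffer, size_t size, size_t *bytesRead);
    PLUGIN_STATUS (*Write)(PLUGIN_FILE file, uint64_t offset,
                           const void *buffer, size_t size, size_t *bytesWritten);
    PLUGIN_STATUS (*Close)(PLUGIN_FILE file);
} STORAGE_PLUGIN;
```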

If you’re also interested in improving your application’s cybersecurity, check out our article about preventing heap spraying attacks.


6. How can we ensure the compatibility of the fake file system?

In order for encrypted files to be available for AppContainer applications (Universal Windows Platform applications), we need to make sure that the current operating system doesn’t prohibit reparsing to our fake file system. To do this, we need to implement minimal functionality that would allow us to:

  • present our device to the system as a local DiskDrive device
  • allow the system to use this device

There are several levels of the Windows storage architecture that an I/O request should pass through. The standard Windows storage stack looks as follows, from top to bottom:

  • Application: initiates I/O requests
  • I/O subsystem: sends the I/O request to the file system
  • Minifilters: offer various additional functionality
  • File system: provides file structures
  • Volume manager: presents volumes
  • Partition manager: manages disk partitions
  • Class driver: manages a specific device type
  • Port/miniport driver: the port driver manages a specific transport (e.g., SCSI Port or Storport); the Storport miniport driver is vendor-supplied
  • Bus driver (disk subsystem): satisfies I/O requests

Generally, we need to follow this scheme. But we have nothing below the level of the fake file system. Therefore, we need to implement a custom bus driver that meets the requirements of one of the standard port protocols, such as Small Computer Systems Interface (SCSI). We also need to use standard system class and port drivers.

At the same time, we should return some fake information about the properties of the volume and partition on our device, ignore requests for data reformatting, and prevent the mounting and direct use of the data. All these operations can be assigned to a minifilter.
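As one example of such fake information, here's a kernel mode sketch, assuming our virtual disk device answers IOCTL_DISK_GET_DRIVE_GEOMETRY itself; the reported size and geometry values are placeholders.

```c
#include <ntddk.h>
#include <ntdddisk.h>

#define FAKE_BYTES_PER_SECTOR 512
#define TOTAL_FAKE_SIZE       (1024ULL * 1024 * 1024)   /* 1 GB, illustrative */

/* Called from the IRP_MJ_DEVICE_CONTROL dispatch routine for
   IOCTL_DISK_GET_DRIVE_GEOMETRY; fills in fake but plausible geometry
   so upper layers treat our device as a local fixed disk. */
NTSTATUS HandleGetDriveGeometry(PIRP Irp)
{
    PIO_STACK_LOCATION irpSp = IoGetCurrentIrpStackLocation(Irp);

    if (irpSp->Parameters.DeviceIoControl.OutputBufferLength < sizeof(DISK_GEOMETRY)) {
        Irp->IoStatus.Status = STATUS_BUFFER_TOO_SMALL;
        Irp->IoStatus.Information = 0;
    } else {
        PDISK_GEOMETRY geometry = (PDISK_GEOMETRY)Irp->AssociatedIrp.SystemBuffer;

        geometry->MediaType         = FixedMedia;
        geometry->BytesPerSector    = FAKE_BYTES_PER_SECTOR;
        geometry->SectorsPerTrack   = 63;
        geometry->TracksPerCylinder = 255;
        geometry->Cylinders.QuadPart =
            TOTAL_FAKE_SIZE / (FAKE_BYTES_PER_SECTOR * 63ULL * 255ULL);

        Irp->IoStatus.Status = STATUS_SUCCESS;
        Irp->IoStatus.Information = sizeof(DISK_GEOMETRY);
    }

    NTSTATUS status = Irp->IoStatus.Status;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return status;
}
```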

If you want to learn more about minifilters, check out our Windows minifilter driver tutorial.


Conclusion

Coordinating the way different applications handle encrypted data is challenging. In this article, we described a number of pitfalls you may encounter when implementing a user mode file system for Windows. Keeping these challenges in mind and planning ahead will help you get the most out of implementing your fake file system in user mode.

At Apriorit, we have vast experience developing data encryption and data management solutions. Get in touch with us and we’ll help you bring to life even the most challenging ideas.
