StableBit DrivePool Q521955
Question
What is Fast I/O and why don't I see any occurring in the performance pane?
Answer
Fast I/O is a technical term that describes how I/O is processed in the Windows kernel, and no one really needs to understand what it is in order to use StableBit DrivePool effectively. But for those of you who are interested in this concept, this article will give you a brief description of what it is and why it's important (or perhaps not so important).
In short, Fast I/O bypasses the normal I/O processing model in the Windows kernel and opts for a faster, but perhaps less flexible approach. It is part of the core architecture of Windows.
In order to understand how Fast I/O differs from the "normal" I/O model, you first need some idea of what the normal model is. Normally in Windows, every I/O request is represented by something called an I/O Request Packet, or IRP. For every I/O request, such as a read or a write on a volume, an IRP is created and sent to the file system mounted over that volume in order to satisfy the I/O request. The file system then performs the additional processing necessary to complete the I/O request, such as reading data from one or more disks, and then completes the IRP.
While an IRP is being processed, other IRPs can arrive and be processed at the same time. This is what makes the I/O model in Windows asynchronous. Note that this is technically different from multithreading: the asynchronous model doesn't require multiple threads and works even on a single thread, which makes it more flexible.
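The point that asynchronous I/O does not require multiple threads can be sketched with Python's asyncio, where a single thread keeps several simulated requests in flight at once. This is a conceptual model only; real IRPs are kernel structures, and the names below (`process_irp`, the delays) are invented for illustration:

```python
import asyncio

async def process_irp(irp_id: int, delay: float) -> str:
    # Model the file system working on one I/O Request Packet.
    # While this request "waits on the disk", the single-threaded
    # event loop is free to make progress on the other requests.
    await asyncio.sleep(delay)
    return f"IRP {irp_id} completed"

async def main() -> list:
    # Three requests are outstanding at the same time on one thread,
    # mirroring how multiple IRPs can be in flight in Windows.
    return await asyncio.gather(
        process_irp(1, 0.03),
        process_irp(2, 0.01),
        process_irp(3, 0.02),
    )

results = asyncio.run(main())
print(results)
```

Even though IRP 2 and IRP 3 finish their "disk work" before IRP 1, all three overlap on a single thread, which is the essence of the asynchronous model.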
The original designers of the Windows NT kernel (which we're still using today) realized that while the IRP approach is flexible it generates a lot of overhead for typical sequential I/O, particularly cached I/O, and that it's essentially overkill.
To understand why, you need to understand how the read-ahead cache and the write-behind cache affect IRP-based I/O. Let's consider an example where you are copying a file from one volume to another.
On the source volume, for every read request, a new IRP needs to be created to read some chunk of data from that file. The IRP then gets sent down to the file system. The file system processes the IRP by reading data from a disk, and then completes it. This process repeats until the entire file is read.
But think about it: the file system can get a bit smarter and realize that we're reading the whole file one chunk at a time. So, after each chunk is read, the file system can start reading the next chunk of data, even before it receives the corresponding IRP. Once that IRP arrives, the file system can complete it right away because the data has already been read from the disk and is sitting in memory. This is called read-ahead caching, and it happens all the time in Windows.
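A toy sketch of read-ahead caching: after serving each chunk, the reader immediately fetches the next one, so the following request completes from memory. The class and chunk size below are hypothetical and stand in for the NT cache manager, which works very differently in detail:

```python
import io

CHUNK = 4  # tiny chunk size, for illustration only

class ReadAheadFile:
    """Toy model of read-ahead caching: always keep the next chunk
    prefetched in memory so each read request is a cache hit."""

    def __init__(self, stream):
        self.stream = stream
        self.cache_hits = 0
        self._prefetched = self.stream.read(CHUNK)  # read ahead

    def read_chunk(self) -> bytes:
        # Serve the request from the prefetched data (a "cache hit"),
        # then read ahead again before the next request arrives.
        data = self._prefetched
        if data:
            self.cache_hits += 1
        self._prefetched = self.stream.read(CHUNK)
        return data

source = ReadAheadFile(io.BytesIO(b"hello world!"))
chunks = []
while chunk := source.read_chunk():
    chunks.append(chunk)
print(b"".join(chunks), source.cache_hits)
```

Every one of the three chunk reads is satisfied from memory, because the data was fetched before the request arrived, which is exactly the prediction the paragraph above describes.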
Similarly, when saving a file to the destination volume, the file system doesn't necessarily need to write the data to the disk before completing each write IRP. It can simply save the data to memory and write it out later. This is called write-behind caching, and it happens all the time in Windows as well.
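Write-behind caching can be sketched the same way: writes are acknowledged immediately and buffered in memory, and the actual "disk" write is deferred until a later flush. Again, the class below is an invented illustration, not how the Windows cache manager is implemented:

```python
import io

class WriteBehindFile:
    """Toy model of write-behind caching: complete each write
    immediately by buffering it; push to disk lazily on flush."""

    def __init__(self, stream):
        self.stream = stream
        self._dirty = []  # buffered writes not yet on "disk"

    def write(self, data: bytes) -> None:
        # "Complete the IRP" right away by saving the data to memory.
        self._dirty.append(data)

    def flush(self) -> int:
        # Later, write everything out to the backing store at once.
        written = 0
        for data in self._dirty:
            written += self.stream.write(data)
        self._dirty.clear()
        return written

disk = io.BytesIO()
dest = WriteBehindFile(disk)
dest.write(b"hello ")
dest.write(b"world")
before = disk.getvalue()   # nothing on "disk" yet
flushed = dest.flush()
print(before, flushed, disk.getvalue())
```

Note that both writes return instantly while the backing store is still empty; the data only reaches the "disk" on flush, which is the write-behind trade-off (speed now, the physical write later).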
Both read-ahead and write-behind caching speed up reading and writing to and from the disk by essentially predicting future IRPs and not waiting for them to arrive.
This all sounds great: we have IRPs, which are a powerful mechanism for managing multiple I/O requests, and caching speeds up the whole process.
But wait, let's think about the typical file copying process when caching is in use. For the source file, if caching is working properly, there is a good chance that the requested data has already been read in and is sitting in memory. Thus, a request for that data sent to the file system can be completed immediately. If that's the case, then the chief advantage that IRPs provide, asynchronous I/O (i.e. the ability to send another IRP while the first one is being processed), is unnecessary. That also means that the whole process of creating and managing those IRPs is unnecessary overhead.
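The rationale above can be sketched as a fast path: when the requested data is already cached, copy it out directly and skip building an "IRP" entirely; only a cache miss falls back to the full IRP path. This is a loose conceptual model of what Fast I/O accomplishes, with invented names, not the actual kernel interface:

```python
# Pretend read-ahead has already placed this data in the cache
# (offset -> bytes). All names here are hypothetical.
cache = {0: b"cached chunk"}

irp_count = 0  # how many times we took the slow, IRP-based path

def read_via_irp(offset: int) -> bytes:
    # Slow path: build an "IRP" and send it down to the file system.
    global irp_count
    irp_count += 1
    return b"data read from disk"

def read(offset: int) -> bytes:
    # Fast-I/O-style entry point: on a cache hit, copy the data out
    # directly and never create an IRP at all.
    if offset in cache:
        return cache[offset]
    # Cache miss: fall back to the normal IRP processing model.
    return read_via_irp(offset)

hit = read(0)       # served straight from the cache, no IRP
miss = read(4096)   # not cached, so this one goes down the IRP path
print(hit, miss, irp_count)
```

The cached read never touches the IRP machinery at all, which is precisely the overhead Fast I/O is designed to avoid; the uncached read still works, because the slow path remains available as a fallback.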