StableBit DrivePool Q521955

Question

What is Fast I/O and why don't I see any occurring in the performance pane?

Answer

Fast I/O is a technical term that describes how I/O is processed in the Windows kernel, and you don't really need to understand what it is in order to use StableBit DrivePool effectively. But for those of you who are interested in this concept, this article will give you a brief description of what it is and why it's important (or perhaps not so important).

What is Fast I/O?

In short, Fast I/O bypasses the normal I/O processing model in the Windows kernel and opts for a faster, but perhaps less flexible approach. It is part of the core architecture of Windows.

I/O Request Packets

In order to understand how Fast I/O differs from the "normal" I/O model, you need some idea of what the normal model is. Normally in Windows, every I/O request is represented by something called an I/O Request Packet, or IRP. For every I/O request, such as a read or a write on a volume, an IRP is created and sent to the file system mounted over that volume in order to satisfy the request. The file system then performs whatever additional processing is necessary to complete the request, such as reading data from one or more disks, and then completes the IRP.
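To make this concrete, here is a minimal sketch of what the driver side of the IRP path looks like, assuming the standard Windows Driver Kit headers. The routine name and the omitted disk work are placeholders; IoGetCurrentIrpStackLocation and IoCompleteRequest are the real WDK calls involved.

 #include <ntddk.h>

 /* Minimal sketch of a read dispatch routine. The I/O manager builds one
  * IRP per request and calls a routine like this (registered in
  * DriverObject->MajorFunction[IRP_MJ_READ]); the driver does the work
  * and then *completes* the IRP. */
 NTSTATUS DispatchRead(PDEVICE_OBJECT DeviceObject, PIRP Irp)
 {
     PIO_STACK_LOCATION irpSp = IoGetCurrentIrpStackLocation(Irp);
     ULONG length = irpSp->Parameters.Read.Length;  /* bytes requested */

     UNREFERENCED_PARAMETER(DeviceObject);

     /* ... read 'length' bytes from the underlying disk(s) into the
      *     request's buffer (omitted) ... */

     Irp->IoStatus.Status = STATUS_SUCCESS;    /* result of the request   */
     Irp->IoStatus.Information = length;       /* bytes actually returned */
     IoCompleteRequest(Irp, IO_NO_INCREMENT);  /* complete the IRP        */
     return STATUS_SUCCESS;
 }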

While an IRP is being processed, other IRPs can arrive and be processed at the same time. This is what makes the I/O model in Windows asynchronous. Note that this is different from multithreading: asynchronous I/O doesn't require multiple threads and works even from a single thread, which makes it more flexible.
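As an illustration from user mode, here is a sketch of asynchronous (overlapped) I/O on a single thread, using only documented Win32 calls; the file path is made up. The ReadFile call becomes a read IRP in the kernel, and because the handle is opened with FILE_FLAG_OVERLAPPED the call can return before that IRP completes.

 #include <windows.h>
 #include <stdio.h>

 int main(void)
 {
     /* hypothetical file path, opened for asynchronous (overlapped) I/O */
     HANDLE h = CreateFileW(L"C:\\test.bin", GENERIC_READ, FILE_SHARE_READ,
                            NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
     if (h == INVALID_HANDLE_VALUE) return 1;

     char buffer[64 * 1024];
     OVERLAPPED ov = {0};  /* Offset = 0: read from the start of the file */
     DWORD bytes = 0;

     /* This issues an IRP; with an overlapped handle the call may return
      * before the IRP completes, leaving this single thread free to issue
      * more requests or do other work in the meantime. */
     if (!ReadFile(h, buffer, sizeof(buffer), NULL, &ov) &&
         GetLastError() != ERROR_IO_PENDING) {
         CloseHandle(h);
         return 1;
     }

     /* Wait for the IRP to complete and fetch the byte count. */
     GetOverlappedResult(h, &ov, &bytes, TRUE);
     printf("read %lu bytes\n", (unsigned long)bytes);

     CloseHandle(h);
     return 0;
 }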

Cached I/O

The original designers of the Windows NT kernel (which we're still using today) realized that while the IRP approach is flexible, it generates a lot of overhead for typical sequential I/O, particularly cached I/O, and that it's essentially overkill.

To understand why, you need to understand how the read-ahead and write-behind caches affect IRP-based I/O. Let's consider an example where you are copying a file from one volume to another.

On the source volume, a new IRP needs to be created for every read request, to read some chunk of data from that file. The IRP then gets sent down to the file system, which processes it by reading data from a disk and then completes it. This process repeats until the entire file is read.
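In code, the copy loop this example describes might look like the following sketch (the helper name, paths, and chunk size are arbitrary); each ReadFile and WriteFile call below is turned into one IRP for the respective file system:

 #include <windows.h>

 /* Hypothetical helper: copy a file one 64 KB chunk at a time. */
 BOOL CopyOneFile(const wchar_t *src, const wchar_t *dst)
 {
     HANDLE in  = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
     HANDLE out = CreateFileW(dst, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
     if (in == INVALID_HANDLE_VALUE || out == INVALID_HANDLE_VALUE) {
         if (in  != INVALID_HANDLE_VALUE) CloseHandle(in);
         if (out != INVALID_HANDLE_VALUE) CloseHandle(out);
         return FALSE;
     }

     char chunk[64 * 1024];
     DWORD got = 0, put = 0;
     BOOL ok = TRUE;

     /* One read request -> one read IRP; one write -> one write IRP.
      * This repeats until the whole file has been copied. */
     while (ok && ReadFile(in, chunk, sizeof(chunk), &got, NULL) && got > 0)
         ok = WriteFile(out, chunk, got, &put, NULL) && put == got;

     CloseHandle(in);
     CloseHandle(out);
     return ok;
 }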

Read-ahead Cache

But think about it: the file system can get a bit smarter and realize that we're reading the whole file one chunk at a time. So, after a chunk is read, the file system can start reading the next chunk of data, even before it receives the next IRP. Once that IRP arrives, it can be completed right away because the data has already been read from the disk and is sitting in memory. This is called read-ahead caching, and it happens all the time in Windows.
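Applications can help the read-ahead logic by declaring their access pattern up front. FILE_FLAG_SEQUENTIAL_SCAN is a documented CreateFile hint that tells the cache manager the file will be read from start to finish, so it can read ahead more aggressively. A minimal sketch (the helper name is made up):

 #include <windows.h>

 /* Hypothetical helper: open a file with the sequential-access hint so
  * the cache manager reads ahead aggressively. */
 HANDLE OpenForSequentialRead(const wchar_t *path)
 {
     return CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                        OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
 }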

Write-behind Cache

Similarly, when saving a file to the destination volume, the file system doesn't necessarily need to write the data to the disk before completing each write IRP. It can simply save the data to memory and write it to the disk later. This is called write-behind caching, and it happens all the time in Windows as well.
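From the application's point of view, write-behind is why WriteFile on a normal (cached) handle usually returns as soon as the data has been copied into the cache. A sketch with an invented helper name: FlushFileBuffers is the documented call that forces cached data out to disk, and opening a file with FILE_FLAG_WRITE_THROUGH opts out of write-behind entirely.

 #include <windows.h>

 /* Hypothetical helper: write a buffer through the cache, then force it
  * to disk. Without the flush, the data may sit in memory for a while. */
 BOOL WriteAndFlush(HANDLE h, const void *data, DWORD size)
 {
     DWORD written = 0;
     if (!WriteFile(h, data, size, &written, NULL))  /* completes against the cache */
         return FALSE;
     return FlushFileBuffers(h);                     /* now it's really on disk */
 }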

Both read-ahead and write-behind caching speed up reading and writing to and from the disk by essentially predicting future IRPs and not waiting for them to arrive.

IRPs are Unnecessary

This is all sounding great: we have IRPs, which are a powerful mechanism for managing multiple in-flight I/O requests, and caching speeds up the whole process.

But wait, let's think about the typical file copying process when caching is in use. For the source file, if caching is working properly, there is a good chance that the data being requested has already been read in and is sitting in memory. Thus, a request for that data sent to the file system should complete immediately. If that's the case, then the chief advantage that IRPs provide, asynchronous I/O (i.e., the ability to send another IRP while the first one is still being processed), is unnecessary. That means the whole business of creating and managing those IRPs is unnecessary as well.

Fast I/O

This is what the original designers of the Windows kernel were thinking, and this is where Fast I/O comes in. Fast I/O gets rid of the whole concept of IRPs and allows the application sending a read request to simply ask the file system for some data, without creating any IRPs. The application can optionally state that if the data is not cached, the Fast I/O request should be aborted immediately (presumably because it intends to create and send an IRP for that data shortly afterward).
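On the file system side, Fast I/O reads arrive through a table of callbacks rather than as IRPs. As a sketch (the handler name is made up; FsRtlCopyRead is the real kernel helper most file systems delegate to), a Fast I/O read handler can look like this:

 #include <ntifs.h>

 /* Hypothetical Fast I/O read handler. FsRtlCopyRead copies the data
  * straight out of the cache into Buffer and returns TRUE -- no IRP is
  * ever created. If the data isn't cached and Wait is FALSE, it returns
  * FALSE, and the I/O manager falls back to building a regular read IRP. */
 BOOLEAN MyFastIoRead(
     PFILE_OBJECT FileObject, PLARGE_INTEGER FileOffset, ULONG Length,
     BOOLEAN Wait, ULONG LockKey, PVOID Buffer,
     PIO_STATUS_BLOCK IoStatus, PDEVICE_OBJECT DeviceObject)
 {
     return FsRtlCopyRead(FileObject, FileOffset, Length, Wait,
                          LockKey, Buffer, IoStatus, DeviceObject);
 }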

For a Fast I/O write request, the file system puts the data to be written into the memory cache (to be written to disk at a later time). If the cache already holds a lot of data waiting to be written to the disk, the caller can optionally ask Fast I/O to fail immediately instead of waiting for some of that data to be flushed and cache space to be freed up (again, presumably because the application intends to send a write IRP shortly afterward).
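The write side is symmetrical; a sketch under the same assumptions (invented handler name, delegating to the real FsRtlCopyWrite helper):

 /* Hypothetical Fast I/O write handler. FsRtlCopyWrite copies the data
  * into the cache and returns TRUE; if it would have to block (e.g. too
  * much dirty data is waiting to be flushed) and Wait is FALSE, it
  * returns FALSE so the caller can fall back to a write IRP. */
 BOOLEAN MyFastIoWrite(
     PFILE_OBJECT FileObject, PLARGE_INTEGER FileOffset, ULONG Length,
     BOOLEAN Wait, ULONG LockKey, PVOID Buffer,
     PIO_STATUS_BLOCK IoStatus, PDEVICE_OBJECT DeviceObject)
 {
     return FsRtlCopyWrite(FileObject, FileOffset, Length, Wait,
                           LockKey, Buffer, IoStatus, DeviceObject);
 }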

Fast I/O is completely optional, and a file system is allowed to omit support for it if it so chooses. But all built-in file systems, such as NTFS, support it. StableBit DrivePool supports it as well in most typical cases.
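The opt-in nature is visible in how a driver registers the callbacks: a file system that never fills in DriverObject->FastIoDispatch simply never receives Fast I/O calls, and everything arrives as IRPs instead. A sketch, reusing the hypothetical handlers from above:

 #include <ntifs.h>

 BOOLEAN MyFastIoRead (PFILE_OBJECT, PLARGE_INTEGER, ULONG, BOOLEAN, ULONG,
                       PVOID, PIO_STATUS_BLOCK, PDEVICE_OBJECT);
 BOOLEAN MyFastIoWrite(PFILE_OBJECT, PLARGE_INTEGER, ULONG, BOOLEAN, ULONG,
                       PVOID, PIO_STATUS_BLOCK, PDEVICE_OBJECT);

 static FAST_IO_DISPATCH g_FastIoDispatch;

 NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
 {
     UNREFERENCED_PARAMETER(RegistryPath);

     RtlZeroMemory(&g_FastIoDispatch, sizeof(g_FastIoDispatch));
     g_FastIoDispatch.SizeOfFastIoDispatch = sizeof(FAST_IO_DISPATCH);
     g_FastIoDispatch.FastIoRead  = MyFastIoRead;   /* hypothetical handlers */
     g_FastIoDispatch.FastIoWrite = MyFastIoWrite;  /* from the sketches above */

     /* Entirely optional -- leave this NULL and no Fast I/O is ever seen. */
     DriverObject->FastIoDispatch = &g_FastIoDispatch;

     /* ... the rest of driver initialization (omitted) ... */
     return STATUS_SUCCESS;
 }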

Practical Considerations

The first version of Windows NT, Windows NT 3.1, was released in 1993 with the following minimum system requirements:

  • 25 MHz 80386 processor.
  • At least 12 MB of RAM.
  • 75 MB of hard drive space.

The original architecture of the Windows NT kernel was designed to run as fast as possible on those system specifications. Today, with our gigahertz processors, many gigabytes of RAM, and SSDs, the design may be a bit antiquated.

The practical speedup achieved by utilizing Fast I/O on today's systems may not be as pronounced as it was back in 1993.

StableBit DrivePool and Fast I/O

StableBit DrivePool explicitly rejects Fast I/O requests under these circumstances:

  • If Fast I/O is explicitly disabled in the .config file.
  • If the file is a CoveFS metadata stream.
  • If caching was not initiated on the file (i.e., non-cached I/O was requested).
  • If network I/O boost is enabled (normally only read Fast I/O is disabled, but this can be changed in the .config file).

For the most part, StableBit DrivePool rejects Fast I/O requests when additional processing is required for an I/O request.
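Restated as code, purely for illustration, the following sketch mirrors the list above as a single decision function. Every type and name here is invented; CoveFS's actual implementation is not public.

 /* All names below are hypothetical -- this just restates the rejection
  * rules above; it is not CoveFS source code. */
 typedef struct { int fastIoDisabled; int networkIoBoost; } PoolConfig;
 typedef struct { int isMetadataStream; int cachingInitiated; } PoolFile;

 static int ShouldRejectFastIo(const PoolFile *f, const PoolConfig *c, int isRead)
 {
     if (c->fastIoDisabled)           return 1;  /* disabled in the .config file */
     if (f->isMetadataStream)         return 1;  /* CoveFS metadata stream       */
     if (!f->cachingInitiated)        return 1;  /* non-cached I/O was requested */
     if (c->networkIoBoost && isRead) return 1;  /* boost disables read Fast I/O */
     return 0;                                   /* otherwise, allow Fast I/O    */
 }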