AFIO v2.00 late alpha

Herein lies my proposed zero whole-machine-memory-copy async file i/o and filesystem library for Boost and the C++ standard, intended for storage devices with ~1 microsecond 4Kb transfer latencies and for those supporting Storage Class Memory (SCM)/Direct Access Storage (DAX). Its i/o overhead, including syscall overhead, has been benchmarked at 100 nanoseconds on Linux, which corresponds to a theoretical maximum of 10M IOPS @ QD1, approx 40Gb/sec per thread. It has particularly strong support for writing portable filesystem algorithms which work well with directly mapped non-volatile storage such as Intel Optane.

It is a complete rewrite after a Boost peer review in August 2015; its source code lives in its GitHub repository. Features:

  • Portable to any conforming C++ 14 compiler with a working Filesystem TS in its STL.
  • Will make use of any Concepts TS if you have them.
  • async_file_handle supports co_await (Coroutines TS).
  • Provides view adapters into the Ranges TS, so ready for STL2.
  • The original error code is always preserved, even down to the original NT kernel error code if an NT kernel API was used.
  • Race free filesystem design used throughout (i.e. no TOCTOU).
  • Zero malloc, zero exception throw and zero whole system memory copy design used throughout, even down to paths (which can hit 64Kb!).
  • Works very well with the C++ standard library, and is intended to be proposed for standardisation into C++ in 2020 or thereabouts.
Note that this code is of late alpha quality. It's quite reliable on Windows and Linux, but be careful when using it!

Examples of use:

namespace afio = AFIO_V2_NAMESPACE;
// Make me a 1 trillion element sparsely allocated integer array!
afio::mapped_file_handle mfh = afio::mapped_temp_inode().value();
// On an extents based filing system, doesn't actually allocate any physical
// storage but does map approximately 4Tb of all bits zero data into memory
(void) mfh.truncate(1000000000000ULL * sizeof(int));
// Create a typed view of the one trillion integers
afio::algorithm::mapped_view<int> one_trillion_int_array(mfh);
// Write and read as you see fit, if you exceed physical RAM it'll be paged out
one_trillion_int_array[0] = 5;
one_trillion_int_array[999999999999ULL] = 6;
namespace afio = AFIO_V2_NAMESPACE;
// Create an asynchronous file handle
afio::io_service service;
afio::async_file_handle fh =
  afio::async_file(service, {}, "testfile.txt",
                   afio::file_handle::mode::write,
                   afio::file_handle::creation::if_needed).value();
// Resize it to 1024 bytes
truncate(fh, 1024).value();
// Begin to asynchronously write "hello world" into the file at offset 0,
// suspending execution of this coroutine until completion and then resuming
// execution. Requires the Coroutines TS.
alignas(4096) char buffer[] = "hello world";
co_await co_write(fh, {{{buffer, sizeof(buffer)}}, 0}).value();

These compilers and OS are regularly tested:

  • GCC 7.0 (Linux 4.x x64)
  • clang 4.0 (Linux 4.x x64)
  • Visual Studio 2017 (Windows 10 x64)

Other compilers, architectures and OSs may work, but are not tested regularly. You will need a Filesystem TS implementation in your STL and C++ 14. The storage profiles YAML database records latencies for various previously tested OSs, filing systems and storage devices.

Todo lists for the already implemented parts appear further below.

To build and test (make, ninja etc):

mkdir build
cd build
cmake ..
cmake --build .
ctest -R afio_sl

To build and test (Visual Studio, XCode etc):

mkdir build
cd build
cmake ..
cmake --build . --config Release
ctest -C Release -R afio_sl

v2 architecture and design implemented:

(Each item below is flagged in the feature matrix as new in v2 and/or as arising from Boost peer review feedback.)

  • Universal native handle/fd abstraction instead of void *.
  • Perfectly/ideally low memory (de)allocation per op (usually none).
  • noexcept API throughout, returning error_code for failure instead of throwing exceptions.
  • AFIO v1 handle type split into a hierarchy of types:
    1. handle - provides open, close, get path, clone, set/unset append only, change caching, characteristics
    2. fs_handle - handles with an inode number
    3. path_handle - a race free anchor to a subset of the filesystem
    4. directory_handle - enumerates the filesystem
    5. io_handle - adds synchronous scatter-gather i/o, byte range locking
    6. file_handle - adds open/create file, get and set maximum extent
    7. async_file_handle - adds asynchronous scatter-gather i/o
    8. mapped_file_handle - adds low latency memory mapped scatter-gather i/o
  • Cancelable i/o (made possible thanks to dropping XP support).
  • All shared_ptr usage removed, as all use of multiple threads was removed.
  • Use of std::vector to transport scatter-gather sequences replaced with C++ 20 span<> borrowed views.
  • Completion callbacks are now some arbitrary type U&& instead of a future continuation. Type erasure for its storage is bound into the one single memory allocation for everything needed to execute the op, so overhead is optimal.
  • Filing system algorithms made generic and broken out into the public afio::algorithms template library (the AFIO FTL).
  • Abstraction of native handle management via bitfield specified "characteristics".
  • Storage profiles, a YAML database of behaviours of hardware, OS and filing system combinations.
  • Absolute and interval deadline timed i/o throughout (made possible thanks to dropping XP support).
  • Dependency on ASIO/Networking TS removed completely.
  • Four choices of algorithm implementing a shared filing system mutex.
  • Uses CMake, CTest, CDash and CPack, with automatic usage of C++ Modules or precompiled headers where available.
  • Far more comprehensive memory map and virtual memory facilities.
  • Much more granular, micro level unit testing of individual functions.
  • Much more granular, micro level internal logging of every code path taken.
  • Path views used throughout, thus avoiding string copying and allocation in std::filesystem::path.
  • Paths are equally interpreted as UTF-8 on all platforms.
  • We never store nor retain a path, as paths are inherently racy and best avoided.
  • Parent handle caching is no longer hard coded in; it is now an optional user applied templated adapter class.


  • clang AST assisted SWIG bindings for other languages.
  • Statistical tracking of operation latencies so realtime IOPS can be measured.

Planned features implemented:

(Each item below carries per-platform status flags for Windows and POSIX in the feature matrix.)

  • Native handle cloning.
  • Maximum possible (seven, up from four) forms of kernel caching.
  • Absolute path open.
  • Relative "anchored" path open enabling a race free file system.
  • Win32 path support (260 character path limit).
  • NT kernel path support (32,768 character path limit).
  • Synchronous universal scatter-gather i/o.
  • Asynchronous universal scatter-gather i/o (via POSIX AIO support on POSIX).
  • i/o deadlines and cancellation.
  • Retrieving and setting the current maximum extent (size) of an open file.
  • Retrieving the current path of an open file irrespective of where it has been renamed to by third parties.
  • statfs_t ported over from AFIO v1.
  • utils namespace ported over from AFIO v1.
  • shared_fs_mutex shared/exclusive entities locking based on lock files.
  • Byte range shared/exclusive locking.
  • shared_fs_mutex shared/exclusive entities locking based on byte ranges.
  • shared_fs_mutex shared/exclusive entities locking based on atomic append.
  • Memory mapped files and virtual memory management (section_handle, map_handle and mapped_file_handle).
  • shared_fs_mutex shared/exclusive entities locking based on memory maps.
  • Universal portable UTF-8 path views.
  • "Hole punching" and hole enumeration ported over from AFIO v1.
  • Directory handles and very fast directory enumeration ported over from AFIO v1.
  • shared_fs_mutex shared/exclusive entities locking based on safe byte ranges.
  • Set random or sequential i/o (prefetch).
  • i/o on async_file_handle is coroutines awaitable.
  • afio::algorithm::trivial_vector<T> with constant time reallocation if T is trivially copyable.

Todo to reach feature parity with AFIO v1:

  • BSD and OS X kqueues optimised io_service.

Todo thereafter in order of priority:

  • Linux KAIO support for native non-blocking O_DIRECT i/o.
  • Reliable directory hierarchy deletion algorithm.
  • Reliable directory hierarchy copy algorithm.
  • Reliable directory hierarchy update (two and three way) algorithm.
  • std::pmr::memory_resource adapting a file backing if on C++ 17.
  • Extended attributes support.
  • Algorithm to replace all duplicate content with hard links.
  • Algorithm to figure out all paths for a hard linked inode.
  • Algorithm to compare two or three directory enumerations and give differences. Probably blocked on the Ranges TS.

Features possibly to be added after a Boost peer review:

  • Directory change monitoring.
  • Permissions support (ACLs).
Why you might need AFIO
Manufacturer claimed 4Kb transfer latencies for the physical hardware:
  • Spinning rust hard drive latency @ QD1: 9000us
  • SATA flash drive latency @ QD1: 800us
  • NVMe flash drive latency @ QD1: 300us
  • RTT UDP packet latency over a LAN: 60us
  • NVMe Optane drive latency @ QD1: 60us
  • memcpy(4Kb) latency: 5us (main memory) to 1.3us (L3 cache)
  • RTT PCIe latency: 0.5us
100% read QD1 4Kb direct transfer latencies for the software with AFIO:
  • < 99% spinning rust hard drive latency: Windows 187,231us FreeBSD 9,836us Linux 26,468us
  • < 99% SATA flash drive latency: Windows 290us Linux 158us
  • < 99% NVMe drive latency: Windows 37us FreeBSD 70us Linux 30us
75% read 25% write QD4 4Kb direct transfer latencies for the software with AFIO:
  • < 99% spinning rust hard drive latency: Windows 48,185us FreeBSD 61,834us Linux 104,507us
  • < 99% SATA flash drive latency: Windows 1,812us Linux 1,416us
  • < 99% NVMe drive latency: Windows 50us FreeBSD 143us Linux 40us

Max bandwidth for the physical hardware:

  • DDR4 2133: 30Gb/sec (main memory)
  • x4 PCIe 4.0: 7.5Gb/sec (arrives end of 2017, the 2018 NVMe drives will use PCIe 4.0)
  • x4 PCIe 3.0: 3.75Gb/sec (985Mb/sec per PCIe lane)
  • 2017 XPoint drive (x4 PCIe 3.0): 2.5Gb/sec
  • 2017 NVMe flash drive (x4 PCIe 3.0): 2Gb/sec
  • 10Gbit LAN: 1.2Gb/sec