LLFIO
v2.00
class dynamic_thread_pool_group::io_aware_work_item

A work item which paces when it next executes according to i/o congestion.
#include "dynamic_thread_pool_group.hpp"
Classes

  struct byte_io_handle_awareness
    Information about an i/o handle this work item will use.
Public Member Functions

  io_aware_work_item(span<byte_io_handle_awareness> hs)
    Constructs a work item aware of i/o done to the handles in hs.
  io_aware_work_item(io_aware_work_item &&o) noexcept
  span<byte_io_handle_awareness> handles() const noexcept
    The handles originally registered during construction.
  virtual intptr_t io_aware_next(deadline &d) noexcept = 0
    As for work_item::next(), but the deadline may be extended to reduce i/o congestion on the hardware devices to which the handles refer.
  dynamic_thread_pool_group *parent() const noexcept
    Returns the parent work group between successful submission and just before group_complete().
  virtual result<void> operator()(intptr_t work) noexcept = 0
  virtual void group_complete(const result<void> &cancelled) noexcept
Public Attributes

  float max_iosbusytime {0.95f}
    Maximum i/o busyness above which throttling is to begin.
  uint32_t min_iosinprogress {16}
    Minimum i/o in progress to target if iosbusytime is exceeded. The default of 16 suits SSDs; you want around 4 for spinning rust or NV-RAM.
  uint32_t max_iosinprogress {32}
    Maximum i/o in progress to target if iosbusytime is exceeded. The default of 32 suits SSDs; you want around 8 for spinning rust or NV-RAM.
Protected Member Functions

  constexpr bool _has_timer_set_relative() const noexcept
  constexpr bool _has_timer_set_absolute() const noexcept
  constexpr bool _has_timer_set() const noexcept
A work item which paces when it next executes according to i/o congestion.
Currently there is only a working implementation of this for the Microsoft Windows and Linux platforms, due to the lack of a working statfs_t::f_iosinprogress on other platforms. If retrieving that for a seekable handle does not work, the constructor throws an exception.

For seekable handles, currently reads, writes and barriers are ignored. We simply retrieve, periodically, statfs_t::f_iosinprogress and statfs_t::f_iosbusytime for the storage devices backing the seekable handle. If the recent averaged i/o wait time exceeds max_iosbusytime and the i/o in progress exceeds max_iosinprogress, next() will start setting the default deadline passed to io_aware_next(). Thereafter, every 1/10th of a second, if statfs_t::f_iosinprogress is above max_iosinprogress, it will increase the deadline by 1/16th, whereas if it is below min_iosinprogress, it will decrease the deadline by 1/16th. The default deadline chosen is always the worst of all the storage devices of all the handles. This reduces concurrency within the kernel thread pool in order to reduce congestion on the storage devices. If at any point statfs_t::f_iosbusytime drops below max_iosbusytime as averaged across one second, and statfs_t::f_iosinprogress drops below min_iosinprogress, the additional throttling is completely removed. io_aware_next() can ignore the default deadline passed into it, and can set any other deadline.
For non-seekable handles, the handle must have an i/o multiplexer set upon it, and on Microsoft Windows, that i/o multiplexer must be utilising the IOCP instance of the global Win32 thread pool. For each of reads, writes and barriers which is non-zero, a corresponding zero-length i/o is constructed and initiated. When the i/o completes, and all readable handles in the work item's set have data waiting to be read, and all writable handles in the work item's set have space to allow writes, only then is the work item invoked with the next piece of work.
io_aware_work_item(span<byte_io_handle_awareness> hs)  [inline, explicit]

Constructs a work item aware of i/o done to the handles in hs.
Note that the reads, writes and barriers are normalised to proportions out of 1.0 by this constructor, so if for example you had reads/writes/barriers = 200/100/0, after normalisation those become 0.66/0.33/0.0 such that the total is 1.0. If reads/writes/barriers = 0/0/0 on entry, they are replaced with 0.5/0.5/0.0.

Note that normalisation is across all i/o handles in the set, so three handles each with reads/writes/barriers = 200/100/0 on entry would have 0.22/0.11/0.0 each after construction.
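The normalisation arithmetic above can be sketched as follows. This is an illustration, not the constructor's actual code: awareness and normalise() are hypothetical stand-ins for the reads/writes/barriers fields of byte_io_handle_awareness, and how the 0.5/0.5/0.0 fallback interacts with multiple all-zero handles is an assumption here.

```cpp
#include <cstddef>

// Hypothetical stand-in for the reads/writes/barriers fields of
// byte_io_handle_awareness, before and after normalisation.
struct awareness
{
  float reads, writes, barriers;
};

// Normalise the ratios across ALL handles in the set so that the grand total
// is 1.0, as documented above. All-zero input is replaced with 0.5/0.5/0.0
// (assumed here to apply per handle).
void normalise(awareness *hs, size_t n)
{
  float total = 0;
  for(size_t i = 0; i < n; i++)
  {
    total += hs[i].reads + hs[i].writes + hs[i].barriers;
  }
  if(total == 0)
  {
    for(size_t i = 0; i < n; i++)
    {
      hs[i] = {0.5f, 0.5f, 0.0f};
    }
    return;
  }
  for(size_t i = 0; i < n; i++)
  {
    hs[i].reads /= total;
    hs[i].writes /= total;
    hs[i].barriers /= total;
  }
}
```

With three handles each at 200/100/0 the divisor is 900, so each handle ends up at roughly 0.22/0.11/0.0, matching the worked example in the text.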
group_complete(const result<void> &cancelled)  [inline, virtual, noexcept, inherited]

Invoked by the i/o thread pool when all work in this thread pool group is complete.

cancelled indicates if this is an abnormal completion. If its error compares equal to errc::operation_cancelled, then stop() was called.

Just before this is called for all work items submitted, the group becomes reset to fresh, and parent() becomes null. You can resubmit this work item, but do not submit other work items until their group_complete() has been invoked.

Note that this function is called from multiple kernel threads. dynamic_thread_pool_group::current_work_item() may have any value during this call.
operator()(intptr_t work)  [pure virtual, noexcept, inherited]

Invoked by the i/o thread pool to perform the next item of work.

Parameters:
  work: The value returned by next().

Note that this function is called from multiple kernel threads, and may not be the kernel thread from which next() was called. dynamic_thread_pool_group::current_work_item() will always be this during this call.