In general, the amount of time
required for a checkpoint operation increases with the number of dirty pages
that the operation must write. By default, to minimize the performance impact
on other applications, SQL Server adjusts the frequency of writes that a
checkpoint operation performs. Decreasing the write frequency increases the
time the checkpoint operation requires to complete. SQL Server uses this
strategy for a manual checkpoint unless a checkpoint_duration value
is specified in the CHECKPOINT command.
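As an illustration, the two forms of a manual checkpoint look like this in T-SQL (the checkpoint_duration value is in seconds and must be an integer greater than zero):

    -- Manual checkpoint using the default, self-tuned write frequency
    CHECKPOINT;

    -- Manual checkpoint with an explicit target duration of 30 seconds
    CHECKPOINT 30;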
The performance impact of
using checkpoint_duration depends on the number of dirty
pages, the activity on the system, and the actual duration specified. For
example, if the checkpoint would normally complete in 120 seconds, specifying
a checkpoint_duration of 45 seconds causes SQL Server to
devote more resources to the checkpoint than would be assigned by default. In
contrast, specifying a checkpoint_duration of 180 seconds
would cause SQL Server to assign fewer resources than would be assigned by
default. In general, a short checkpoint_duration increases
the resources devoted to the checkpoint, while a long checkpoint_duration
reduces them. SQL Server always completes a
checkpoint if possible, and the CHECKPOINT statement returns immediately when the
checkpoint completes. Therefore, a checkpoint may finish
sooner than the specified duration, or it may run longer.
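As a sketch of the worked example above, assuming a hypothetical database whose default checkpoint takes about 120 seconds:

    -- Shorter than the ~120-second default: SQL Server devotes more
    -- resources and finishes the checkpoint faster.
    CHECKPOINT 45;

    -- Longer than the default: SQL Server assigns fewer resources,
    -- reducing the impact on other workloads.
    CHECKPOINT 180;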