[RFE] Add ability to specify request and limits for ptp daemon
williamcaban opened this issue · comments
In setups with a high density of processes or a low-latency configuration, the ptp-daemon should run with guaranteed QoS so that ptp4l does not experience latency from competing for resources.
The asks are:
- enable the ability to specify resources.requests and resources.limits for the ptp-daemon through the CRs
- enable the ability to annotate the Pod and assign it to a runtime class (e.g. as PAO does) so that it can run with guaranteed resources (e.g. a dedicated core and memory) and maintain NUMA alignment with the NIC from which it reads the timing.
- Alternatively, provide a way to guarantee NUMA alignment between the ptp-daemon and the timing NICs when running on multi-socket nodes.
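As a sketch of the first two asks, a Pod spec along these lines would give the daemon guaranteed QoS. The fields under `spec` are standard Kubernetes Pod API; the runtime class name, image, and annotation shown are placeholders, not the actual ptp-operator output:

```yaml
# Sketch only: illustrates Guaranteed QoS, not the real ptp-operator manifest.
# Setting cpu/memory requests equal to limits places the Pod in the
# Guaranteed QoS class, so its core is not reclaimed under contention.
apiVersion: v1
kind: Pod
metadata:
  name: linuxptp-daemon
  annotations:
    cpu-load-balancing.crio.io: "disable"    # PAO-style annotation (example)
spec:
  runtimeClassName: performance-example      # hypothetical runtime class
  containers:
  - name: linuxptp-daemon-container
    image: linuxptp                          # placeholder image
    resources:
      requests:
        cpu: "1"                             # one whole core
        memory: "100Mi"
      limits:
        cpu: "1"                             # requests == limits => Guaranteed
        memory: "100Mi"
```

For the third ask, note that with the kubelet Topology Manager policy set to single-numa-node, a Guaranteed Pod requesting an integer number of CPUs is NUMA-aligned with devices it requests through a device plugin, which is one existing route to aligning the daemon with the timing NIC.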
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.