These values could be configurable. If so, we should enforce that:

- interval_1 < interval_2
- interval_1 > (statistics pull rate * 2)

RPS is loosely calculated as:

    response packets sent during the interval / interval duration in seconds

where response packets are:

DHCPv4 = DHCPACKs
DHCPv6 = DHCPV6_REPLYs

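The loose calculation can be sketched as follows; the helper name is hypothetical and not part of the proposed code:

```go
package main

import "fmt"

// rps computes responses-per-second from the number of response packets
// (DHCPACKs for v4, DHCPV6_REPLYs for v6) sent over a span of seconds.
func rps(responsesSent, durationSecs int64) float64 {
	if durationSecs <= 0 {
		return 0 // no elapsed time yet; avoid dividing by zero
	}
	return float64(responsesSent) / float64(durationSecs)
}

func main() {
	fmt.Println(rps(8640, 3600)) // 2.4 responses per second over one hour
}
```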
If at some point later we care to add additional packet types (e.g. DHCPOFFERs
and DHCPV6_ADVERTISEs), we can do so and the label will still be meaningful.

If the data we have for a given Kea daemon does not span an entire interval, we will
display the value based on the data we do have. We could toggle the column color or
put an asterisk next to it to signify that we do not yet have a full interval.
For example, if we only have 12 hours of data, we could alter the 24 hour column's
appearance.

We will not be adding anything new to Kea to support this. The data will be derived
from the following existing Kea statistics:

- pkt4-ack-sent (v4 servers)
- pkt6-reply-sent (v6 servers)

These statistics are not currently mined by Stork and so the Kea StatsPuller will
need to be extended to retrieve and store them. Alternatively, we could add a
new puller if we want more individualized control.

We will need to retain the last recorded value and sample time for this statistic
for each daemon. We can use a map of values:

```
type SampledValue struct {
    Sampled_at int64 // time statistic was recorded (secs since epoch)
    Value      int64 // e.g. value of pkt4-ack-sent or pkt6-reply-sent
}

ResponsesSent := map[daemon_id]SampledValue
```

These are cumulative counts as reported by the daemon
since startup, statistics reset, or rollover (unlikely). For now this map will
likely only be held in memory and not persisted to storage.

We will use these values along with the value pulled at each pull cycle to
create and persist a running history of the incremental changes (aka the deltas) in
responses sent between consecutive statistic pulls, in a new table:

```
ResponsesSentHistory {
    daemon_id      - bigint
    interval_start - timestamp // timestamp of this interval
    duration       - bigint    // seconds in this interval
    responses_sent - bigint    // number of responses in this interval
}
```

This will produce one row per daemon per pull iteration. Each row represents
the difference between the previous absolute value (from the map) and the newly mined
absolute value for a given statistic. We also save the difference between the
two sample times so we have a precise measure of the interval described by the row.

If we assume a statistic pull rate of 60 seconds, then this will produce 1440 rows
per daemon per day. Rows can be aged off this table once they are more than interval_2 old.

## On each statistic pull iteration

For each Kea daemon, we do the following:

1. Pull the new statistic value from the daemon:

```
sampled_at := time.Now()
value = pktX-<type>-sent from Kea getStatistic()
```

2. Calculate the delta:
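   The delta computation is not spelled out here; a sketch under the assumption that a current value smaller than the previous one indicates a daemon restart or statistics reset, in which case the current value itself is the best available count for the interval:

```go
package main

import "fmt"

// calcDelta returns the number of responses sent since the previous sample.
// Assumption: if the current counter is smaller than the previous one, the
// daemon restarted or its statistics were reset, so the current value is
// taken as the count for this interval.
func calcDelta(previous, current int64) int64 {
	if current < previous {
		return current
	}
	return current - previous
}

func main() {
	fmt.Println(calcDelta(1000, 1060)) // 60 responses since the last pull
	fmt.Println(calcDelta(1000, 40))   // 40: counter reset detected
}
```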
3. Insert a new row into ResponsesSentHistory

```
insert (daemon_id, sampled_at, responses_sent, duration)
```

4. Update previous values in ResponsesSent:

```
ResponsesSent[daemon_id].Sampled_at = sampled_at
ResponsesSent[daemon_id].Value = value
```

After all daemons have been processed, records older than the current time minus (interval_2 + interval_1) could be deleted.

## Fetching RPS for Display

The RPS for all daemons for an interval could be fetched in a single select:

```
SELECT daemon_id, SUM(responses_sent) as responses, SUM(duration) as duration
FROM ResponsesSentHistory
WHERE interval_start >= ? AND interval_start < ?
GROUP BY daemon_id;
```

This would produce a single row per daemon:
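Converting those rows into a displayable RPS figure is then one division per daemon. A sketch with a hypothetical struct mirroring the query result (names are illustrative):

```go
package main

import "fmt"

// queryRow mirrors one result row of the SELECT above: summed responses
// and summed duration for one daemon over the requested window.
// (Hypothetical struct, for illustration only.)
type queryRow struct {
	daemonID  int64
	responses int64
	duration  int64
}

// rpsByDaemon turns query rows into a responses-per-second figure per daemon.
func rpsByDaemon(rows []queryRow) map[int64]float64 {
	out := make(map[int64]float64, len(rows))
	for _, r := range rows {
		if r.duration > 0 { // skip daemons with no elapsed data yet
			out[r.daemonID] = float64(r.responses) / float64(r.duration)
		}
	}
	return out
}

func main() {
	rows := []queryRow{{daemonID: 7, responses: 7200, duration: 3600}}
	fmt.Println(rpsByDaemon(rows)) // map[7:2]
}
```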

Response: It is simpler and possibly faster to "add to the end" and age off "fro

## Authors (please add yourself when you contribute):

List of authors as of July 1st, 2020:

* Thomas Markwalder