The following page describes several ideas that may be interesting for people considering contributing to Stork.
|
|
|
|
|
# Currently available ideas
|
|
|
|
|
|
|
|
|
## DHCP Degradation Canary
|
|
|
|
|
|
Stork is able to monitor Kea instances. However, admins want some additional assurance that the server is able to provide DHCP service. Note that the fact that the server is up and alive does not imply that it is always able to provide service: think about cases like running out of disk space (memfile will not be able to store leases), DB credentials problems (the DB connection works but leases cannot be inserted), firewall problems dropping responses, a bug causing Kea to get stuck in an infinite loop or to drop responses, various types of misconfiguration, etc. Therefore, a mechanism is needed that acts as a DHCP client and actually gets a lease (and then releases it quickly so as not to consume resources). The basic approach would be to conduct this check manually, by clicking a button in the Stork UI. The improved version would offer the ability to conduct such a check periodically. The difficulties are listed below.
|
|
|
|
|
|
- Figure out which component will send the probe - the Stork server or the agent. Both approaches have pros and cons.
|
|
|
- Figure out whether a DHCP server needs any special configuration to reply to DHCP queries from Stork. It is particularly interesting when the agent sends queries to a DHCP server running on the same host (localhost).
|
|
|
- If the probes are sent by the Stork server, scalability needs to be considered.
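Whichever component sends the probe, its core job is to perform a DHCP exchange starting with a DHCPDISCOVER. A minimal sketch of building that message by hand is shown below, assuming no DHCP client library is used; the field layout follows RFC 2131/2132, but the function name and values are illustrative, not an existing Stork API.

```python
import struct

DHCP_MAGIC_COOKIE = b"\x63\x82\x53\x63"  # RFC 2131: marks the start of options


def build_dhcpdiscover(mac: bytes, xid: int) -> bytes:
    """Build a minimal DHCPDISCOVER payload (BOOTP header + options)."""
    # BOOTP fixed header: op=1 (BOOTREQUEST), htype=1 (Ethernet), hlen=6, hops=0
    header = struct.pack("!BBBB", 1, 1, 6, 0)
    header += struct.pack("!I", xid)          # transaction id
    header += struct.pack("!HH", 0, 0x8000)   # secs, flags (broadcast bit set)
    header += b"\x00" * 16                    # ciaddr, yiaddr, siaddr, giaddr
    header += mac + b"\x00" * 10              # chaddr, padded to 16 bytes
    header += b"\x00" * 192                   # sname (64) + file (128), unused
    options = DHCP_MAGIC_COOKIE
    options += bytes([53, 1, 1])              # option 53: message type = DISCOVER
    options += bytes([255])                   # end option
    return header + options


packet = build_dhcpdiscover(mac=b"\x02\x00\x00\x00\x00\x01", xid=0x1234)
```

A real canary would additionally need a UDP socket with broadcast enabled (and the privileges that implies), would wait for a DHCPOFFER, complete the DORA exchange, and finally send a DHCPRELEASE so the test lease does not consume pool resources.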
|
|
|
|
|
|
|
|
|
|
|
|
## Display Kea config (JSON)
|
|
|
As of March 2021, Stork can show specific elements (networks, subnets, reservations), but lacks the ability to show the whole Kea configuration. We would like to be able to display the whole configuration assigned to a specific Kea daemon (DHCP server, Control Agent, etc.). This feature can be implemented incrementally, from the easiest step to the most sophisticated:

- In the first step, the configuration should be presented in JSON format with the possibility to show and hide certain nodes.
- In the second step, some of the elements should be made clickable; clicking them would take the user to the specific view, e.g. the shared network view, the subnet view, etc.
- Finally, in the third step, old configurations should be tracked in the Stork database, and it should be possible to view a selected configuration. It must be possible to generate a diff between any two configurations by selecting them and clicking a compare button. The diff may look the same as a diff between two JSON files in a version control system.

Configuration tracking may sound a bit complicated, but Stork already stores the current configuration in the database as JSONB, and we already have a mechanism to detect configuration changes. A new table needs to be added to store old configurations with appropriate metadata, plus a database trigger to move the current configuration to this new table.
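The history table and trigger mentioned above can be sketched as follows. This is a hypothetical schema, not Stork's actual one, and it uses SQLite (via Python) purely to keep the example self-contained; Stork itself uses PostgreSQL with JSONB, where the trigger would be written in PL/pgSQL.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE daemon_config (
    daemon_id INTEGER PRIMARY KEY,
    config    TEXT NOT NULL          -- JSON blob (JSONB in PostgreSQL)
);
CREATE TABLE daemon_config_history (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    daemon_id   INTEGER NOT NULL,
    config      TEXT NOT NULL,
    replaced_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- On every config change, archive the previous version automatically.
CREATE TRIGGER archive_old_config
BEFORE UPDATE OF config ON daemon_config
BEGIN
    INSERT INTO daemon_config_history (daemon_id, config)
    VALUES (OLD.daemon_id, OLD.config);
END;
""")

db.execute("INSERT INTO daemon_config VALUES (?, ?)", (1, '{"Dhcp4": {}}'))
db.execute("UPDATE daemon_config SET config = ? WHERE daemon_id = ?",
           ('{"Dhcp4": {"valid-lifetime": 4000}}', 1))
history = db.execute("SELECT config FROM daemon_config_history").fetchall()
```

With this design the application code never manages history explicitly: any update to the current configuration leaves the old version behind with a timestamp, which is exactly what the diff view would later query.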
|
|
|
|
|
|
## Host Reservation details
|
|
|
|
|
|
|
|
|
Stork is able to show host reservations from Kea, but as of March 2021, its capabilities are basic: only the identifier (e.g. MAC or DUID), IP address, delegated prefix, hostname, and whether the reservation was retrieved from the config file or the host backend are shown. The missing pieces are the fixed fields (e.g. bootfile, sname), DHCP options, and client classes. Displaying this additional information requires adding a detailed view of a host reservation; currently, all the available information is displayed on a list of host reservations, which is already wide. The next step is adding a button in the detailed host reservation view that checks whether the reservation is currently in use, i.e. whether the Kea server has a valid lease for it. Lease data changes dynamically, and we currently have no systematic way of gathering this information from the Kea servers. Gathering the information for the whole network is not a goal of this task; the aim is to send a lease4-get or lease6-get command on demand for a single host reservation when the button is clicked. If the server returns a valid lease, the reservation should be highlighted as in use.
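A sketch of the command side of this check is below. lease4-get and lease6-get come from Kea's lease_cmds hook library and are normally POSTed as JSON to the Kea Control Agent; the helper names here are illustrative, and the result-code interpretation (0 = lease found, 3 = no matching lease) follows Kea's general command conventions but should be verified against the Kea ARM.

```python
def make_lease_get_command(ip_address: str) -> dict:
    """Build the Kea control-channel command that looks up a lease
    by IP address, picking the v4 or v6 variant of the command."""
    is_v6 = ":" in ip_address
    return {
        "command": "lease6-get" if is_v6 else "lease4-get",
        "service": ["dhcp6" if is_v6 else "dhcp4"],
        "arguments": {"ip-address": ip_address},
    }


def reservation_in_use(answer: dict) -> bool:
    """Interpret a single answer from the daemon: result 0 means a lease
    was found; result 3 (empty) means there is no matching lease."""
    return answer.get("result") == 0


cmd = make_lease_get_command("192.0.2.7")
```

The UI button would send `cmd` to the Control Agent for the agent/app owning the reservation and highlight the row when `reservation_in_use` returns true.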
|
|
|
|
|
|
# Potential future ideas
|
|
|
|
|
|
The following areas of interest are currently blocked for various technical and business reasons. If you are interested, please reach out to the Stork team to discuss the matter.
|
|
|
|
|
|
|
|
|
## User management
|
|
|
|
|
|
As of March 2021, Stork has only two user roles: super-admin (can do everything) and admin (can do everything except manage other users). We need more fine-grained access control. The most basic addition would be a read-only user, e.g. a junior admin who can only observe the system but is not permitted to make any changes. In the future, the role system will become more sophisticated, so the solution must be extensible. In particular, the following use cases will need to be possible: a role to manage a single server, and a role to manage a certain subnet (including situations where the subnet is handled by a pair of HA servers). This is currently blocked because the Stork team needs to write down the requirements, and our early attempt indicates it is more complex than it looks at first glance.
|
|
|
|
|
|
## Showing pool status
|
|
|
|
|
|
|
|
As of March 2021, we have the ability to show statistics for networks and subnets; we can say that 30 of 250 addresses are used. The statistics are a good first approximation, but they have several flaws. First, there were bugs in the statistics that caused them to not truly reflect the pool state, in particular when several Kea instances share the same DB. Second, an overview of the pool utilization is often not enough, and admins want more detailed insight. The major difficulty here is to come up with an efficient way to keep this information roughly up to date. The current mechanisms available in Kea (e.g. lease4-get-all) are insufficient and would not scale for deployments counting devices in the millions. There is a plan to implement [incremental lease updates](https://gitlab.isc.org/isc-projects/kea/-/issues/1230), so Stork would retrieve all leases just once (that is acceptable) and then only fetch lease updates periodically. This is currently blocked until such a mechanism is implemented in Kea.
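Once Stork can mirror the lease set via incremental updates, the per-pool statistic itself is simple to compute. A minimal sketch, assuming an in-memory set of leased addresses purely for illustration (in practice the lease set would live in the database):

```python
import ipaddress


def pool_utilization(pool_start: str, pool_end: str, leased: set) -> float:
    """Fraction of addresses in [pool_start, pool_end] that hold a lease."""
    start = int(ipaddress.ip_address(pool_start))
    end = int(ipaddress.ip_address(pool_end))
    size = end - start + 1
    # Count only leases that actually fall inside this pool's range.
    in_pool = sum(1 for ip in leased
                  if start <= int(ipaddress.ip_address(ip)) <= end)
    return in_pool / size


# 10-address pool; two of the three leases fall inside it.
util = pool_utilization("192.0.2.10", "192.0.2.19",
                        {"192.0.2.11", "192.0.2.15", "198.51.100.1"})  # 0.2
```

The same integer-range comparison works for IPv6 pools, since `ipaddress.ip_address` converts both families to integers.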