As of late 2021, Stork is able to monitor Kea configuration, but not able to change it.
|
|
|
|
|
Each of the following use cases emphasizes a certain aspect of configuration management. In many actual deployments, several use cases may apply.
|
|
|
|
|
|
|
|
|
UC1: **Fresh deployment** - This is a greenfield deployment. Stork is deployed together with a new Kea instance carrying only a minimal configuration (control agent, logging). It should be possible to define the remaining parameters (networks, subnets, pools, reservations, options) in Stork and then push the configuration to Kea. There should be some feedback to the administrator as the configuration is validated, applied to the Kea server, and saved in the local Kea config file.
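As a sketch of what such a minimal starting point could look like, a greenfield `kea-dhcp4` file might contain little more than the control channel and logging; the socket path, interface name, and log file below are purely illustrative:

```json
{
    "Dhcp4": {
        "control-socket": {
            "socket-type": "unix",
            "socket-name": "/tmp/kea4-ctrl-socket"
        },
        "interfaces-config": {
            "interfaces": [ "eth0" ]
        },
        "loggers": [
            {
                "name": "kea-dhcp4",
                "output_options": [ { "output": "/var/log/kea-dhcp4.log" } ],
                "severity": "INFO"
            }
        ]
    }
}
```

Everything else (subnets, pools, reservations, options) would then be filled in from Stork.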
|
|
|
|
|
|
|
|
|
UC2: **Existing Kea config file** - There is an existing Kea deployment with some configuration already done by the sysadmin. The current running configuration is stored in a local config file. The Stork server and agent are deployed. The sysadmin now wants to view and change this Kea configuration using Stork. Once a change is applied in Stork, it should be deployed to this single existing Kea instance. It may be important to ensure that the local configuration file has not been changed after Stork retrieved it and before Stork pushes the updates, particularly if Stork is only pushing a partial configuration.
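One simple way to detect that the local file changed underneath Stork is to record a digest of the file at retrieval time and compare it again just before pushing. This is only a sketch; the function names are hypothetical and not part of any Stork API:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def safe_to_push(path: str, digest_at_pull: str) -> bool:
    """True only if the config file is unchanged since Stork pulled it."""
    return file_digest(path) == digest_at_pull
```

If the digests disagree, Stork could refuse the partial update and re-pull the configuration instead of silently overwriting manual edits.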
|
|
|
|
|
|
|
|
|
UC3: **HA, local configuration** - This is similar to UC2, except there is a pair of HA servers already configured using local configuration files. The Kea servers share most of their configuration (the same shared networks, subnets and options), but there are some differences between partners (different pools, server names). The sysadmin now wants to manage the configuration of both servers using Stork. Once a change is applied in Stork, it should be deployed to both HA partners. Also, see related UC9.
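For reference, the per-partner difference is typically small: both servers load the HA hook library with the same peer list, and essentially only `this-server-name` (plus any locally served pools) differs between them. A sketch, with illustrative paths and addresses:

```json
{
    "hooks-libraries": [
        {
            "library": "/usr/lib/kea/hooks/libdhcp_ha.so",
            "parameters": {
                "high-availability": [
                    {
                        "this-server-name": "server1",
                        "mode": "hot-standby",
                        "peers": [
                            { "name": "server1", "url": "http://192.0.2.1:8000/", "role": "primary" },
                            { "name": "server2", "url": "http://192.0.2.2:8000/", "role": "standby" }
                        ]
                    }
                ]
            }
        }
    ]
}
```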
|
|
|
|
|
|
|
|
|
> It seems like this is the obvious use case for the config backend, isn't it, where the shared data is in the config backend? I think we need to prioritize either reading/modifying individual local config files or the config backend. I think recognizing where segments of multiple separate configurations are the same is a complicated, advanced task, so if we want to manage shared configuration elements, we could require that those are in the config backend.
|
|
|
|
|
|
|
|
|
> I am not sure if this is meant to be something other than multiple Kea servers sharing a configuration backend. Is there some way that HA pairs are special here?
|
|
|
|
|
|
|
|
|
UC4: **Config Backend** - An existing Kea deployment is using the Config Backend, which stores subnets, pools, and options in a MySQL database. There will be multiple Kea servers, likely sharing some data in the config backend. Other configuration information is stored in the config file on the individual Kea servers. The Stork server and agents are deployed and *from now on the Kea configurations should be managed with Stork*.
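For orientation, a Kea server attached to a MySQL config backend carries a fragment along these lines in its local file (credentials and host illustrative); subnets, pools, and options then live in the database rather than in the file:

```json
{
    "Dhcp4": {
        "server-tag": "dhcp-a",
        "config-control": {
            "config-databases": [
                {
                    "type": "mysql",
                    "name": "kea",
                    "user": "kea",
                    "password": "secret",
                    "host": "db.example.org"
                }
            ],
            "config-fetch-wait-time": 20
        },
        "hooks-libraries": [
            { "library": "/usr/lib/kea/hooks/libdhcp_mysql_cb.so" }
        ]
    }
}
```

The `server-tag` determines which shared configuration elements in the database apply to this particular server.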
|
|
|
|
|
|
> In this case, will Stork manage only the data in the local config file, or will it manage the data in the configuration backend as well? ... What does it mean that "from now on the configuration should be managed with Stork"? Does that mean the config backend is disabled somehow? Why? A change in the Stork server should push changes to the config backend, and the Kea servers will continue to pull data from the config backend. This way we can make one change and update a number of servers that share that configuration element.
|
|
|
|
|
|
UC5: **Clone an existing Kea server** - This is a logical extension of the fresh deployment scenario, aimed at large deployments. Such deployments want centralized configuration storage (Stork or the CB) and the ability to spin up new Kea VMs easily. If the existing Kea instances are not able to keep up, more instances should be created, and Stork should be able to push configuration to them reasonably easily.
|
|
|
|
|
|
It should be possible to browse Kea configurations stored locally in Stork, select one, duplicate it, 'save as' with another name, make changes to it, and then apply it to a minimal machine as in UC1. Creating the VM with the minimal config on it is a likely phase 2 of this use case, automating even more of the work for the admin.
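The 'save as' step above could be as simple as deep-copying a stored configuration under a new name and patching the handful of host-specific values. A minimal sketch; the field names are hypothetical, not Stork's actual data model:

```python
import copy

def clone_config(stored: dict, new_name: str, overrides: dict) -> dict:
    """Duplicate a stored Kea configuration under a new name,
    applying per-host overrides (e.g. interface names, addresses)."""
    clone = copy.deepcopy(stored)
    clone["name"] = new_name                      # hypothetical Stork-side label
    clone.setdefault("config", {}).update(overrides)
    return clone

original = {"name": "branch-office", "config": {"interfaces": ["eth0"]}}
replica = clone_config(original, "branch-office-2", {"interfaces": ["eth1"]})
```

Using a deep copy ensures edits to the clone never leak back into the original stored configuration.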
|
|
|
|
|
|
UC6: **Reverting changes** - The sysadmin implemented a change and there appears to be a problem. He/she would like to roll back to an older, known good configuration. One step in this process should be to compare the new/current configuration to an earlier configuration, highlighting the differences.
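Highlighting the differences could amount to flattening both JSON configurations into dotted paths and listing the keys whose values disagree; a minimal sketch, not Stork's actual diff mechanism:

```python
def flatten(cfg: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dotted-path keys for easy comparison."""
    out = {}
    for key, value in cfg.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

def config_diff(old: dict, new: dict) -> dict:
    """Map each differing dotted path to its (old, new) value pair."""
    a, b = flatten(old), flatten(new)
    return {k: (a.get(k), b.get(k))
            for k in sorted(set(a) | set(b)) if a.get(k) != b.get(k)}
```

Rolling back is then just re-applying the stored older configuration, with the diff serving as the "what changed" summary shown to the admin first.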
|
|
|
|
|
|
> This is exactly what a VCS is for - I wonder if in a larger deployment people would just keep config files in a git repo and look there for changes? Is it possible for Stork to leverage that? I see Stork as aimed at smaller deployments and less-sophisticated users, however...
|
|
|
|
|
|
UC7: **Audit** - In a larger deployment, there is a group of admins. There should be an audit trail of who implemented a given change, and some history of the configuration elements that were changed.
|
|
|
|
|
|
|
|
|
UC8: **Planning changes** - In more complex environments, larger and strategic changes are planned in advance. It should be possible to prepare a configuration change and validate it, but not apply it immediately, and to define a point in time when it should be deployed. Examples: renumber to a new ISP, retire an old printer/router/etc.
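A deferred change could be modeled as a prepared, already-validated configuration plus an activation time. This is only a sketch under assumed names, not Stork's design:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlannedChange:
    description: str
    config: dict        # the prepared, already-validated configuration
    apply_at: datetime  # point in time when it should be deployed

    def is_due(self, now: datetime) -> bool:
        """True once the scheduled deployment time has been reached."""
        return now >= self.apply_at

change = PlannedChange(
    description="renumber to new ISP",
    config={"Dhcp4": {"subnet4": [{"subnet": "198.51.100.0/24"}]}},
    apply_at=datetime(2022, 3, 1, 2, 0),
)
```

A background task would periodically check `is_due` and push the stored configuration when the scheduled time arrives.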
|
|
|
|
|
|
> Staging and scheduling changes is very important, especially where you need to allow time for leases to expire to, say, empty out a pool or subnet you are trying to free up. IMHO this is equally valuable in a smaller deployment as in a large one.
|
|
|
|
|
|
UC9: **Enabling HA** - This is somewhat related to UC2. The initial configuration is a stand-alone Kea server. The sysadmin now wants to enable HA and needs to come up with a configuration for the second server. Stork should allow extending stand-alone config into HA-enabled mode and then clone it (with the necessary changes) to the second server.
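Extending a stand-alone configuration into HA mode could be sketched as: inject the HA hook parameters into the existing configuration, then produce the partner's copy with `this-server-name` swapped. The library path, names, and addresses below are illustrative only:

```python
import copy

def enable_ha(standalone: dict, this_name: str, peers: list) -> dict:
    """Return a copy of a stand-alone Kea config with HA hook settings added."""
    cfg = copy.deepcopy(standalone)
    cfg.setdefault("Dhcp4", {}).setdefault("hooks-libraries", []).append({
        "library": "/usr/lib/kea/hooks/libdhcp_ha.so",  # illustrative path
        "parameters": {"high-availability": [{
            "this-server-name": this_name,
            "mode": "hot-standby",
            "peers": peers,
        }]},
    })
    return cfg

peers = [
    {"name": "server1", "url": "http://192.0.2.1:8000/", "role": "primary"},
    {"name": "server2", "url": "http://192.0.2.2:8000/", "role": "standby"},
]
standalone = {"Dhcp4": {"subnet4": [{"subnet": "192.0.2.0/24"}]}}
primary = enable_ha(standalone, "server1", peers)
standby = enable_ha(standalone, "server2", peers)  # clone for the partner
```

Both derived configurations share the subnets and peer list; only the server's own identity differs, which matches the small per-partner delta described in UC3.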
|
|
|
|
Kea configuration is incredibly complicated. It is unlikely that every knob would …
|
|
|
|
|
## Requirements
|
|
|
|
|
|
|
|
|
|
Stork should guide the user in making and updating Kea configurations, providing tabular and graphical representation of configuration data where that is more convenient and reduces errors.
|
|
|
|
|
|
Stork should then compose a valid Kea configuration snippet (or an entire configuration), including formatting (brackets, etc.), and manage the validation and application of the configuration change.
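Composing the snippet is largely a serialization problem once the data is held in a structured form. A sketch, with a hypothetical helper name, where building the structure and letting a JSON serializer supply the brackets and formatting guarantees syntactic validity:

```python
import json

def compose_subnet_snippet(subnet: str, pools: list) -> str:
    """Render a Kea subnet4 entry as formatted JSON text."""
    data = {"subnet4": [{
        "subnet": subnet,
        "pools": [{"pool": p} for p in pools],
    }]}
    return json.dumps(data, indent=4)

snippet = compose_subnet_snippet("192.0.2.0/24", ["192.0.2.10 - 192.0.2.100"])
```

Semantic validation (overlapping pools, unknown option codes, and so on) still has to happen separately, either in Stork or by asking Kea to test the configuration.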
|
|
|
|
|
|
Stork should be able to send a complete configuration file to Kea, including any options that can be configured locally (that are not required for the Stork and network connection), but not all of the configuration needs to be represented graphically. This allows a more advanced user to use Stork for centrally managing configurations that are more complex than the GUI can display, and it helps new admins learn the Kea configuration language by reading the actual JSON.
|
|