Each of the following use cases emphasizes a certain aspect of the configuration management mechanism.
|
|
|
|
|
UC1: **Fresh deployment** - This is a greenfield deployment. Stork is deployed alongside a new Kea instance that has only a minimal configuration (Control Agent, logging). It should be possible to define the remaining parameters (networks, subnets, pools, reservations, options) in Stork and then push the configuration to Kea. There should be feedback to the administrator as the configuration is validated, applied to the Kea server, and saved in the local Kea config file.
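For illustration, such a minimal starting configuration might look roughly like the sketch below. The socket path and logger settings are assumptions for the example, not values mandated by Kea:

```json
{
    "Dhcp4": {
        "control-socket": {
            "socket-type": "unix",
            "socket-name": "/tmp/kea4-ctrl-socket"
        },
        "loggers": [
            {
                "name": "kea-dhcp4",
                "severity": "INFO",
                "output_options": [ { "output": "stdout" } ]
            }
        ]
    }
}
```

Everything else (subnets, pools, reservations, options) would then be defined in Stork and pushed to the server.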
|
|
|
|
|
|
|
|
|
UC2: **Existing Kea config file** - There is an existing Kea deployment with some configuration already done by the sysadmin. This current running configuration is stored in a local config file. Stork server and agent are deployed. The sysadmin now wants to view and change this Kea configuration using Stork. Once a change is applied in Stork, it should be deployed to this existing single Kea instance. It is important to ensure that the local configuration file has not been changed after Stork retrieved it, and before Stork pushes the updates, particularly if Stork is only pushing a partial configuration.
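One way to detect such out-of-band edits is to record a digest of the config file when Stork retrieves it and compare it again just before pushing updates. A minimal sketch (file names and function names here are hypothetical, not Stork's actual API):

```python
import hashlib
import os
import tempfile
from pathlib import Path

def config_digest(path: Path) -> str:
    """SHA-256 digest of the config file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def safe_to_push(path: Path, digest_at_retrieval: str) -> bool:
    """A (partial) push is safe only if the file is unchanged since retrieval."""
    return config_digest(path) == digest_at_retrieval

# Demonstration with a throwaway file standing in for the local Kea config.
fd, name = tempfile.mkstemp()
os.close(fd)
cfg = Path(name)
cfg.write_text('{"Dhcp4": { }}')
remembered = config_digest(cfg)               # recorded when Stork reads the file
assert safe_to_push(cfg, remembered)          # untouched: safe to push
cfg.write_text('{"Dhcp4": {"renew-timer": 900}}')  # edited behind Stork's back
assert not safe_to_push(cfg, remembered)      # digest mismatch: refuse / re-fetch
cfg.unlink()
```

On a mismatch, Stork would have to re-fetch the file and reconcile rather than overwrite the sysadmin's manual edits.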
|
|
|
|
|
|
|
|
|
UC3: **HA, local configuration** - This is probably the most common configuration for Kea. This is similar to UC2, except there is a pair of HA servers already configured using local configuration files. The Kea servers share most of their configuration (the same shared networks, subnets and options), but there are some differences between partners (different pools, server names). The sysadmin now wants to manage the configuration of both servers using Stork. Once a change is applied in Stork, it should be deployed to both HA partners. Also, see related UC9.
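Conceptually, this means Stork has to render two concrete configurations from one shared template plus small per-partner differences. A sketch of that idea (the key names are illustrative, not Kea's exact schema):

```python
import copy

def render_partner_config(shared: dict, overrides: dict) -> dict:
    """Apply per-partner overrides, key by key, on top of the shared
    configuration, without mutating the shared template."""
    config = copy.deepcopy(shared)
    for key, value in overrides.items():
        config[key] = value
    return config

shared = {
    "subnet": "192.0.2.0/24",
    "option-data": [{"name": "routers", "data": "192.0.2.1"}],
}
server1 = render_partner_config(
    shared, {"server-name": "server1", "pools": [{"pool": "192.0.2.10-192.0.2.100"}]})
server2 = render_partner_config(
    shared, {"server-name": "server2", "pools": [{"pool": "192.0.2.101-192.0.2.200"}]})

assert server1["subnet"] == server2["subnet"]   # shared part identical
assert server1["pools"] != server2["pools"]     # partner-specific pools differ
```

A change applied in Stork would then be re-rendered and deployed to both partners.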
|
|
|
|
|
|
|
|
|
|
|
|
UC4: **Config Backend** - An existing Kea deployment is using the Config Backend, which stores subnets, pools, and options in a MySQL database. There will be multiple Kea servers, likely sharing some data in the config backend. Other configuration information is stored in the config file on the individual Kea servers. The Stork server and agents are deployed and *from now on the Kea configurations should be managed with Stork*.
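If Stork pushes changes into the config backend, it would presumably do so through Kea's management API (the `remote-*` commands provided by the cb_cmds hook). A sketch of what such a request payload could look like; the subnet values and server tags are made up for illustration:

```python
import json

def kea_command(command: str, service: list, arguments: dict) -> str:
    """Serialize a Kea management API request, as POSTed to the Control Agent."""
    return json.dumps({"command": command,
                       "service": service,
                       "arguments": arguments})

# A subnet change made in Stork, pushed to the MySQL config backend;
# the Kea servers sharing that backend then pick the change up from there.
request = kea_command(
    "remote-subnet4-set",
    ["dhcp4"],
    {
        "remote": {"type": "mysql"},
        "server-tags": ["all"],
        "subnets": [{"subnet": "192.0.2.0/24"}],
    },
)
```

This model would let one change in Stork update every server that shares the configuration element, without Stork replacing the config backend itself.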
|
|
|
|
|
|
> In this case, will Stork manage only the data in the local config file, or will it manage the data in the configuration backend as well? ... What does it mean that "from now on the configuration should be managed with Stork"? Does that mean the config backend is disabled somehow? Why? A change in the Stork server should push changes to the Config Backend, and the Kea servers will continue to pull data from the Config Backend. This way we can make one change and update a number of servers that share that configuration element.
|
|
|
> Or, in this case, will Stork manage the Kea servers directly, with the servers then updating the config backend where their running configurations are saved (so Stork is not *replacing* the config backend function)?
|
|
|
|
|
|
UC5: **Clone an existing Kea server** - This is a logical extension of the fresh deployment scenario, aimed at large deployments that want centralized configuration storage (Stork or CB) and the ability to spin up new Kea VMs easily. If the existing Kea instances are not able to keep up, more instances should be created, and Stork should be able to push the configuration to them reasonably easily.
|
|
|
|
|
|
|
|
|
It should be possible to browse Kea configurations stored locally in Stork, select one, duplicate it, 'save as' with another name, make changes to it, and then apply it to a minimal machine as in UC1. Creating the VM with the minimal config on it is a likely phase 2 of this use case, automating even more of the work for the admin. It might be good to use a hook for creating the VM, so if we publish one (e.g. for AWS), users can use it as a model to create hooks for other VM systems.
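The browse/duplicate/'save as' flow is essentially a deep copy of a stored configuration under a new name, plus edits. A minimal sketch, with a plain dict standing in for Stork's config store and hypothetical server names:

```python
import copy

def save_as(store: dict, source: str, new_name: str, changes: dict) -> dict:
    """Duplicate a stored config under a new name and apply the given edits."""
    clone = copy.deepcopy(store[source])
    clone.update(changes)
    store[new_name] = clone
    return clone

store = {"kea-site-1": {"Dhcp4": {"interfaces-config": {"interfaces": ["eth0"]}}}}
save_as(store, "kea-site-1", "kea-site-2", {})

assert store["kea-site-2"] == store["kea-site-1"]       # same content...
assert store["kea-site-2"] is not store["kea-site-1"]   # ...independent copy
```

The clone would then be pushed to a freshly provisioned minimal machine exactly as in UC1.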
|
|
|
|
|
|
UC6: **Reverting changes** - The sysadmin implemented a change and there appears to be a problem. He/she would like to roll back to an older, known good configuration. One step in this process should be to compare the new/current configuration to an earlier configuration, highlighting the differences.
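The comparison step could be as simple as a unified diff of the two pretty-printed configurations. A sketch using Python's standard difflib (the configuration values are illustrative):

```python
import difflib
import json

def config_diff(known_good: dict, current: dict) -> str:
    """Unified diff of two pretty-printed configs, highlighting the changes."""
    a = json.dumps(known_good, indent=2, sort_keys=True).splitlines(keepends=True)
    b = json.dumps(current, indent=2, sort_keys=True).splitlines(keepends=True)
    return "".join(difflib.unified_diff(a, b,
                                        fromfile="known-good", tofile="current"))

known_good = {"Dhcp4": {"valid-lifetime": 4000}}
current = {"Dhcp4": {"valid-lifetime": 600}}
print(config_diff(known_good, current))
```

The highlighted lines show the sysadmin exactly what would be undone before they commit to the rollback.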
|
|
|
|
|
|
> This is exactly what a VCS is for - I wonder if, in a larger deployment, people would just keep config files in a git repo and look there for changes? Is it possible for Stork to leverage that? However, I see Stork as aimed at smaller deployments and less-sophisticated users...
|
|
|
|
|
|
UC7: **Audit** - In a larger deployment, there is a group of admins. There should be an audit trail of who implemented a given change, and some history of the configuration elements being changed.
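One way to think about the data involved: each change produces an immutable record of who changed which element, from what to what, and when. A sketch of such a record (field names are assumptions, not Stork's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable entry in the configuration audit trail."""
    user: str
    element: str            # e.g. "subnet 192.0.2.0/24, valid-lifetime"
    old_value: object
    new_value: object
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

trail: list = []
trail.append(AuditEntry("alice", "valid-lifetime", 4000, 600))

assert trail[0].user == "alice"
assert trail[0].old_value == 4000
```

Keeping the old value in each entry also gives UC6 (reverting changes) its per-element history for free.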
|
|
|
|
|
|
UC8: **Planning changes** - In more complex environments, larger and strategic changes are planned in advance. It should be possible to prepare a configuration change, validate it, but not apply it immediately, and to define a point in time at which it should be deployed. Examples: renumbering to a new ISP, retiring an old printer/router/etc.
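A prepared change is then just a validated, not-yet-applied change with a deployment time attached; a scheduler periodically picks up the ones that are due. A minimal sketch of that selection (the change names and fields are made up for illustration):

```python
from datetime import datetime, timezone

def due_changes(planned: list, now: datetime) -> list:
    """Validated, not-yet-applied changes whose deployment time has arrived."""
    return [c for c in planned
            if c["validated"] and not c["applied"] and c["deploy_at"] <= now]

planned = [
    {"name": "renumber-to-new-isp", "validated": True, "applied": False,
     "deploy_at": datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc)},
    {"name": "retire-old-router", "validated": True, "applied": False,
     "deploy_at": datetime(2030, 1, 1, 2, 0, tzinfo=timezone.utc)},
]
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
assert [c["name"] for c in due_changes(planned, now)] == ["renumber-to-new-isp"]
```

Validation happens at preparation time, so the deployment window is not lost to a typo discovered at 2 a.m.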
|