In addition, access to the features which require Kea premium hooks calls for an access token to the Kea premium repository. The token can be found on the following page, which is available to ISC employees and paid ISC customers: https://cloudsmith.io/~isc/repos/kea-1-7-prv/setup/#formats-deb. After logging in to Cloudsmith, the token is embedded within the following link: https://dl.cloudsmith.io//isc/kea-1-7-prv/cfg/setup/bash.deb.sh.
To start the demo with the premium Kea features, run the following (the exact command below is a sketch; substitute your own token):
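```
# A plausible invocation (not verified here): the rake docker_up target used
# later in this walkthrough, combined with the cs_repo_access_token variable
# described in the host reservations section. Substitute your own token.
rake docker_up cs_repo_access_token=<your access token>
```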
Once Stork becomes a bit more mature, we're planning to have a public demo site. Stork documentation is available at https://stork.readthedocs.io. It may be useful during this self-guided tour.
Log in using admin/admin credentials.
Note the version displayed. It is not hardcoded; it is the version of the Stork server, retrieved over the REST API.
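You can query that API yourself; a minimal sketch, assuming the demo UI is served on localhost port 8080 and that the server exposes a version endpoint under /api:

```
# Assumption: the demo's Stork server listens on localhost:8080 and serves
# its REST API under /api; adjust if your setup differs.
curl http://localhost:8080/api/version
```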
Check your role in the system.
You logged in as admin, which is an account with the super-admin role. You can check your currently assigned roles by going to your Profile page: click on the little triangle button next to Logout.
Note: as of 0.9 there are two roles defined: super-admin (can do everything, including managing and adding new users) and admin (can do everything except managing or adding users). A third role, read-only user, is coming soon.
Add new user.
Since you're logged in as super-admin, you can see the Configuration menu and Users within it. Click on it and you'll see a list of all users. Click on Create User Account to create a new account. It's recommended to give the new account the admin role, so the new user can't create more users. Go ahead and try it.
Add new BIND9 machine to monitor.
Go to Services->Machines and click Add New Machine, type in agent-bind9.
Normally you would type in an FQDN or an IP address of the machine you want to monitor. By default the demo is deployed using Docker, with several containers simulating machines running Kea and BIND in various modes of operation; agent-bind9 is the name of one such container. Note that you didn't specify what kind of software is running on the agent-bind9 machine. The Stork server connected to the Stork agent running there, and the agent looked for Kea and BIND and found only BIND, so Stork detects a BIND app running on that machine.
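If you're curious which simulated machines are available, you can list the demo containers; this assumes you run the command from the Stork directory where the demo's docker-compose configuration lives:

```
# List the demo containers (run from the Stork directory used to start the demo).
docker-compose ps
```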
Inspect the agent-bind9 machine.
Click around. As of 0.9 the BIND capabilities are basic: Stork can check whether the BIND process is running and display its version, the number of zones configured, the time since its last reconfiguration, and more.
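If you want to cross-check the version Stork reports, you can ask named directly inside the container (a quick sanity check; the container name is the one used by this demo):

```
# Print the BIND version directly from the agent-bind9 container.
docker-compose exec agent-bind9 named -v
```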
Add new Kea machine to monitor.
Go to Services->Machines and click Add New Machine, type in agent-kea. The procedure is the same as before, but this time Stork detected Kea servers running. Notice that a problem is reported.
Kea ships with the CA (Control Agent) preconfigured with control sockets for DHCPv4, DHCPv6, and DDNS. This simplifies deployment: the CA tries to connect to all of those daemons and continues with only those that respond, which makes it easy to deploy daemons selectively. However, Stork looks at the CA config, determines that 3 daemons are expected but only DHCPv4 is running, and therefore reports the non-running DHCPv6 and DDNS daemons as a problem.
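For reference, this is roughly what such a CA config looks like; a minimal sketch, where the port and socket paths are illustrative rather than the demo's exact values:

```
{
  "Control-agent": {
    "http-host": "0.0.0.0",
    "http-port": 8000,
    // Three sockets declared, so Stork expects three daemons to be running.
    "control-sockets": {
      "dhcp4": { "socket-type": "unix", "socket-name": "/tmp/kea4-ctrl-socket" },
      "dhcp6": { "socket-type": "unix", "socket-name": "/tmp/kea6-ctrl-socket" },
      "d2":    { "socket-type": "unix", "socket-name": "/tmp/kea-ddns-ctrl-socket" }
    }
  }
}
```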
Stork developers have several ideas for how to deal with this situation, but we'd love to hear your thoughts. We could simply modify the docker container to run all daemons; this would give the nice feeling of seeing all green, but wouldn't demonstrate that Stork is able to detect problems. A second alternative would be to modify the CA config so it would only attempt to connect to the daemons that are actually running (DHCPv4 only). Third, we could add an Ignore or That's OK button that the user could click to indicate that it's fine that DHCPv6 or DDNS is not running. Ultimately, the network admin is the source of truth who knows whether a daemon is supposed to be running or not.
Inspect Kea details.
You can either click on the version on the Kea apps list, or click on the machine and follow the link to details in the Kea app panel on the machine details page. Note the Kea version being returned and the list of currently loaded hooks. A list of subnets is displayed as well.
Note that the Kea app running on agent-kea does not have HA enabled, so HA status is not displayed.
The sample Kea configuration has a couple of subnets that you can inspect here.
Add two Kea servers that work as HA pair.
Go to Services->Machines and click Add New Machine, and add agent-kea-ha1. Repeat for agent-kea-ha2.
You can now inspect the HA status of those servers. Note the difference in status between the two partners.
Optionally, you may want to simulate one of the Kea servers crashing and observe how the surviving Kea server detects the problem and how Stork provides extra insight into what's happening.
Open a new console and make sure you are in the Stork directory where the demo is being run, then connect to the docker container and use the kill command to abruptly stop Kea.
```
# Make sure this is the directory where you run `rake docker_up` from.
cd ~/devel/stork

# Connect to the container
docker-compose exec agent-kea-ha2 /bin/bash

# See that the kea process is running. The kea-dhcp4 should have low value of process id.
# In this example it was 10.
ps aux | grep kea-dhcp4
root        10  0.0  1.0 168576 20872 ?     S   09:28  0:01 /usr/sbin/kea-dhcp4 -c /etc/kea/kea-dhcp4.conf
root       295  0.0  0.0  11464  1000 pts/0 S+  10:31  0:00 grep --color=auto kea-dhcp4

# Now kill the process
kill 10
```
After a couple of seconds Stork will report a problem. Observe how Kea goes through various stages. You may see something similar to this:
To restart Kea, use the following command:
```
/usr/sbin/kea-dhcp4 -c /etc/kea/kea-dhcp4.conf &
```
You can now log out from the docker container using exit or by pressing Ctrl-D.
Stork fully supports IPv6 from day one. Add another machine called agent-kea6. Notice the IPv6 subnet and several pools.
All subnets in your network.
Stork lets you view and search through the subnets and pools. Go to DHCP and then Subnets. You will see all the subnets with the pools in them. You can filter the subnets by type (any, DHCPv4 or DHCPv6). You can also type any string; for example, searching for 0.3 narrows the list to subnets whose addresses contain that fragment. Note that strings shorter than 4 characters require you to confirm with Enter (strings of 4 characters or longer do not). You can search for specific subnets, pools, or pool boundaries.
Host reservations on monitored machines
After adding the agent-kea machine, all host reservations configured in the Kea app running on that machine are fetched into Stork and presented in the UI. Navigate to DHCP and then Hosts. All host reservations detected on the monitored machines will be listed, including the DHCP identifiers, reserved IP addresses, and the subnets each reservation belongs to. The last column shows the number of servers on which the particular host reservation is configured.
The filtering box above the list of host reservations can be used to search hosts by DHCP identifier type, DHCP identifier value, and/or reserved IP address. Just type part of the searched phrase and the list will be adjusted to display only the matching reservations. For example, typing clien should display only those reservations whose DHCP identifier type is client-id.
Host reservations within Kea host backends
The reservations observed in the previous step were only those specified within the Kea configuration files. Kea also supports defining host reservations in a database via the host_cmds premium hooks library. Those reservations are available in the same view as before; they are fetched when the Kea app is configured to use the host_cmds hooks library. The demo setup optionally includes such a machine if the demo is started with the cs_repo_access_token variable.
To see the reservations stored in the host database on this machine, start monitoring it by adding it to Stork; the machine name is agent-kea-hosts. Stork is currently configured to fetch and refresh the reservations from the hosts backend at 60-second intervals, so you may need to wait a little while before the host reservations appear on the list. If the fetch is successful, you should observe new IPv4 reservations starting with IP addresses of 192.0.2.200 and higher.
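Reservations in the host database are managed through the host_cmds hook's commands. Here is a minimal sketch of adding one via the Control Agent's REST API; the hostname, port, subnet-id, and addresses are illustrative assumptions, not values verified against the demo:

```
# Illustrative only: add a reservation through host_cmds' reservation-add
# command, sent to the Control Agent (hostname/port/subnet-id assumed).
curl -X POST -H "Content-Type: application/json" -d '{
  "command": "reservation-add",
  "service": [ "dhcp4" ],
  "arguments": { "reservation": {
    "subnet-id": 1,
    "hw-address": "1a:1b:1c:1d:1e:1f",
    "ip-address": "192.0.2.205"
  } }
}' http://agent-kea-hosts:8000/
```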
Open a new tab in your browser and connect to http://localhost:5001 (if running locally) or to http://stork.lab.isc.org:5001 to take a look at the DNS traffic generator. This is not part of Stork itself; it's a tool we developed to simulate some traffic. It retrieves the DNS servers known by Stork and lets you generate traffic to them. You can send a simple query with Dig or start a query stream with the Start button. This is quite basic and may be extended in the future with options to query different names, replay a pcap, and emulate different clients. Go ahead and experiment. Once you've generated some traffic, go to Grafana and see the BIND dashboard.
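The Dig button issues roughly the same kind of query you could send by hand; a hedged sketch, where the server name and queried domain are illustrative:

```
# Illustrative manual equivalent of the generator's Dig button
# (server address and query name assumed, not taken from the demo config).
dig @agent-bind9 isc.org A
```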
Open a new tab in your browser and connect to http://localhost:5000 (if running locally) or to http://stork.lab.isc.org:5000 to take a look at the DHCP traffic generator. This is not part of Stork itself; it's a tool we developed to simulate actual networks. It's a bit simple, but sufficient to generate traffic. It retrieves the list of subnets known by Stork and lets you generate traffic for each subnet. You may want to experiment with it. Things to play with:
- Set the number of clients to a value somewhat smaller than the pool size. You'll see a warning (yellow triangle) once you cross 80% utilization and a critical error (red) once you cross 90%. Those thresholds are hardcoded as of 0.5, but will be configurable in future versions.
- The traffic generator is currently not able to generate traffic for more than one subnet in a shared network. When you click Start in the next subnet, the traffic in the other subnet of the same shared network stops. This is a limitation of the underlying perfdhcp tool (see the sketch after this list) that will be fixed in future Kea releases.
- Stork retrieves the information from its database once every 10 seconds. Unfortunately, the UI does not refresh itself yet, so you need to press F5 or Ctrl-R to reload the page and see the changes. This will be fixed in Stork 0.6.
- The leases have unnaturally short lifetimes of only 3 minutes. Wait a little bit and they'll expire.
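For context, perfdhcp is Kea's benchmarking tool that the generator wraps; a minimal sketch of invoking it directly, where the rate, client count, and server address are illustrative assumptions:

```
# Illustrative perfdhcp run: IPv4 mode, 10 exchanges per second,
# 50 simulated clients, against an assumed server address.
perfdhcp -4 -r 10 -R 50 192.0.2.1
```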
With a little bit of juggling around, you can see something like this:
Make sure you take a look at the shared networks, too!
Grouping subnets into shared networks is a very popular feature of Kea and other DHCP servers. Stork supports this by showing shared networks: go to DHCP -> Shared Networks. The view offers the same filtering mechanism as subnets.
One of the easily missed features of Stork is its dashboard. Make sure you click on the Stork logo (or the Stork name next to it). This is a high-level overview of everything being currently monitored. If you followed the demo, you should see something similar to this:
The list of subnets shows the top 5 subnets with the highest pool utilization. There's a list of events on the right-hand side. If you configured Grafana, you will see links to Grafana to inspect historic values for subnets and how they changed over time.
An early Grafana integration was introduced in 0.5. In 0.9, you can go to Configuration -> Settings and set up the link to your Grafana. For the demo, type in http://localhost:3000 (if running the demo locally) or http://stork.lab.isc.org:3000 (if using stork.lab). Go to http://localhost:3000 or http://stork.lab.isc.org:3000 and log in using admin/admin credentials. Please don't change the password, so the next person viewing the demo can take a look, too.
There are two dashboards. One for Kea and another one for BIND 9.
Click on Home and then the Stork Kea DHCPv4 dashboard. Plenty of statistics are shown here. Make sure to use the traffic generator, otherwise you'll see boring zeros all the time. Note the pool utilization with the two thresholds (80% and 90%) set up.
We're currently using a third-party exporter together with Stork's embedded exporter. If you don't see the BIND dashboard, you need to add it yourself; this is a limitation of 0.6 that will be improved in 0.7. You need to get the BIND dashboard (a single JSON file) from here: https://gitlab.isc.org/isc-projects/stork/-/tree/master/grafana. Go to the Grafana homepage (http://stork.lab.isc.org:3000), click Home, then Import dashboard, then Upload .json file, and upload it. Alternatively, you can paste this link there: https://gitlab.isc.org/isc-projects/stork/-/raw/master/grafana/bind9-resolver.json.
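If you prefer to upload the file rather than paste the link, you can fetch the JSON first (the URL is the one given above):

```
# Download the BIND dashboard definition for manual upload via
# Grafana's Import dashboard -> Upload .json file.
wget https://gitlab.isc.org/isc-projects/stork/-/raw/master/grafana/bind9-resolver.json
```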
Click on Home and then the Stork BIND DNS dashboard. There are some generic statistics, as well as the Cache Hit Ratio/Hit/Miss statistics provided by Stork. More are coming in future Stork releases.
You can change your own password.
Go to Profile -> Settings.
Change other users' passwords.
If you're a super-admin, you can change other users' passwords. Log in as a super-admin (e.g. the admin user), click on Configuration -> Users and then on the user you want to modify. Click Edit.
You can delete machines.
Go to Services -> Machines and pick a machine you want to delete. Click on the hamburger button (three horizontal lines) on the right-hand side and choose Delete from the menu. Note that anything running on the machine will disappear from Stork. The actual services running there won't be affected; Stork simply stops monitoring them. You can re-add the machine to see that they're doing fine.
Check Stork version.
You can hover your mouse over the Stork logo. It will display some information about Stork itself (version, compilation time).
Once you're done, please clean up, so the next person starts from scratch:
- stop any traffic generators
- delete all machines
Stork is in its very early stages and is being developed rapidly. Your feedback is essential. Please go to the Stork project page and report new issues or add your comments to existing ones. In particular, we're interested in:
- things that are non-intuitive, broken, or confusing
- missing functionality you'd like to see (we're aware of many gaps; please search the existing issues and +1 or comment on existing ones)
- feedback on the UI (ISC has many good engineers, but our expertise is mostly in the command line)