- Stork demo
- 1. Log in using admin/admin credentials
- 2. Check your role in the system
- 3. Add new user
- 4. Add new BIND9 machine to monitor
- 5. Inspect the agent-bind9 machine
- 6. Add new Kea machine to monitor
- 7. Inspect Kea details
- 8. Add two Kea servers that work as HA pair
- 9. DHCPv6 support
- 10. All subnets in your network
- 11. Shared networks
- 12. Host reservations on monitored machines
- 13. Host reservations within Kea host backends
- 14. DHCP Dashboard
- 15. DHCP address pool utilization
- 16. DNS traffic
- 17. Grafana
- 18. Events
- 19. Other tasks
- Wrapping up
This page describes a self-demo that you can run on Stork. To run it you need one of the following:
- access to https://stork.lab.isc.org (ISC employees only for now)
- an Ubuntu box (download the Stork sources from the repository, run rake docker_up, and connect to http://localhost:8080)
In addition, access to the features which require Kea premium hooks requires an access token for the Kea premium repository. The token can be found on https://cloudsmith.io/~isc/repos/kea-1-7-prv/setup/#formats-deb, which is available to ISC employees and paid ISC customers. After logging in to Cloudsmith, the token appears within the link https://dl.cloudsmith.io//isc/kea-1-7-prv/cfg/setup/bash.deb.sh.
In order to start the demo with the premium Kea features run the following:
rake docker_up cs_repo_access_token=<access token>
Once Stork becomes a bit more mature, we're planning to have a public demo site.
Stork documentation is available at https://stork.readthedocs.io. It may be useful during this self-guided tour.
1. Log in using admin/admin credentials
Note the version displayed. It is not hardcoded; it is the version of the Stork server, retrieved over the REST API.
2. Check your role in the system
You logged in as admin, which is an account with the super-admin role. You can check your currently assigned roles by going to your Profile page; click on the little triangle button next to your login name to find it.
Note: as of 0.9 there are two roles defined: super-admin (can do everything, including managing and adding new users) and admin (can do everything except managing or adding users). A third role, read-only user, is coming soon.
3. Add new user
Since you're logged in as super-admin, you can see the
Configuration menu and
Users within it. Click on it and you'll see a list of all users. Click on
Create User Account to create a new account. It's recommended to give the new account the admin role, so the new user can't create more users. Go ahead and try it.
4. Add new BIND9 machine to monitor
Go to Machines, click Add New Machine, and type in agent-bind9.
Normally you would type in an FQDN or an IP address of the machine you want to monitor. In this demo, Stork is deployed using Docker, with several containers simulating machines running Kea and BIND 9 in various modes of operation; agent-bind9 is the name of one such container. Note that you didn't specify what kind of software is running on the agent-bind9 machine: the Stork server connected to the Stork agent running there, and the agent looked for Kea and BIND 9 and found only BIND 9 running.
5. Inspect the agent-bind9 machine
Click around. As of 0.9 the BIND 9 capabilities are basic. Stork is able to check whether the BIND 9 process is running and display its version, the number of zones configured, the time since its last reconfiguration, and more.
6. Add new Kea machine to monitor
Go to Machines, click Add New Machine, and type in agent-kea. The procedure is the same as before, but this time Stork detected Kea servers running. Notice that a problem is reported.
Kea ships with the CA (Control Agent) preconfigured with control sockets for DHCPv4, DHCPv6, and DDNS; this simplifies deployment. In this particular Kea deployment, only the DHCPv4 daemon is installed. The CA tries to connect to all of those daemons and continues with only those that respond, which makes it easy to deploy daemons selectively. However, Stork looks at the CA config and determines that 3 daemons are expected while only DHCPv4 is running. The other daemons are greyed out, and their tabs note that the Stork agent cannot communicate with them. Since this is the initial situation, Stork concludes that this is expected and switches off monitoring of those daemons; only DHCPv4 is monitored and its status is green.
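The three-daemon expectation comes from the CA configuration. A minimal sketch of such a Control Agent config (the socket paths and port here are illustrative, not necessarily the demo's actual values):

```json
{
  "Control-agent": {
    "http-host": "0.0.0.0",
    "http-port": 8000,
    "control-sockets": {
      "dhcp4": { "socket-type": "unix", "socket-name": "/tmp/kea4-ctrl-socket" },
      "dhcp6": { "socket-type": "unix", "socket-name": "/tmp/kea6-ctrl-socket" },
      "d2":    { "socket-type": "unix", "socket-name": "/tmp/kea-ddns-ctrl-socket" }
    }
  }
}
```

Because three control sockets are declared, Stork expects three daemons; only kea-dhcp4 answers, so the other two are greyed out.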
7. Inspect Kea details
You can either click on the version on the Kea apps list, or click on the machine and follow the link to details in the Kea app panel on the machine details page. Note the Kea version being returned and the list of currently loaded hooks. The list of subnets is displayed as well.
Note that the Kea app running on agent-kea does not have HA enabled, so HA status is not displayed.
The sample Kea configuration has a couple of subnets that you can inspect here.
8. Add two Kea servers that work as HA pair
Go to Machines, click Add New Machine, and add agent-kea-ha1. Repeat for agent-kea-ha2.
You can now inspect the HA status of those servers. Note the difference in status between the two partners.
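For reference, HA pairs like this are configured in Kea via the libdhcp_ha hook library. A minimal hot-standby sketch (the server names, URLs, and library path below are illustrative, not necessarily what the demo containers use):

```json
{
  "hooks-libraries": [
    {
      "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_ha.so",
      "parameters": {
        "high-availability": [
          {
            "this-server-name": "server1",
            "mode": "hot-standby",
            "peers": [
              { "name": "server1", "url": "http://agent-kea-ha1:8000/", "role": "primary" },
              { "name": "server2", "url": "http://agent-kea-ha2:8000/", "role": "standby" }
            ]
          }
        ]
      }
    }
  ]
}
```

Each partner carries the same peers list but a different this-server-name, which is how the two servers know their own role.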
Optionally, you may want to simulate one of the Kea servers crashing and observe how the surviving Kea server detects the problem and how Stork provides extra insight into what's happening.
As of 0.12, the Stork Environment Simulator listening on http://localhost:5000/ (available in the demo only) allows you to conveniently stop and start services to simulate all sorts of failures. Go ahead and kill one of the kea-dhcp4 daemons. Alternatively, you can do it yourself: open a new console, make sure you are in the Stork directory where the demo is being run, then connect to the Docker container and use the kill command to abruptly stop Kea.
```
# Make sure this is the directory where you ran `rake docker_up` from.
cd ~/devel/stork

# Connect to the container.
docker-compose exec agent-kea-ha2 /bin/bash

# Check that the kea-dhcp4 process is running. It should have a low
# process id; in this example it was 10.
ps aux | grep kea-dhcp4
root  10  0.0  1.0 168576 20872 ?      S   09:28  0:01 /usr/sbin/kea-dhcp4 -c /etc/kea/kea-dhcp4.conf
root 295  0.0  0.0  11464  1000 pts/0  S+  10:31  0:00 grep --color=auto kea-dhcp4

# Now kill the process.
kill 10
```
After a couple of seconds Stork will report a problem. Observe how Kea goes through various stages. You may see something similar to this:
To restart Kea, use the following command:
/usr/sbin/kea-dhcp4 -c /etc/kea/kea-dhcp4.conf &
You can now log out from the container by typing exit or pressing Ctrl-D.
9. DHCPv6 support
Stork has fully supported IPv6 from day one. Add another machine called
agent-kea6. Notice the IPv6 subnet and several pools.
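For reference, an IPv6 subnet with a pool looks like this in a kea-dhcp6.conf file (the prefix and pool range below are illustrative, not necessarily the demo's actual values):

```json
{
  "Dhcp6": {
    "subnet6": [
      {
        "subnet": "2001:db8:1::/64",
        "pools": [ { "pool": "2001:db8:1::1 - 2001:db8:1::ffff" } ]
      }
    ]
  }
}
```

Stork picks up both the subnet and its pools from the configuration, which is what you see in the UI.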
10. All subnets in your network
Stork lets you view and search through the subnets and pools. Go to
DHCP and then
Subnets. You will see all the subnets with pools in them. You can filter the subnets by type (any, DHCPv4, or DHCPv6). You can also type any string; for example, to limit the list to a specific subnet, type part of its address, such as 0.3. Note that strings shorter than 4 characters require you to confirm with Enter (strings of 4 characters or longer do not). You can search for specific subnets, pools, or pool boundaries.
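To illustrate the substring-matching semantics of that filter, here is a tiny sketch using grep over made-up subnet data (this is just an illustration, not Stork's actual implementation):

```shell
# Hypothetical subnet list; real data comes from the Stork server.
subnets="192.0.2.0/24
192.0.3.0/24
2001:db8:1::/64"

# A substring filter, similar in spirit to the Subnets search box.
filter() { printf '%s\n' "$subnets" | grep -F "$1"; }

filter "0.3"    # matches only 192.0.3.0/24
```

Any subnet, pool, or pool boundary containing the typed string is kept; everything else is filtered out.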
11. Shared networks
Grouping subnets into shared networks is a very popular feature in Kea and other DHCP servers. Stork supports this by showing shared networks: go to DHCP and then Shared Networks. It offers the same filtering mechanism as subnets.
12. Host reservations on monitored machines
Once you have added the agent-kea machine, all host reservations configured in the Kea app running on that machine are fetched into Stork and presented in the UI. Navigate to
DHCP and then
Host Reservations. All host reservations detected on the monitored machines will be listed, including the DHCP identifiers, reserved IP addresses, and the subnets each reservation belongs to. The last column contains the list of servers on which each reservation is configured.
The filtering box placed above the list of host reservations can be used to search hosts by DHCP identifier types, DHCP identifier values and/or reserved IP addresses. Just type a part of the searched phrase and the list of reservations will be adjusted to display only those matching the filtering text. For example, typing
clien should result in displaying only those reservations whose DHCP identifier type is client-id.
13. Host reservations within Kea host backends
The reservations observed in the previous step were only those specified within the Kea configuration files. Kea also supports defining host reservations in a database via the host_cmds premium hooks library. Those reservations are available in the same view as before; they are fetched when the Kea app is configured to use the host_cmds hooks library. The demo setup optionally includes such a machine if the demo is started with the premium access token (see the beginning of this page).
In order to see the reservations stored in the host database on this machine, start monitoring this machine by adding it to Stork. The machine name is
agent-kea-hosts. Stork is currently configured to fetch and refresh the reservations from the host backend at 60-second intervals, so you may need to wait a little while before the host reservations appear on the list. If the fetch is successful, you should observe new IPv4 reservations starting at the IP address 192.0.2.200.
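For reference, reservations in the host database are managed via host_cmds commands sent to the Kea control channel. A sketch of the reservation-add command (the subnet id, MAC address, and IP below are illustrative, though the demo's reservations do start at 192.0.2.200):

```json
{
  "command": "reservation-add",
  "arguments": {
    "reservation": {
      "subnet-id": 1,
      "hw-address": "1a:1b:1c:1d:1e:1f",
      "ip-address": "192.0.2.201"
    }
  }
}
```

Reservations added this way land in the host database rather than the config file, and Stork picks them up on its next fetch cycle.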
14. DHCP Dashboard
One of the easily missed features of Stork is its dashboard. Make sure you click on the Stork logo (or the Stork name next to it); since 0.10, there is also an explicit link in the DHCP menu. The dashboard is a high-level overview of all the things currently being monitored. If you followed the demo, you should see something similar to this:
The list of subnets shows the top 5 subnets with the highest pool utilization. There is a list of events on the right-hand side. If you configured Grafana, you will also see links to Grafana where you can inspect historic values for subnets and how they changed over time.
15. DHCP address pool utilization
Open a new tab in your browser and connect to http://localhost:5000 (if running locally) or to http://stork.lab.isc.org:5000 to take a look at the DHCP traffic generator. This is not part of Stork itself; it's a tool we developed to simulate actual networks. It's a bit simple, but sufficient to generate traffic. It retrieves the list of subnets known to Stork and can generate traffic for each subnet. You may want to experiment with it. Things to try:
- Set the number of clients to a value somewhat smaller than the pool size. You'll see a warning (yellow triangle) once you cross 80% utilization and a critical indicator (red error) once you cross 90%. Those thresholds are hardcoded as of 0.5, but will be configurable in future versions.
- The traffic generator is currently not able to generate traffic for more than one subnet in a shared network. When you click Start in the next subnet, the traffic in the other subnet of the same shared network stops. This is a limitation of the underlying perfdhcp tool that will be fixed in future Kea releases.
- Stork retrieves the information from its database once every 10 seconds. Unfortunately, the UI does not refresh itself yet, so you need to press F5 or Ctrl-R to reload the page and see the changes. This will be fixed in Stork 0.6.
- The leases have unnaturally short lifetimes of only 3 minutes. Wait a little bit and they'll expire.
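The 80%/90% thresholds mentioned above can be sketched as a tiny helper (our own illustration, not Stork's code; whether the exact boundary values count as "crossed" is an assumption here):

```shell
# classify <addresses-in-use> <pool-size>: prints ok, warning, or critical
# based on the pool utilization thresholds described in the text.
classify() {
  local used=$1 size=$2
  local pct=$(( 100 * used / size ))
  if [ "$pct" -ge 90 ]; then
    echo critical        # red error in the UI
  elif [ "$pct" -ge 80 ]; then
    echo warning         # yellow triangle in the UI
  else
    echo ok
  fi
}

classify 150 200   # 75% -> ok
classify 170 200   # 85% -> warning
classify 185 200   # 92% -> critical
```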
With a little bit of juggling around, you can see something like this:
Make sure you take a look at the shared networks, too!
16. DNS traffic
Open a new tab in your browser and connect to the Stork Environment Simulator on http://localhost:5000 (if running locally) or http://stork.lab.isc.org:5000 to take a look at the DNS traffic generator. This is not part of Stork itself; it's a tool we developed to simulate some traffic. It retrieves the DNS servers known to Stork and can generate traffic to them. You can send a simple query with
Dig or start a query stream with the
Start button. This is quite basic and may be extended in the future with options to query different names, replay a pcap, and emulate different clients. Go ahead and experiment. Once you've generated some traffic, go to Grafana and see the BIND 9 dashboard.
17. Grafana
An early Grafana integration was introduced in 0.5. In 0.9, you can go to
Settings and set up the link to your Grafana. In case of demo, type in
http://localhost:3000 (if running the demo locally) or
http://stork.lab.isc.org:3000 (if using stork.lab). Go to http://localhost:3000 or http://stork.lab.isc.org:3000 and log in using admin/admin credentials. Please don't change the password, so the next person viewing the demo can take a look, too.
There are two dashboards. One for Kea and another one for BIND 9.
Click on Home and then the Stork Kea DHCPv4 dashboard. Plenty of statistics are shown here. Make sure to use the traffic generator; otherwise you'll see boring zeros all the time. Note the pool utilization with the two thresholds (80% and 90%) set up.
The Grafana dashboard for Kea is already preconfigured. For BIND 9, the dashboard needs to be imported manually. The dashboard definition is available here: https://gitlab.isc.org/isc-projects/stork/-/tree/master/grafana. Go to the Grafana homepage (http://stork.lab.isc.org:3000), click
Import dashboard, then
Upload .json file and upload it. Alternatively, you can paste the link to the dashboard definition.
Click on Home and then Stork Bind DNS dashboard. There are some generic statistics, as well as the Cache Hit Ratio/Hit/Miss statistics provided by Stork. More are coming in future Stork releases.
18. Events
Stork records various events in the system. There are several places where you can observe events:
- The dashboard - here you can see the latest 15 events across all monitored networks.
- The machine view - here you can see the events related to a specific machine.
- The events viewer - a full-page event viewer which lets you filter events by severity, machine, app type, and daemon, and, for events resulting from users' actions, also by user.
19. Other tasks
Here's a list of smaller things you can do.
- change your password by going to your Profile page.
- change others' passwords. If you're a super-admin, you can change other users' passwords. Log in as a super-admin (e.g. the admin user), click on Users, and then click on the user you want to modify.
- remove machines. Go to Machines and pick a machine you want removed. Click on the hamburger button (three horizontal lines) on the right-hand side and choose Remove from the menu. Note that anything running on the machine will disappear from Stork; the actual services won't be affected, Stork simply stops monitoring them. You can re-add the machine to see that the services are doing fine.
- Check Stork version. You can hover your mouse over the Stork logo. It will display some information about Stork itself (version, compilation time).
Once you're done using the stork.lab service, please clean up, so the next person starts from scratch:
- stop any traffic generator
- delete all machines
Stork is in its early stages and is being developed rapidly. Your feedback is essential. Please go to the Stork project page and report new issues or add your comments to existing ones. In particular, we're interested in:
- things that are non-intuitive, broken, confusing
- missing functionalities you'd like to see (we're aware of many, please search the existing issues and +1 or comment on existing ones)
- feedback on the UI (ISC has many good engineers, but our expertise is mostly in the command-line area)