stork issues — https://gitlab.isc.org/isc-projects/stork/-/issues

https://gitlab.isc.org/isc-projects/stork/-/issues/325 — when BIND 9 is restarted then it is no longer monitored by prometheus.NewProcessCollector (Michal Nowikowski, 2020-07-06, milestone 0.10)
Monitoring of the BIND 9 process is currently initialized for the BIND 9 PID during the exporter initialization phase. A restart of BIND 9 therefore goes unnoticed, and the exporter keeps monitoring the killed, no-longer-existing process.

https://gitlab.isc.org/isc-projects/stork/-/issues/558 — Stork Agent registers with different token on each service/server restart (Tooz, 2021-09-07, milestone 0.20)
When using auto-registration from the Stork agent to the Stork server, the same machine registers with a different agent token each time, so it has to be moved from unauthorised to authorised on each server/service restart. Example configuration:
```
STORK_AGENT_SERVER_URL=http://example.com
STORK_AGENT_ADDRESS=111.111.111.111
```
To reproduce this, just use the configuration above, restart `isc-stork-agent`, and look at the Machines dashboard: the machine is moved from Authorised to Unauthorised and has a different token.
Not sure if this behaviour is expected?

https://gitlab.isc.org/isc-projects/stork/-/issues/1141 — Rewording of Configuration Review Reports (fue36, 2023-10-02, milestone 1.13)
The wording is confusing and may lead a system administrator to think that issues were found. Is there one issue? Are there 12 issues? Are there no issues?
See below: instead of "issues found in 12 reports 2023-08-14 15:50:52" it should state something like "12 configuration report checks performed at 2023-08-14 15:50:52".
![kea-config-review](/uploads/e7e804f594ba9d7ec30255403e3982cc/kea-config-review.png)

https://gitlab.isc.org/isc-projects/stork/-/issues/164 — Fix webui unit-tests (Tomek Mrugalski, 2020-10-27, milestone 0.13)
I've tried to run webui unit-tests. Here's what I did:
```
rake build_ui
export CHROME_BIN=chromium
cd webui
npx ng test
```
An example error in the first comment.
We should:
1. investigate why the tests are failing. If they're nonsense, mark them as such.
2. extend the developer's guide with a description of how to run webui tests. A single section or a list of steps will do.
3. look at running those tests as part of CI.
The last step can easily be moved to a separate ticket.

https://gitlab.isc.org/isc-projects/stork/-/issues/413 — Stork not pulling lease stats for DHCP4 (leecgenesis, 2023-03-15, milestone 0.13)
---
name: Stork not pulling lease stats for DHCP4
about: Stork colocated on a single kea-dhcp4/6 host running Kea 1.8.0 and Stork 0.11 is not reporting DHCP4 lease statistics.
---
**Describe the bug**
I have a single DHCP VM built with Kea 1.8.0 and Stork 0.11, running on CentOS 7 using prebuilt packages. Its sole purpose is to serve as a DHCP helper for a small regional network. It currently has 3 IPv4 ranges and 4 IPv6 ranges. The IPv4 ranges contain approximately 1000 IPv4 addresses, and the IPv6 ranges are /64s with prefix delegation handing out /56s from a larger /48 block.
Stork is working fine on non-standard ports; however, the DHCP4 statistics never report. IPv6 statistics, including leases and prefix delegations, report just fine. Looking at the stdout of the isc-stork-server process, I see the following errors repeating every minute:
```
statspuller.go:59 missing key total-addreses in LocalSubnet 4 stats
statspuller.go:59 missing key total-addreses in LocalSubnet 5 stats
statspuller.go:59 missing key total-addreses in LocalSubnet 6 stats
```
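As a self-contained illustration of how this message arises, a defensive stat lookup like the one in statspuller.go can be sketched as below (the helper is hypothetical, not the actual Stork code). Note that Kea spells this statistic `total-addresses`, while the logged key is `total-addreses`, which suggests the lookup key itself may be misspelled on the puller side:

```go
package main

import "fmt"

// getSubnetStat looks up a statistic for a local subnet and warns when the
// key is absent. Hypothetical sketch; the real statspuller.go differs.
func getSubnetStat(stats map[string]int64, key string, subnetID int) (int64, bool) {
	v, ok := stats[key]
	if !ok {
		fmt.Printf("missing key %s in LocalSubnet %d stats\n", key, subnetID)
	}
	return v, ok
}

func main() {
	// Stats as Kea reports them for one IPv4 subnet.
	stats := map[string]int64{
		"total-addresses":    1000,
		"assigned-addresses": 42,
	}
	getSubnetStat(stats, "total-addresses", 4) // found, no warning
	getSubnetStat(stats, "total-addreses", 4)  // misspelled key: prints the warning
}
```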
Additionally,
**To Reproduce**
Steps to reproduce the behavior:
1. Install Kea 1.8.0, Stork 0.11 from package manager in CentOS 7.
2. Configure kea-ctrl-agent, kea-dhcp4, kea-dhcp6, and agent.env, server.env as attached.
3. DHCP Processes work, clients receive addresses and IPv6 Prefix Delegations.
4. Kea functions as expected.
5. Stork does NOT display DHCP4 statistics, DOES display DHCP6 statistics.
**Expected behavior**
I expect Stork to display both DHCP4 and DHCP6 statistics for all discovered ranges.
**Environment:**
- Kea version: 1.8.0
- Stork: 0.11.0
- OS: CentOS 7.8
- Kea: Using MySQL backend, but memfile and PSQL backend also compiled in.
- Kea: Using libdhcp_stat_cmds.so and libdhcp_lease_cmds.so on both DHCP4 and DHCP6.
[agent.env](/uploads/d8037ffb05b4d499d03d5114cd04d819/agent.env)
[kea-ctrl-agent.conf](/uploads/f5c4d0cfeca2c8f72c17160f842bc189/kea-ctrl-agent.conf)
[kea-dhcp4.conf](/uploads/56f052faff28a147b0ca5f0dddbcb249/kea-dhcp4.conf)
[kea-dhcp6.conf](/uploads/91598b56fcd21981d1a9da2cf27a7524/kea-dhcp6.conf)
[server.env](/uploads/6e79c09f7caa83e28275697779c5f8da/server.env)

https://gitlab.isc.org/isc-projects/stork/-/issues/409 — stork agent does not connect to public ip (DirkLaurenz, 2020-11-06, milestone 0.13)
I installed Stork/Kea from scratch in order to set up an HA scenario; at the moment only the first server is configured.
If the control agent is bound to 127.0.0.1, the Stork agent can connect and collect info. If I change the ctrl-agent to the public IP, the agent complains that it cannot connect to 127.0.0.1:
```
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] main.go:75 Starting Stork Agent, version 0.11.0, build date 2020-09-04 15:29
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] promkeaexporter.go:272 Prometheus Kea Exporter listening on 0.0.0.0:9547, stats pulling interval: 10 seconds
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] monitor.go:80 Started app monitor
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] monitor.go:148 new or updated apps detected:
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] monitor.go:155 kea: control: 192.168.125.31:8000
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] monitor.go:155 bind9: control: 127.0.0.1:953, statistics: 127.0.0.1:80
Sep 16 22:35:40 ladc01 kea-ctrl-agent[19039]: INFO COMMAND_RECEIVED Received command 'config-get'
Sep 16 22:35:40 ladc01 kea-ctrl-agent[19039]: INFO COMMAND_RECEIVED Received command 'config-get'
Sep 16 22:35:40 ladc01 kea-dhcp4[32070]: 2020-09-16 22:35:40.341 INFO [kea-dhcp4.commands/32070.140437553672960] COMMAND_RECEIVED Received command 'config-get'
Sep 16 22:35:40 ladc01 kea-ctrl-agent[19039]: INFO CTRL_AGENT_COMMAND_FORWARDED command config-get successfully forwarded to the service dhcp4
Sep 16 22:35:40 ladc01 stork-agent[19743]: WARN[2020-09-16 22:35:40] kea.go:57 skipped refreshing viewable log files because config-get returned non success result
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] kea.go:72 no loggers found in the returned configuration while trying to refresh the viewable log files
Sep 16 22:35:40 ladc01 stork-agent[19743]: WARN[2020-09-16 22:35:40] kea.go:57 skipped refreshing viewable log files because config-get returned non success result
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40]prombind9exporter.go:825 Prometheus BIND 9 Exporter listening on 0.0.0.0:9119, stats pulling interval: 10 seconds
Sep 16 22:35:40 ladc01 stork-agent[19743]: INFO[2020-09-16 22:35:40] agent.go:309 started serving Stork Agent address="[::]:8080"
Sep 16 22:35:40 ladc01 named[25746]: validating ./SOA: got insecure response; parent indicates it should be secure
Sep 16 22:35:40 ladc01 named[25746]: no valid RRSIG resolving 'local/DS/IN': 172.24.38.124#53
Sep 16 22:35:40 ladc01 named[25746]: validating ./SOA: got insecure response; parent indicates it should be secure
Sep 16 22:35:40 ladc01 named[25746]: no valid RRSIG resolving 'local/DS/IN': 172.24.38.123#53
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] keaintercept.go:109 failed to parse Kea responses while invoking asynchronous handlers for command config-get: unexpected end of JSON input
Sep 16 22:35:48 ladc01 stork-agent[19743]: failed to parse responses from Kea:
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/appctrl/kea.UnmarshalResponseList
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/appctrl/kea/kea_command.go:152
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*keaInterceptor).asyncHandle
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/keaintercept.go:107
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] agent.go:244 Failed to forward commands to Kea CA: Post http://127.0.0.1:8000/: dial tcp 127.0.0.1:8000: connect: connection refused
Sep 16 22:35:48 ladc01 stork-agent[19743]: problem with sending POST to http://127.0.0.1:8000/
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*HTTPClient).Call
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/caclient.go:38
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*StorkAgent).ForwardToKeaOverHTTP
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/agent.go:240
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/api._Agent_ForwardToKeaOverHTTP_Handler
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/api/agent.pb.go:1361
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).processUnaryRPC
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1024
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).handleStream
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:1313
Sep 16 22:35:48 ladc01 stork-agent[19743]: google.golang.org/grpc.(*Server).serveStreams.func1.1
Sep 16 22:35:48 ladc01 stork-agent[19743]: /root/go/pkg/mod/google.golang.org/grpc@v1.27.0/server.go:722
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357 URL="http://127.0.0.1:8000/"
Sep 16 22:35:48 ladc01 stork-agent[19743]: ERRO[2020-09-16 22:35:48] keaintercept.go:109 failed to parse Kea responses while invoking asynchronous handlers for command config-get: unexpected end of JSON input
Sep 16 22:35:48 ladc01 stork-agent[19743]: failed to parse responses from Kea:
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/appctrl/kea.UnmarshalResponseList
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/appctrl/kea/kea_command.go:152
Sep 16 22:35:48 ladc01 stork-agent[19743]: isc.org/stork/agent.(*keaInterceptor).asyncHandle
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/backend/agent/keaintercept.go:107
Sep 16 22:35:48 ladc01 stork-agent[19743]: runtime.goexit
Sep 16 22:35:48 ladc01 stork-agent[19743]: /build/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
Sep 16 22:35:48 ladc01 stork-agent[19743]: DEBU[2020-09-16 22:35:48] rndc.go:45 rndc: [rndc -s 127.0.0.1 -p 953 -k /etc/bind/rndc.key status]
^C
here are the versions:
ii isc-kea-admin 1.8.0-isc0000420200825110759 amd64 Administration utilities for ISC Kea DHCP server
ii isc-kea-common 1.8.0-isc0000420200825110759 amd64 Common libraries for the ISC Kea DHCP server
ii isc-kea-ctrl-agent 1.8.0-isc0000420200825110759 amd64 ISC Kea DHCP server REST API service
ii isc-kea-dhcp4-server 1.8.0-isc0000420200825110759 amd64 ISC Kea IPv4 DHCP server
ii isc-stork-agent 0.11.0.200904152903 amd64 ISC Stork Agent
```
root@ladc01:~# cat /etc/debian_version
10.5

https://gitlab.isc.org/isc-projects/stork/-/issues/277 — HA status UI widget does not show properly some state (Michal Nowikowski, 2020-10-30, milestone 0.13)
![image](/uploads/c73700a75fdacff99be4baa2e447066b/image.png)
In Local Server `state` is empty, it has only red cross.
![image](/uploads/1e960ff9d17044895bfb951c20af70c8/image.png)
Here in the same server this state is reported as `not configured`, but it should be something else.

https://gitlab.isc.org/isc-projects/stork/-/issues/100 — UI bug: closing add user panel doesn't close anything (Tomek Mrugalski, 2019-12-04, milestone Stork-0.2)
I found two small bugs in the user management code.
The first is trivial: "Create UserAccount" is missing a space between "User" and "Account".
The second one is a bit more involved.
STEPS TO REPRODUCE
1. log in, click on Configuration => users
2. Click Create UserAccount
3. There's new tab "New Account" with X on it. Click the X.
Only the tab header will disappear, but not the tab content.

https://gitlab.isc.org/isc-projects/stork/-/issues/105 — Possible memory leak in Stork UI (Marcin Siodelski, 2021-09-29, milestone 0.21)
While working with the Stork UI for a prolonged amount of time, the browser tab becomes unresponsive and seems to consume a lot of memory. This should be investigated.

https://gitlab.isc.org/isc-projects/stork/-/issues/180 — shared networks in db should be distinguished by inet family (Michal Nowikowski, 2020-03-05, milestone 0.5)
When there are two networks defined, one in dhcp4 and one in dhcp6, and they have the same name, then creating LocalSubnets goes wrong.

https://gitlab.isc.org/isc-projects/stork/-/issues/135 — when there are several tabs opened on the Kea apps page, some data of these tabs are mixed (Michal Nowikowski, 2020-01-21)

https://gitlab.isc.org/isc-projects/stork/-/issues/138 — Display of Kea hooks installed in Stork (Vicky Risk, 2020-03-03, milestone 0.5)
**Describe the bug**
Looking at the stork demo running at http://stork.lab.isc.org:8080/machines/all
I can see that for the same application, if you look at it by first navigating to the machine and clicking on the application, you can see the Kea hooks installed.
If you instead navigate directly to applications/Kea it says there are no hooks installed (for each of the 3 demo Kea servers that are set up). It even shows no hooks installed for the HA servers for which there is HA status shown, and I KNOW those must be running the HA hook.
**To Reproduce**
Steps to reproduce the behavior:
1. Navigate to the demo stork app.
2. Select the pull-down for Services and select Kea.
3. In the resulting table, click on the Kea version to bring up the detail display.
4. Note there are no hooks listed.
5. Now select the pull-down for Services and select Machines.
6. Click on the word Kea under the Apps column to bring up the detail display.
7. Note that there are now hooks listed.
Ponder the difference.
**Expected behavior**
I would expect since each of these screens is reporting the information about the Kea version and modules installed, that they would both show the same hooks installed.
**Environment:**
This is the version up on storklab as of 1/17/2020. I don't see a Stork version listed anywhere in the UI, but I believe it is 0.3.
If it matters, I was using the Brave browser, Version 1.1.23 Chromium: 79.0.3945.88 (Official Build) (64-bit)
**Additional Information**
![Screen_Shot_2020-01-17_at_2.38.46_PM](/uploads/bf9fb76e558e058168e598dfbaf2bade/Screen_Shot_2020-01-17_at_2.38.46_PM.png)
![Screen_Shot_2020-01-17_at_2.41.53_PM](/uploads/2a65f8f3bbc8e03341b1527122c1b60c/Screen_Shot_2020-01-17_at_2.41.53_PM.png)

https://gitlab.isc.org/isc-projects/stork/-/issues/148 — Adding the same machine after removing it causes db constraints errors (Marcin Siodelski, 2020-03-05, milestone 0.5)
While I was reviewing !63, I did the following test:
- I've added a machine with one Kea server having a subnet within the shared network. It did it fine, although didn't show the subnet because the logic there does not support subnets within a shared network
- I removed the machine via the UI. The machine got marked as deleted in the db.
- I changed the configuration of the Kea server by making the subnet global rather than belong to a shared network.
- I used UI to add the machine. The machine was added again but an error was reported while adding the app because of the constraint violation:
```
ERRO[2020-02-03 13:46:43] restimpl.go:150 problem with inserting app &{0 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC 1 0xc0004ba900 kea 192.168.56.33 8000 false {1.7.3-git} { [0xc000194460 0xc0001b36c0 0xc0001b3730]}}: ERROR #23505 duplicate key value violates unique constraint "app_machine_id_ctrl_port_key
```
It seems the machine was undeleted, but the app configuration remains old.

https://gitlab.isc.org/isc-projects/stork/-/issues/159 — when machine is deleted its app with configuration is not deleted (Michal Nowikowski, 2020-03-05, milestone 0.5)
This is connected with @marcin's redesign, where the deleted field will be removed. So then it should just start to work.

https://gitlab.isc.org/isc-projects/stork/-/issues/179 — Parsing prefix delegation pool from Kea expects invalid format of the pool (Marcin Siodelski, 2020-03-04, milestone 0.5)
The current code assumes that the prefix delegation pool is specified like this:
```
"pd-pools": [
{
"prefix": 2001:db8:8::/56",
"delegated-len": 96
}
]
```
whereas the actual format is:
```
"pd-pools": [
{
"prefix": 2001:db8:8::",
"prefix-len": 56,
"delegated-len": 96
}
]
```
This parsing must be corrected.

https://gitlab.isc.org/isc-projects/stork/-/issues/225 — Host reservations from config are tossed by hosts puller (Marcin Siodelski, 2020-04-03, milestone 0.6)
Apparently, host_cmds also returns hosts from the config. The hosts puller, which periodically fetches the hosts from Kea instances, deletes hosts with a non-matching sequence id. The hosts fetched from the config have a non-matching sequence id, so they are removed after the first fetch of hosts via host_cmds. This must be corrected.

https://gitlab.isc.org/isc-projects/stork/-/issues/189 — Fix docker traffic-dhcp build (Matthijs Mekking, 2020-03-06, milestone 0.5)

https://gitlab.isc.org/isc-projects/stork/-/issues/463 — Events panel is not refreshed when switching between machine tabs (Marcin Siodelski, 2021-01-28, milestone 0.15)
While doing #429 I noticed that, unlike app panels, when you open several machine panels and switch between them, the events are not updated to the currently selected machine. To view events from the current machine, one has to switch to the first (all machines) tab and then go back to the desired one. Another way is to refresh the page.
In order to reproduce:
- Start Stork demo
- Add two new machines, e.g. agent-kea-ha1 and agent-kea-ha2
- Click between agent-kea-ha1 and agent-kea-ha2 tabs. The events panel is not refreshed and is showing events specific to the other machine.
- Click on the Machines tab and go back. Now, events are properly displayed.

https://gitlab.isc.org/isc-projects/stork/-/issues/446 — Race and constraint errors when adding Kea with many subnets (Marcin Siodelski, 2020-11-30)
This is a result of sanity checks for the Stork 0.13.0 release. See https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174606
Copying over the errors and the debugging result below:
While testing the demo on Ubuntu 20.04 I apparently reproduced the issue earlier mentioned by @godfryd here: https://gitlab.isc.org/isc-projects/stork/-/merge_requests/230#note_173942
I've got the following error trace:
```
server_1 | ERRO[2020-11-05 10:03:47] statepuller.go:278 cannot store application state: ERROR #23505 duplicate key value violates unique constraint "access_point_unique_idx"
server_1 | problem with adding access point to app 11: &{11 control 8 127.0.0.1 8000 }
server_1 | isc.org/stork/server/database/model.updateAppAccessPoints
server_1 | /repo/build-root/backend/server/database/model/app.go:68
server_1 | isc.org/stork/server/database/model.AddApp
server_1 | /repo/build-root/backend/server/database/model/app.go:226
server_1 | isc.org/stork/server/apps/kea.CommitAppIntoDB
server_1 | /repo/build-root/backend/server/apps/kea/appkea.go:462
server_1 | isc.org/stork/server/apps.GetMachineAndAppsState
server_1 | /repo/build-root/backend/server/apps/statepuller.go:269
server_1 | isc.org/stork/server/apps.(*StatePuller).pullData
server_1 | /repo/build-root/backend/server/apps/statepuller.go:59
server_1 | isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
server_1 | /repo/build-root/backend/server/agentcomm/puller.go:93
server_1 | runtime.goexit
server_1 | /repo/build-root/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
server_1 | problem with adding access points to app: &{ID:11 CreatedAt:2020-11-05 10:03:46.320564 +0000 UTC MachineID:8 Machine:0xc000e8e160 Type:kea Active:false Meta:{Version:1.7.3 ExtendedVersion:} AccessPoints:[0xc002c34820] Daemons:[0xc0007c00d0 0xc00023e8f0 0xc00037e0d0 0xc00037ef70]}
server_1 | isc.org/stork/server/database/model.AddApp
server_1 | /repo/build-root/backend/server/database/model/app.go:228
server_1 | isc.org/stork/server/apps/kea.CommitAppIntoDB
server_1 | /repo/build-root/backend/server/apps/kea/appkea.go:462
server_1 | isc.org/stork/server/apps.GetMachineAndAppsState
server_1 | /repo/build-root/backend/server/apps/statepuller.go:269
server_1 | isc.org/stork/server/apps.(*StatePuller).pullData
server_1 | /repo/build-root/backend/server/apps/statepuller.go:59
server_1 | isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
server_1 | /repo/build-root/backend/server/agentcomm/puller.go:93
server_1 | runtime.goexit
server_1 | /repo/build-root/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
server_1 | ERRO[2020-11-05 10:03:47] statepuller.go:62 error occurred while getting info from machine 8: problem with storing application state in the database
server_1 | INFO[2020-11-05 10:03:47] statepuller.go:67 completed pulling information from machines: 0/1 succeeded
server_1 | ERRO[2020-11-05 10:03:47] puller.go:95 errors were encountered while pulling data from apps: problem with storing application state in the database
server_1 | isc.org/stork/server/apps.(*StatePuller).pullData
server_1 | /repo/build-root/backend/server/apps/statepuller.go:61
server_1 | isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
server_1 | /repo/build-root/backend/server/agentcomm/puller.go:93
server_1 | runtime.goexit
server_1 | /repo/build-root/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
server_1 | INFO[2020-11-05 10:03:48] middleware.go:42 served request
```
Looking further into the code I think that this is what happens:
* User adds new machine via UI. The machine runs Kea with many subnets.
* The machine entry is fairly quickly added to the database and then apps state is being fetched.
* Before the app is added to the db, its configuration is fetched, so it takes a while before we create the app entry in the db.
* The state puller monitors this machine already so it tries to get its updated state.
* The state puller checks if the app is already in the db, but it is not because the first operation of machine creation is still in progress.
* The state puller assumes it has found a new Kea app because there is no such app in the db yet, and proceeds as if it were adding a new app.
* The initial create machine operation finally completes.
* The state pulling also completes and the state puller attempts to add the "new app" into db.
* This operation causes no conflict on (app_id, type) because the puller assigned a new app_id, treating the app as new.
* This causes an attempt to insert the new app which fails because of the (machine_id, port) constraint.
Overall, the database state should remain consistent because the second instance of the app is not added. But it creates a lot of extra work for the server, which performs these updates in parallel, and also causes a lot of traffic.
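One way to avoid these overlapping pulls is a non-blocking try-lock around each pull cycle, so that a new cycle is simply skipped while the previous one is still running. A minimal sketch, assuming a hypothetical `pullGuard` type (not the actual Stork puller API):

```go
package main

import "fmt"

// pullGuard lets only one pull run at a time. A buffered channel of
// size 1 acts as a non-blocking try-lock.
type pullGuard struct {
	busy chan struct{}
}

func newPullGuard() *pullGuard {
	return &pullGuard{busy: make(chan struct{}, 1)}
}

// tryPull runs fn only if no other pull is in progress; otherwise it
// skips this cycle and returns false.
func (g *pullGuard) tryPull(fn func()) bool {
	select {
	case g.busy <- struct{}{}: // idle: mark a pull as in progress
		defer func() { <-g.busy }() // release when fn returns
		fn()
		return true
	default: // a previous pull is still running: skip this cycle
		return false
	}
}

func main() {
	g := newPullGuard()
	g.busy <- struct{}{} // simulate a pull that has not completed yet
	fmt.Println(g.tryPull(func() {})) // false: cycle skipped
	<-g.busy
	fmt.Println(g.tryPull(func() {})) // true: runs normally
}
```

The periodic puller loop would call `tryPull` on every tick; a skipped cycle can be logged instead of racing the still-running pull.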
The solution in #409 apparently worked because it fetched the existing app by (type) rather than (type, app_id). Overall, the right approach would be to not start a state pull before the previous pull has completed. We'd need to implement some signaling for it.

Milestone: 0.14 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/444
Fix typos and line lengths in ChangeLog (2020-11-26, Marcin Siodelski)
This is a result of sanity checks for Stork 0.13.0: https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174495
There are typos in the ChangeLog: "develope" instead of "develop" in the entry 106. "eg." instead of "e.g." in entry 113. "adjuste" instead of "adjusted" in entry 114.
Also, some ChangeLog lines are too long (>73 characters) and wrap in the release notes.

Milestone: 0.14 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/212
lint_ui fails on files in src/assets/arm (2020-05-06, Marcin Siodelski)
I get the following errors while linting UI:
```
Checking formatting...
src/assets/arm/_static/basic.css
src/assets/arm/_static/css/badge_only.css
src/assets/arm/_static/css/theme.css
src/assets/arm/_static/doctools.js
src/assets/arm/_static/documentation_options.js
src/assets/arm/_static/fonts/fontawesome-webfont.eot[error] No parser could be inferred for file: src/assets/arm/_static/fonts/fontawesome-webfont.eot
...
```
The commands I used:
```
$ rake build_ui
$ rake lint_ui
```
This doesn't happen if I don't build the UI prior to linting it. So this eliminates the issue:
```
$ git clean -dfx
$ rake lint_ui
```
but I want to be able to run the linter after the UI has been built. Apparently some directories need to be excluded from linting.

Milestone: 0.7

https://gitlab.isc.org/isc-projects/stork/-/issues/239
settings can be broken when -1 is entered as interval (2020-04-16, Michal Nowikowski)
@tomek:
I misconfigured data pulling intervals. One of them was configured to 0 and another one to -1.
When I clicked Save settings, it was accepted. But something snapped internally and now Stork doesn't let me fix my "mistake". When I try to view the settings, an error message pops up: "Getting settings erred: Unknown Error" and the form is empty. Also, the Grafana link disappeared after that.
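Validation of the kind this report calls for would reject non-positive intervals before they are saved. A minimal sketch, assuming a hypothetical `validateInterval` helper and an illustrative setting name (this is not the actual Stork settings code):

```go
package main

import "fmt"

// validateInterval rejects non-positive puller intervals before they
// are persisted. Hypothetical helper; the setting name is illustrative.
func validateInterval(name string, seconds int64) error {
	if seconds <= 0 {
		return fmt.Errorf("setting %q: interval must be a positive number of seconds, got %d", name, seconds)
	}
	return nil
}

func main() {
	fmt.Println(validateInterval("apps_state_puller_interval", -1)) // non-nil error
	fmt.Println(validateInterval("apps_state_puller_interval", 30)) // <nil>
}
```

Rejecting the value at save time would also have prevented the broken state that made the settings form unloadable afterwards.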
Anyway, this is almost malicious negligence on the user's side, so I'm very much OK if you want to push this to a separate ticket and put it in the backlog category...

https://gitlab.isc.org/isc-projects/stork/-/issues/240
Remote HA partner link is not displayed in docker demo (2020-04-17, Marcin Siodelski)
Steps to reproduce:
- rake docker_up cache=false
- Open UI
- Add two machines: agent-kea-ha1 and agent-kea-ha2
- Click on one of the Kea apps added
The HA status shows two boxes, one for local server, one for remote. In the title of the remote partner's box there should be a link to the partner, but it is not shown.
The reason seems to be that Stork ends up creating two HA services for the two apps, rather than one. I suspect this is because their configs differ slightly. Strictly speaking, they differ by the following URL:
```
"url": "http://172.20.0.102:8002/"
```
vs
```
"url": "http://172.20.0.102:8002"
```
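A comparison tolerant of such trailing-slash differences could normalize both URLs before comparing them. A minimal sketch, assuming a hypothetical `sameURL` helper (not the actual Stork matching code):

```go
package main

import (
	"fmt"
	"strings"
)

// sameURL compares two URLs while ignoring an insignificant trailing
// slash. Hypothetical helper, not the actual Stork matching code.
func sameURL(a, b string) bool {
	return strings.TrimRight(a, "/") == strings.TrimRight(b, "/")
}

func main() {
	fmt.Println(sameURL("http://172.20.0.102:8002/", "http://172.20.0.102:8002")) // true
	fmt.Println(sameURL("http://172.20.0.102:8002", "http://172.20.0.103:8002")) // false
}
```

A fuller normalization (scheme and host case, default ports) could parse with net/url, but trimming the trailing slash alone would already merge the two services above.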
We could make the URL comparison slightly less fragile, but I think that should come in some other issue that looks at more aspects of HA configuration matching than URLs. Here, we should really just make it work with minimal effort, given the time constraints.

Milestone: 0.7

https://gitlab.isc.org/isc-projects/stork/-/issues/242
Filtering kea apps doesn't work on app names (2020-08-27, Tomek Mrugalski)
This is a follow-up to https://gitlab.isc.org/isc-projects/stork/-/merge_requests/123#note_125138.
The problem is on the Kea apps page. The filtering field says "version or any other field", but in fact it doesn't filter on app names. I've added agent-kea-ha1 and agent-kea-ha2. When I used "kea-ha" as the filtering string, all apps disappeared.

Milestone: 0.11

https://gitlab.isc.org/isc-projects/stork/-/issues/245
Subnet filtering bug: showing the same subnet several times (2020-05-29, Tomek Mrugalski)
There's a bug in filtering subnets. I have only agent-kea configured. It reports there are 9 subnets. I went to DHCP->Subnets and used "6" as a filtering string, hoping to see only the 192.0.6.0 subnet. However, it now shows 11 subnets, including 3 copies of 192.0.6.0.
The problem is on Kea apps page. The filtering field says "version or any other field", but in fact it doesn't filter using app names. I've added agent-kea-ha1 and agent-kea-ha2. When used "kea-ha" as filtering string, all apps disappeared.0.11https://gitlab.isc.org/isc-projects/stork/-/issues/245Subnet filtering bug: showing the same subnet several times2020-05-29T14:51:28ZTomek MrugalskiSubnet filtering bug: showing the same subnet several timesThere's a bug in filtering subnets. I have only agent-kea configured. It reports there are 9 subnets. I went to DHCP->Subnets and used "6" as a filtering string hoping to see only 192.0.6.0 subnet. However, it now shows 11 subnets includ...There's a bug in filtering subnets. I have only agent-kea configured. It reports there are 9 subnets. I went to DHCP->Subnets and used "6" as a filtering string hoping to see only 192.0.6.0 subnet. However, it now shows 11 subnets including 3 copies of 192.0.6.0.
If I use a longer filter string 0.6.0 it now limits the subnets correctly, but still shows 3 copies of 192.0.6.0 subnet.
This is what I have in the db:
```
stork=> select * from subnet;
id | created_at | prefix | shared_network_id | client_class | addr_utilization | pd_utilization
----+---------------------------+---------------+-------------------+--------------+------------------+----------------
1 | 2020-04-20 11:35:49.21212 | 192.0.5.0/24 | 1 | class-01-00 | |
2 | 2020-04-20 11:35:49.21212 | 192.0.6.0/24 | 1 | class-01-01 | |
3 | 2020-04-20 11:35:49.21212 | 192.0.7.0/24 | 1 | class-01-02 | |
4 | 2020-04-20 11:35:49.21212 | 192.0.8.0/24 | 1 | class-01-03 | |
5 | 2020-04-20 11:35:49.21212 | 192.0.9.0/24 | 1 | class-01-04 | |
6 | 2020-04-20 11:35:49.21212 | 192.1.15.0/24 | 2 | class-02-00 | |
7 | 2020-04-20 11:35:49.21212 | 192.1.16.0/24 | 2 | class-02-01 | |
8 | 2020-04-20 11:35:49.21212 | 192.1.17.0/24 | 2 | class-02-02 | |
9 | 2020-04-20 11:35:49.21212 | 192.0.2.0/24 | | class-00-00 | |
(9 rows)
```
and this is how it looks filtered:
![bug-duplicate-subnets](/uploads/93df4695f51dd1872cb44ec193bcf405/bug-duplicate-subnets.png)

Milestone: 0.8 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/248
after entering kea app page with ha status there are lots of errors in the web browser console (2020-06-03, Michal Nowikowski)
the errors:
```
ERROR TypeError: "this._receivedStatus is undefined"
```

Milestone: 0.8 | Assignee: Michal Nowikowski

https://gitlab.isc.org/isc-projects/stork/-/issues/253
paging bar elements are misaligned (2022-02-04, Michal Nowikowski)
This happens on subnets page, but can appear on other pages too.
![image](/uploads/0868b4faf7b7acc94e4e19cf49173247/image.png)

Milestone: backlog | Assignee: Michal Nowikowski

https://gitlab.isc.org/isc-projects/stork/-/issues/259
Eliminate errors printed in console (2023-03-07, Tomek Mrugalski)
There are way too many errors printed when running the demo.
Steps to reproduce:
1. `rake docker_up`
2. add agent-kea, agent-kea-ha1, agent-kea-ha2
Here's a couple of errors:
```
agent-kea-ha2_1 | ERRO[2020-04-28 11:40:41] promkeaexporter.go:488 cannot get stat from daemon: response result from Kea != 0: 1, text: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
agent-kea-ha2_1 | isc.org/stork/agent.(*PromKeaExporter).setDaemonStats
agent-kea-ha2_1 | /home/thomson/devel/stork/backend/agent/promkeaexporter.go:345
agent-kea-ha2_1 | isc.org/stork/agent.(*PromKeaExporter).collectStats
agent-kea-ha2_1 | /home/thomson/devel/stork/backend/agent/promkeaexporter.go:486
agent-kea-ha2_1 | isc.org/stork/agent.(*PromKeaExporter).statsCollectorLoop
agent-kea-ha2_1 | /home/thomson/devel/stork/backend/agent/promkeaexporter.go:313
agent-kea-ha2_1 | runtime.goexit
agent-kea-ha2_1 | /home/thomson/devel/stork/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
```
```
agent-kea_1 | time="2020-04-28T11:40:22Z" level=error msg="ERROR: diskstats collector failed after 0.001589s: invalid line for /proc/diskstats for sdh" source="collector.go:123"
agent-kea_1 | time="2020-04-28T11:40:22Z" level=error msg="error gathering metrics: collected metric node_hwmon_temp_celsius label:<name:\"chip\" value:\"thermal_thermal_zone0\" > label:<name:\"sensor\" value:\"temp0\" > gauge:<value:27.8 > has help \"Hardware monitor for temperature ()\" but should have \"Hardware monitor for temperature (input)\"
agent-kea_1 | " source="<autogenerated>:1"
agent-kea_1 | INFO[2020-04-28 11:40:22] promkeaexporter.go:444 APP &{Type:kea AccessPoints:[{Type:control Address:127.0.0.1 Port:8000 Key:}]}
```
```
agent-bind9_1 | * collected metric node_hwmon_temp_celsius label:<name:\"chip\" value:\"platform_coretemp_0\" > label:<name:\"sensor\" value:\"temp3\" > gauge:<value:41 > has help \"Hardware monitor for temperature (input)\" but should have \"Hardware monitor for temperature ()\"
agent-bind9_1 | * collected metric node_hwmon_temp_celsius label:<name:\"chip\" value:\"platform_coretemp_0\" > label:<name:\"sensor\" value:\"temp4\" > gauge:<value:43 > has help \"Hardware monitor for temperature (input)\" but should have \"Hardware monitor for temperature ()\"
agent-bind9_1 | * collected metric node_hwmon_temp_celsius label:<name:\"chip\" value:\"platform_nct6775_656\" > label:<name:\"sensor\" value:\"temp8\" > gauge:<value:0 > has help \"Hardware monitor for temperature (input)\" but should have \"Hardware monitor for temperature ()\"
agent-bind9_1 | * collected metric node_hwmon_temp_celsius label:<name:\"chip\" value:\"platform_nct6775_656\" > label:<name:\"sensor\" value:\"temp1\" > gauge:<value:44 > has help \"Hardware monitor for temperature (input)\" but should have \"Hardware monitor for temperature ()\"
```
There may be more. I think we should catch exceptions and log something more reasonable.

Milestone: 0.7 | Assignee: Michal Nowikowski

https://gitlab.isc.org/isc-projects/stork/-/issues/264
problem with parsing kea config with comments (2022-01-04, Michal Nowikowski)
When the `http-port` setting in the Kea config is commented out, the agent will incorrectly take it as a valid setting.

Milestone: 1.1 | Assignee: Slawek Figiel

https://gitlab.isc.org/isc-projects/stork/-/issues/329
Enable authentication for the traffic generators (2020-07-09, Marcin Siodelski)
Now that we have fixed a bug that allowed unauthorized access to some Stork views, we need to modify the DHCP and DNS traffic generators to create a session (log in) prior to getting the configured DHCP subnets from the server. Without this, the traffic generators fail to get the list of subnets, which makes them unusable.
The useful reference I got from @godfryd was: https://requests.readthedocs.io/en/master/user/advanced/

Milestone: 0.9 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/364
App refresh mechanism keeps tossing information about log files (2020-08-12, Marcin Siodelski)
In order to reproduce the bug:
- Start with rake docker_up
- Add agent-kea
- Navigate to the Kea app
- Click on the log file which will take you to the log viewer
- Click refresh button
You're going to see an error message saying that the log file with the given ID doesn't exist. It seems like we have some background task updating the app information, which keeps tossing the log files and re-adding them with a different ID. If you now:
- click back in the browser
- click again on the same log file
you will see that the log file ID has changed. That's what seems to be causing the error message to be displayed.

Milestone: 0.10 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/305
error while pulling reservations by server from agent (2020-06-22, Michal Nowikowski)
The problem appears in a subsequent fetch of reservations when the kea-dhcp4 daemon is stopped.
```
ERRO[2020-05-31 11:14:47] host.go:88 error occurred while fetching hosts from app {ID:1 CreatedAt:2020-05-31 09:14:02.279493 +0000 UTC MachineID:1 Machine:0xc000844160 Type:kea Active:false Meta:{Version:1.7.3 ExtendedVersion:} AccessPoints:[0xc0009b20f0] Daemons:[0xc0007c2fc0 0xc0007c3080 0xc0007c3140 0xc0007c3200]}: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:253
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:366
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:546
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:85
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/home/godfryd/isc/repos/stork-dashb/backend/server/agentcomm/puller.go:93
runtime.goexit
/home/godfryd/isc/repos/stork-dashb/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
WARN[2020-05-31 11:14:47] statspuller.go:49 missing key total-addreses in LocalSubnet 15 stats
INFO[2020-05-31 11:14:47] host.go:101 completed pulling hosts from Kea apps: 0/1 succeeded
ERRO[2020-05-31 11:14:47] puller.go:95 errors were encountered while pulling data from Kea apps: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:253
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:366
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:546
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/home/godfryd/isc/repos/stork-dashb/backend/server/apps/kea/host.go:85
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/home/godfryd/isc/repos/stork-dashb/backend/server/agentcomm/puller.go:93
runtime.goexit
/home/godfryd/isc/repos/stork-dashb/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
```

Milestone: 0.9 | Assignee: Marcin Siodelski

https://gitlab.isc.org/isc-projects/stork/-/issues/310
wrong or no params in user login crashes stork server (2020-09-09, Michal Nowikowski)
```
Jun 05 06:00:39 stork-server stork-server[3800]: 2020/06/05 06:00:39 http: panic serving 127.0.0.1:47870: runtime error: invalid memory address or nil pointer dereference
Jun 05 06:00:39 stork-server stork-server[3800]: goroutine 39 [running]:
Jun 05 06:00:39 stork-server stork-server[3800]: net/http.(*conn).serve.func1(0xc0000adea0)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:1767 +0x139
Jun 05 06:00:39 stork-server stork-server[3800]: panic(0xbe2e60, 0x134ee80)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/runtime/panic.go:679 +0x1b2
Jun 05 06:00:39 stork-server stork-server[3800]: isc.org/stork/server/restservice.(*RestAPI).CreateSession(0xc000128900, 0xe29ac0, 0xc0004085d0, 0xc0001d4b00, 0x0, 0x0, 0xc000408540, 0xe29ac0)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/backend/server/restservice/restusers.go:56 +0x57
Jun 05 06:00:39 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi.HandlerAPI.func3(0xc0001d4b00, 0x0, 0x0, 0xc000408501, 0xc00069a280)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/backend/server/gen/restapi/configure_stork.go:144 +0x6e
Jun 05 06:00:39 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.CreateSessionHandlerFunc.Handle(0xc0001aa150, 0xc0001d4b00, 0x0, 0x0, 0xc00069a280, 0x0)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/create_session.go:19 +0x44
Jun 05 06:00:39 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.(*CreateSession).ServeHTTP(0xc0001c7c40, 0xe25f00, 0xc000518340, 0xc0001d4b00)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/create_session.go:54 +0x176
Jun 05 06:00:39 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.NewOperationExecutor.func1(0xe25f00, 0xc000518340, 0xc0001d4b00)
Jun 05 06:00:39 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/operation.go:28 +0x75
Jun 05 06:00:39 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0000abd00, 0xe25f00, 0xc000518340, 0xc0001d4b00)
Jun 05 06:00:39 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:00:39 stork-server stork-server[3800]: github.com/alexedwards/scs/v2.(*SessionManager).LoadAndSave.func1(0xe27540, 0xc0005341c0, 0xc0001d4a00)
Jun 05 06:00:39 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/alexedwards/scs/v2@v2.2.0/session.go:136 +0x205
Jun 05 06:00:39 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0001c7ee0, 0xe27540, 0xc0005341c0, 0xc0001d4a00)
```

Milestone: 0.10

https://gitlab.isc.org/isc-projects/stork/-/issues/311
wrong params to create user crashes stork server (2020-07-08, Michal Nowikowski)
```
Jun 05 06:15:41 stork-server stork-server[3800]: 2020/06/05 06:15:41 http: panic serving 127.0.0.1:39950: runtime error: invalid memory address or nil pointer dereference
Jun 05 06:15:41 stork-server stork-server[3800]: goroutine 97 [running]:
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.(*conn).serve.func1(0xc0001546e0)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:1767 +0x139
Jun 05 06:15:41 stork-server stork-server[3800]: panic(0xbe2e60, 0x134ee80)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/runtime/panic.go:679 +0x1b2
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/restservice.(*RestAPI).CreateUser(0xc000128900, 0xe29ac0, 0xc00051eed0, 0xc0006de300, 0x0, 0xc00015b010, 0xe29ac0)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/restservice/restusers.go:155 +0x3a
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi.HandlerAPI.func4(0xc0006de300, 0x0, 0xb98c60, 0xc00015b010, 0xc0006d3401, 0xc00015b000)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/gen/restapi/configure_stork.go:149 +0xbc
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.CreateUserHandlerFunc.Handle(0xc0001aa230, 0xc0006de300, 0x0, 0xb98c60, 0xc00015b010, 0x0, 0x0)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/create_user.go:19 +0x4e
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.(*CreateUser).ServeHTTP(0xc0001c7c60, 0xe25f00, 0xc000019ac0, 0xc0006de300)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/create_user.go:69 +0x28d
Jun 05 06:15:41 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.NewOperationExecutor.func1(0xe25f00, 0xc000019ac0, 0xc0006de100)
Jun 05 06:15:41 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/operation.go:28 +0x75
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0000abd00, 0xe25f00, 0xc000019ac0, 0xc0006de100)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: github.com/alexedwards/scs/v2.(*SessionManager).LoadAndSave.func1(0xe27540, 0xc000534460, 0xc00043aa00)
Jun 05 06:15:41 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/alexedwards/scs/v2@v2.2.0/session.go:136 +0x205
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0001c7ee0, 0xe27540, 0xc000534460, 0xc00043aa00)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.NewRouter.func1(0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/router.go:76 +0x356
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000275aa0, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.Redoc.func1(0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/redoc.go:72 +0x286
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000528340, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.Spec.func1(0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/spec.go:46 +0x188
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000528380, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/restservice.fileServerMiddleware.func1(0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/restservice/middleware.go:51 +0x85
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc00051f7d0, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: isc.org/stork/server/restservice.loggingMiddleware.func1(0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/backend/server/restservice/middleware.go:32 +0x32e
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000275fe0, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.serverHandler.ServeHTTP(0xc00039b500, 0xe27540, 0xc000534460, 0xc00043a800)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2802 +0xa4
Jun 05 06:15:41 stork-server stork-server[3800]: net/http.(*conn).serve(0xc0001546e0, 0xe29a00, 0xc00011e140)
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:1890 +0x875
Jun 05 06:15:41 stork-server stork-server[3800]: created by net/http.(*Server).Serve
Jun 05 06:15:41 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2928 +0x384
```

Milestone: 0.10

https://gitlab.isc.org/isc-projects/stork/-/issues/312
getting users is crashing when no params passed (2020-07-08, Michal Nowikowski)
```
Jun 05 06:49:47 stork-server stork-server[3800]: 2020/06/05 06:49:47 http: panic serving 127.0.0.1:57510: runtime error: invalid memory address or nil pointer dereference
Jun 05 06:49:47 stork-server stork-server[3800]: goroutine 214 [running]:
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.(*conn).serve.func1(0xc000154820)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:1767 +0x139
Jun 05 06:49:47 stork-server stork-server[3800]: panic(0xbe2e60, 0x134ee80)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/runtime/panic.go:679 +0x1b2
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/restservice.(*RestAPI).GetUsers(0xc000128900, 0xe29ac0, 0xc00051eba0, 0xc00070cd00, 0x0, 0x0, 0x0, 0xc00051eba0, 0xe29ac0)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/restservice/restusers.go:92 +0x46
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi.HandlerAPI.func19(0xc00070cd00, 0x0, 0x0, 0x0, 0xb98c60, 0xc00015a140, 0xc000733401, 0xc0002f67e0)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/gen/restapi/configure_stork.go:211 +0xd6
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.GetUsersHandlerFunc.Handle(0xc0001aaa80, 0xc00070cd00, 0x0, 0x0, 0x0, 0xb98c60, 0xc00015a140, 0x0, 0xc00015a0f0)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/get_users.go:19 +0x65
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/gen/restapi/operations/users.(*GetUsers).ServeHTTP(0xc0001c7e40, 0xe25f00, 0xc0000e1c80, 0xc00070cd00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/gen/restapi/operations/users/get_users.go:69 +0x29f
Jun 05 06:49:47 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.NewOperationExecutor.func1(0xe25f00, 0xc0000e1c80, 0xc00070cb00)
Jun 05 06:49:47 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/operation.go:28 +0x75
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0000abd00, 0xe25f00, 0xc0000e1c80, 0xc00070cb00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: github.com/alexedwards/scs/v2.(*SessionManager).LoadAndSave.func1(0xe27540, 0xc0005342a0, 0xc0001d4d00)
Jun 05 06:49:47 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/alexedwards/scs/v2@v2.2.0/session.go:136 +0x205
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc0001c7ee0, 0xe27540, 0xc0005342a0, 0xc0001d4d00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.NewRouter.func1(0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/router.go:76 +0x356
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000275aa0, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.Redoc.func1(0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/redoc.go:72 +0x286
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000528340, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: github.com/go-openapi/runtime/middleware.Spec.func1(0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /root/go/pkg/mod/github.com/go-openapi/runtime@v0.19.6/middleware/spec.go:46 +0x188
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000528380, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/restservice.fileServerMiddleware.func1(0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/restservice/middleware.go:51 +0x85
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc00051f7d0, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: isc.org/stork/server/restservice.loggingMiddleware.func1(0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/backend/server/restservice/middleware.go:32 +0x32e
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.HandlerFunc.ServeHTTP(0xc000275fe0, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2007 +0x44
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.serverHandler.ServeHTTP(0xc00039b500, 0xe27540, 0xc0005342a0, 0xc0001d4b00)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2802 +0xa4
Jun 05 06:49:47 stork-server stork-server[3800]: net/http.(*conn).serve(0xc000154820, 0xe29a00, 0xc00011e740)
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:1890 +0x875
Jun 05 06:49:47 stork-server stork-server[3800]: created by net/http.(*Server).Serve
Jun 05 06:49:47 stork-server stork-server[3800]: /build/tools/1.13.5/go/src/net/http/server.go:2928 +0x384
```
(milestone 0.10)

https://gitlab.isc.org/isc-projects/stork/-/issues/313
Docker up fails on rebuild — Marcin Siodelski, 2020-06-12

The original problem was found here: https://gitlab.isc.org/isc-projects/stork/-/merge_requests/139#note_13813
I ran this on macOS. After switching between the branches containing code differences, I got build failures. Apparently, the docker_up task does not rebuild the code when necessary.
(milestone 0.9; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/316
agent crashes when it encounters unknown stat from kea — Michal Nowikowski, 2020-06-10

https://support.isc.org/Ticket/Display.html?id=16582
(milestone 0.8; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/326
prombind9exporter does not unregister all collectors — Michal Nowikowski, 2020-07-10

Currently it registers:
```go
pbe.Registry.MustRegister(version.NewCollector("bind_exporter"))
pbe.Registry.MustRegister(pbe)
if bind9Pid > 0 {
procExporter := prometheus.NewProcessCollector(
prometheus.ProcessCollectorOpts{
PidFn: func() (int, error) {
return int(bind9Pid), nil
},
Namespace: namespace,
})
pbe.Registry.MustRegister(procExporter)
}
```
but only deregisters:
```go
// unregister bind9 counters from prometheus framework
pbe.Registry.Unregister(pbe)
```
(milestone 0.9; assignee Matthijs Mekking)

https://gitlab.isc.org/isc-projects/stork/-/issues/327
Handle HTTP 403 errors for unauthorized users — Marcin Siodelski, 2020-06-24

This is the follow-up ticket to #119. It was found that when the session is destroyed (e.g. removed from the database) but the session information is still stored in the local storage (the user didn't log off explicitly via the UI), the user gets redirected to the forbidden page when trying to navigate to the login page. This is because the login page sends some REST calls to the server, and the server apparently returns error 403 for the unauthorized user, rather than 401. This case has to be handled: if the session doesn't exist, the user must not navigate to the forbidden page, even upon receiving error 403.
(milestone 0.9; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/330
Blank page after logging — Marcin Siodelski, 2020-07-09

Reproduction steps:
- Login to the system and display dashboard page.
- Login to the database and remove a session from the sessions table
- Refresh the page and you should be taken to the login page
- Login to the system again
- Blank page should be displayed
It appears that the dashboard component displays its contents only when some stats are available. The stats are gathered upon page refresh or when the 30-minute interval elapses. When we log in to the system there is no page refresh, so the stats are not gathered. This seems to cause the page to remain blank.
(milestone 0.9; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/331
Issues with initial fetch of user groups — Marcin Siodelski, 2020-07-09

When I was reviewing #310 I found that system groups are not properly fetched by the UI. When I entered the users view, the drop-down that normally allows selecting which group the new user should belong to was empty. Also, the non-admin user had an 'unknown' group assigned in the users list. It seems there was a race between services in getting the list of groups.
The viable solution seems to be to move the initialization of the groups to the `ServerDataService`.
(milestone 0.9)

https://gitlab.isc.org/isc-projects/stork/-/issues/334
upgrading rpm with stork-agent from 0.8 to 0.9 fails — Michal Nowikowski, 2020-08-11

```
useradd: user 'stork-agent' already exists
```
and
```
usermod: user 'stork-agent' does not exist
```
https://support.isc.org/Ticket/Display.html?id=16776
(milestone 0.10; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/429
SSE events from HA2 are shown on HA1. — Tomek Mrugalski, 2020-12-07

As reported here https://gitlab.isc.org/isc-projects/stork/-/issues/426#note_169307 by @godfryd:
`SSE` events from `HA2` are shown on `HA1`.
Repro:
1. Add machines `agent-kea-ha1` and `agent-kea-ha2`
2. Kill DHCPv4 daemon on `agent-kea-ha2` using Stork Env Simulator on http://localhost:5000/
3. Observe list of events on Kea app on `agent-kea-ha1`
4. After a minute or 2 there are automatically added events about Kea from `agent-kea-ha2`
When the page is refreshed with the F5 key, only these Kea events are presented.
(milestone 0.14; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/433
Stork 0.12.0 doesn't show DHCPv4 statistics — ahiya zadok, 2020-12-01

Hi,
I've installed the Stork agent/server to monitor my Kea DHCP server (1.8.0).
All components can communicate and the Kea apps' status is green, both dhcp4 and ca.
I can see subnets and reserved IPs, but I can't see any statistics.
Any idea how to fix this issue?
Thanks
(milestone 0.14; assignee Tomek Mrugalski)

https://gitlab.isc.org/isc-projects/stork/-/issues/351
DDNS daemon is not detected properly — Tomek Mrugalski, 2020-08-12

The DDNS daemon was not detected properly. Here's what I did:
1. `rake docker_up`
2. `docker-compose exec agent-kea bash`
3. `apt install isc-kea-ddns-server`
4. `kea-dhcp-ddns -c /etc/kea/kea-dhcp-ddns.conf`
The server had started in the console and was running:
```
root@agent-kea:/agent# kea-dhcp-ddns -c /etc/kea/kea-dhcp-ddns.conf
2020-07-20 15:42:32.524 INFO [kea-dhcp-ddns.dctl/218.139649607673728] DCTL_STARTING DhcpDdns starting, pid: 218, version: 1.7.9 (development)
2020-07-20 15:42:32.524 WARN [kea-dhcp-ddns.dctl/218.139649607673728] DCTL_DEVELOPMENT_VERSION This software is a development branch of Kea. It is not recommended for production use.
INFO COMMAND_ACCEPTOR_START Starting to accept connections via unix domain socket bound to /tmp/ddns-ctrl-socket
INFO DCTL_CONFIG_COMPLETE server has completed configuration: listening on 127.0.0.1, port 53001, using UDP
INFO DHCP_DDNS_STARTED Kea DHCP-DDNS server version 1.7.9 started
```
Refreshing the app and the machine didn't fix this. The DDNS daemon was still labeled as not detected.
(milestone 0.10)

https://gitlab.isc.org/isc-projects/stork/-/issues/359
Link to log viewer doesn't work — Tomek Mrugalski, 2020-08-10

On the latest master (8fa945083e9fb4a1ce6c87f72ab89fcbd21baf81), I have clean Stork running, using rake docker_up.
It used to work, but right now the list of logs files is not clickable anymore.
Perhaps this is a failed rebase/merge problem?
Screenshot
![Screenshot_2020-08-04_at_19.15.26](/uploads/d7fd0446ee8bf2af6b3bdf0213320978/Screenshot_2020-08-04_at_19.15.26.png)
(milestone 0.10; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/360
UI can't retrieve data (server side events broken?) — Tomek Mrugalski, 2020-08-10

On the latest master (8fa945083e9fb4a1ce6c87f72ab89fcbd21baf81), I have clean Stork running, using rake docker_up.
I've observed a plethora of problems:
1. added agent-kea, which initially worked ok, but later (after couple mins) reported comm problem with the dhcpv4 and ca daemons ("There is observed issue in communication with the daemon.", see screenshot 1)
1. added agent-kea-ha1, agent-kea-ha2, they report losing communication. ha1 becomes unreachable, ha2 reports partner down (see screenshot 2)
1. the RPS and pool utilization is not updated anymore (clicking refresh buttons in the ui, pressing ctrl-r doesn't change a thing, see screenshot 3)
1. the daemons status is broken (shows all as grey no-entry sign, but when you hover your cursor over it, some say the communication is ok, see screenshot 4)
I'm reporting all of those in a single issue, because I believe most of them (maybe except the log viewer) can be explained with a problem with server side events. More on this in the first comment.
SCREENSHOT 1 (lost comm with dhcpv4 and ca running on agent-kea)
![Screenshot_2020-08-04_at_19.10.28](/uploads/928c65b9fe461d7f5cc90addab65bb89/Screenshot_2020-08-04_at_19.10.28.png)
SCREENSHOT 2 (ha1 lost comm with its partner)
![Screenshot_2020-08-04_at_19.13.27](/uploads/74f21272b7ad6adafc00523efe25ffee/Screenshot_2020-08-04_at_19.13.27.png)
SCREENSHOT 3 (dashboard not updated)
![Screenshot_2020-08-04_at_19.17.15](/uploads/d7ac2e1fccf7c08a1992fa6c30a94432/Screenshot_2020-08-04_at_19.17.15.png)
SCREENSHOT 4 (broken app status update)
![Screenshot_2020-08-04_at_19.18.54](/uploads/6b8e39c75637771cb5e72aeb24d99a99/Screenshot_2020-08-04_at_19.18.54.png)
(milestone 0.10; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/371
The list of most active subnets is not dynamic — Tomek Mrugalski, 2020-09-04

You can click the refresh button to reload the pool utilization for the 5 subnets that are shown there, but those will always be the same 5 subnets, until you reload the dashboard.
How to reproduce this:
1. `rake docker_up`
1. add kea with some subnets
1. open dashboards
1. open traffic panel and generate traffic in some subnets that are not on the list.
1. wait a bit, hit refresh.
(milestone 0.12)

https://gitlab.isc.org/isc-projects/stork/-/issues/379
app status on machine tab with bind app is incorrect — Michal Nowikowski, 2020-08-31

more details here: https://gitlab.isc.org/isc-projects/stork/-/issues/370#note_154481
(milestone 0.11; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/384
Issues with daemon status presentation — Marcin Siodelski, 2020-09-02

I observed several issues with presenting the status of the Kea daemons after shutting down the agent and/or Kea daemons. The steps to reproduce the first issue:
- Start Stork Agent and Kea DHCPv4 and Agent on the machine
- Add the machine to Stork
- Make sure you can see the Kea app and you get green ticks on DHCPv4 and CA daemons and red cross on DHCPv6
- Shut down the agent. After a while you should observe communication errors and red crosses on DHCPv4 and CA
- Restart the agent and wait a while. The red crosses are still there for DHCPv4 and CA.
The situation doesn't even seem to improve when refreshing the page nor after refreshing machine/app.
Another interesting case is this:
- I had everything running and I now shut down Kea CA
- I was expecting that the communication error will show up after a while in the daemon tab, but it didn't. Nevertheless...
- I clicked on the dashboard and saw red crosses next to the Kea daemons (as expected)
- I restarted Kea CA
- I started navigating between the dashboard and the Kea app view. I still see error boxes and red crosses even though the CA is working fine
- Refreshing the page doesn't work. I still see errors.
(milestone 0.11; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/389
UI review ticket: Subnet filtering when using the MORE button from the dashboard — Vicky Risk, 2020-11-04

When clicking on the More button on the dashboard under the subnets display, it should display just the DHCPv4 or DHCPv6 subnets (depending on which More button is clicked).
(milestone 0.13; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/396
Canceling edit breaks down adding new machines — Tomek Mrugalski, 2020-10-12

Here's how to repro this problem:
1. start as usual (`rake docker_up`)
1. add machine (don't close the tab)
1. click the crayon icon near the agent-kea:8080, click cancel
1. try to add another machine
When I click on the add new machine, there's the popup (this works as intended). But once I type the name and click add, nothing happens visually and there's this error in the console.
![add-machine-fail](/uploads/c76a90bd79f0b48c7dd1814c50cf238c/add-machine-fail.png)
(milestone 0.11)

https://gitlab.isc.org/isc-projects/stork/-/issues/397
Error in the example command line starting the agent — Marcin Siodelski, 2020-10-12

The ARM says this in section 2.2.4:
```
$ sudo systemctl enable isc-stork-server
$ sudo systemctl start isc-stork-server
```
but should be:
```
$ sudo systemctl enable isc-stork-agent
$ sudo systemctl start isc-stork-agent
```
(milestone 0.12; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/408
system tests end with error — Michal Nowikowski, 2020-09-16

This occurs in pylxd 2.2.10 that we are using.
It seems to be addressed in 2.2.11 so upgrading should help.
```python
_____________________ test_machines[centos/7-ubuntu/18.04] _____________________
agent_distro = 'centos/7', server_distro = 'ubuntu/18.04'
@pytest.mark.parametrize("agent_distro,server_distro", SUPPORTED_DISTROS)
def test_machines(agent_distro, server_distro):
> s, a = prepare_one_server_and_agent(agent_distro, server_distro)
tests.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests.py:20: in prepare_one_server_and_agent
s.setup_wait()
containers.py:171: in setup_wait
raise e
containers.py:175: in setup
self._setup(*args)
containers.py:333: in _setup
self.prepare_stork_server(pkg_ver)
containers.py:324: in prepare_stork_server
self.run('bash -c "ps axu|grep isc"')
containers.py:144: in run
result = self.cntr.execute(cmd2, env)
venv/lib/python3.6/site-packages/pylxd/models/container.py:440: in execute
manager.close_all()
venv/lib/python3.6/site-packages/ws4py/manager.py:345: in close_all
ws.close(code=code, reason=message)
venv/lib/python3.6/site-packages/ws4py/client/__init__.py:205: in close
self._write(self.stream.close(code=code, reason=reason).single(mask=True))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pylxd.models.container._CommandWebsocketClient object at 0x7f22c98c6ac8>
b = b'\x88\x99\xd5E\x02e\xd6\xacQ\x00\xa73g\x17\xf5,qE\xa6-w\x11\xa1,l\x02\xf5!m\x12\xbb'
def _write(self, b):
"""
Trying to prevent a write operation
on an already closed websocket stream.
This cannot be bullet proof but hopefully
will catch almost all use cases.
"""
if self.terminated or self.sock is None:
raise RuntimeError("Cannot send on a terminated websocket")
> self.sock.sendall(b)
E BrokenPipeError: [Errno 32] Broken pipe
venv/lib/python3.6/site-packages/ws4py/websocket.py:285: BrokenPipeError
```
(milestone 0.12; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/411
Stork cannot parse named.conf with multiple allow addresses in inet_spec — Matthijs Mekking, 2020-09-22

From the mailing list:
I am testing out using Stork for BIND9 in a lab environment. I can connect to the Stork Agent running on the BIND server without any problems, but Stork never shows the DNS service as running. I have verified that BIND is running.
When I check messages on the BIND server, I see the following:
```
Sep 21 14:46:22 bind_server stork-agent: #033[33mWARN#033[0m[2020-09-21 14:46:22] bind9.go:284 cannot parse BIND 9 statistics-channels clause
Sep 21 14:46:32 bind_server stork-agent: #033[33mWARN#033[0m[2020-09-21 14:46:32] bind9.go:91 cannot parse BIND 9 inet configuration: no match (controls {
```
The config syntax is valid from BIND's perspective. Does Stork have requirements above and beyond that?
Thanks!
Details:
BIND: 9.11.4
Stork: 0.11.0
Controls Clause of bind config:
```
controls {
inet 127.0.0.1 allow { localhost; };
inet * allow {localhost; 10.50.0.100; 10.50.0.105; };
};
```
Statistics Channel clause of bind config:
```
statistics-channels {
inet 10.50.0.105 port 80 allow { localhost; 10.50.0.100; 10.50.0.105; };
};
```
(milestone 0.12)

https://gitlab.isc.org/isc-projects/stork/-/issues/398
Stork cannot connect large Kea installation — Peter Davies, 2023-06-12

---
name: Stork cannot connect large Kea installation
about: Stork 0.10 is unable to connect to the agent when there is a large number of subnets defined in Kea (1.8).
---
I installed Kea 1.8.0 and Stork 0.10 today, and noticed that the Stork server can't connect to my large instances (with ~4500 subnets); I see the following errors logged in the Events tab:
grpc manager is unable to re-establish connection with the agent 192.168.1.5:8080: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (4607854 vs. 4194304)
RT [#17072](https://support.isc.org/Ticket/Modify.html?id=17072)
## Comments:
According to the gRPC doc there are two functions
to set maximum sizes:
- MaxCallRecvMsgSize
- MaxCallSendMsgSize
The default seems to be 4MB (confirmed by the error message).
Another idea is to use compression: JSON should
be very easy to compress efficiently...
(milestone 0.13; assignee Michal Nowikowski)

https://gitlab.isc.org/isc-projects/stork/-/issues/421
pulling periodically information by server from kea with 4500 subnets consumes whole server's CPU — Michal Nowikowski, 2020-11-24

After adding a machine with Kea configured with 4500 subnets, the server starts to consume the whole CPU.
The first observations indicate that processing large responses on the Stork server side takes much time.
(milestone 0.14; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/425
Spurious directory created when running rake install_server — Marcin Siodelski, 2020-10-12

As pointed out in the following comment: https://gitlab.isc.org/isc-projects/stork/-/issues/395#note_160894
there is empty directory created when running `rake install_server`.
The original report:
Extra directory. I did install stork the following way:
```
rake install_server DESTDIR=/home/thomson/stork-0.11
```
The contents were installed properly, but there was an extra directory created: `/home/thomson/stork-0.11/home/thomson/devel/stork-0.11.0/tools/node-v12.16.2-linux-x64`
The dir is empty, though. There's a bug somewhere in the install scripts.
(milestone 0.12; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/445
Crashes observed in traffic simulator in 0.13 — Marcin Siodelski, 2022-02-04

This is a result of sanity checks for 0.13.0. See the following comment: https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174569
It was pointed out that:
"After using Stork Demo for a while I wanted to try Stork Environment Simulator. I noticed that it crashes on http://localhost:5000/services URL."
With the following stack trace:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/sim/sim.py", line 309, in get_services
data = _get_services()
File "/sim/sim.py", line 284, in _get_services
machines = r.json()['items']
KeyError: 'items'
```
(milestone 1.0-backlog)

https://gitlab.isc.org/isc-projects/stork/-/issues/447
Cannot fetch machine state when using IPv6 address — Marcin Siodelski, 2021-12-02

This is a result of Stork 0.13.0 sanity checks: https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174641
@godfryd pointed out that: "Adding agent to server using IPv6 address does not work well. Server cannot fetch machine state."
Need to investigate it further.
(milestone 1.0; assignee Slawek Figiel)

https://gitlab.isc.org/isc-projects/stork/-/issues/465
Logrus must be upgraded after go upgrade to 1.15 — Marcin Siodelski, 2020-12-07

After the upgrade of Go to 1.15 there is a regression in the logrus logging library. We need to upgrade logrus to circumvent this. See for reference: https://github.com/sirupsen/logrus/issues/1096
(milestone 0.14; assignee Marcin Siodelski)

https://gitlab.isc.org/isc-projects/stork/-/issues/484
Committing Kea app to the database may hang — Marcin Siodelski, 2021-03-02

When I was testing #483, I came across an issue described in the following comment: https://gitlab.isc.org/isc-projects/stork/-/merge_requests/267#note_195348.
In order to reproduce, follow these steps:
- Start Stork server,
- Make sure...When I was testing #483, I came across an issue described in the following comment: https://gitlab.isc.org/isc-projects/stork/-/merge_requests/267#note_195348.
In order to reproduce, follow these steps:
- Start Stork server,
- Make sure that no Kea app runs on the machine with an agent,
- Launch agent registration procedure,
- Approve agent registration in the UI,
- Start Kea app on the monitored machine and allow some time for the agent to discover the Kea app,
- Navigate to the machine page and click "Get Latest State"
The request to get state should hang (not return status 200) and the app should be neither visible in the UI nor in the database. The database transaction committing the app to the database should hang.
As I explained in the following comment: https://gitlab.isc.org/isc-projects/stork/-/merge_requests/267#note_195373, the issue appears to be related to committing events to the database outside of an open transaction. We may consider committing the events within the transaction, but we should investigate why exactly it hangs to avoid this issue in the future.https://gitlab.isc.org/isc-projects/stork/-/issues/495Prevent getting apps state for unauthorized machine2021-02-26T18:32:56ZMarcin SiodelskiPrevent getting apps state for unauthorized machineThis is a result of the following comment https://gitlab.isc.org/isc-projects/stork/-/merge_requests/272#note_196568.
As Tomek pointed out, if you click on the unauthorized machine you're taken to the same view as in case of authorized ...This is a result of the following comment https://gitlab.isc.org/isc-projects/stork/-/merge_requests/272#note_196568.
As Tomek pointed out, if you click on the unauthorized machine you're taken to the same view as in case of authorized machines. There used to be a button Get Latest State which, if clicked, would fetch apps information regardless if the machine is authorized or not. The button was removed for unauthorized machines in #485, but it is still possible to fetch the state via REST. I think it should be secured at the REST level, i.e. when the machine is unauthorized we should not fetch apps state.https://gitlab.isc.org/isc-projects/stork/-/issues/504Agent host command line option is broken in the 0.15.0 release2021-03-11T07:59:03ZMarcin SiodelskiAgent host command line option is broken in the 0.15.0 releaseThe Stork Agent's ``--host`` command line option doesn't work in the 0.15.0 release. This command line option should configure the agent to use a specific IP address to receive the connections from the ``Stork Server`` over gRPC. The age...The Stork Agent's ``--host`` command line option doesn't work in the 0.15.0 release. This command line option should configure the agent to use a specific IP address to receive the connections from the ``Stork Server`` over gRPC. The agent always uses the default 0.0.0.0 address regardless of the ``--host`` setting.0.16Tomek MrugalskiTomek Mrugalskihttps://gitlab.isc.org/isc-projects/stork/-/issues/541there is no breadcrumb on events page2022-11-10T11:00:48ZMichal Nowikowskithere is no breadcrumb on events page![image](/uploads/d1c6c2d58a2143d342c71c286b21dde8/image.png)![image](/uploads/d1c6c2d58a2143d342c71c286b21dde8/image.png)1.8Andrei Pavelandrei@isc.orgAndrei Pavelandrei@isc.orghttps://gitlab.isc.org/isc-projects/stork/-/issues/536Stork agent does not honor listen-only flags2021-05-24T15:07:18Zymartin-ovhStork agent does not honor listen-only flags---
name: Bug report
about: Stork agent does not honor listen-only flags
---
**Describe the bug**
It appears that despite the --listen-*-only flags, the Stork Agent still starts the agent server.
**To Reproduce**
1. Build v0.16
2. Runs stork-agent ...---
name: Bug report
about: Stork agent does not honor listen-only flags
---
**Describe the bug**
It appears that despite the --listen-*-only flags, the Stork Agent still starts the agent server.
**To Reproduce**
1. Build v0.16
2. Run stork-agent --listen-prometheus-only
3. Stork does the following:
started serving Stork Agent address="0.0.0.0:8080"
**Expected behavior**
Only the Prometheus endpoints should be served.
**Environment:**
- Stork v0.16.0
- OS: Debian Buster
**Additional Information**
The issue seems to be located in the CLI code of the Stork Agent.
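The expected flag handling could be sketched like this (hypothetical option and function names — this is not Stork's actual CLI code):

```go
// A minimal sketch of the expected behavior: with --listen-prometheus-only
// set, the agent should not start its gRPC (Stork) listener at all.
package main

import "fmt"

// settings mirrors the two hypothetical listen-only flags.
type settings struct {
	listenStorkOnly      bool
	listenPrometheusOnly bool
}

// serversToStart decides which listeners to run for a given flag combination.
func serversToStart(s settings) (stork, prometheus bool) {
	switch {
	case s.listenStorkOnly:
		return true, false
	case s.listenPrometheusOnly:
		return false, true
	default:
		return true, true
	}
}

func main() {
	stork, prom := serversToStart(settings{listenPrometheusOnly: true})
	// With --listen-prometheus-only there should be no
	// "started serving Stork Agent" log line at all.
	fmt.Printf("stork=%v prometheus=%v\n", stork, prom)
}
```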
Regards0.18https://gitlab.isc.org/isc-projects/stork/-/issues/580Error while reading host reservations2021-11-03T10:14:38ZFabian KretschmerError while reading host reservationsHi,
we are facing a problem reading the host reservations in Stork 0.19.0 with the kea premium hook host_cmds library (tested with compiled kea 1.8.2 and 1.9.11).
Using the API, multiple hosts reservations were successfully added to a M...Hi,
we are facing a problem reading host reservations in Stork 0.19.0 with the Kea premium host_cmds hook library (tested with compiled Kea 1.8.2 and 1.9.11).
Using the API, multiple host reservations were successfully added to a MySQL database backend. The library is loaded successfully, the Kea server is registered in Stork, and the agent is running, so everything looks fine up to this point.
On the DHCP host reservations page in Stork, no reservations are displayed. At the same time, an error is logged in both Stork and Kea; please have a look at the logs below. They look the same for both Kea versions, and the error is reproducible.
Please let me know if you need any additional information; any help would be appreciated.
Kind regards,
Fabian
**Environment:**
- Kea version: 1.8.2 / 1.9.11 with Stork 0.19.0
- OS: Ubuntu 18.04 Docker Container
- Hooks loaded: libdhcp_host_cmds.so, libdhcp_stat_cmds.so, libdhcp_lease_cmds.so
**Logs:**
The reservation-add command logs look like this:
```
INFO COMMAND_RECEIVED Received command 'reservation-add'
INFO HOST_CMDS_RESERV_ADD reservation-add command successful (parameters: { "reservation": { "boot-file-name": "bootfile.efi", "hostname": "HOST1", "hw-address": "00:11:22:33:44:55", "ip-address": "192.168.10.100", "subnet-id": 10, "user-context": "{key': 'value'}" } })
INFO CTRL_AGENT_COMMAND_FORWARDED command reservation-add successfully forwarded to the service dhcp4
```
Kea Server:
`ERROR HOOKS_CALLOUT_ERROR error returned by callout on hook $reservation_get_page registered by library with index 3 (callout address 0x7ff88302e410) (callout duration 18.124 ms)`
Stork Server:
```
INFO[2021-09-03 07:38:44] eventcenter.go:117 event 'communication with <daemon id="12" name="dhcp4" appId="3" appType="kea"> of <app id="3" name="kea@172.17.0.3" type="kea" version="1.9.11"> failed'
ERRO[2021-09-03 07:38:44] host.go:87 error occurred while fetching hosts from app 3: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/tmp/build/backend/server/apps/kea/host.go:254
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:356
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:536
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/tmp/build/backend/server/apps/kea/host.go:84
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/tmp/build/backend/server/agentcomm/puller.go:169
runtime.goexit
/tmp/build/tools/1.15.5/go/src/runtime/asm_amd64.s:1374
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
INFO[2021-09-03 07:38:44] host.go:100 completed pulling hosts from Kea apps: 0/1 succeeded
ERRO[2021-09-03 07:38:44] puller.go:172 errors were encountered while pulling data from apps: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/tmp/build/backend/server/apps/kea/host.go:254
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:356
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:536
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/tmp/build/backend/server/apps/kea/host.go:84
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/tmp/build/backend/server/agentcomm/puller.go:169
runtime.goexit
/tmp/build/tools/1.15.5/go/src/runtime/asm_amd64.s:1374
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
INFO[2021-09-03 06:37:47] eventcenter.go:117 event 'communication with <daemon id="3" name="dhcp4" appId="1" appType="kea"> of <app id="1" name="kea@172.17.0.3" type="kea" version="1.8.2"> failed'
ERRO[2021-09-03 06:37:47] host.go:87 error occurred while fetching hosts from app 1: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/tmp/build/backend/server/apps/kea/host.go:254
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:356
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:536
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/tmp/build/backend/server/apps/kea/host.go:84
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/tmp/build/backend/server/agentcomm/puller.go:169
runtime.goexit
/tmp/build/tools/1.15.5/go/src/runtime/asm_amd64.s:1374
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
INFO[2021-09-03 06:37:47] host.go:100 completed pulling hosts from Kea apps: 0/1 succeeded
ERRO[2021-09-03 06:37:47] puller.go:172 errors were encountered while pulling data from apps: error returned by Kea in response to reservation-get-page command
isc.org/stork/server/apps/kea.(*HostDetectionIterator).sendReservationGetPage
/tmp/build/backend/server/apps/kea/host.go:254
isc.org/stork/server/apps/kea.(*HostDetectionIterator).DetectHostsPageFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:356
isc.org/stork/server/apps/kea.updateHostsFromHostCmds
/tmp/build/backend/server/apps/kea/host.go:536
isc.org/stork/server/apps/kea.(*HostsPuller).pullData
/tmp/build/backend/server/apps/kea/host.go:84
isc.org/stork/server/agentcomm.(*PeriodicPuller).pullerLoop
/tmp/build/backend/server/agentcomm/puller.go:169
runtime.goexit
/tmp/build/tools/1.15.5/go/src/runtime/asm_amd64.s:1374
problem with sending reservation-get-page command upon attempt to detect host reservations over the host_cmds hooks library
```
Edit: Here's a more verbose (debug level 99) output from the Kea server:
```
WARN[2021-09-07 13:32:42] promkeaexporter.go:368 problem with connecting to dhcp daemon: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
DEBUG DHCPSRV_MYSQL_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.581 ms
DEBUG ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: flush-reclaimed-leases
DEBUG ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE begin deletion of reclaimed leases expired more than 3600 seconds ago
DEBUG DHCPSRV_MYSQL_DELETE_EXPIRED_RECLAIMED4 deleting reclaimed IPv4 leases that expired more than 3600 seconds ago
DEBUG DHCPSRV_MYSQL_DELETED_EXPIRED_RECLAIMED deleted 0 reclaimed leases from the database
DEBUG ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE_COMPLETE successfully deleted 0 expired-reclaimed leases
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: flush-reclaimed-leases
INFO COMMAND_RECEIVED Received command 'reservation-get-page'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 128 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'reservation-get-page'
DEBUG HOOKS_CALLOUTS_BEGIN begin all callouts for hook $reservation_get_page
DEBUG HOOKS_CALLOUT_CALLED hooks library with index 3 has called a callout on hook $reservation_get_page that has address 0x7f176473c4a0 (callout duration: 4.296 ms)
DEBUG HOOKS_CALLOUTS_COMPLETE completed callouts for hook $reservation_get_page (total callouts duration: 4.296 ms)
DEBUG COMMAND_SOCKET_WRITE Sent response of 92 bytes (0 bytes left to send) over command socket 23
INFO CTRL_AGENT_COMMAND_FORWARDED command reservation-get-page successfully forwarded to the service dhcp4
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO[2021-09-07 13:32:50] agent.go:375 Compressing response from 96 B to 108 B, ratio 112%
INFO COMMAND_RECEIVED Received command 'stat-lease4-get'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 56 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'stat-lease4-get'
DEBUG HOOKS_CALLOUTS_BEGIN begin all callouts for hook $stat_lease4_get
INFO STAT_CMDS_LEASE4_GET stat-lease4-get command successful, parameters: [all subnets] rows found: 1
DEBUG HOOKS_CALLOUT_CALLED hooks library with index 2 has called a callout on hook $stat_lease4_get that has address 0x7f17649600c0 (callout duration: 0.431 ms)
DEBUG HOOKS_CALLOUTS_COMPLETE completed callouts for hook $stat_lease4_get (total callouts duration: 0.431 ms)
INFO CTRL_AGENT_COMMAND_FORWARDED command stat-lease4-get successfully forwarded to the service dhcp4
DEBUG COMMAND_SOCKET_WRITE Sent response of 303 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO[2021-09-07 13:32:50] agent.go:375 Compressing response from 307 B to 212 B, ratio 69%
INFO COMMAND_RECEIVED Received command 'statistic-get'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 96 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'statistic-get'
DEBUG COMMAND_SOCKET_WRITE Sent response of 90 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command statistic-get successfully forwarded to the service dhcp4
INFO[2021-09-07 13:32:50] agent.go:375 Compressing response from 94 B to 110 B, ratio 117%
INFO COMMAND_RECEIVED Received command 'reservation-get-page'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 129 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'reservation-get-page'
DEBUG HOOKS_CALLOUTS_BEGIN begin all callouts for hook $reservation_get_page
ERROR HOOKS_CALLOUT_ERROR error returned by callout on hook 3 registered by library with index $reservation_get_page (callout address 0x7f176473c4a0) (callout duration 15.274 ms)
DEBUG HOOKS_CALLOUTS_COMPLETE completed callouts for hook $reservation_get_page (total callouts duration: 15.274 ms)
INFO CTRL_AGENT_COMMAND_FORWARDED command reservation-get-page successfully forwarded to the service dhcp4
DEBUG COMMAND_SOCKET_WRITE Sent response of 657 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO[2021-09-07 13:32:50] agent.go:375 Compressing response from 661 B to 365 B, ratio 55%
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 86 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_WRITE Sent response of 1759 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command statistic-get-all successfully forwarded to the service dhcp4
WARN[2021-09-07 13:32:52] promkeaexporter.go:368 problem with connecting to dhcp daemon: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
DEBUG DHCPSRV_MYSQL_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 1.478 ms
DEBUG ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
INFO COMMAND_RECEIVED Received command 'version-get'
INFO[2021-09-07 13:32:59] agent.go:375 Compressing response from 116 B to 125 B, ratio 107%
INFO COMMAND_RECEIVED Received command 'config-get'
INFO[2021-09-07 13:32:59] agent.go:375 Compressing response from 683 B to 314 B, ratio 45%
INFO COMMAND_RECEIVED Received command 'version-get'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 67 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'version-get'
DEBUG COMMAND_SOCKET_WRITE Sent response of 205 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command version-get successfully forwarded to the service dhcp4
INFO[2021-09-07 13:32:59] agent.go:375 Compressing response from 482 B to 273 B, ratio 56%
INFO COMMAND_RECEIVED Received command 'status-get'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 60 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'status-get'
DEBUG COMMAND_SOCKET_WRITE Sent response of 107 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command status-get successfully forwarded to the service dhcp4
INFO[2021-09-07 13:32:59] agent.go:375 Compressing response from 249 B to 196 B, ratio 78%
INFO COMMAND_RECEIVED Received command 'config-get'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 66 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'config-get'
DEBUG COMMAND_SOCKET_WRITE Sent response of 3167 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command config-get successfully forwarded to the service dhcp4
WARN[2021-09-07 13:32:59] kea.go:69 skipped refreshing viewable log files because config-get returned non success result
WARN[2021-09-07 13:32:59] kea.go:69 skipped refreshing viewable log files because config-get returned non success result
INFO[2021-09-07 13:32:59] agent.go:375 Compressing response from 3444 B to 1296 B, ratio 37%
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 86 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_WRITE Sent response of 1759 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command statistic-get-all successfully forwarded to the service dhcp4
WARN[2021-09-07 13:33:02] promkeaexporter.go:368 problem with connecting to dhcp daemon: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
DEBUG DHCPSRV_MYSQL_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.633 ms
DEBUG ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 86 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_WRITE Sent response of 1759 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command statistic-get-all successfully forwarded to the service dhcp4
WARN[2021-09-07 13:33:12] promkeaexporter.go:368 problem with connecting to dhcp daemon: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: flush-reclaimed-leases
DEBUG ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE begin deletion of reclaimed leases expired more than 3600 seconds ago
DEBUG DHCPSRV_MYSQL_DELETE_EXPIRED_RECLAIMED4 deleting reclaimed IPv4 leases that expired more than 3600 seconds ago
DEBUG DHCPSRV_MYSQL_DELETED_EXPIRED_RECLAIMED deleted 0 reclaimed leases from the database
DEBUG ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE_COMPLETE successfully deleted 0 expired-reclaimed leases
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: flush-reclaimed-leases
DEBUG DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
DEBUG DHCPSRV_MYSQL_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
DEBUG ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.626 ms
DEBUG ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
DEBUG DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_CONNECTION_OPENED Opened socket 23 for incoming command connection
DEBUG COMMAND_SOCKET_READ Received 86 bytes over command socket 23
INFO COMMAND_RECEIVED Received command 'statistic-get-all'
DEBUG COMMAND_SOCKET_WRITE Sent response of 1759 bytes (0 bytes left to send) over command socket 23
DEBUG COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 23 for existing command connection
INFO CTRL_AGENT_COMMAND_FORWARDED command statistic-get-all successfully forwarded to the service dhcp4
WARN[2021-09-07 13:33:22] promkeaexporter.go:368 problem with connecting to dhcp daemon: unable to forward command to the dhcp6 service: No such file or directory. The server is likely to be offline
```0.22https://gitlab.isc.org/isc-projects/stork/-/issues/473When subnet is moved into a shared network the old subnet entry is not removed2021-12-02T15:00:59ZMarcin SiodelskiWhen subnet is moved into a shared network the old subnet entry is not removedI did the following test:
- Started Kea with a single subnet and without any shared networks
- I let Stork pull the subnet and store it in its database
- Stopped Kea and moved the subnet into a new shared network
- Started Kea again and let...I did the following test:
- Started Kea with a single subnet and without any shared networks
- I let Stork pull the subnet and store it in its database
- Stopped Kea and moved the subnet into a new shared network
- Started Kea again and let Stork pull the updated configuration
- Navigated to the list of subnets
- I now see two instances of the same subnet, one under shared network and one global.1.0Marcin SiodelskiMarcin Siodelskihttps://gitlab.isc.org/isc-projects/stork/-/issues/928Cannot delete machine with config reports2023-02-08T16:52:56ZSlawek FigielCannot delete machine with config reportsI cannot delete a machine with config reports.
```
time="2022-12-29 15:15:43" level="info" msg="HTTP request incoming" file=" middleware.go:79 " method="DELETE" path="/api/machines/5" remote="[::1]:46082"
time="2022-12-29 15:15:...I cannot delete a machine with config reports.
```
time="2022-12-29 15:15:43" level="info" msg="HTTP request incoming" file=" middleware.go:79 " method="DELETE" path="/api/machines/5" remote="[::1]:46082"
time="2022-12-29 15:15:56" level="error" msg="problem deleting machine 5: ERROR #23503 update or delete on table \"daemon\" violates foreign key constraint \"config_report_daemon_id\" on table \"config_report\"" file=" machines.go:687 "
```
The problem occurs on the `agent-kea` and `agent-kea6` demo machines.
After executing the `DELETE FROM config_report;` SQL query, the machine is properly deleted.1.9Slawek FigielSlawek Figielhttps://gitlab.isc.org/isc-projects/stork/-/issues/927Address and PD pools aren't updated2024-01-17T13:03:22ZSlawek FigielAddress and PD pools aren't updatedThe Stork server ignores changes in the address and PD pools.The Stork server ignores changes in the address and PD pools.1.9Slawek FigielSlawek Figielhttps://gitlab.isc.org/isc-projects/stork/-/issues/196Stork server crashed ("inet_family" contains null values)2023-01-17T15:42:35ZTomek MrugalskiStork server crashed ("inet_family" contains null values)I was running the latest master (8486047fbd6cfac836f77dbd57d6e5c53e69b57d) and got the following error:
```
$ docker-compose logs server
WARNING: Some networks were defined but are not used by any service: subnet-04, subnet-03
Attachin...I was running the latest master (8486047fbd6cfac836f77dbd57d6e5c53e69b57d) and got the following error:
```
$ docker-compose logs server
WARNING: Some networks were defined but are not used by any service: subnet-04, subnet-03
Attaching to stork_server_1
server_1 | INFO[2020-03-09 21:50:57] main.go:18 Starting Stork Server, version 0.5.0, build date 2020-03-09 22:41
server_1 | INFO[2020-03-09 21:50:57] agentcomm.go:85 Stopping communication with agents
server_1 | INFO[2020-03-09 21:50:57] agentcomm.go:93 Stopped communication with agents
server_1 | FATA[2020-03-09 21:50:57] main.go:23 unexpected error: ERROR #23502 column "inet_family" contains null values
server_1 | problem with migrating database
server_1 | isc.org/stork/server/database.Migrate
server_1 | /home/thomson/devel/stork/backend/server/database/migrations.go:55
server_1 | isc.org/stork/server/database.MigrateToLatest
server_1 | /home/thomson/devel/stork/backend/server/database/migrations.go:64
server_1 | isc.org/stork/server/database.NewPgDB
server_1 | /home/thomson/devel/stork/backend/server/database/connection.go:57
server_1 | isc.org/stork/server.NewStorkServer
server_1 | /home/thomson/devel/stork/backend/server/server.go:75
server_1 | main.main
server_1 | /home/thomson/devel/stork/backend/cmd/stork-server/main.go:21
server_1 | runtime.main
server_1 | /home/thomson/devel/stork/tools/1.13.5/go/src/runtime/proc.go:203
server_1 | runtime.goexit
server_1 | /home/thomson/devel/stork/tools/1.13.5/go/src/runtime/asm_amd64.s:1357
```
This is something I built with `rake docker_up`.1.9Slawek FigielSlawek Figielhttps://gitlab.isc.org/isc-projects/stork/-/issues/448HA state disappears on Kea app page2023-02-08T16:52:49ZMarcin SiodelskiHA state disappears on Kea app pageThis is a result of the Stork 0.13.0 sanity checks: https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174642
@godfryd pointed out that "HA state disappears on Kea app page". Repro:
* add in the demo two kea instances (ha1 and...This is a result of the Stork 0.13.0 sanity checks: https://gitlab.isc.org/isc-projects/stork/-/issues/441#note_174642
@godfryd pointed out that "HA state disappears on Kea app page". Repro:
* add in the demo two kea instances (ha1 and ha2)
* open kea apps page
* open both apps on separate tabs
* stop ha1/dhcp4 service in Stork Env Simulator
* on ha1 tab there should be presented info about connectivity issues
* on the ha2 tab the HA state may be presented, but when tabs are switched between ha1 and ha2 the HA state disappears on the ha2 tab (it may reappear after a while, when a new HA state is retrieved from the server)
I have a different set of events on the Dashboard and the machine page. The communication messages ...The issue was found during 1.8.0 sanity checks by @slawek. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/875#note_320902).
I have a different set of events on the Dashboard and the machine page. The communication messages with a warning level are missing on the machine page.
The event viewer presents all messages until I filter by a specific machine. In this case, the communication messages disappear.
![image](https://gitlab.isc.org/isc-projects/stork/uploads/e121b7184e2ca7b39d7e5602496c12b0/image.png) ![image](https://gitlab.isc.org/isc-projects/stork/uploads/947fe6e517cbec24e31f1ed5bcc0ccb2/image.png)
![image](https://gitlab.isc.org/isc-projects/stork/uploads/171f5769f1bb2e0d3bd390cb6759d3fd/image.png) ![image](https://gitlab.isc.org/isc-projects/stork/uploads/12cd5dbf6e0afcc2d66a4dfe0c3ee5ba/image.png)1.9Slawek FigielSlawek Figielhttps://gitlab.isc.org/isc-projects/stork/-/issues/560Pool utilization shows inaccurate numbers2022-08-29T08:33:23ZTommyPool utilization shows inaccurate numbers**Setup:**
Kea DHCPv4. There is a pool with 21 ip addresses to assign dynamically while others are set by reservations.
**Stork Server version:**
<del>0.18.0</del> **1.0.0**
**Problem:**
Stork Server sees only those 21 ip addresses b...**Setup:**
Kea DHCPv4. There is a pool with 21 IP addresses to assign dynamically, while the others are set by reservations.
**Stork Server version:**
<del>0.18.0</del> **1.0.0**
**Problem:**
Stork Server sees only those 21 IP addresses, but host reservations are counted in the statistics as well, so I get numbers like:
```
Available: 21
Used: 91
```
It also affects the Web UI:
![image](/uploads/80cdf1e1ef22da80593664f3973dcb0c/image.png)
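The reported numbers already show the inconsistency: if "used" counts reservation-assigned leases while the address total covers only the 21-address dynamic pool, the computed utilization exceeds 100%. A quick sanity check of that arithmetic (plain Go, not Stork's actual statistics code):

```go
package main

import "fmt"

// utilization returns the percentage of used addresses relative to the
// total number of addresses the server knows about.
func utilization(used, total int) float64 {
	return float64(used) / float64(total) * 100
}

func main() {
	// Total taken from the dynamic pool only (21), used including
	// reservation-assigned leases (91), as in the report above.
	fmt.Printf("%.0f%%\n", utilization(91, 21)) // 433% — clearly impossible
}
```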
I do not have premium hooks installed.1.1Slawek FigielSlawek Figielhttps://gitlab.isc.org/isc-projects/stork/-/issues/528Restarting agent service triggers re-registration (sanity checks)2021-09-09T13:42:31ZTomek MrugalskiRestarting agent service triggers re-registration (sanity checks)This was [reported](https://gitlab.isc.org/isc-projects/stork/-/issues/499#note_197272) by @marcin during 0.15 sanity checks:
When the user restarts the agent service:
```
$ systemctl restart isc-stork-agent
```
the registra...This was [reported](https://gitlab.isc.org/isc-projects/stork/-/issues/499#note_197272) by @marcin during 0.15 sanity checks:
When the user restarts the agent service:
```
$ systemctl restart isc-stork-agent
```
the registration starts over. The agent that used to be authorized is now back among the unauthorized agents and again requires approval. This is not right. We should have a way to determine that the agent has already been registered and skip re-registration. Perhaps simply pinging itself via the server would be sufficient to confirm that the connection can be established.

Milestone: 0.20. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/589
Kea timeout in system tests (2022-06-22, Slawek Figiel)

During resolving #552 we found a rare problem with a Kea timeout.
It looks like the problem occurs when Kea uses many network configurations.
```
> assert data['total'] == 6912
E KeyError: 'total'
agent = <containers.StorkAgentContainer object at 0x7f25865631f0>
data = {'items': None}
i = 29
m = {'address': '10.69.61.73', 'agentPort': 8080, 'agentToken': '79C7F48C7860CA179B14E01CAFC07C383B86E630F3EB3573CE0E83162FFD57F0', 'agentVersion': '0.20.0', ...}
r = <Response [200]>
server = <containers.StorkServerContainer object at 0x7f2586558640>
tests.py:267: KeyError
```
I think our test shouldn't fail in this case; instead, it should wait until the server finishes processing.
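A robust version of that assertion would poll until the server finishes processing instead of failing on the first response. A minimal sketch (Python, in the spirit of the pytest excerpt above; `fetch_subnets` is a hypothetical stand-in for the REST call):

```python
import time

def wait_for_total(fetch_subnets, expected_total, timeout=120.0, interval=2.0):
    """Poll until the listing reports the expected total, instead of
    failing on a first response that still lacks the 'total' key."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch_subnets()
        # While the server is still processing, 'total' may be absent
        # and 'items' may be None, as in the failure above.
        if data.get("total") == expected_total:
            return data
        time.sleep(interval)
    raise TimeoutError("server did not finish processing in time")

# Example with a fake fetcher that becomes ready on the third call.
responses = iter([{"items": None}, {"items": None}, {"total": 6912, "items": []}])
result = wait_for_total(lambda: next(responses), 6912, timeout=5.0, interval=0.0)
```

The same idea applies to any system test that races against the server's state puller.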
I am attaching all collected logs to this issue.
[stork-agent-0.log](/uploads/e6327e9188b14fbcb7772e719658363b/stork-agent-0.log)
[stork-server-0.log](/uploads/b8e0b97d42377948223c3f2bf5021519/stork-server-0.log)
[raw.txt](/uploads/e75df69ca44eadc2d0c2943f0de220b3/raw.txt)

Milestone: outstanding.

https://gitlab.isc.org/isc-projects/stork/-/issues/592
dockerized kea has reservations outside subnet (2021-10-06, Andrei Pavel <andrei@isc.org>)

When authorizing a kea-dhcp6 machine with Kea 2.0.0 as backend:
```
stork-0-agent-kea6-1 | ERROR DHCP6_PARSER_FAIL failed to create or run parser for configuration element subnet6: specified reservation '3001:db8:1:cafe::1' is not within the IPv6 subnet '3001:db8:1::/64'
stork-0-agent-kea6-1 | 2021-10-06 13:35:13.617 ERROR [kea-dhcp6.dhcp6/81.139893512860800] DHCP6_CONFIG_LOAD_FAIL configuration error using file: /etc/kea/kea-dhcp6.conf, reason: specified reservation '3001:db8:1:cafe::1' is not within the IPv6 subnet '3001:db8:1::/64'
stork-0-agent-kea6-1 | 2021-10-06 13:35:13.617 ERROR [kea-dhcp6.dhcp6/81.139893512860800] DHCP6_INIT_FAIL failed to initialize Kea server: configuration error using file '/etc/kea/kea-dhcp6.conf': specified reservation '3001:db8:1:cafe::1' is not within the IPv6 subnet '3001:db8:1::/64'
```
Out-of-subnet reserved addresses are no longer supported since https://gitlab.isc.org/isc-projects/kea/-/issues/1254.

Milestone: 0.21. Assignee: Andrei Pavel <andrei@isc.org>.

https://gitlab.isc.org/isc-projects/stork/-/issues/594
Stork displays incorrect agent version (2021-10-08, Peter Davies)

After upgrading the Stork server and then the agent from 0.20.0 to 0.21.0,
Stork reports agent version 0.20.0 instead of 0.21.0
I tried `Get Latest State` a few times.
- `Agent Version 0.20.0`

https://gitlab.isc.org/isc-projects/stork/-/issues/604
DHCP6 simulator seems not working (2022-01-27, Slawek Figiel)

I cannot simulate any DHCPv6 traffic. It looks like a problem with the simulator itself, because the `kea_dhcp6_addresses_declined_total` metric has a value equal to 1 when I start the traffic generator.

Milestone: 1.1. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/642
Assorted ARM fixes before the 1.0 release (2021-12-06, Marcin Siodelski)

2.1 Supported Systems
CentOS 7 - should be CentOS 8
macOS 10.5 - should be MacOS 11.3.1
2.3.2.3. Securing Connections
"should be readable/writable only for the user running the Stork Agent"
"should be readable/writable only BY the user running the Stork Agent"
"All credentials must to contains the values for 4 keys"
"All credentials must contain the values for 4 keys"
"ip", "port" etc should use the `
2.3.2.10/2.3.2.11
There is an example stork-tool command line. Maybe we should add an alternative using individual switches for password, host etc. There is a known problem when someone uses an invalid URL.
2.4.3.
"There are several components of Stork". Should be: "There are two Stork components"
"By default, all components are installed to the root folder in the current directory." Should be: "By default, all components are installed IN the root folder in the current directory."
2.5. Database Migration Tool
It neglects the tool's cert-export feature.
All chapters:
We use Stork Agent, Stork agent, Stork server and Stork Server. We should probably unify.
3. Using Stork
The first sentence instructs the user to navigate to localhost:8080 without mentioning that the URL varies depending on the configuration.
3.4.2. Deleting a Machine
"The preferred way to achieve that is to issue the killall stork-agent command"
Is it really killall stork-agent?
3.5.8. Kea HA Status
The picture is heavily outdated.
3.5.6.3. Sources of Host Reservations
"This interval is currently not configurable." is not true.
7. Demo
Lacks BIND9_2 container and premium container.
7.2.1
Using kea-1-7 in the example URL for getting the token. It should rather be 2.0.

Milestone: 1.0. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/654
PrimeNG errors during UI unit tests (2022-02-08, Slawek Figiel)

The issue was found during sanity checks for the 1.0 release.
Source: https://gitlab.isc.org/isc-projects/stork/-/issues/645#note_253008
While running `rake ng_test` I got this error in between some tests:
`TypeError: Cannot read properties of undefined (reading 'offsetHeight')`

Milestone: 1.2. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/669
Big logo is changing on hover (2022-02-01, Slawek Figiel)

The issue was found during sanity checks for the 1.0 release.
Source: https://gitlab.isc.org/isc-projects/stork/-/issues/650#note_253265
When you mouse over the big Stork logo, it changes; I don't know if that's intentional, but it looks weird.

Milestone: 1.1. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/670
Negative utilization for large IPv6 subnets (2022-01-28, Slawek Figiel)

The issue was found during sanity checks for the 1.0 release.
Source: https://gitlab.isc.org/isc-projects/stork/-/issues/650#note_253282
The IPv6 utilization is displayed incorrectly: it has a negative value.
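A negative value like this is the classic symptom of a large unsigned 64-bit counter being reinterpreted as a signed integer; arbitrary-precision arithmetic (the role `math/big` plays in Go, per the ToDo below) avoids it. A small illustration in Python:

```python
import struct

# Kea reports huge IPv6 pool sizes as 64-bit counters. If a value close to
# 2^64 is reinterpreted as a signed 64-bit integer, it becomes negative.
total_nas = 2**64 - 5  # e.g. the address count of a very large IPv6 subnet
as_signed = struct.unpack("<q", struct.pack("<Q", total_nas))[0]
assert as_signed == -5  # the "negative utilization" symptom

# With arbitrary-precision integers, utilization stays in the 0..100% range.
assigned = 2**40
utilization = 100 * assigned / total_nas
assert 0 <= utilization <= 100
```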
# ToDo
- [x] ~~Handle a special `-1` value from Kea that means `too many addresses/prefixes for uint64` on UI~~
- [x] Cast the Kea values to uint64 before processing
- [ ] ~~Restore the total NAS value from the subnet mask~~
- [x] Use `math/big` to calculate utilization of subnets and shared networks

Milestone: 1.1. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/671
Stork Agent - failed compilation on FreeBSD (2022-12-13, Slawek Figiel)

The issue was found during sanity checks for the 1.0 release.
Source: https://gitlab.isc.org/isc-projects/stork/-/issues/650#note_253285
I was unable to build Stork Agent on FreeBSD. The cause is a missing protocolbuffers package for that system. There is actually an existing TODO about it in our Rakefile.

Milestone: backlog.

https://gitlab.isc.org/isc-projects/stork/-/issues/676
[ISC-support #19985] Fix database migration in Stork 1.0.0 (2023-01-03, Marcin Siodelski)

The database migration 37, among other things, does this:
```sql
...
DELETE FROM host;
...
-- Add a missing foreign key to host table.
ALTER TABLE local_host
ADD CONSTRAINT local_host_to_host_id FOREIGN KEY (host_id)
REFERENCES host (id) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE;
```
The first statement relies on the presence of the foreign key which is added later. This causes constraint violation issues when people migrate databases that include host reservations. The order of these operations must be swapped.
Current workaround for this issue is to manually run:
```sql
DELETE FROM local_host;
```
using psql.

Milestone: 1.1. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/677
Issue with updating host reservations when they don't contain IP addresses (2021-12-21, Marcin Siodelski)

The issue can be reproduced with two servers using the same configuration (e.g. HA partners). When they define host reservations without IP addresses, approving the second server's registration will cause the Stork server to match its host reservations with the existing reservations. As a result, the `dbmodel.UpdateHost()` function is called to update the hosts. It results in the following error:
```
2021-12-16 10:51:35.544 UTC [103] ERROR: syntax error at or near ")" at character 101
postgres_1 | 2021-12-16 10:51:35.544 UTC [103] STATEMENT: DELETE FROM "ip_reservation" WHERE (ip_reservation.host_id = 4) AND (ip_reservation.address NOT IN ())
server_1 | ERRO[2021-12-16 10:51:35] statepuller.go:287 cannot store application state: ERROR #42601 syntax error at or near ")"
server_1 | problem with deleting IP reservations for host 4
server_1 | isc.org/stork/server/database/model.UpdateHost
server_1 | /repo/build-root/backend/server/database/model/host.go:191
```
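The failing fragment is the empty `NOT IN ()` set, which PostgreSQL rejects as a syntax error. A minimal sketch of the guard, illustrated in Python with a hypothetical query builder (the real code uses go-pg inside `dbmodel.UpdateHost()`):

```python
def build_delete_stale_reservations(host_id, kept_addresses):
    """Build a DELETE for ip_reservation rows no longer present on a host.

    When kept_addresses is empty, emitting 'NOT IN ()' would be a syntax
    error in PostgreSQL, so delete all rows for the host instead.
    """
    if kept_addresses:
        placeholders = ", ".join(["%s"] * len(kept_addresses))
        sql = ("DELETE FROM ip_reservation WHERE host_id = %s "
               f"AND address NOT IN ({placeholders})")
        return sql, [host_id, *kept_addresses]
    # Empty keep-list: no invalid empty set, just delete everything.
    return "DELETE FROM ip_reservation WHERE host_id = %s", [host_id]

sql, params = build_delete_stale_reservations(4, [])
assert "NOT IN" not in sql
```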
The query fails because of this particular part of the statement: `NOT IN ()`. An empty set is not allowed there. We should check whether the list is empty and, in that case, skip the database statement.

Milestone: 1.1. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/682
Stork crashes on huge Kea configs (2022-03-01, Slawek Figiel)

I prepared a Kea DHCP4 configuration file with 16392 subnets and 10 reservations per network.
Stork panics in the `dbmodel.UpdateApp` function with the `makeslice: cap out of range` error while saving the configuration to the database.
Last Stork source line: `backend/server/database/model/common.go:50`.
It seems that the error occurred during the serialization of the Kea configuration to the JSON file.
My Kea DHCP4 configuration is 29 MB.
Call stack:
```
runtime.fatalpanic (/usr/lib/go/src/runtime/panic.go:1274)
runtime.gopanic (/usr/lib/go/src/runtime/panic.go:1147)
github.com/go-pg/pg/v10.(*Tx).RunInTransaction.func1 (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/tx.go:91)
runtime.gopanic (/usr/lib/go/src/runtime/panic.go:1047)
runtime.panicmakeslicecap (/usr/lib/go/src/runtime/slice.go:31)
runtime.makeslice (/usr/lib/go/src/runtime/slice.go:95)
github.com/vmihailenco/bufpool.(*bufPool).Get (/home/lv/go/pkg/mod/github.com/vmihailenco/bufpool@v0.1.11/buf_pool.go:45)
github.com/vmihailenco/bufpool.Get (/home/lv/go/pkg/mod/github.com/vmihailenco/bufpool@v0.1.11/buf_pool.go:17)
github.com/vmihailenco/bufpool.(*Buffer).grow (/home/lv/go/pkg/mod/github.com/vmihailenco/bufpool@v0.1.11/buffer_ext.go:57)
github.com/vmihailenco/bufpool.(*Buffer).Write (/home/lv/go/pkg/mod/github.com/vmihailenco/bufpool@v0.1.11/buffer.go:121)
encoding/json.(*Encoder).Encode (/usr/lib/go/src/encoding/json/stream.go:231)
github.com/go-pg/pg/v10/types.appendJSONValue (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/types/append_value.go:203)
github.com/go-pg/pg/v10/types.ptrAppenderFunc.func1 (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/types/append_value.go:130)
github.com/go-pg/pg/v10/orm.(*Field).AppendValue (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/field.go:106)
github.com/go-pg/pg/v10/orm.(*UpdateQuery).appendSetStruct (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/update.go:230)
github.com/go-pg/pg/v10/orm.(*UpdateQuery).mustAppendSet (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/update.go:154)
github.com/go-pg/pg/v10/orm.(*UpdateQuery).AppendQuery (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/update.go:78)
github.com/go-pg/pg/v10.appendQuery (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/messages.go:512)
github.com/go-pg/pg/v10.writeQueryMsg (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/messages.go:493)
github.com/go-pg/pg/v10.(*Tx).query (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/tx.go:223)
github.com/go-pg/pg/v10.(*Tx).QueryContext (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/tx.go:211)
github.com/go-pg/pg/v10/orm.(*Query).returningQuery (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/query.go:1163)
github.com/go-pg/pg/v10/orm.(*Query).update (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/query.go:1146)
github.com/go-pg/pg/v10/orm.(*Query).Update (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/orm/query.go:1111)
isc.org/stork/server/database/model.upsertInTransaction (/home/lv/Projects/stork2/backend/server/database/model/common.go:50)
isc.org/stork/server/database/model.updateAppDaemons (/home/lv/Projects/stork2/backend/server/database/model/app.go:130)
isc.org/stork/server/database/model.updateApp (/home/lv/Projects/stork2/backend/server/database/model/app.go:246)
isc.org/stork/server/database/model.UpdateApp (/home/lv/Projects/stork2/backend/server/database/model/app.go:270)
isc.org/stork/server/apps/kea.CommitAppIntoDB.func1 (/home/lv/Projects/stork2/backend/server/apps/kea/appkea.go:653)
github.com/go-pg/pg/v10.(*Tx).RunInTransaction (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/tx.go:95)
github.com/go-pg/pg/v10.(*baseDB).RunInTransaction (/home/lv/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.6/tx.go:74)
isc.org/stork/server/apps/kea.CommitAppIntoDB (/home/lv/Projects/stork2/backend/server/apps/kea/appkea.go:604)
isc.org/stork/server/apps.GetMachineAndAppsState (/home/lv/Projects/stork2/backend/server/apps/statepuller.go:273)
isc.org/stork/server/apps.(*StatePuller).pullData (/home/lv/Projects/stork2/backend/server/apps/statepuller.go:62)
isc.org/stork/server/apps.(*StatePuller).pullData-fm (/home/lv/Projects/stork2/backend/server/apps/statepuller.go:48)
isc.org/stork/util.(*PeriodicExecutor).executorLoop (/home/lv/Projects/stork2/backend/util/periodicexecutor.go:166)
isc.org/stork/util.NewPeriodicExecutor·dwrap·1 (/home/lv/Projects/stork2/backend/util/periodicexecutor.go:64)
runtime.goexit (/usr/lib/go/src/runtime/asm_amd64.s:1581)
```

Milestone: 1.2. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/688
Broken statistics, subnets, shared networks (2022-02-02, Slawek Figiel)

During sanity checks, @marcin [found a problem](https://gitlab.isc.org/isc-projects/stork/-/issues/685#note_264484) related to utilization statistics.
Broken things:
- Utilization bars
- Subnets page
- Shared networks page

Milestone: 1.1. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/701
stork-tool should create DB with pgcrypto (2022-02-28, Slawek Figiel)

The issue was found during 1.1.0 sanity checks. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/685#note_264816)
The `stork-tool db-create` command creates a new Postgres database, but does not create the pgcrypto extension. The extension is created in one of the migrations, but apparently that requires a super-user role in Postgres. This causes `isc-stork-server` to fail to start with a freshly created database. It is possible to run the migrations with the Stork tool, e.g.:
```
stork-tool db-up --db-user postgres
```
because `postgres` is a super-user role.
I recommend adding a `CREATE EXTENSION pgcrypto` step to the `db-create` command. In fact, that was the initial plan, but we removed it during the ticket review. Anyway, it is a non-blocking issue because there are plenty of workarounds.

Milestone: 1.2. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/705
Traffic simulator not working (2022-02-08, Slawek Figiel)

We discovered today that our traffic simulator stopped working unexpectedly. The UI is not loading.
It worked during sanity checks a few days ago.
![2022-02-08-182200_768x522_scrot](/uploads/b4b4038c90532e695b8da437bebf5e5e/2022-02-08-182200_768x522_scrot.png)
![2022-02-08-182213_1903x467_scrot](/uploads/5246effcef81845a6c0e71928b8d8f25/2022-02-08-182213_1903x467_scrot.png)

Milestone: 1.2. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/719
Broken dependencies in demo deploy (2022-03-28, Slawek Figiel)

While building the Kea container I got this message:
[Job with the problem](https://gitlab.isc.org/isc-projects/stork/-/jobs/2378774)
```
Step 5/13 : RUN apt-get update && apt-get install -y --no-install-recommends isc-kea-dhcp4-server=2.0.1-isc20211214132435 isc-kea-ctrl-agent=2.0.1-isc20211214132435 isc-kea-admin=2.0.1-isc20211214132435 isc-kea-common=2.0.1-isc20211214132435 && mkdir -p /var/run/kea/
---> Running in 7e37112e1f6d
Get:1 https://dl.cloudsmith.io/public/isc/kea-2-0/deb/ubuntu bionic InRelease [5118 B]
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Fetched 5118 B in 1s (5076 B/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
isc-kea-ctrl-agent : Depends: python3-isc-kea-connector (= 2.0.1-isc20211214132435) but 2.0.2-isc20220227221539 is to be installed
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends isc-kea-dhcp4-server=2.0.1-isc20211214132435 isc-kea-ctrl-agent=2.0.1-isc20211214132435 isc-kea-admin=2.0.1-isc20211214132435 isc-kea-common=2.0.1-isc20211214132435 && mkdir -p /var/run/kea/' returned a non-zero code: 100
Service 'agent-kea' failed to build : Build failed
rake aborted!
```

Milestone: 1.3. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/915
The hook list has invalid hook documentation links (Kea 2.0.2, Kea 2.4.0) (2023-12-01, Slawek Figiel)

The issue was reported during 1.8.0 sanity checks by @slawek. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/910#note_334981)
Hook doc buttons have the wrong labels in the links for Kea 2.0.2 (works well for the latest Kea version).
Expected:
`https://kea.readthedocs.io/en/kea-2.0.2/arm/hooks.html#stat-cmds-supplemental-statistics-commands`
Actual:
`https://kea.readthedocs.io/en/kea-2.0.2/arm/hooks.html#stat-cmds-statistics-commands-for-supplemental-lease-statistics`
![image](https://gitlab.isc.org/isc-projects/stork/uploads/72448a9b77b60ac69937d31b1138705b/image.png)

Milestone: 1.14. Assignee: Piotrek Zadroga.

https://gitlab.isc.org/isc-projects/stork/-/issues/726
API returns the nil/null instead of empty lists (2023-04-03, Slawek Figiel)

The Stork Server API returns a nil/null value when a list has no elements.
The swagger.yaml specifies that the returned type is always a list.
Returning nil breaks the strict type validation.
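This is the classic Go behavior where a nil slice marshals to JSON `null` while an empty slice marshals to `[]`; normalizing the value before serialization restores the contract. Illustrated here in Python for brevity (the Stork backend itself is Go):

```python
import json

def normalize_list(value):
    """Serialize a missing collection as [] rather than null, matching a
    Swagger schema that declares the field as a (possibly empty) array."""
    return json.dumps(value if value is not None else [])

assert json.dumps(None) == "null"    # what a nil slice becomes in Go
assert normalize_list(None) == "[]"  # what strict clients expect
assert normalize_list([1, 2]) == "[1, 2]"
```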
Additionally, some of our endpoints accept or return partially-filled data (e.g., list machines), but this isn't described in the API files.

Milestone: backlog.

https://gitlab.isc.org/isc-projects/stork/-/issues/728
Add host reservations form UI improvements (2022-07-11, Slawek Figiel)

I have some ideas on how we can improve the host reservation form added in #720.
- [x] Autofill the IP input with the subnet prefix
- [ ] ~~Use masked input for IP~~
- [x] Add validation labels on input in addition to the red border (the red border doesn't describe what is wrong)
- [x] Add a refresh button to the host reservation list
- [ ] ~~Put the freshly added host reservations on the list but gray them out until fetched from the database. I need \~1 minute to fetch the reservations on my local setup. At this time, I started thinking that something had gone wrong.~~

Milestone: 1.5. Assignee: Marcin Siodelski.

https://gitlab.isc.org/isc-projects/stork/-/issues/733
The unittest:backend_db rake task doesn't use the Docker DB (2022-05-20, Slawek Figiel)

The `unittest:backend_db` rake task starts the Postgres docker-compose service, but the unit tests are executed against the local database.

Milestone: 1.4. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/709
System tests 2.0 (2022-07-04, Slawek Figiel)

Our system tests are unstable, slow, hard to maintain, not debuggable, and don't work under macOS.
The system tests are a powerful utility, and we need them in Stork. We need to develop a new, improved solution.
[Design](https://gitlab.isc.org/isc-projects/stork/-/wikis/designs/System-Tests)
ToDo:
- [x] Refactor the Rakefile (!415)
- [x] Write the multistage Dockerfile to build and run the Stork. (!423)
- [x] Write the first system test based on pytest (!428)
- [x] Write a framework for system tests (!428)
- [x] Write or rewrite the current tests using the framework (!428)
- [x] Update the developer guide
Moved to separate issues:
- Rewrite the update package system tests (#746)
Abandoned:
- Improve the system test failure diagnostic
- Write the vscode configuration to integrate with Docker containers
- Write the performance counters for build and system tests commands

Milestone: 1.5. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/742
The statistic puller accidentally fails (2022-05-30, Slawek Figiel)

The issue was found during 1.3 sanity checks. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/732#note_285370)
```
server_1 | ERRO[2022-05-11 08:35:15] periodicexecutor.go:169 Errors were encountered while pulling data from apps: missing arguments from Lease Stats response {ResponseHeader:{Result:2 Text:'stat-lease4-get' command not supported. Daemon:dhcp4} Arguments:<nil>}
server_1 | isc.org/stork/server/apps/kea.(*StatsPuller).storeDaemonStats
server_1 | /app/backend/server/apps/kea/statspuller.go:203
server_1 | isc.org/stork/server/apps/kea.(*StatsPuller).processAppResponses
server_1 | /app/backend/server/apps/kea/statspuller.go:355
server_1 | isc.org/stork/server/apps/kea.(*StatsPuller).getStatsFromApp
server_1 | /app/backend/server/apps/kea/statspuller.go:329
server_1 | isc.org/stork/server/apps/kea.(*StatsPuller).pullStats
server_1 | /app/backend/server/apps/kea/statspuller.go:61
server_1 | isc.org/stork/util.(*PeriodicExecutor).executorLoop
server_1 | /app/backend/util/periodicexecutor.go:166
server_1 | runtime.goexit
server_1 | /app/tools/golang/go/src/runtime/asm_amd64.s:1581
```
The stats are sometimes pulled from daemons that do not have the `stat_cmds` hook loaded.
This breaks the stats-pulling process.
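One defensive fix is to check a daemon's hook libraries before sending `stat-lease4-get`/`stat-lease6-get`, and skip daemons without the hook. A sketch with a simplified, hypothetical data model (Python for brevity; the real puller is Go):

```python
def daemons_to_poll(daemons):
    """Return only daemons that load stat_cmds, since stat-lease4-get and
    stat-lease6-get are provided by that hook library."""
    return [d for d in daemons
            if any("libdhcp_stat_cmds" in hook for hook in d.get("hooks", []))]

daemons = [
    {"name": "dhcp4", "hooks": ["/usr/lib/kea/hooks/libdhcp_lease_cmds.so"]},
    {"name": "dhcp6", "hooks": ["/usr/lib/kea/hooks/libdhcp_stat_cmds.so"]},
]
assert [d["name"] for d in daemons_to_poll(daemons)] == ["dhcp6"]
```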
I found this issue during experiments with the simulator. I generated traffic for the subnets from the `agent-kea` machine. Next, I closed the simulator page and opened the Grafana DHCP6 dashboard. Then, I reopened the simulator and tried to generate traffic for IPv6 networks without success. I also couldn't generate IPv4 traffic again.
The problem may be related to the HA-1 machine.

Milestone: 1.4. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/710
DHCP dashboard statistic doubles the values (2022-08-16, Slawek Figiel)

[Source](https://lists.isc.org/pipermail/stork-users/2022-February/000093.html)
Hello,
I upgraded to Stork 1.1 today, and since then the values (Addresses and Prefixes) displayed on the Dashboard in the statistics section are doubled.
Thanks to the fix for bug [#676] I could add my second server, which is in standby mode. But since the leases are synchronized between the master and the standby, they are counted twice, and I think that is where the problem comes from.
So instead of having, for example, 5000 prefixes that are really assigned, the dashboard displays 10000. However, when I add up the values obtained via "host reservations", "subnets" and "shared-network", I get only 5000 assigned prefixes.
I think that the dashboard does not take into account that my two servers are in Hot-Standby and displays the sum of the leases obtained by the two.
Is there any way to get the real value?
Thanks,
PO

Milestone: 1.5. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/743
Event dump is null (2022-05-31, Slawek Figiel)

The issue was found during 1.3 sanity checks. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/732#note_285373)
The machine dump contains only a `null` value as the content of the events dump.

Milestone: 1.4. Assignee: Slawek Figiel.

https://gitlab.isc.org/isc-projects/stork/-/issues/744
Build date is unset (2022-06-07, Slawek Figiel)

The issue was found during 1.3 sanity checks. [Source](https://gitlab.isc.org/isc-projects/stork/-/issues/732#note_285376)
![image](https://gitlab.isc.org/isc-projects/stork/uploads/74b9f6e3c07bfd693ff407f2f8555858/image.png)
Hovering over the logo displays an "unset" build date.

Milestone: 1.5. Assignee: Marcin Siodelski.