# ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues · 2021-11-16

## Statepuller - only one refresh at a time
https://gitlab.isc.org/isc-projects/stork/-/issues/583 · Slawek Figiel · 2021-11-16

tl;dr: The state puller rarely returns HTTP 500 when updating an agent's state, but it does not corrupt the database.
### Description
We have a hard-to-resolve race problem with updating machine state in Stork ("statepuller.go", `GetMachineAndAppsState`). The problem occurs when Stork tries to refresh an application's state from multiple goroutines at the same time.
A refresh may be triggered by:
* Stork start-up
* a periodic timer
* a user request
Several refresh procedures may therefore run at the same time. A refresh looks like this:
1. Get state from an agent
2. Get state from a database
3. Calculate diff
4. Apply the changes to the application
5. Apply the changes to the subnets and other data
Steps 2, 4, and 5 are each done in a separate transaction. It may happen that, after one goroutine has fetched state from the database (2) and calculated the diffs (3), another goroutine modifies the application. The calculated diffs are then incorrect, and an exception is thrown at step 4, where the unique index constraints are checked.
So it crashes at step 4. I tried to fix that by handling the unique constraint violation, after which execution proceeds to step 5. There are no unique indexes there, so everything passes through, but the subnets and other data are duplicated.
### Risk analysis
The problem is quite rare, as it occurs only when the state of a new agent is inserted. It also shouldn't be very dangerous, as the unique constraints protect the database against duplicated data.
However, it may be a problem if somebody wants to use the Stork API in an external project (as the API unexpectedly returns HTTP 500), or it may interrupt processing in a function that calls the state update.

Label: backlog

## Retrieve rcodes and qtypes per zone
https://gitlab.isc.org/isc-projects/stork/-/issues/582 · albsga · 2021-10-19

It would be interesting to retrieve the data from the JSON, building a struct with `[]` to gather the data, as it is an array of dicts.
The format of the Prometheus output is important; it would also be interesting to have something similar to:
```
bind_zones_queries_total{type="A",    zone="example.com",  view="_default"} 91
bind_zones_queries_total{type="CNAME",zone="example.com",  view="_default"} 32
bind_zones_queries_total{type="A",    zone="example3.com", view="_default"} 3
bind_zones_queries_total{type="CNAME",zone="example3.com", view="_default"} 22
```
Thanks!!

Label: backlog

## make GTEST_FILTER filter shell & python tests too
https://gitlab.isc.org/isc-projects/kea/-/issues/2070 · Andrei Pavel (andrei@isc.org) · 2024-02-02

```
$ GTEST_FILTER='CtrlChannelDhcpv4SrvTest.getVersion' make check -C src/bin/dhcp4
```
```
[ RUN ] kea-dhcp4.server_pid_file_test
[ OK ] kea-dhcp4.server_pid_file_test
[ RUN ] dhcpv4_srv.dynamic_reconfiguration
[ OK ] dhcpv4_srv.dynamic_reconfiguration
[ RUN ] dhcpv4.sigterm_test
[ OK ] dhcpv4.sigint_test
[ RUN ] dhcpv4.version
[ OK ] dhcpv4.version
[ RUN ] dhcpv4.variables
[ OK ] dhcpv4.variables
[ RUN ] dhcpv4_srv.lfc_timer_test
[ OK ] dhcpv4_srv.lfc_timer_test
[ RUN ] dhcpv4.syntax_check_success
[ OK ] dhcpv4.syntax_check_success
[ RUN ] dhcpv4.syntax_check_bad_syntax
[ OK ] dhcpv4.syntax_check_bad_syntax
[ RUN ] dhcpv4.syntax_check_bad_values
[ OK ] dhcpv4.syntax_check_bad_values
[ RUN ] dhcpv4.password_redact_test
[ OK ] dhcpv4.password_redact_test
PASS: dhcp4_process_tests.sh
Note: Google Test filter = CtrlChannelDhcpv4SrvTest.getVersion
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from CtrlChannelDhcpv4SrvTest
[ RUN ] CtrlChannelDhcpv4SrvTest.getVersion
ctrl_dhcp4_srv_unittest.cc:1055: Failure
Value of: response.find("GTEST_VERSION") != string::npos
Actual: false
Expected: true
[ FAILED ] CtrlChannelDhcpv4SrvTest.getVersion (3 ms)
[----------] 1 test from CtrlChannelDhcpv4SrvTest (3 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (3 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] CtrlChannelDhcpv4SrvTest.getVersion
```
It should run a single test.

Label: backlog

## remove references to slices and to slice elements
https://gitlab.isc.org/isc-projects/stork/-/issues/575 · Andrei Pavel (andrei@isc.org) · 2021-09-14

Today I learned, while reading this newly published article, that you should never take the address of slices or of slice elements: https://utcc.utoronto.ca/~cks/space/blog/programming/GoSlicesVsPointers
So I grepped for any slice element referencing:
```
$ grep -EIr '&[a-zA-Z]*\[[0-9]'
backend/agent/kea.go: paths := collectKeaAllowedLogs(&responses[0])
backend/server/restservice/machines_test.go: PrimaryCommInterrupted: &commInterrupted[0],
backend/server/restservice/machines_test.go: SecondaryCommInterrupted: &commInterrupted[1],
backend/server/restservice/machines_test.go: PrimaryCommInterrupted: &commInterrupted[0],
backend/server/restservice/hosts_test.go: err = dbmodel.AddAppToHost(db, &hosts[0], &apps[0], "config", 1)
backend/server/restservice/hosts_test.go: err = dbmodel.AddAppToHost(db, &hosts[0], &apps[1], "config", 1)
backend/server/apps/kea/service_test.go: err = dbmodel.AddService(db, &services[0])
backend/server/apps/kea/service_test.go: err = dbmodel.AddService(db, &services[1])
backend/server/apps/kea/statspuller.go: resultSet := &sr[0].Arguments.ResultSet
backend/server/database/model/service_test.go: PrimaryCommInterrupted: &commInterrupted[0],
backend/server/database/model/service_test.go: SecondaryCommInterrupted: &commInterrupted[1],
backend/server/database/model/host_test.go: err := AddAppToHost(db, &hosts[0], apps[0], "api", 1)
backend/server/database/model/host_test.go: err = AddAppToHost(db, &hosts[1], apps[0], "api", 1)
backend/server/database/model/host_test.go: err = AddAppToHost(db, &hosts[2], apps[1], "api", 1)
backend/server/database/model/host_test.go: err = AddAppToHost(db, &hosts[3], apps[1], "api", 1)
backend/server/database/model/host_test.go: err = AddAppToHost(db, &hosts[0], apps[0], "api", 123)
backend/server/database/model/host_test.go: err = AddAppToHost(db, &hosts[1], apps[1], "api", 123)
```
A false positive there in `statspuller.go`. Ignore.
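The danger can be demonstrated with a short, self-contained Go example: after an `append` reallocates the backing array, a previously taken element pointer silently refers to the old array.

```go
package main

import "fmt"

func main() {
	s := []int{1, 2, 3} // len 3, cap 3
	p := &s[0]          // pointer into the slice's backing array

	// Appending beyond capacity reallocates the backing array...
	s = append(s, 4)
	s[0] = 100

	// ...so p still points at the OLD array and misses the update.
	fmt.Println(*p, s[0]) // 1 100
}
```

This is exactly why working with values (or re-indexing through the slice) is safer than holding `&s[i]` across operations that may grow the slice.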
This is a potential out-of-bounds memory access problem. The suggestion is to modify the code to work with values instead of addresses, or, with minimal effort, at least to prevent slice referencing in the future.

Label: backlog

## Improve expired leases reclamation query for PostgreSQL
https://gitlab.isc.org/isc-projects/kea/-/issues/2037 · Thomas Markwalder · 2022-11-02 · assignee: Marcin Siodelski

Extended performance testing revealed cyclic slowdowns that correspond to expired lease reclamation. The MySQL query was doing full table scans; this was mitigated under #2030. We need to examine PostgreSQL performance and see if it can be improved.

Label: backlog

## RADIUS hook support for expressions in accounting messages
https://gitlab.isc.org/isc-projects/kea/-/issues/2032 · Vicky Risk (vicky@isc.org) · 2023-10-30
The ARM states that expressions are supported in RADIUS, but apparently they are not supported in accounting messages. Can we add this into the accounting messages?
A user who purchased this hook online ran across this limitation.

Label: backlog

## LQ tests too strict (off by 1 failure)
https://gitlab.isc.org/isc-projects/kea/-/issues/2002 · Tomek Mrugalski · 2022-11-02

In one case, the lease query premium tests failed with the following error:
```
08:16:19 [ RUN ] LeaseQueryImpl6ProcessTest.queryByClientIdActiveLeases
08:16:19 ../../../../../../../../premium/src/hooks/dhcp/lease_query/tests/lease_query_impl6_unittest.cc:1585: Failure
08:16:19 Expected equality of these values:
08:16:19 100
08:16:19 cltt_opt->getValue()
08:16:19 Which is: 101
08:16:19 ../../../../../../../../premium/src/hooks/dhcp/lease_query/tests/lease_query_impl6_unittest.cc:401: Failure
08:16:19 Expected equality of these values:
08:16:19 lease->valid_lft_ - elapsed
08:16:19 Which is: 3500
08:16:19 iaaddr_opt->getValid()
08:16:19 Which is: 3499
08:16:19 ../../../../../../../../premium/src/hooks/dhcp/lease_query/tests/lease_query_impl6_unittest.cc:402: Failure
08:16:19 Expected equality of these values:
08:16:19 lease->preferred_lft_ - elapsed
08:16:19 Which is: 3500
08:16:19 iaaddr_opt->getPreferred()
08:16:19 Which is: 3499
08:16:19 ../../../../../../../../premium/src/hooks/dhcp/lease_query/tests/lease_query_impl6_unittest.cc:417: Failure
08:16:19 Expected equality of these values:
08:16:19 lease->valid_lft_ - elapsed
08:16:19 Which is: 3400
08:16:19 iaprefix_opt->getValid()
08:16:19 Which is: 3399
08:16:19 ../../../../../../../../premium/src/hooks/dhcp/lease_query/tests/lease_query_impl6_unittest.cc:418: Failure
08:16:19 Expected equality of these values:
08:16:19 lease->preferred_lft_ - elapsed
08:16:19 Which is: 3400
08:16:19 iaprefix_opt->getPreferred()
08:16:19 Which is: 3399
08:16:19 [ FAILED ] LeaseQueryImpl6ProcessTest.queryByClientIdActiveLeases (1 ms)
```
Details: [jenkins](https://jenkins.aws.isc.org/job/kea-dev/job/distcheck/477/execution/node/187/log/?consoleFull)
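One common way to make the assertions above robust against a clock tick between lease capture and check is to compare with a one-second tolerance instead of exact equality. A hedged sketch of the idea, not the premium test code:

```go
package main

import "fmt"

// withinTolerance reports whether got is within tol of want. Time-derived
// values such as remaining valid/preferred lifetimes can be compared this
// way without failing when one second elapses between capture and check.
func withinTolerance(want, got, tol int64) bool {
	d := want - got
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	// The failure above: expected 3500, observed 3499 (one second elapsed).
	fmt.Println(withinTolerance(3500, 3499, 1)) // true: passes with tolerance
	fmt.Println(withinTolerance(3500, 3490, 1)) // false: a real error still fails
}
```

An alternative is to freeze or inject the clock used by the code under test, which removes the skew entirely at the cost of more test plumbing.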
This looks like a timing error: depending on when exactly the code runs, the values may be off by 1.

Label: backlog

## Dynamically generated vendor-encapsulated-options-space "data" field saved with config-write
https://gitlab.isc.org/isc-projects/kea/-/issues/2000 · Patrick Mackey · 2023-01-18
**Describe the bug**
Kea DHCP4 generates a hex string (stored in the "data" field) for nested options data. This is saved to the configuration file on a config-write API call. Subsequent configuration loads read this field but do not update it, even if the nested options change or are removed.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Kea dhcpv4, with a custom option definition:
```json
"option-def": [
{
"array": true,
"code": 241,
"encapsulate": "",
"name": "wlc-list",
"record-types": "",
"space": "vendor-encapsulated-options-space",
"type": "ipv4-address"
    }
]
```
2. Add the custom option with data (global or subnet scoped).
```json
"option-data": [
{
"name": "vendor-encapsulated-options"
},
{
"always-send": false,
"code": 241,
"csv-format": true,
"data": "1.1.1.1",
"name": "wlc-list",
"space": "vendor-encapsulated-options-space"
}
]
```
3. Use the `config-reload` and then the `config-write` API calls
4. Note that the vendor-encapsulated-options `data` field is populated from the sub-option data.
```json
{
"always-send": false,
"code": 43,
"csv-format": false,
"data": "F10401010101",
"name": "vendor-encapsulated-options",
"space": "dhcp4"
},
{
"always-send": false,
"code": 241,
"csv-format": true,
"data": "1.1.1.1",
"name": "wlc-list",
"space": "vendor-encapsulated-options-space"
}
```
5. Change the 241 sub-option data but leave the option 43 data.
```json
{
"always-send": false,
"code": 241,
"csv-format": true,
"data": "2.2.2.2",
"name": "wlc-list",
"space": "vendor-encapsulated-options-space"
}
```
6. Use the `config-reload` and then the `config-write` API calls
7. Note that the vendor-encapsulated-options `data` field has not been updated; hence the changes will not be reflected in offers to clients.
```json
{
"always-send": false,
"code": 43,
"csv-format": false,
"data": "F10401010101",
"name": "vendor-encapsulated-options",
"space": "dhcp4"
},
{
"always-send": false,
"code": 241,
"csv-format": true,
"data": "2.2.2.2",
"name": "wlc-list",
"space": "vendor-encapsulated-options-space"
}
```
**Expected behavior**
I'd expect that a field dynamically generated from dependent configuration would not be written to the configuration file, or at least would be updated when the dependent configuration changes.
**Environment:**
- Kea version: 1.8.2 (tarball)
- OS: Alpine Linux
- linked with:
  - log4cplus 2.0.6
  - Botan 2.17.3 (release, dated 20201221, revision git:bcda19704da482c57eb0bce786cebb97f378f146, distribution unspecified)
- database:
  - MySQL backend 9.3, library 10.5.5
  - PostgreSQL backend 6.1, library 130003
  - Memfile backend 2.1

Label: backlog

## Add Gitlab pipeline buttons that regenerate messages and parsers
https://gitlab.isc.org/isc-projects/kea/-/issues/1996 · Andrei Pavel (andrei@isc.org) · 2022-11-02

The buttons would be similar to the deploy buttons in Stork.
I want this mainly for parsers, because people regenerate them with different bison versions, and Kea sometimes reaches a state where different bison versions are used for different parser files. I don't know whether it is safe to test Kea like that throughout the development process. Also, people may feel more comfortable clicking a button in Gitlab that adds a commit regenerating the parsers with the agreed-upon bison version than having to keep upgrading bison on their own machines.
But we could add one for messages as well, so that we don't have to --enable-generate-messages all the time.

Label: backlog

## automatic YANG translators
https://gitlab.isc.org/isc-projects/kea/-/issues/1994 · Andrei Pavel (andrei@isc.org) · 2023-03-19
Universal/softcoded/automatic translators.
Turn this:
```
ConstElementPtr networks = getSharedNetworks(xpath);
if (networks && !networks->empty()) {
result->set("shared-networks", networks);
}
ConstElementPtr classes = getClasses(xpath);
if (classes && !classes->empty()) {
result->set("client-classes", classes);
}
ConstElementPtr database = getDatabase(xpath + "/lease-database");
if (database) {
result->set("lease-database", database);
}
[...]
```
into something like this pseudocode:
```
ElementPtr result = Element::createMap();
for (S_Data_Node i : module->dataNodes()) {
result->set(i->xpath(), Element::from(i->valueStr()));
}
```
It would work with any module out of the box, and no node would be left out. When a new node is added to the configuration, on top of the usual bison parser work, we would only need to update the YANG module.
The nodes of the Kea YANG modules would be 1:1 with the JSON configuration. For the IETF model, indeed the data would require changing, but at least this automatic translator would get you the YANG data in ElementPtr form and you would start from there.
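The automatic-translator idea — walk every data node generically instead of hand-coding one getter per node — can be illustrated outside sysrepo with a generic tree walk. A hedged Go sketch; the xpath naming and node shapes here are assumptions for illustration, not the kea-netconf implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// flatten walks an arbitrary nested configuration tree and emits
// xpath-like key/value pairs, the way a softcoded translator could map
// any module's data nodes without per-node code.
func flatten(prefix string, node interface{}, out map[string]interface{}) {
	switch v := node.(type) {
	case map[string]interface{}:
		for k, child := range v {
			flatten(prefix+"/"+k, child, out)
		}
	default:
		out[prefix] = v // leaf node: record its value under its path
	}
}

func main() {
	config := map[string]interface{}{
		"subnet4": map[string]interface{}{
			"subnet": "192.0.2.0/24",
			"lease-database": map[string]interface{}{
				"type": "memfile",
			},
		},
	}

	out := map[string]interface{}{}
	flatten("", config, out)

	// Sort keys for stable output.
	keys := make([]string, 0, len(out))
	for k := range out {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s = %v\n", k, out[k])
	}
}
```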
Benefits:
* makes configuration maintenance easier
* is less error-prone
* improves performance, because non-existing nodes are no longer checked

Label: backlog

## parser-clean is not considering GENERATE_PARSER flag and maintainer-clean-local is not considering GENERATE_MESSAGES and GENERATE_PARSER flags
https://gitlab.isc.org/isc-projects/kea/-/issues/1992 · Razvan Becheriu · 2022-11-02

Because generating messages and parsers can be disabled, removing the generated files should also be gated on the GENERATE_PARSER and GENERATE_MESSAGES flags, since there is no way to get the files back.

Label: backlog

## Sysrepo 1.4: clean up subscribe-notifications
https://gitlab.isc.org/isc-projects/kea/-/issues/1985 · Tomek Mrugalski · 2022-11-02
The following discussion from !1329 should be addressed:
- [ ] @andrei started a [discussion](https://gitlab.isc.org/isc-projects/kea/-/merge_requests/1329#note_225561): (+3 comments)
> subscribe notifications is currently always true. Would you like a kea-netconf config entry with that?
I (Tomek) think that we should always react to configuration changes; I'm not aware of any deployment where it would make sense to ignore them. Andrei pointed out [here](https://gitlab.isc.org/isc-projects/kea/-/merge_requests/1329#note_226816) that we don't have notifications in the models, and we currently log an ERROR. We need to figure out a solution that detects changes introduced by an admin. If necessary, maybe we'd need to add a notifications capability to the models?

Label: backlog

## PRNG and pre-allocation
https://gitlab.isc.org/isc-projects/kea/-/issues/1971 · Francis Dupont · 2022-11-02

I think the use of the std::mt19937 PRNG in potentially security-sensitive code should at least be analyzed.

Label: backlog

## Packet drops seen for 20 sec on every 8th day when perf_dhcp tests are run against Kea DHCP service continuously
https://gitlab.isc.org/isc-projects/kea/-/issues/1969 · varsraja · 2022-11-02
Hi All,
We are running performance tests for Kea DHCP using the perfdhcp tool provided by Kea.
- Setup: Dockerized Kea 1.6.2 running on well-provisioned hardware.
- Run details: running perfdhcp to generate approx. 100 requests/sec, validating all 4 packets of the DHCP exchange (discover, request, etc.):
  `perfdhcp -p 600 -r 100 10.0.0.4 -t 10` (executed in a loop)
- Issue: We observe 20 seconds of packet drops on requests exactly every 8th day. On the days in between there are no packet losses. There are no restarts of the kea-dhcp service. After the packet-loss period, things go back to normal.
- Request: Are there any limits we might be hitting every 8th day of the run? Are there any parameters we should check?
It would be very helpful if we could determine what causes this packet loss.
Attaching our configs: [kea-dhcp4.conf](/uploads/a25452141ff09d39ea5457288135c3c9/kea-dhcp4.conf), [kea-dhcp-ddns.conf](/uploads/9651904743e169891229666ee707effb/kea-dhcp-ddns.conf)
![dhcp-graph-packet-loss](/uploads/2442b96fb947cf0ccf34e931c326b476/dhcp-graph-packet-loss.png)
![dhcp-test-run](/uploads/2f64b24ab37c54ae447df067455da3c2/dhcp-test-run.png)

Label: backlog

## postgres back-end does not use mktime to convert to local timezone
https://gitlab.isc.org/isc-projects/kea/-/issues/1968 · Razvan Becheriu · 2022-11-02

The PostgreSQL back end reads data using `extract epoch` and `boost::lexical_cast` (UTC), but writes data using `localtime_r` (local-timezone date and time).
This causes update queries to fail if the Kea timezone is different from the PostgreSQL back-end timezone.
The fix should be (pseudocode):

```
mktime(gmtime_r(boost::lexical_cast(extract epoch)))
```

Label: backlog

## DNS updates over TCP
https://gitlab.isc.org/isc-projects/kea/-/issues/1951 · Tomek Mrugalski · 2022-11-02

As of June 2021, D2 supports sending DNS updates over UDP only. For the GSS project we will likely need TCP support. Hence this ticket. Also see #1926.

Label: backlog

## Rebalance debug logging
https://gitlab.isc.org/isc-projects/kea/-/issues/1945 · Tomek Mrugalski · 2022-11-02
We have a serious mess with the debug messages. This is a topic that has been raised on several occasions, with people asking questions and getting confused in general. Here are a couple of pointers:
- see #1916 for specific customer complaint
- packet drops used to be logged on many levels
- we currently use levels 0, 10, 15, (no 20 or 30), 40, 45, 50 and 55
- the documentation incorrectly states that the debuglevel goes up to 99 (the configured value can be larger; the highest value we actually use is 55)
- it is impossible for the user to figure out the logging level of a given message, even by looking at the code. The level constants are declared in a .h file but defined in a .cc file, and in many cases they are not used directly but are used to set up other const values that are then used. This is not programming logic an average sysadmin is willing to follow.
- there should be an ability to log the debuglevel, similar to the loglevel, in the logs
- there should be a way to update the messages doc with the actual debug levels being used. Some sort of maintenance script could do that. We have `tools/reorder_message_file.py`; perhaps we could extend it to do something extra for us?
- DBGLVL_COMMAND_DATA (20) is defined, but not used anywhere
- no debug level is defined for 30
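A couple of the points above — making the debuglevel visible in each message, and one obvious threshold check — can be sketched. This is a hedged Go illustration of the idea only; Kea's logging is log4cplus-based and looks nothing like this:

```go
package main

import "fmt"

// formatDebug renders a debug message tagged with its debug level and
// reports whether it passes the configured threshold, so the level is
// visible in the log line the way the loglevel already is.
func formatDebug(threshold, level int, format string, args ...interface{}) (string, bool) {
	if level > threshold {
		return "", false // more detailed than the configured debuglevel
	}
	return fmt.Sprintf("DEBUG[%d] %s", level, fmt.Sprintf(format, args...)), true
}

func main() {
	const configured = 40 // debuglevel from the configuration
	if msg, ok := formatDebug(configured, 10, "command received: %s", "status-get"); ok {
		fmt.Println(msg)
	}
	if _, ok := formatDebug(configured, 55, "detailed packet dump"); !ok {
		fmt.Println("level-55 message suppressed at debuglevel 40")
	}
}
```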
This is definitely a post-2.0 topic.

Label: backlog

## make Kea link statically with external libraries
https://gitlab.isc.org/isc-projects/kea/-/issues/1943 · Greg Rabil · 2022-11-02
In an attempt to build static Kea binaries, I have set the following configure flags:
--disable-shared
--enable-static-link
However, these settings don't appear to do anything: shared libraries are created and the binary programs (e.g. kea-dhcp4) are not statically linked. Are these flags expected to work in Kea 1.8.2?

Label: backlog

## Refactor ClientClassDictionary to provide indexing
https://gitlab.isc.org/isc-projects/kea/-/issues/1942 · Marcin Siodelski · 2022-11-02

I would like to propose refactoring the `ClientClassDictionary` internals to support indexing classes by various parameters. Right now we index by class names and we have an ordered index. In #1836 we are adding a change which matches classes with configured server identifiers. Without indexing, such matching is sub-optimal. Perhaps, if we migrate the class collection to a multi-index container, we could easily add additional indexing if necessary.

Label: backlog

## Kea 1.8.2 configure fails when linking to static OpenSSL library
https://gitlab.isc.org/isc-projects/kea/-/issues/1939 · Greg Rabil · 2022-11-02

I am attempting to build a static Kea 1.8.2 binary on CentOS7. I have built a static version of OpenSSL 1.1.1k (`./config no-shared`). When running configure for Kea 1.8.2 and specifying the `--with-openssl` directive, it fails with the following:
```
checking OS type... Linux
checking for sa_len in struct sockaddr... no
checking for usuable C++11 regex... no
checking for OpenSSL library... yes
checking OpenSSL version... OpenSSL 1.1.1k 25 Mar 2021
checking support of SHA-2... configure: error: missing EVP entry for SHA-2
```
Attached is the config.log file. [config.log](/uploads/68a099b66729e0f428375ce2fd77a95c/config.log)
As a workaround, I am able to force it to configure properly by specifying LDFLAGS and LIBS:
`LDFLAGS="-L/opt/tmp/install/openssl/lib" LIBS="-lcrypto -lpthread"`
Note that this problem does not occur if OpenSSL is built with dynamic libraries.

Label: backlog