
Closed
Opened Feb 19, 2019 by Marcin Siodelski (@marcin), Developer

HA peer should drop leases not present on the partner during sync

Let's suppose there are two HA peers, A and B. Peer B dies. While peer B is offline, the admin sends a lease4-del command to A. Peer B then starts up and synchronizes its lease database with A. It correctly adds new leases and updates existing leases based on the list received from A. However, it doesn't remove the lease that was deleted on A while B was offline. The server admin would need to send a lease4-del command to B to remove that lease.
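For reference, the deletion performed on A while B is offline is just an ordinary lease4-del control command; the address below is only an example:

```json
{
    "command": "lease4-del",
    "arguments": {
        "ip-address": "192.0.2.2"
    }
}
```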

In order to address this problem we have to fetch all leases from B's backend and iterate over them to see whether they are also present on A. To do so, we will have to keep a local copy of the leases received from A. For Memfile, MySQL and Postgres we could do this more efficiently by comparing ranges of leases, as they are returned ordered by IP address. After comparing a range of leases we could simply drop the local copy of that range. However, this won't work for Cassandra, which returns leases out of order. In the Cassandra case we will have to collect all leases returned by the peer before the comparison.
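A minimal sketch of the unordered (Cassandra-style) variant, using plain standard C++ containers rather than Kea's actual LeaseMgr API; the lease addresses and the deleteLease() mention in the comment are purely illustrative:

```cpp
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

int main() {
    // Addresses of leases received from peer A during synchronization
    // (hypothetical data).
    std::unordered_set<std::string> from_partner = {
        "192.0.2.1", "192.0.2.3"
    };

    // Leases currently held in B's backend (hypothetical data);
    // 192.0.2.2 was deleted on A while B was offline.
    std::vector<std::string> local = {
        "192.0.2.1", "192.0.2.2", "192.0.2.3"
    };

    // After applying adds/updates from the partner, drop any local lease
    // that the partner no longer has.
    for (const auto& addr : local) {
        if (from_partner.count(addr) == 0) {
            std::cout << "deleting stale lease " << addr << std::endl;
            // In Kea this would translate to a delete call on the lease
            // backend; omitted here.
        }
    }
    return 0;
}
```

For the ordered backends the same check could be done range by range, discarding each range of the partner's copy once it has been compared, so the full partner list never has to be held in memory at once.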

Milestone: Kea1.9-backlog
Reference: isc-projects/kea#479