Overview
In the future we plan to introduce sophisticated mechanisms for collecting and processing logs from the monitored machines. This will most likely require integration with third-party tools, such as Logstash. However, one of the requirements for Stork is that it be able to look into the logs of a monitored application and display the contents of the log file (or the tail of the log file) in the UI. There should be no need to install any additional third-party software for such basic functionality.
The details can be found in GitLab issue #52.
Logs Location
Monitored applications may use one or more log files. Kea can be configured to output log messages of different kinds (emitted by different loggers) into separate files. Stork must detect the location of the log files when parsing the Kea configuration. The logs locations must be stored in the database for each daemon and presented to the user in the daemon tab in the UI. The user should be able to click on the listed log files to see the contents of the log.
Tailing Logs
A whole new class of gRPC calls should be implemented in the Stork Agent to support tailing files. The new calls should be generic, i.e. allow tailing any text file. At least the following arguments should be supported by the call:
- file location,
- offset and whence (interpreted as in Go's os.File.Seek).
There will be two types of calls implemented:
- GetLogTail()
- GetLogTailAndFollow()
The first call will fetch the tail of the log file and return. The second call will return the gRPC stream with the current tail of the log and then send incremental updates to the log file over the stream.
The following package seems to provide sufficient functionality to implement logs tailing in the Stork Agent: https://godoc.org/github.com/hpcloud/tail.
Server Side Logs Storage
The server should only fetch the logs from the Stork Agent on demand, i.e. when the user selects the log file to be viewed in the UI. The log file contents change frequently and therefore it is not practical to cache the logs in the database. Most of the time it should be enough to cache the logs in the server's memory.
The logs aren't cached on the server when the user views the log file without following the updates. Viewing the file triggers a REST API call followed by the GetLogTail() gRPC call to fetch the desired log. The fetched logs are cached in the UI using the ServerDataService. The data is refreshed when the refresh logs button is pressed or the page is reloaded.
The data flow gets slightly more complicated when the user chooses to follow the log file. In this case, the server calls the GetLogTailAndFollow() function, which opens a server-side (agent-side) stream used to pass live log updates to the server. The server caches the followed log data locally for the given app/filename. All users interested in following the given log file see the same copy of the data cached on the server. The cache is stored in memory and is lost upon server restart.
The server monitors subscriptions to the given log file. When all users unsubscribe from following the file, the stream is closed.
Accessing the Logs from UI
There is an open question as to where the links to the log files produced by the daemons should be placed. The most reasonable places are the daemon tabs in the app view.
Another question is where the user should be taken when they click on one of these links. Should it open yet another tab within the current tab, or should it be a new view?