ID: 13376
Title: Add automatic cleanup note if crash report can not be found
Component: Multisite
Level: 1
Class: Bug fix
Version: 2.1.0i1
If a crash report can no longer be found, a note is now added to the error
message explaining that this can be caused by the automatic cleanup job, which
deletes reports once it finds more than 200 crash reports below
~/var/check_mk/crashes. In that case the cleanup job has already removed the
report in question.
The report should be generated again once the related page is opened again or
the next check interval of the related check is reached.
ID: 13331
Title: Performance improvements for the mknotifyd
Component: Notifications
Level: 2
Class: Bug fix
Version: 2.1.0i1
This werk fixes some performance regressions in the mknotifyd and can increase
(depending on the setup) the throughput of the mknotifyd. The changes are most
beneficial if you use notification plugins with a short running time or if you
forward notifications from remote to central sites. The fixed regressions are:
In distributed setups, forwarded messages were always processed in a single
subprocess before the actual notification plugins were executed. That means
that even if you used the option "Maximum concurrent executions" for a
notification plugin, it may not have had the desired effect if the processing
of forwarded messages was the limiting factor. You can now use the option
"Maximum concurrent executions for forwarded messages" in the "notification
spooler configuration" of the "global settings" to configure the number of
processes. Ideally, this number should roughly match the concurrent executions
of the notification plugins of the incoming messages. If no value is specified,
a default of 1 is used. Note that higher values lead to a higher CPU load. If,
for example, a value of 2 is used and a notification plugin uses 2 concurrent
executions, 4 subprocesses can be started simultaneously.
The mknotifyd polls for new data with a timeout of 1s. Previously, output of
executed notification plugins was not recognized in the poll, i.e. if no data
was present on a connection or no forwarding was used, the timeout of 1s was
always hit. Now, the mknotifyd recognizes new output of notification plugins
and the poll exits before the 1s timeout is hit.
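As an illustration only (not the actual mknotifyd code), a poll loop that also
registers the stdout pipe of a running plugin process wakes up as soon as
output arrives instead of always waiting out the full timeout:

    import selectors
    import subprocess

    sel = selectors.DefaultSelector()

    # Hypothetical stand-in for a running notification plugin.
    plugin = subprocess.Popen(["/bin/echo", "done"], stdout=subprocess.PIPE)
    sel.register(plugin.stdout, selectors.EVENT_READ, data="plugin output")

    # Because the plugin's stdout is part of the poll, select() returns as
    # soon as output arrives; the full 1 second is only waited if nothing
    # happens at all.
    for key, _events in sel.select(timeout=1.0):
        print("ready:", key.data, key.fileobj.readline())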
Incoming connections used blocking sockets, and outgoing connections could
occasionally end up with a blocking socket. This degraded the performance of
the mknotifyd if a lot of notifications had to be sent to a remote site and the
TCP queue was already full. It could also lead to occasional disconnects if
heartbeats were missed due to a blocking call and, in the worst case, to a
deadlock if two sites were stuck in a blocking call.
If a connection had a lot of outgoing data, the mknotifyd only sent data and
did not read data from the socket, which could lead to missed heartbeats and
disconnects as well.
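A minimal sketch of the non-blocking pattern (simplified, not the actual
mknotifyd implementation): the socket is put into non-blocking mode and reads
and writes are interleaved, so a full TCP send queue can no longer stall the
reading of heartbeats:

    import select
    import socket

    def pump(sock: socket.socket, outgoing: bytearray) -> bytes:
        """Send what fits without blocking and read whatever has arrived."""
        sock.setblocking(False)
        readable, writable, _ = select.select(
            [sock], [sock] if outgoing else [], [], 1.0)

        if writable:
            sent = sock.send(outgoing)  # never blocks, even if the queue is full
            del outgoing[:sent]

        if readable:
            return sock.recv(4096)  # e.g. heartbeats from the remote site
        return b""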
Before this werk, all spoolfiles in the spool and deferred directories were
always processed. If a lot of spoolfiles were present in these directories,
this could lead to up to 2s spent on processing them per cycle.
Finally, this werk increases the amount of data collected per cycle from a
connection. This resolves the issue of TCP queues filling up when a lot of
notifications had to be forwarded to a remote site.
ID: 13373
Title: Do not log multiple exceptions on failed login of LDAP user
Component: Setup
Level: 1
Class: Bug fix
Version: 2.1.0i1
If a user with an LDAP connection entered wrong credentials on login, multiple
exceptions were written to ~/var/log/web.log.
From now on, only one log entry per failed login will be created.
ID: 13168
Title: mk_postgres: fix for CentOS 8: "MAIN": not running
Component: Checks & agents
Level: 1
Class: Bug fix
Version: 2.1.0i1
On some platforms, the path for the postgres instance "main" is called "data" instead of "main".
This could lead to the following error message:
instance "MAIN": not running
To support more platforms, the instance detection for "main" now additionally falls back
to looking for "data" in the path if "main" was not found.
The agent must be redeployed in order to apply this bugfix.
ID: 13374
Title: Fix wrong conversion of disabled service rules with negate on update
Component: Setup
Level: 1
Class: Bug fix
Version: 2.1.0i1
Since version 2.0.0, if disabled service rules with a negated service condition were used, these
rules were wrongly converted with the next 2.0.0p update.
The negate condition was empty after updating, leading to malfunctioning rules.
This also happened when updating from 1.6.0 to 2.0.0.
If you used such rules, you have to set the negated service condition again.
ID: 13467
Title: Allow to configure additional IP addresses regardless of the IP address family
Component: Setup
Level: 1
Class: Bug fix
Version: 2.1.0i1
In the host configuration you can now configure "Additional IPv6 addresses" even if the "IP address family" is set to "IPv4" and vice versa.
The configured additional IP addresses can be used by active checks (such as ICMP ping).
They can be IPv4 and/or IPv6 addresses, regardless of whether the host is contacted by the monitoring backend via IPv4, IPv6 or both.
By making these options available independently of the IP address family, the behaviour of the GUI matches that of the REST API, which already allows this kind of configuration.
ID: 13372
Title: Allow usage of host label conditions in service labels ruleset
Component: Setup
Level: 1
Class: Bug fix
Version: 2.1.0i1
Werk #12840 implemented a validation that prevents the rulesets "Host labels"
and "Service labels" from using host or service labels as conditions. However,
the "Service labels" ruleset should be able to use host label conditions, which
is possible again from now on.
ID: 13350
Title: IPMI sensors via IPMItool: Add option to specify interface
Component: Checks & agents
Level: 1
Class: New feature
Version: 2.1.0i1
The special agent which collects data from IPMI sensors via IPMItool
(ruleset "IPMI Sensors via Freeipmi or IPMItool") now has an
additional configuration option to specify the IPMI interface to be
used with ipmitool. This is necessary for some devices.
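For illustration only (not the special agent's actual code), the configured
interface would end up as the -I option of the ipmitool invocation; host, user
and password below are placeholders:

    import subprocess

    def query_ipmi_sensors(host, user, password, interface="lanplus"):
        cmd = [
            "ipmitool",
            "-I", interface,   # the interface configured via the new option
            "-H", host,
            "-U", user,
            "-P", password,
            "sensor", "list",
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout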
ID: 13351
Title: IPMI Sensors via Freeipmi: Bugfixes in configuration flags
Component: Checks & agents
Level: 1
Class: Bug fix
Version: 2.1.0i1
There were two bugs in the configuration of the special agent for IPMI sensors
via freeipmi: The flags "Sensor threshold" and "Suppress not available
sensors" were interchanged (i.e., activating one in the GUI activated
the other one on the command line). Furthermore, unticking a configuration
flag had no effect, i.e., the corresponding command line argument was still
active.