ID: 11265
Title: Discovery Page: Do not show last job failures if they are fixed
Component: WATO
Level: 1
Class: Bug fix
Version: 1.7.0i1
Previously, failures of the last discovery job were displayed in a red message box
again even after they had been fixed. This happened in the following situation: if a
hostname was not resolvable, the discovery page displayed
{{Failed to lookup IP address of HOST via DNS}}. After an IP address was configured,
this message was shown again on the discovery page, even though a table of discovered
services was also displayed. This behaviour was misleading.
Now the last error message is included in the current job details but is no longer
displayed as a critical message.
ID: 11260
Title: Fix crash in EC configuration (regression since 1.6.0p14)
Component: Event Console
Level: 1
Class: Bug fix
Version: 1.7.0i1
When the Event Console contained open events that were created by a rule which
had since been removed from the Event Console, an exception was raised on the
Event Console configuration page (IndexError (list index out of range)).
However, this message was only a follow-up error. The original error message
could be found in var/log/mkeventd.log, where messages of this kind appeared:
C+:
2020-08-27 14:19:42,913 [40] [cmk.mkeventd.StatusServer] Error handling client : 'my_rule'
Traceback (most recent call last):
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 3031, in serve
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 3071, in handle_client
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 3092, in _answer_query
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 2753, in query
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 2871, in _enumerate
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 736, in get_status
    row += self._add_event_limit_status()
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 766, in _add_event_limit_status
    self.get_rules_with_active_event_limit(),
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 1784, in get_rules_with_active_event_limit
    if num_events >= self._get_rule_event_limit(rule_id)[0]:
  File "/omd/sites/stable/lib/python/cmk/ec/main.py", line 1885, in _get_rule_event_limit
    rule = self._rule_by_id.get(rule_id)
KeyError: 'my_rule'
C-:
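The traceback shows that the event status code looked up the event limit by rule ID
without accounting for rules that had been deleted in the meantime. The following is a
minimal sketch of the defensive lookup pattern such a fix implies; the class, method and
default value are hypothetical and not the actual Event Console code:
C+:
# Minimal sketch (hypothetical names, not the actual Event Console code):
# tolerate rule IDs of open events whose rule has been removed from the
# configuration instead of raising a KeyError.

DEFAULT_EVENT_LIMIT = (1000, "overflow")  # assumed fallback, not a real default

class EventLimitLookup:
    def __init__(self, rules):
        # Map rule ID -> rule definition for the currently configured rules
        self._rule_by_id = {rule["id"]: rule for rule in rules}

    def get_rule_event_limit(self, rule_id):
        rule = self._rule_by_id.get(rule_id)
        if rule is None:
            # The rule was deleted while events created by it are still open:
            # fall back to a default instead of failing.
            return DEFAULT_EVENT_LIMIT
        return rule.get("event_limit", DEFAULT_EVENT_LIMIT)

lookup = EventLimitLookup([{"id": "my_rule", "event_limit": (500, "stop")}])
print(lookup.get_rule_event_limit("deleted_rule"))  # -> (1000, 'overflow')
C-: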
ID: 11244
Title: Check_MK Discovery service: Contact SNMP devices for real if 'Perform a full SNMP scan' is enabled
Component: Checks & agents
Level: 1
Class: Bug fix
Version: 1.7.0i1
The ruleset {{Periodic service discovery}} offers the two options
{{Perform a full SNMP scan always, detect new check types}} and
{{Just rely on existing check files, detect new items only}}, which are
specific to SNMP devices. The idea behind the first option is to actually
contact SNMP devices and possibly find new check types. Since Checkmk version
1.5.0 this option had no effect and cached data was always used. As a result,
the {{Check_MK Discovery}} service might not have found new check types on
SNMP devices.
Now, if this option is set, Checkmk really contacts SNMP devices and may find new
check types and services.
Please note: the execution time of the {{Check_MK Discovery}} service may be
longer than before on hosts with SNMP data sources. In this case you can
increase the regular and retry check intervals of this service using the rulesets
{{Normal check interval for service checks}} and {{Retry check interval for service checks}}
in order to prevent timeouts.
ID: 11243
Title: Bulk Discovery: Align caching options with discovery page
Component: WATO
Level: 1
Class: Bug fix
Version: 1.7.0i1
On the {{Bulk Discovery}} WATO page you could set the options
{{Use cached data if present}} and {{Do full SNMP scan for SNMP devices}}.
The latter had no effect since Checkmk version 1.5.0. The idea behind the
option {{Do full SNMP scan for SNMP devices}} was to contact the SNMP device
and ignore existing caches. This behaviour is implemented behind the option
{{Do a full service scan}} on the {{WATO Discovery}} page.
Moreover, the {{Bulk Discovery}} page was inconsistent with the {{WATO Discovery}}
page, independent of the options above. Thus we removed both options
{{Use cached data if present}} and {{Do full SNMP scan for SNMP devices}} and
added the option {{Do a full service scan}} in order to align the behaviour with
the discovery page. Now both pages work the same way.
Please note: with this new option a service scan of SNMP devices may take more
time than before because the device will now actually be contacted.
ID: 11242
Title: Status of the Check_MK services: Individual, per data source configured parameters had no effect
Component: Checks & agents
Level: 1
Class: Bug fix
Version: 1.7.0i1
Within the ruleset {{Status of the Check_MK services}}, overall parameters
and/or individual, per data source parameters can be configured. The latter,
if configured, are supposed to overwrite the overall parameters. Previously
these individual parameters had no effect; this has been fixed.
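To illustrate the intended precedence, here is a minimal sketch which assumes the
parameters are plain dictionaries; the parameter names are hypothetical and not the
actual ruleset keys:
C+:
# Minimal sketch (hypothetical parameter names, not the actual ruleset keys):
# individual, per data source parameters overwrite the overall parameters.

overall_params = {"connection_time": (1.0, 5.0), "version_mismatch": "warn"}
per_source_params = {"snmp": {"connection_time": (2.0, 10.0)}}

def effective_params(data_source):
    params = dict(overall_params)
    params.update(per_source_params.get(data_source, {}))  # individual settings win
    return params

print(effective_params("snmp"))
# -> {'connection_time': (2.0, 10.0), 'version_mismatch': 'warn'}
C-: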
ID: 11460
Title: Windows agent service sets correct access rights in ProgramData directory
Component: Checks & agents
Level: 1
Class: Bug fix
Version: 1.7.0i1
Previously, a standard user could write to the ProgramData/checkmk/agent
directory, making it possible to modify sensitive information.
With this fix the above mentioned vulnerability is eliminated: a standard user
only has the right to read and execute.
ID: 10314
Title: Remove deprecated check_multi plugin
Component: Site Management
Level: 1
Class: New feature
Version: 1.7.0i1
The check_multi plugin is deprecated and no longer maintained, and has therefore been removed from the Checkmk package.
If you are still using this plugin, you can install it manually in your local hierarchy.
ID: 11264
Title: Fix building agent MSI packages on SLES15SP1
Component: agents
Level: 1
Class: Bug fix
Version: 1.7.0i1
When using the Agent Bakery on SLES15SP1 Checkmk servers, it was not possible to
build MSI packages because the required tools (msibuild, msiinfo) were missing on
that platform.
ID: 11361
Title: Reworking of discovery rulesets for network interfaces and switch ports
Component: Checks & agents
Level: 2
Class: New feature
Version: 1.7.0i1
Up to now, the discovery of network interfaces and switch ports was controlled by
two main rules: "Network Interface and Switch Port Discovery" (discovery of single
interfaces) and "Network interface groups" (grouping of interfaces). With this werk,
we integrate "Network interface groups" into "Network Interface and Switch Port
Discovery" and rework the latter. The rule "Network interface groups" is now deprecated
and not applied any more.
The reworked discovery ruleset is split into three parts: the configuration of the
discovery of single interfaces, the configuration of interface groups, and the
conditions which determine to which interfaces this rule applies. In the following,
we explain the changes in more detail.
<ul>
<li>In the first part, you can activate or deactivate the discovery of single interfaces.
You can also configure the way monitored interfaces are represented, i.e., by index, by
description or by alias.</li>
<li>The second part offers the option to group interfaces. Here, you can specify the names
of the groups and the way the corresponding services display their members in the service
output (again, by index, by description or by alias). Contrary to before, there is no longer a
separate option to define interface groups on clusters, since this option was redundant anyway.
</li>
<li>The third part of the rule determines to which interfaces this rule applies. You can choose
to apply this rule to all interfaces or you can set conditions such as a regular expression
matching the interface description or a set of port types. For each interface, Checkmk first
determines the set of rules which actually apply to it and then merges these rules,
whereby rules higher up in the hierarchy (e.g. rules in subfolders) overwrite rules further
down (see the sketch after this list).</li>
<li>Note that due to the point above, this rule is a somewhat special case compared to other
rulesets in Checkmk. Usually, the conditions for a rule to apply are exclusively configured in
the section "Conditions". However, here you can set additional, interface-specific conditions,
which offer finer control over the discovery process.</li>
</ul>
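To illustrate the merging behaviour described above, here is a minimal sketch under the
assumption that each matching rule contributes a dictionary of settings; the keys and
values are hypothetical and not the actual ruleset format:
C+:
# Minimal sketch (hypothetical keys, not the actual ruleset format):
# all discovery rules matching an interface are merged, with
# higher-precedence rules (e.g. rules from subfolders) overwriting
# lower-precedence ones.

def merge_interface_rules(matching_rules):
    """matching_rules is ordered from highest to lowest precedence."""
    merged = {}
    # Apply rules from lowest to highest precedence so that the settings of
    # higher-precedence rules end up overwriting the others.
    for rule in reversed(matching_rules):
        merged.update(rule)
    return merged

subfolder_rule = {"discover_single_interfaces": False}
main_folder_rule = {"discover_single_interfaces": True, "appearance": "alias"}
print(merge_interface_rules([subfolder_rule, main_folder_rule]))
# -> {'discover_single_interfaces': False, 'appearance': 'alias'}
C-: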
This change is incompatible. It affects the following checks:
<tt>aix_if</tt>, <tt>brocade_optical</tt>, <tt>emc_vplex_if</tt>, <tt>esx_vsphere_counters</tt>,
<tt>fritz</tt>, <tt>hitachi_hnas_fc_if</tt>, <tt>hp_msa_if</tt>, <tt>hpux_if</tt>, <tt>if</tt>,
<tt>if64</tt>, <tt>if64adm</tt>, <tt>if64_tplink</tt>, <tt>if_brocade</tt>, <tt>if_fortigate</tt>,
<tt>if_lancom</tt>, <tt>lnx_if</tt>, <tt>mcdata_fcport</tt>, <tt>netapp_api_if</tt>,
<tt>statgrab_net</tt>, <tt>ucs_bladecenter_if</tt>, <tt>vms_if</tt>, <tt>winperf_if</tt>.
For users monitoring interface groups, this change is definitely incompatible. They have to migrate
their current rules for grouping interfaces from the now deprecated ruleset "Network interface
groups" to the new discovery ruleset. Note that there is no longer an option to discover interface
groups <i>instead of</i> the corresponding single interfaces. To reproduce this behavior, configure
your interface groups and switch off the discovery of single interfaces for the group members.
After migrating the grouping rules, these users have to re-discover the services of the affected hosts.
For all other users monitoring network interfaces, this change might be incompatible. Generally,
any already discovered interface services will continue to work. However, depending on the
user-defined rules from the (now reworked) ruleset "Network Interface and Switch Port Discovery",
some interface services might vanish upon re-discovery or new interface services might appear. In
such cases, users have to adapt the new, reworked versions of their user-defined rules.
Finally, it is worth noting that the new ruleset offers the option to match all interfaces, which
allows for simplifying some rules. In particular, users might be able to simplify rules where all
interface port types and states are selected.