[//]: # (werk v2)
# CPU utilization checking: Alert if utilization is exactly at threshold for too long
key | value
---------- | ---
date | 2024-08-07T07:03:40+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Many CPU utilization checks can be configured to alert if the utilization is too high for too long
(configuration options _Levels over an extended time period on total CPU utilization_ and _Levels
over an extended time period on a single core CPU utilization_).
Before, Checkmk alerted only if the utilization was above the threshold for too long. As of this
werk, Checkmk alerts if the utilization is above or exactly at the threshold for too long. This is
consistent with the general behavior of Checkmk to check against upper thresholds with a "greater
than or equal to" operation.
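The new behavior can be illustrated with a minimal sketch; the function name and structure below are illustrative only, not Checkmk's actual implementation:

```python
def check_upper_levels(value, warn, crit):
    """Illustrative helper: upper thresholds are checked with
    'greater than or equal to', so a value sitting exactly at
    the threshold already alerts."""
    if value >= crit:
        return 2  # CRIT
    if value >= warn:
        return 1  # WARN
    return 0  # OK

# A CPU utilization exactly at the warning threshold now alerts:
print(check_upper_levels(90.0, warn=90.0, crit=95.0))  # 1 (WARN)
```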
[//]: # (werk v2)
# Don't show automation secret in the audit log (addresses CVE-2024-28830)
key | value
---------- | ---
date | 2024-06-19T12:10:00+00:00
version | 2.4.0b1
class | security
edition | cre
component | wato
level | 2
compatible | no
By default, only admin users are able to see the audit log. Guests and normal
monitoring users do not have access to the audit log.
Werk #13330 already fixed a problem where passwords were shown in the audit log.
This werk addresses the remaining problem that automation secrets of
automation users were still logged in clear text to the audit log, e.g. when
the automation secret was changed via the REST API or the user interface.
Existing automation secrets in the audit log should be removed automatically
during the update but please double check that no automation secrets remain in
the log (see next paragraph for details).
A backup of the original audit log (before automation secrets were removed) is
copied to "~/var/check_mk/wato/log/sanitize_backup". If anything goes wrong
during the update, you have to copy the files back to ~/var/check_mk/wato/log
and remove the automation secrets manually. If the update works as expected,
you can remove the backup files.
In distributed setups which do not replicate the configuration, automation
secrets are replaced during the update of each site.
In setups which replicate the configuration from central to remote sites no
automation secrets should be present in the logs of the remote site, since only
information about the activation is logged. Automation secrets can only be
present in the logs if you switched to a replicated setup after the upgrade to
2.0. Since automation secrets may be present in this scenario as well, the
steps described above also apply.
*Affected Versions*:
* 2.3.0
* 2.2.0
* 2.1.0
* 2.0.0 (EOL)
*Mitigations*:
Remove automation secrets manually within the files located in
~/var/check_mk/wato/log.
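One way to double-check for leftover secrets is to search the audit log files; the search term below is an assumption about the log format, so adjust it as needed:

```shell
# Directory of the audit log files (site user's home; adjust if needed):
LOG_DIR="${LOG_DIR:-$HOME/var/check_mk/wato/log}"
# List files still containing the string "automation_secret"
# (the exact marker in the log format is an assumption):
grep -rl "automation_secret" "$LOG_DIR" 2>/dev/null || echo "no matches found"
```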
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 2.7 (Low) with the following
CVSS vector: `CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N` and assigned CVE
`CVE-2024-28830`.
[//]: # (werk v2)
# Synthetic Monitoring: Avoid crash in scheduler (client side) during RCC setup
key | value
---------- | ---
date | 2024-08-06T05:54:33+00:00
version | 2.4.0b1
class | fix
edition | cee
component | checks
level | 1
compatible | yes
In one specific scenario, the Robotmk scheduler crashed during the RCC setup step. This happened
if all plans using RCC were configured to run under a specific user instead of under the system
user.
Users have to update the agent on affected hosts to benefit from this werk.
[//]: # (werk v2)
# Custom graphs: Respect "Metrics with all zero values" setting
key | value
---------- | ---
date | 2024-08-05T06:51:13+00:00
version | 2.4.0b1
class | fix
edition | cee
component | multisite
level | 1
compatible | yes
The custom graph setting _Metrics with all zero values_ was not persisted and not taken into account
when rendering the graph.
[//]: # (werk v2)
# omd update: Log "Verifying site configuration"
key | value
---------- | ---
date | 2024-08-05T08:59:03+00:00
version | 2.4.0b1
class | fix
edition | cre
component | omd
level | 1
compatible | yes
If a user runs `omd update`, the output is written to both `$OMD_ROOT/var/log/update.log` and
stdout. However, the output of the site configuration verification introduced in
[Werk #16408](https://checkmk.com/werk/16408) was missing. This has been fixed.
[//]: # (werk v2)
# mk_informix: Follow up for Werk 16198
key | value
---------- | ---
date | 2024-07-26T07:18:38+00:00
version | 2.4.0b1
class | security
edition | cre
component | checks
level | 1
compatible | yes
[Werk #16198](https://checkmk.com/werk/16198) addressed a potential privilege escalation by the agent plugin `mk_informix`.
However, a few call sites of the binaries `dbaccess` and `onstat` were missing the safe execution.
These binaries are now also called in a safe way.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 5.2 (Medium) with the following CVSS vector: `CVSS:4.0/AV:L/AC:L/AT:P/PR:L/UI:N/VC:L/VI:L/VA:L/SC:H/SI:H/SA:H` and assigned CVE `CVE-2024-28829`.
[//]: # (werk v2)
# The custom instances of the MS SQL Server plugin are configured correctly
key | value
---------- | ---
date | 2024-07-19T11:09:57+00:00
version | 2.4.0b1
class | fix
edition | cee
component | checks
level | 2
compatible | yes
Previously, when configuring custom instances, WATO used wrong key names:
`conn` instead of the correct `connection` and `auth` instead of
the correct `authentication`.
This release fixes the problem.
If you are using custom instances, you need to bake and deploy a new
agent package.
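For illustration, the fix amounts to emitting the long key names in the generated configuration. A hypothetical migration of the wrong keys could look like this (the key names come from this werk; the helper and surrounding structure are assumptions):

```python
def fix_instance_keys(instance):
    """Rename the wrong short keys to the correct long ones
    (hypothetical helper, not part of the shipped plugin)."""
    renames = {"conn": "connection", "auth": "authentication"}
    return {renames.get(key, key): value for key, value in instance.items()}

broken = {"conn": {"hostname": "db1"}, "auth": {"username": "monitor"}}
print(fix_instance_keys(broken))
# {'connection': {'hostname': 'db1'}, 'authentication': {'username': 'monitor'}}
```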
[//]: # (werk v2)
# mk_job: MK_VARDIR defaults not being set in bakery
key | value
---------- | ---
date | 2024-07-26T05:47:27+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Due to the way `MK_VARDIR` was set in `mk_job`, default values would not be baked into `mk_job` and
its derivatives.
This change adds a replacement rule for the way `MK_VARDIR` gets assigned in `mk_job` and also
separates assignment and export in order to avoid known problems with Solaris.
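The portable pattern can be sketched as follows; the default path is an example, the actual value is substituted by the bakery:

```shell
# Assign first, then export separately: the combined form
# `export VAR=value` is known to cause problems with Solaris' /bin/sh.
MK_VARDIR="${MK_VARDIR:-/var/lib/check_mk_agent}"
export MK_VARDIR
echo "$MK_VARDIR"
```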
[//]: # (werk v2)
# Agent Updates: host selection ignores configured host labels
key | value
---------- | ---
date | 2024-08-01T13:39:05+00:00
version | 2.4.0b1
class | fix
edition | cee
component | agents
level | 1
compatible | yes
When configuring the global setting *Automatic agent updates/Activate update only on the selected hosts*,
the selection of host labels under *Match host labels* was not taken into account.
Technical background: The set of host selection parameters used in the above rule comes from a generic ruleset
pattern that is used in several other host rulesets in Checkmk.
Eventually, the option to filter for host labels was introduced to the generic ruleset, but we missed
evaluating it when determining the hosts allowed for agent updates.
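Conceptually, the missing evaluation boils down to also checking the host's labels against the rule's label conditions, e.g. (a simplified sketch with equality conditions only, not the actual Checkmk code):

```python
def host_allowed(rule_label_conditions, host_labels):
    """A host qualifies for agent updates only if every label
    condition of the rule is present on the host with the same
    value (simplified sketch)."""
    return all(
        host_labels.get(name) == value
        for name, value in rule_label_conditions.items()
    )

print(host_allowed({"env": "prod"}, {"env": "prod", "os": "linux"}))  # True
print(host_allowed({"env": "prod"}, {"os": "linux"}))  # False
```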