[//]: # (werk v2)
# Hanging background jobs/frozen site
key | value
---------- | ---
date | 2024-06-26T12:45:00+00:00
version | 2.3.0p8
class | fix
edition | cee
component | multisite
level | 1
compatible | yes
Background jobs could previously hang without ever finishing under certain conditions. This could freeze the entire site if the job had acquired crucial file locks (e.g. a lock on licensing files).
This is usually accompanied by the error
```
Bad file descriptor
```
in the logs.
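For context, the freeze follows the usual blocking-lock pattern: once the hung job holds an exclusive lock, every other process requesting the same lock waits forever. A minimal POSIX sketch (not Checkmk's actual locking code; the path is illustrative):
```
import fcntl
import os

LOCK_PATH = "/tmp/licensing.lock"  # illustrative path, not the real lock file

def acquire_exclusive(path: str = LOCK_PATH) -> int:
    """Open the lock file and take an exclusive lock on it."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until the current holder releases
    return fd

# If a hung background job acquired the lock and never releases it, this
# call never returns and the caller appears frozen:
fd = acquire_exclusive()
try:
    pass  # ... critical section ...
finally:
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```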
[//]: # (werk v2)
# agent_netapp_ontap: Fix TypeError for SnapVault
key | value
---------- | ---
date | 2024-06-26T11:18:02+00:00
version | 2.3.0p8
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, `agent_netapp_ontap` crashed with a `TypeError` if the API returned any `SnapMirror` objects:
```
File "/omd/sites/mysite/local/lib/python3/cmk/special_agents/agent_netapp_ontap.py", line 827, in write_sections
write_section("snapvault", fetch_snapmirror(connection), logger)
File "/omd/sites/mysite/local/lib/python3/cmk/special_agents/agent_netapp_ontap.py", line 32, in write_section
writer.append_json(element.model_dump(exclude_unset=True, exclude_none=False))
File "/omd/sites/mysite/lib/python3/cmk/special_agents/v0_unstable/agent_common.py", line 62, in append_json
self.writeline(json.dumps(data, sort_keys=True))
File "/omd/sites/mysite/lib/python3.12/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/omd/sites/mysite/lib/python3.12/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/omd/sites/mysite/lib/python3.12/json/encoder.py", line 258, in iterencode
return _iterencode(o, 0)
File "/omd/sites/mysite/lib/python3.12/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
```
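The underlying mechanism is standard `json` behavior: `json.dumps` raises exactly this `TypeError` for any value that is not a JSON-serializable primitive, list, or dict. A minimal, self-contained reproduction with illustrative data (not the agent's actual objects):
```
import json
from datetime import datetime

# Non-primitive values trigger the TypeError from the traceback above:
try:
    json.dumps({"last_transfer": datetime.now()}, sort_keys=True)
except TypeError as exc:
    print(exc)  # Object of type datetime is not JSON serializable

# One common remedy is converting such values while dumping:
print(json.dumps({"last_transfer": datetime.now()}, default=str))
```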
Werk 16434 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# Synthetic Monitoring: Privilege Escalation
key | value
---------- | ---
date | 2024-06-24T14:56:31+00:00
version | 2.3.0p8
class | security
edition | cee
component | agents
level | 1
compatible | yes
The Robotmk scheduler was affected by a privilege escalation issue. This issue affects users who
have configured the rule `Robotmk scheduler (Windows)`. Specifically, an attacker is able to exploit
the issue if
1. `Automated environment setup (via RCC)` was configured in the `Robotmk scheduler (Windows)` rule,
2. the same plan was configured without configuring `Execute plan as a specific user`,
3. and a user on the host onto which the scheduler has been deployed was compromised.
In this event, the attacker could gain SYSTEM privileges on the host. If `Execute plan as a specific
user` _is_ configured, then the attacker could compromise that specific user, rather than SYSTEM.
There is a second, similar but distinct issue. If
- there are two or more plans configured with `Execute plan as a specific user` with distinct users,
- and one of the configured users was already compromised,
then the attacker could compromise the other user.
*Background*:
The Robotmk scheduler is started by the Checkmk agent, which runs with SYSTEM privileges.
Moreover, Robotmk allows the user to automatically build Python environments via RCC. During setup,
the scheduler would enable an RCC feature called `shared holotree usage`. This feature allows all
users on the host to edit these Python environments. Thus, any compromised user on the host is also
able to compromise a user who executes code from these shared environments.
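To make the background concrete, here is an illustration of the vulnerability class only (not Robotmk or RCC code; Windows ACLs are rendered in POSIX terms for brevity and the path is hypothetical): a privileged process must never execute code from a directory that other users can modify.
```
import os
import stat

def writable_by_others(path: str) -> bool:
    """True if group or world may write to the given path (POSIX view)."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

env_root = "/opt/shared-holotree"  # hypothetical shared environment location

if writable_by_others(env_root):
    # Any local user could swap out interpreters or libraries here, and a
    # privileged scheduler would then execute the attacker's code.
    raise RuntimeError(f"refusing to run code from {env_root}")
```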
With this Werk, `shared holotree usage` will no longer be enabled. Affected users will have their
access to the vulnerable Python environments revoked. Moreover, the permissions inside the working
directory of Robotmk have been reworked. Users now only have access to the directories that are
required for their own executions.
Note that you must both update Checkmk and redeploy the latest Robotmk scheduler.
*Affected Versions*:
* 2.3.0
*Mitigations*:
If updating is not possible:
* Do not use the option `Automated environment setup (via RCC)`.
* Always use the same user for `Execute plan as a specific user`.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 7.8 (High) with the following CVSS vector:
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H` and requested a CVE.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# Synthetic Monitoring: Privilege Escalation
key | value
---------- | ---
date | 2024-06-24T14:56:31+00:00
version | 2.3.0p8
class | security
edition | cee
component | agents
level | 1
compatible | yes
The Robotmk scheduler was affected by a privilege escalation issue. This issue affects users who
have configured the rule `Robotmk scheduler (Windows)`. Specifically, an attacker is able to exploit
the issue if
1. `Automated environment setup (via RCC)` was configured in the `Robotmk scheduler (Windows)` rule,
2. the same plan was configured without configuring `Execute plan as a specific user`,
3. and a user on the host onto which the scheduler has been deployed was compromised.
In this event, the attacker could gain SYSTEM privileges on the host. If `Execute plan as a specific
user` _is_ configured, then the attacker could compromise that specific user, rather than SYSTEM.
There is a second, similar but distinct issue. If
- there are two or more plans configured with `Execute plan as a specific user` with distinct users,
- and one of the configured users was already compromised,
then the attacker could compromise the other user.
*Background*:
The Robotmk scheduler is started by the Checkmk agent, which runs with SYSTEM privileges.
Moreover, Robotmk allows the user to automatically build Python environments via RCC. During setup,
the scheduler would enable an RCC feature called `shared holotree usage`. This feature allows all
users on the host to edit these Python environments. Thus, any compromised user on the host is also
able to compromise a user who executes code from these shared environments.
With this Werk, `shared holotree usage` will no longer be enabled. Affected users will have their
access to the vulnerable Python environments revoked. Moreover, the permissions inside the working
directory of Robotmk have been reworked. Users now only have access to the directories that are
required for their own executions.
Note that you must both update Checkmk and redeploy the latest Robotmk scheduler.
*Affected Versions*:
* 2.3.0
*Mitigations*:
If updating is not possible:
+
* Do not use the option `Automated environment setup (via RCC)`.
* Always use the same user for `Execute plan as a specific user`.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 7.8 (High) with the following CVSS vector:
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H` and requested a CVE.
Werk 17056 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# Don't show automation secret in the audit log (addresses CVE-2024-28830)
key | value
---------- | ---
date | 2024-06-19T12:10:00+00:00
version | 2.3.0p7
class | security
edition | cre
component | wato
level | 2
compatible | no
By default, only admin users are able to see the audit log. Guests and normal
monitoring users do not have access to the audit log.
Werk #13330 already fixed a problem where passwords were shown in the audit log.
This werk now addresses the problem that automation secrets of
automation users were still logged in clear text to the audit log, e.g. on change of
the automation secret via the REST API or the user interface.
Existing automation secrets in the audit log should be removed automatically
during the update, but please double-check that no automation secrets remain in
the log (see the next paragraph for details).
A backup of the original audit log (before automation secrets were removed) is
copied to "~/audit_log_backup". If anything goes wrong
during the update, you have to copy the files back to ~/var/check_mk/wato/log
and remove the automation secrets manually by running
```
sed -i 's/Value of "automation_secret" changed from "[^"]*" to "[^"]*".\\n//g' ~/var/check_mk/wato/log/wato_audit*
sed -i 's/Attribute "automation_secret" with value "[^"]*" added.\\n//g' ~/var/check_mk/wato/log/wato_audit*
```
If the update works as expected, you can remove the backup files.
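To double-check the cleanup, scanning the audit log files for the `automation_secret` keyword is sufficient. A minimal sketch (not shipped with Checkmk; run it as the site user):
```
from pathlib import Path

# Flag any audit log file that still mentions automation secrets.
log_dir = Path.home() / "var" / "check_mk" / "wato" / "log"
for log_file in sorted(log_dir.glob("wato_audit*")):
    if "automation_secret" in log_file.read_text(errors="replace"):
        print(f"Inspect manually: {log_file}")
```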
In distributed setups which do not replicate the configuration, automation
secrets are replaced during the update of each site.
In setups which replicate the configuration from central to remote sites, no
automation secrets should be present in the logs of the remote sites, since only
information about the activation is logged. Only if you switched to a
replicated setup after the upgrade to 2.0 can automation secrets be
present in the logs. Since automation secrets may be present in this scenario as
well, the steps described above also apply.
*Affected Versions*:
* 2.3.0
* 2.2.0
* 2.1.0
* 2.0.0 (EOL)
*Mitigations*:
Remove automation secrets manually within the files located in
~/var/check_mk/wato/log.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 2.7 (Low) with the following
CVSS vector: `CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N` and assigned CVE
`CVE-2024-28830`.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# Don't show automation secret in the audit log (addresses CVE-2024-28830)
key | value
---------- | ---
date | 2024-06-19T12:10:00+00:00
version | 2.3.0p7
class | security
edition | cre
component | wato
level | 2
compatible | no
By default, only admin users are able to see the audit log. Guests and normal
monitoring users do not have access to the audit log.
Werk #13330 already fixed a problem where passwords were shown in the audit log.
This werk now addresses the problem that automation secrets of
automation users were still logged in clear text to the audit log, e.g. on change of
the automation secret via the REST API or the user interface.
Existing automation secrets in the audit log should be removed automatically
during the update, but please double-check that no automation secrets remain in
the log (see the next paragraph for details).
A backup of the original audit log (before automation secrets were removed) is
copied to "~/audit_log_backup". If anything goes wrong
during the update, you have to copy the files back to ~/var/check_mk/wato/log
and remove the automation secrets manually by running
```
sed -i 's/Value of "automation_secret" changed from "[^"]*" to "[^"]*".\\n//g' ~/var/check_mk/wato/log/wato_audit*
sed -i 's/Attribute "automation_secret" with value "[^"]*" added.\\n//g' ~/var/check_mk/wato/log/wato_audit*
```
If the update works as expected, you can remove the backup files.
In distributed setups which do not replicate the configuration, automation
secrets are replaced during the update of each site.
In setups which replicate the configuration from central to remote sites, no
automation secrets should be present in the logs of the remote sites, since only
information about the activation is logged. Only if you switched to a
replicated setup after the upgrade to 2.0 can automation secrets be
present in the logs. Since automation secrets may be present in this scenario as
well, the steps described above also apply.
*Affected Versions*:
* 2.3.0
* 2.2.0
* 2.1.0
* 2.0.0 (EOL)
*Mitigations*:
Remove automation secrets manually within the files located in
~/var/check_mk/wato/log.
*Vulnerability Management*:
- We have rated the issue with a CVSS Score of <2.7 (Low)> with the following
? - -
+ We have rated the issue with a CVSS Score of 2.7 (Low) with the following
CVSS vector: `CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N` and assigned CVE
`CVE-2024-28830`.
Werk 16667 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# discovery: fix writing of autochecks file for nodes of cluster and aggregation of service labels on clusters
key | value
---------- | ---
date | 2024-04-25T12:29:22+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
For clustered services, the nodes' autocheck files would be written with the
aggregated clustered service information. This is incorrect because individual
nodes should have their own service information.
This could cause non-clustered services to be overridden when a clustered
service was discovered, and discovering non-clustered services could also
override clustered services.
This is now fixed and the individual node information is written to the
autochecks file.
This means that when applying changes to clustered services, non-clustered
services will no longer be affected and vice versa.
As part of this effort, the aggregation for service labels on clusters has also
been fixed. This means that instead of using the labels of the first node, the
aggregated labels of all nodes are being used.
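To illustrate the fixed aggregation, a minimal sketch with illustrative names (not Checkmk's actual implementation; the conflict-resolution order shown is an assumption): the cluster's service labels are now merged from all nodes instead of copied from the first node.
```
def aggregate_service_labels(node_labels: list[dict[str, str]]) -> dict[str, str]:
    """Merge the service labels discovered on every node of a cluster."""
    merged: dict[str, str] = {}
    for labels in node_labels:
        merged.update(labels)  # assumption: a later node wins on conflicts
    return merged

# Previously only the first node's labels ({"role": "primary"}) were used:
print(aggregate_service_labels([{"role": "primary"}, {"site": "b"}]))
# {'role': 'primary', 'site': 'b'}
```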
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
- # discovery: fix writing of autochecks file for nodes
+ # discovery: fix writing of autochecks file for nodes of cluster and aggregation of service labels on clusters
key | value
---------- | ---
date | 2024-04-25T12:29:22+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
- For clustered services the nodes' autocheck files would be written
+ For clustered services, the nodes' autocheck files would be written with the
? + +++++++++
- with the aggregated clustered service information.
+ aggregated clustered service information. This is incorrect because individual
+ nodes should have their own service information.
- Now, at least for the autodiscovery, this is fixed and the individual
- node information is written.
+ This could cause non-clustered services to be overridden when a clustered
+ service was discovered, and discovering non-clustered services could also
+ override clustered services.
+ This is now fixed and the individual node information is written to the
+ autochecks file.
+ This means that when applying changes to clustered services, non-clustered
+ services will no longer be affected and vice versa.
+
+ As part of this effort, the aggregation for service labels on clusters has also
+ been fixed. This means that instead of using the labels of the first node, the
+ aggregated labels of all nodes are being used.
+
[//]: # (werk v2)
# Graphs with legend in dashboards: Avoid crash if dashlet is too short to contain graph
key | value
---------- | ---
date | 2024-06-27T11:01:47+00:00
version | 2.4.0b1
class | fix
edition | cre
component | multisite
level | 1
compatible | yes
Graph dashlets with an activated legend crashed if the dashlet was too short to contain the graph. As
of this werk, the UI instead displays a helpful error message in such cases, hinting the user to
either increase the dashlet height or disable the legend.
Werk 16434 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# Synthetic Monitoring: Privilege Escalation
key | value
---------- | ---
date | 2024-06-24T14:56:31+00:00
version | 2.4.0b1
class | security
edition | cee
component | agents
level | 1
compatible | yes
The Robotmk scheduler was affected by a privilege escalation issue. This issue affects users who
have configured the rule `Robotmk scheduler (Windows)`. Specifically, an attacker is able to exploit
the issue if
1. `Automated environment setup (via RCC)` was configured in the `Robotmk scheduler (Windows)` rule,
2. the same plan was configured without configuring `Execute plan as a specific user`,
3. and a user on the host onto which the scheduler has been deployed was compromised.
In this event, the attacker could gain SYSTEM privileges on the host. If `Execute plan as a specific
user` _is_ configured, then the attacker could compromise that specific user, rather than SYSTEM.
There is a second, similar but distinct issue. If
- there are two or more plans configured with `Execute plan as a specific user` with distinct users,
- and one of the configured users was already compromised,
then the attacker could compromise the other user.
*Background*:
The Robotmk scheduler is started by the Checkmk agent, which runs with SYSTEM privileges.
Moreover, Robotmk allows the user to automatically build Python environments via RCC. During setup,
the scheduler would enable an RCC feature called `shared holotree usage`. This feature allows all
users on the host to edit these Python environments. Thus, any compromised user on the host is also
able to compromise a user who executes code from these shared environments.
With this Werk, `shared holotree usage` will no longer be enabled. Affected users will have their
access to the vulnerable Python environments revoked. Moreover, the permissions inside the working
directory of Robotmk have been reworked. Users now only have access to the directories that are
required for their own executions.
Note that you must both update Checkmk and redeploy the latest Robotmk scheduler.
*Affected Versions*:
* 2.3.0
*Mitigations*:
If updating is not possible:
* Do not use the option `Automated environment setup (via RCC)`.
* Always use the same user for `Execute plan as a specific user`.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 7.8 (High) with the following CVSS vector:
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H` and requested a CVE.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# Synthetic Monitoring: Privilege Escalation
key | value
---------- | ---
date | 2024-06-24T14:56:31+00:00
version | 2.4.0b1
class | security
edition | cee
component | agents
level | 1
compatible | yes
The Robotmk scheduler was affected by a privilege escalation issue. This issue affects users who
have configured the rule `Robotmk scheduler (Windows)`. Specifically, an attacker is able to exploit
the issue if
1. `Automated environment setup (via RCC)` was configured in the `Robotmk scheduler (Windows)` rule,
2. the same plan was configured without configuring `Execute plan as a specific user`,
3. and a user on the host onto which the scheduler has been deployed was compromised.
In this event, the attacker could gain SYSTEM privileges on the host. If `Execute plan as a specific
user` _is_ configured, then the attacker could compromise that specific user, rather than SYSTEM.
There is a second, similar but distinct issue. If
- there are two or more plans configured with `Execute plan as a specific user` with distinct users,
- and one of the configured users was already compromised,
then the attacker could compromise the other user.
*Background*:
The Robotmk scheduler is started by the Checkmk agent, which runs with SYSTEM privileges.
Moreover, Robotmk allows the user to automatically build Python environments via RCC. During setup,
the scheduler would enable an RCC feature called `shared holotree usage`. This feature allows all
users on the host to edit these Python environments. Thus, any compromised user on the host is also
able to compromise a user who executes code from these shared environments.
With this Werk, `shared holotree usage` will no longer be enabled. Affected users will have their
access to the vulnerable Python environments revoked. Moreover, the permissions inside the working
directory of Robotmk have been reworked. Users now only have access to the directories that are
required for their own executions.
Note that you must both update Checkmk and redeploy the latest Robotmk scheduler.
*Affected Versions*:
* 2.3.0
*Mitigations*:
If updating is not possible:
+
* Do not use the option `Automated environment setup (via RCC)`.
* Always use the same user for `Execute plan as a specific user`.
*Vulnerability Management*:
We have rated the issue with a CVSS Score of 7.8 (High) with the following CVSS vector:
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H` and requested a CVE.
[//]: # (werk v2)
# Enable several host actions no matter the tree depth of existing hosts
key | value
---------- | ---
date | 2024-06-24T11:19:49+00:00
version | 2.4.0b1
class | fix
edition | cre
component | multisite
level | 1
compatible | yes
This Werk fixes the previous [Werk #16638](https://checkmk.com/werk/16638).
For the host actions "Run bulk service discovery", "Rename multiple hosts" and "Detect network parent hosts", only hosts in the current folder and in its first-level subfolders were taken into account.
This is fixed to the expected recursive behavior: if a host exists in the current folder or any of its subfolders, no matter their tree depth, the host actions are enabled.
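The fixed behavior boils down to a recursive check over the folder tree, sketched below with hypothetical names (not the actual Checkmk code):
```
class Folder:
    """Hypothetical stand-in for a Setup folder with hosts and subfolders."""

    def __init__(self, hosts: list[str], subfolders: list["Folder"] | None = None) -> None:
        self.hosts = hosts
        self.subfolders = subfolders or []

def has_hosts_recursively(folder: Folder) -> bool:
    # Previously only the folder itself and its first-level subfolders were
    # considered; now any host anywhere in the subtree enables the actions.
    return bool(folder.hosts) or any(
        has_hosts_recursively(sub) for sub in folder.subfolders
    )

leaf = Folder(hosts=["myhost"])
root = Folder(hosts=[], subfolders=[Folder(hosts=[], subfolders=[leaf])])
print(has_hosts_recursively(root))  # True, despite a tree depth of two
```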
[//]: # (werk v2)
# global_settings: enable 'Hide Checkmk version' per default
key | value
---------- | ---
date | 2024-06-26T09:01:06+00:00
version | 2.4.0b1
class | security
edition | cre
component | wato
level | 1
compatible | yes
Displaying the version number on the login screen is generally regarded
as a security risk because it can enable attackers to identify potential
vulnerabilities associated with that specific version. Consequently, we
have changed the default setting to hide the version number. Users who wish
to view the version number can manually enable this option through the
Global Settings. It should be highlighted that users who have previously set
this option to show the version will not be affected by this change.
To aid automated scanning we assign a CVSS score of 0.0 (None) (`CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:N`).
[//]: # (werk v2)
# TrippLite UPS: discover devices with .1.3.6.1.4.1.850.1 as sysObjectID
key | value
---------- | ---
date | 2024-06-26T16:13:34+00:00
version | 2.4.0b1
class | feature
edition | cre
component | checks
level | 1
compatible | yes
TrippLite UPSs use the OID .1.3.6.1.4.1.850.1 as their sysObjectID.
These devices were previously neither discovered nor monitored.
This has now been changed, and they will be discovered.
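In the Checkmk plugin API, such a detection condition typically compares the device's sysObjectID (OID .1.3.6.1.2.1.1.2.0) against the vendor prefix. A sketch of what this may look like (illustrative; the shipped plugin code may differ):
```
from cmk.agent_based.v2 import startswith

# Match any device whose sysObjectID lies below the TrippLite enterprise OID.
DETECT_TRIPPLITE = startswith(".1.3.6.1.2.1.1.2.0", ".1.3.6.1.4.1.850.1")
```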