[//]: # (werk v2)
# Nutanix agent: resolve verify error when environment variable REQUESTS_CA_BUNDLE is set
key | value
---------- | ---
date | 2024-06-08T22:17:32+00:00
version | 2.3.0p6
class | fix
edition | cee
component | checks
level | 1
compatible | yes
Due to a specific behaviour of the requests Python library, the option to
disable SSL verification does not work when the REQUESTS_CA_BUNDLE environment
variable is set. This also affected the Nutanix agent. This werk resolves the
error so that the no-cert-check flag is always respected. In addition, this
werk improves the error message shown when the agent fails to query the Nutanix API.
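For illustration, a minimal sketch of the underlying requests behaviour (the
URL and bundle path are made up for this example):

```python
import os

import requests

# Made-up bundle path and URL, for illustration only.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"

session = requests.Session()
session.verify = False  # intent: disable certificate verification

# merge_environment_settings() is applied by requests before sending.
# Without a per-request `verify`, the environment bundle takes
# precedence over the session-level False, so verification stays on:
settings = session.merge_environment_settings(
    "https://prism.example.com:9440", {}, None, None, None
)
print(settings["verify"])  # /etc/ssl/certs/ca-certificates.crt

# An explicit per-request verify=False is respected:
settings = session.merge_environment_settings(
    "https://prism.example.com:9440", {}, None, False, None
)
print(settings["verify"])  # False
```

Passing `verify=False` with each request (or setting `session.trust_env =
False`) sidesteps the override.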
[//]: # (werk v2)
# Add ID to some sections on "Edit role" page
key | value
---------- | ---
compatible | yes
version | 2.4.0b1
date | 2024-06-10T10:28:09+00:00
level | 1
class | feature
component | multisite
edition | cre
Built-in views already showed the ID of the view. This has now been added for
custom views and for other sections such as dashboards, topics, and graph collections.
[//]: # (werk v2)
# sql: Allow macros in 'Database user' field
key | value
---------- | ---
date | 2024-06-10T08:27:19+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
With version 2.3, the use of macros in the `Database user` field of
the `Check SQL database` ruleset was disallowed. With this werk, it is allowed again.
[//]: # (werk v2)
# NetApp via WebAPI: remove deprecated agent and plugin
key | value
---------- | ---
date | 2024-06-07T08:39:33+00:00
version | 2.4.0b1
class | feature
edition | cre
component | checks
level | 2
compatible | no
This werk impacts users who monitor a NetApp environment with the deprecated special agent "NetApp via WebAPI" (agent_netapp).
As of Checkmk version 2.4.0, the agent "NetApp via WebAPI" and its associated checks and inventory plugins have been removed.
Please configure the new special agent using the "NetApp via Ontap REST API" ruleset and perform a re-discovery.
The following plugins are no longer available:
- NetApp Filer: Used Space of Aggregations (_netapp_api_aggr_)
- NetApp Ontap Filer: 7Mode Cluster Status (_netapp_api_cluster_)
- NetApp API Connection (_netapp_api_connection_)
- NetApp Filer: Cluster-Mode CPU Utilization (_netapp_api_cpu_)
- NetApp Clustermode Filer: NVRAM Battery (_netapp_api_cpu_nvram_bat_)
- NetApp Filer: 7Mode Global CPU Utilization (_netapp_api_cpu_utilization_)
- NetApp Filer: Disk Summary (_netapp_api_disk_summary_)
- NetApp Filer Clustermode: PSU Fault Info (_netapp_api_environment_)
- NetApp Filer Clustermode: System Electrical Current (_netapp_api_environment_current_)
- NetApp Filer Clustermode: Fan Fault Info (_netapp_api_environment_fan_faults_)
- NetApp Filer Clustermode: System Fan Speed (_netapp_api_environment_fans_)
- NetApp Filer Clustermode: System Temperature (_netapp_api_environment_temperature_)
- NetApp Filer Clustermode: System Electrical Voltage (_netapp_api_environment_voltage_)
- NetApp Filer: FANs (_netapp_api_fan_)
- NetApp Filer: FANs Summary (_netapp_api_fan_summary_)
- NetApp Cluster-Mode: State of Fibrechannel Interfaces (_netapp_api_fcp_)
- NetApp Filer: State of Network Interfaces (_netapp_api_if_)
- NetApp Filer: Version Info (_netapp_api_info_)
- NetApp Filer: Used Space of LUNs (_netapp_api_luns_)
- NetApp Filer: Ports (_netapp_api_ports_)
- NetApp Filer 7Mode: Protocols (_netapp_api_protocol_)
- NetApp Filer: Power Supplies (_netapp_api_psu_)
- NetApp Filer: Power Supplies Summary (_netapp_api_psu_summary_)
- NetApp Filer: Used Space of qtrees in Volumes (_netapp_api_qtree_quota_)
- NetApp Filer: Used Space in Snapshots of Volumes (_netapp_api_snapshots_)
- NetApp Filer: Snapvault/Snapmirror Lag-time (_netapp_api_snapvault_)
- NetApp Filer: Overall System Health (_netapp_api_status_)
- NetApp Filer: Systemtime (_netapp_api_systemtime_)
- NetApp Filer: Temperature Sensors (_netapp_api_temp_)
- NetApp Filer 7Mode: vFiler CPU Utilization (_netapp_api_vf_stats_)
- NetApp Filer: vFiler Traffic (_netapp_api_vf_stats_traffic_)
- NetApp Filer: vFiler Status (_netapp_api_vf_status_)
- NetApp Filer: Used Space and Traffic of Volumes (_netapp_api_volumes_)
- NetApp Filer: vServer Status (_netapp_api_vs_status_)
- NetApp Filer: vServer Traffic Summary (_netapp_api_vs_traffic_)
[//]: # (werk v2)
# Changed format of host tag conditions in global setting 'agent_deployment_host_selection'
key | value
---------- | ---
date | 2024-06-10T08:46:08+00:00
version | 2.4.0b1
class | feature
edition | cee
component | setup
level | 1
compatible | yes
The internal data format of the Checkmk global setting "Selection of hosts to
activate agent updates for", normally configured via the Setup pages "Global
settings" or "Automatic agent updates", has been changed.
If you only use Setup to configure Checkmk, this change is not relevant for
you, since the data format is migrated automatically during the update to
2.4.0.
If you edit global.mk files manually or via script to define
the <tt>agent_deployment_host_selection</tt> configuration option, you will
likely have to adapt your scripts or at least the configuration files.
A global setting with its tag conditions in the old format looks
like this:
F+:global.mk
agent_deployment_host_selection += {"match_hosttags": ["my-host|cmk-agent|prod|lan|piggyback|no-snmp"]}
F-:
The tags that should match are given as a list, separated by pipe characters.
There is no information about the tag group a configured tag belongs to.
The new structure looks like this:
F+:global.mk
agent_deployment_host_selection += [
    {
        "match_hosttags": {
            "agent": "cmk-agent",
            "criticality": "prod",
            "networking": "lan",
            "piggyback": "piggyback",
            "snmp_ds": "no-snmp",
        },
    }
]
F-:
In the <tt>match_hosttags</tt> dictionary the keys are the tag groups (as
defined in Setup) and the values are the tags configured for each group.
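For scripted migrations, here is a minimal conversion sketch. The tag-to-group
mapping is an assumption of this example; in practice it has to be derived
from the tag groups defined in Setup:

```python
# Assumed for this sketch: maps each tag ID to its tag group, as
# defined in Setup under "Tag groups". Derive this from your actual
# tag configuration.
TAG_TO_GROUP = {
    "cmk-agent": "agent",
    "prod": "criticality",
    "lan": "networking",
    "piggyback": "piggyback",
    "no-snmp": "snmp_ds",
}


def convert_match_hosttags(old: str) -> dict[str, str]:
    """Turn an old pipe-separated tag condition into the new
    tag-group-to-tag dictionary, skipping entries without a known tag
    group (such as the host name leading the old string)."""
    return {
        TAG_TO_GROUP[tag]: tag
        for tag in old.split("|")
        if tag in TAG_TO_GROUP
    }


print(convert_match_hosttags("my-host|cmk-agent|prod|lan|piggyback|no-snmp"))
# {'agent': 'cmk-agent', 'criticality': 'prod', 'networking': 'lan',
#  'piggyback': 'piggyback', 'snmp_ds': 'no-snmp'}
```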
[//]: # (werk v2)
# Changed format of host tag conditions in alert_handlers.mk configuration file
key | value
---------- | ---
date | 2024-06-10T08:36:41+00:00
version | 2.4.0b1
class | feature
edition | cee
component | setup
level | 1
compatible | yes
The internal data format of the Checkmk alert handler rule definitions,
normally configured via the Setup page "Alert handlers", has been changed.
If you only use Setup to configure Checkmk, this change is not relevant for
you, since the data format is migrated automatically during the update to
2.4.0.
If you edit alert_handlers.mk files manually or via script to define
the <tt>alert_handler_rules</tt> configuration option, you will
likely have to adapt your scripts or at least the configuration files.
An alert handler rule definition with its tag conditions in the old format
looks like this:
F+:alert_handlers.mk
alert_handler_rules += [
    {
        "description": "Rule description",
        ...,
        "match_hosttags": ["my-host|cmk-agent|prod|lan|piggyback|no-snmp"],
        ...,
    }
]
F-:
The tags that should match are given as a list, separated by pipe characters.
There is no information about the tag group a configured tag belongs to.
The new structure looks like this:
F+:alert_handlers.mk
alert_handler_rules += [
    {
        "description": "Rule description",
        ...,
        "match_hosttags": {
            "agent": "cmk-agent",
            "criticality": "prod",
            "networking": "lan",
            "piggyback": "piggyback",
            "snmp_ds": "no-snmp",
        },
        ...,
    }
]
F-:
In the <tt>match_hosttags</tt> dictionary the keys are the tag groups (as
defined in Setup) and the values are the tags configured for each group.
[//]: # (werk v2)
# Changed format of host tag conditions in notifications.mk configuration file
key | value
---------- | ---
date | 2024-06-09T12:24:41+00:00
version | 2.4.0b1
class | feature
edition | cre
component | notifications
level | 1
compatible | yes
The internal data format of the Checkmk notification rule definitions,
normally configured via the Setup page "Notifications", has been changed.
If you only use Setup to configure Checkmk, this change is not relevant for
you, since the data format is migrated automatically during the update to
2.4.0.
If you edit notifications.mk files manually or via script to define
the <tt>notification_rules</tt> configuration option, you will
likely have to adapt your scripts or at least the configuration files.
A notification rule definition with its tag conditions in the old format
looks like this:
F+:notifications.mk
notification_rules += [
    {
        "description": "Rule description",
        ...,
        "match_hosttags": ["my-host|cmk-agent|prod|lan|piggyback|no-snmp"],
        ...,
    }
]
F-:
The tags that should match are given as a list, separated by pipe characters.
There is no information about the tag group a configured tag belongs to.
The new structure looks like this:
F+:notifications.mk
notification_rules += [
    {
        "description": "Rule description",
        ...,
        "match_hosttags": {
            "agent": "cmk-agent",
            "criticality": "prod",
            "networking": "lan",
            "piggyback": "piggyback",
            "snmp_ds": "no-snmp",
        },
        ...,
    }
]
F-:
In the <tt>match_hosttags</tt> dictionary the keys are the tag groups (as
defined in Setup) and the values are the tags configured for each group.
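To illustrate the semantics of the new structure, such a tag condition can be
evaluated roughly like this (function and variable names are invented for the
sketch, not the actual Checkmk code):

```python
def hosttags_match(condition: dict[str, str], host_tags: dict[str, str]) -> bool:
    """A host matches if, for every tag group in the condition, the
    host carries exactly the configured tag of that group."""
    return all(host_tags.get(group) == tag for group, tag in condition.items())


host_tags = {
    "agent": "cmk-agent",
    "criticality": "prod",
    "networking": "lan",
    "piggyback": "piggyback",
    "snmp_ds": "no-snmp",
}
print(hosttags_match({"criticality": "prod", "networking": "lan"}, host_tags))
# True
```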
[//]: # (werk v2)
# veeam_jobs: Always Monitor Result of Last Backup
key | value
---------- | ---
date | 2024-06-05T14:36:09+00:00
version | 2.4.0b1
class | fix
edition | cee
component | checks
level | 1
compatible | yes
Previously, the check plugin `veeam_jobs` would not always check the result of
the last backup job to determine the monitoring state. If the creation time
was an empty string, it would show `item not found`. Moreover, if the last
state of the job was `Starting`, `Working` or `Postprocessing`, the check
would be OK, even if the last backup had failed.
The check now shows all available information unconditionally. Moreover:
* a Success result is OK,
* a Warning result is WARN,
* a Failed result is CRIT,
* a None result is OK or UNKNOWN. There is no change in behaviour in this case.
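As an illustrative sketch of the resulting mapping (names are made up; this is
not the actual plugin code, and the `None` case is simplified):

```python
# Illustrative only -- not the actual plugin code. Checkmk states:
OK, WARN, CRIT, UNKNOWN = 0, 1, 2, 3

RESULT_TO_STATE = {
    "Success": OK,
    "Warning": WARN,
    "Failed": CRIT,
    # A "None" result stays OK or UNKNOWN depending on context;
    # simplified to OK here, matching the unchanged behaviour above.
    "None": OK,
}


def state_of_last_result(result: str) -> int:
    return RESULT_TO_STATE.get(result, UNKNOWN)
```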