[//]: # (werk v2)
# aws: Fix CloudWatch alarms fetching
key | value
---------- | ---
compatible | yes
version | 2.3.0b1
date | 2024-02-21T13:16:55+00:00
level | 1
class | fix
component | checks
edition | cre
CloudWatch alarms were not fetched properly in environments with a large
number of alarms, resulting in missing alarms in the 'AWS/CloudWatch Alarms' service.
[//]: # (werk v2)
# aws: Inventorization of EC2 and ELB tags as host labels
key | value
---------- | ---
date | 2024-02-21T11:56:07+00:00
version | 2.3.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
There was a problem during service discovery which prevented Checkmk
from assigning the AWS EC2 and ELB tags delivered by the AWS agent to
their respective piggyback hosts. This werk fixes the discovery process
such that the data is parsed properly and custom tags from AWS will
now show up as host labels on the created piggyback hosts.
[//]: # (werk v2)
# Checkmk Linux agent: ignore \*.dpkg-tmp files in plugin folder
key | value
---------- | ---
date | 2024-02-20T21:25:44+00:00
version | 2.3.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
The Checkmk agents for Linux, Solaris, AIX, OpenWrt and FreeBSD now ignore \*.dpkg-tmp files in the plugins folder.
Previously, these files were inadvertently executed as plugins.
In most cases this failed silently (or even succeeded), but it was sometimes reported by the "Check_MK Agent" service.
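The filtering logic can be sketched as follows. This is a minimal illustration in Python; the actual Linux agent is a shell script, and the helper name is hypothetical.

```python
import fnmatch

def plugin_candidates(filenames):
    """Return plugin files to execute, skipping dpkg's temporary
    *.dpkg-tmp files, which are left behind during package upgrades."""
    return [f for f in filenames if not fnmatch.fnmatch(f, "*.dpkg-tmp")]

# A *.dpkg-tmp leftover next to a real plugin is now ignored:
print(plugin_candidates(["mk_redis", "mk_redis.dpkg-tmp"]))  # ['mk_redis']
```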
[//]: # (werk v2)
# mk_redis: Fix for Werk #16329
key | value
---------- | ---
date | 2024-02-21T10:40:17+00:00
version | 2.3.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Werk #16329 introduced a regression: when a password was set, the plugin did not work.
This has now been fixed, and configuring a password no longer causes any issues.
[//]: # (werk v2)
# downtimes: Added service_description field to services downtimes
key | value
---------- | ---
date | 2024-02-20T14:52:12+00:00
version | 2.3.0b1
class | feature
edition | cre
component | rest-api
level | 1
compatible | yes
When querying downtimes through the "show all downtimes" endpoint, the service_description field was missing from service downtimes. This werk introduces the field; it is not present in host downtimes.
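Reading the new field from the collection response can be sketched like this. The payload shape follows the REST API's collection format with per-object "extensions"; the field values and the helper name are illustrative.

```python
def service_descriptions(downtimes):
    """Map downtime id -> service_description; host downtimes,
    which lack the field, map to None."""
    return {
        d["id"]: d["extensions"].get("service_description")
        for d in downtimes["value"]
    }

# Example payload shaped like the "show all downtimes" response:
payload = {
    "value": [
        {"id": "1", "extensions": {"host_name": "web01",
                                   "service_description": "CPU load"}},
        {"id": "2", "extensions": {"host_name": "web01"}},  # host downtime
    ]
}
print(service_descriptions(payload))  # {'1': 'CPU load', '2': None}
```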
[//]: # (werk v2)
# aws: Add total reservation utilization service
key | value
---------- | ---
date | 2024-02-14T09:35:02+00:00
version | 2.3.0b1
class | feature
edition | cre
component | checks
level | 1
compatible | yes
This werk adds a service to monitor the total utilization of
reserved resources analogous to the reservation utilization graph
in the AWS cost explorer.
This service is discovered as soon as the AWS agent rule to monitor
costs and usage (CE) is enabled.
[//]: # (werk v2)
# netapp_ontap_snapvault: improves lagtime calculation
key | value
---------- | ---
date | 2024-02-16T09:39:38+00:00
version | 2.3.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
With this new calculation, since there is no reference for when the lag time
started or ended, a month is always assumed to consist of 30 days.
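The 30-day assumption can be made concrete with a short sketch. The function name and the direction of the conversion are illustrative; the point is that a "month" is treated as exactly 30 days when no start or end reference exists.

```python
SECONDS_PER_DAY = 86400
DAYS_PER_MONTH = 30  # fixed assumption used by the new calculation

def lagtime_months_to_seconds(months):
    """Convert a lag time given in months to seconds, assuming
    every month has exactly 30 days."""
    return months * DAYS_PER_MONTH * SECONDS_PER_DAY

print(lagtime_months_to_seconds(1))  # 2592000
```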