Title: ups_*: support for NetVision OIDs
Class: fix
Compatible: compat
Component: checks
Date: 1725978045
Edition: cre
Level: 1
Version: 2.2.0p34
Newer firmware for NetVision cards was not supported due to changed SNMP OIDs for UPS entries.
This change adds `.1.3.6.1.4.1.4555.1.1.7` and `.1.3.6.1.4.1.42610.1.4.4` to the detection lists.
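The detection idea can be sketched as a prefix match on the device's sysObjectID. This is an illustrative sketch, not Checkmk's actual detection code; the first, "older firmware" prefix is an assumption for illustration, while the two new OIDs are taken from this werk:

```python
# Known NetVision enterprise OID prefixes. The first entry is a
# hypothetical "older firmware" prefix; the last two are the ones
# added by this change.
NETVISION_OID_PREFIXES = (
    ".1.3.6.1.4.1.4555.1.1.1",   # assumed: older NetVision firmware
    ".1.3.6.1.4.1.4555.1.1.7",   # added: newer NetVision firmware
    ".1.3.6.1.4.1.42610.1.4.4",  # added: newer NetVision firmware
)


def detects_netvision(sys_object_id: str) -> bool:
    """Return True if the scanned sysObjectID matches a known prefix."""
    return any(
        sys_object_id.startswith(prefix) for prefix in NETVISION_OID_PREFIXES
    )
```

With the new prefixes in the list, a device reporting e.g. `.1.3.6.1.4.1.42610.1.4.4.1` is detected again.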
Title: Ignore unknown "Disabled checks" during update config
Class: fix
Compatible: compat
Component: checks
Date: 1713961530
Edition: cre
Level: 1
Version: 2.2.0p34
If users had disabled checks that have since been removed or are temporarily unavailable (due to disabled MKPs for instance), they would be prompted with a message like
C+:
WARNING: Invalid rule configuration detected (Ruleset: ignored_checks, Title: Disabled checks, Folder: ,
-| Rule nr: 1, Exception: ifoperstatus is not an allowed value)
C-:
These invalid values are now ignored.
They do no harm and are dropped when the rule is edited.
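The new behaviour amounts to filtering unknown names out of the rule value instead of rejecting the whole rule. A minimal sketch, with function and parameter names assumed (not Checkmk's actual implementation):

```python
def drop_unknown_checks(rule_value: list[str], known_checks: set[str]) -> list[str]:
    """Keep only check plugin names that are currently known.

    Names that were removed, or are temporarily unavailable because
    their MKP is disabled, are silently dropped instead of raising a
    validation error during update config.
    """
    return [name for name in rule_value if name in known_checks]
```

For example, a rule value `["ifoperstatus", "cpu_loads"]` with only `cpu_loads` known would be reduced to `["cpu_loads"]` rather than aborting with an "is not an allowed value" exception.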
Title: mk_jolokia: Add compatibility for / in MBeans
Class: fix
Compatible: compat
Component: checks
Date: 1711037404
Edition: cre
Level: 1
Version: 2.2.0p34
Previously it was not possible to select an MBean whose name contains a path separator. This Werk adds support for the Jolokia path separator <code>!/</code>.
An example is shown in the following fragment of the jolokia.cfg file:
C+:
...
custom_vars = [('Catalina:J2EEApplication=none,J2EEServer=none,WebModule=*localhost!/docs,j2eeType=Servlet,name=default','requestCount','myspecialmetric',[],False,'number')]
...
C-:
This will match the entry <code>myinstance,Catalina:J2EEApplication=none,J2EEServer=none,WebModule=//localhost/docs,j2eeType=Servlet,name=defaultmyspecialmetric.requestCount0number</code>
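The Jolokia protocol uses <code>!</code> as its escape character in GET URL paths: a literal <code>!</code> is written as <code>!!</code> and a literal <code>/</code> inside a path segment as <code>!/</code>. A small helper illustrating that escaping (a sketch, not the plugin's actual code):

```python
def escape_jolokia_part(part: str) -> str:
    """Escape a path segment for use in a Jolokia GET URL path.

    '!' is the Jolokia escape character, so it must be doubled first;
    a literal '/' then becomes '!/' so it is not treated as a
    path separator.
    """
    return part.replace("!", "!!").replace("/", "!/")
```

For example, the WebModule value <code>localhost/docs</code> is written as <code>localhost!/docs</code> in the configuration, as shown in the fragment above.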
[//]: # (werk v2)
# Fixed site matching for expected regular event console messages
key | value
---------- | ---
date | 2024-09-11T10:57:26+00:00
version | 2.3.0p16
class | fix
edition | cre
component | ec
level | 1
compatible | yes
Due to a regression in 2.2.0, the "Match site" option had no effect for
expected regular messages, i.e. it was effectively ignored in that case.
This has been fixed.
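The repaired behaviour boils down to a site predicate on the rule. A hypothetical sketch with assumed key names (the Event Console's real data model is not shown in this werk):

```python
def rule_applies_on_site(rule: dict, site_id: str) -> bool:
    """Return True if an expected-message rule applies on this site.

    An absent "match_site" entry (None) means the rule applies on
    every site; otherwise the rule only fires for the listed sites.
    """
    match_site = rule.get("match_site")
    return match_site is None or site_id in match_site
```

The regression meant this predicate was effectively never consulted for expected regular messages, so rules fired regardless of the configured sites.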
[//]: # (werk v2)
# WATO correctly configures custom instances for MS SQL Server plugin
key | value
---------- | ---
date | 2024-07-19T11:09:57+00:00
version | 2.3.0p16
class | fix
edition | cee
component | checks
level | 2
compatible | yes
Previously, when configuring custom instances, WATO used the wrong key names:
`conn` instead of the correct `connection` and `auth` instead of
the correct `authentication`.
With this release the problem has been fixed.
If you are using custom instances, you need to bake and deploy a new
agent package.
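The fix is essentially a key rename in the generated plugin configuration. An illustrative sketch (the key names come from the werk; the surrounding structure is assumed):

```python
# Mapping from the previously mis-named WATO keys to the keys the
# MS SQL Server plugin actually expects.
_KEY_FIXES = {"conn": "connection", "auth": "authentication"}


def fix_instance_keys(instance: dict) -> dict:
    """Rename mis-spelled instance keys, leaving all others untouched."""
    return {_KEY_FIXES.get(key, key): value for key, value in instance.items()}
```

For example, an instance entry written with `conn` and `auth` comes out with `connection` and `authentication`, which is why a newly baked agent package is required for the corrected names to reach the host.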
Werk 14236 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# Fixed missing hosts on remote sites
key | value
---------- | ---
date | 2024-09-09T07:25:03+00:00
version | 2.3.0p16
class | fix
edition | cme
component | wato
level | 1
compatible | yes
If a folder belonging to a customer contained hosts from several sites of the same customer, there was a risk that these hosts were not monitored at all.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# Fixed missing hosts on remote sites
key | value
---------- | ---
date | 2024-09-09T07:25:03+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | fix
edition | cme
component | wato
level | 1
compatible | yes
If a folder belonging to a customer contained hosts from several sites of the same customer, there was a risk that these hosts were not monitored at all.
Werk 16868 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# azure: Fetch metrics in bulk
key | value
---------- | ---
date | 2024-09-06T15:03:25+00:00
version | 2.3.0p16
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Until now, the Azure agent fetched metrics for each resource individually.
This resulted in many requests to the Azure API.
After changes to the Azure API rate limits, this started causing agent timeouts in
large environments.
Now, the Azure agent fetches metrics in bulk per resource type and region.
This way the number of requests is significantly reduced.
Since metrics are no longer fetched per resource, parallel execution is also
removed together with the `Force agent to run in single thread` option.
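The batching idea can be sketched as grouping resources by their type and region before requesting metrics, so one API call covers a whole group. A minimal sketch with an assumed data model (plain dicts, not the agent's real resource objects):

```python
from collections import defaultdict


def group_resources(resources: list[dict]) -> dict[tuple[str, str], list[str]]:
    """Map (resource_type, region) to the resource IDs belonging to it.

    One metrics request per group replaces one request per resource,
    shrinking the number of API calls from len(resources) to
    len(groups).
    """
    groups: dict[tuple[str, str], list[str]] = defaultdict(list)
    for resource in resources:
        groups[(resource["type"], resource["location"])].append(resource["id"])
    return dict(groups)
```

For example, two virtual machines in the same region collapse into a single group, and therefore a single bulk metrics request.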
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# azure: Fetch metrics in bulk
key | value
---------- | ---
date | 2024-09-06T15:03:25+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Until now, the Azure agent fetched metrics for each resource individually.
This resulted in many requests to the Azure API.
After changes to the Azure API rate limits, this started causing agent timeouts in
large environments.
Now, the Azure agent fetches metrics in bulk per resource type and region.
This way the number of requests is significantly reduced.
Since metrics are no longer fetched per resource, parallel execution is also
removed together with the `Force agent to run in single thread` option.
Werk 16869 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# azure: Don't fetch vNet gateway peerings from another subscription
key | value
---------- | ---
date | 2024-09-06T15:15:56+00:00
version | 2.3.0p16
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, the Azure agent crashed when monitoring vNet gateways that have peerings
from a different subscription.
The agent did not report the crash, so the affected vNet gateway was silently
left unmonitored.
Now, the agent does not monitor vNet gateway peerings from another subscription.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# azure: Don't fetch vNet gateway peerings from another subscription
key | value
---------- | ---
date | 2024-09-06T15:15:56+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, the Azure agent crashed when monitoring vNet gateways that have peerings
from a different subscription.
The agent did not report the crash, so the affected vNet gateway was silently
left unmonitored.
Now, the agent does not monitor vNet gateway peerings from another subscription.
Werk 17033 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# if_fortigate: Show admin state in summary
key | value
---------- | ---
date | 2024-09-05T13:44:50+00:00
version | 2.3.0p16
class | feature
edition | cre
component | checks
level | 1
compatible | yes
With this werk the admin state of the devices will be shown in the summary.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# if_fortigate: Show admin state in summary
key | value
---------- | ---
date | 2024-09-05T13:44:50+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | feature
edition | cre
component | checks
level | 1
compatible | yes
With this werk the admin state of the devices will be shown in the summary.