Werk 16868 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# azure: Fetch metrics in bulk
key | value
---------- | ---
date | 2024-09-06T15:03:25+00:00
version | 2.3.0p16
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Until now, the Azure agent fetched metrics for each resource individually.
This resulted in a large number of requests to the Azure API.
After the changes to the Azure API rate limits, this started causing agent timeouts
in large environments.
Now, the Azure agent fetches metrics in bulk per resource type and region,
which significantly reduces the number of requests.
Since metrics are no longer fetched per resource, parallel execution has also been
removed, together with the `Force agent to run in single thread` option.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# azure: Fetch metrics in bulk
key | value
---------- | ---
date | 2024-09-06T15:03:25+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Until now, the Azure agent fetched metrics for each resource individually.
This resulted in a large number of requests to the Azure API.
After the changes to the Azure API rate limits, this started causing agent timeouts
in large environments.
Now, the Azure agent fetches metrics in bulk per resource type and region,
which significantly reduces the number of requests.
Since metrics are no longer fetched per resource, parallel execution has also been
removed, together with the `Force agent to run in single thread` option.
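For illustration, the change boils down to grouping resources by (resource type, region) and issuing one bulk metrics request per group instead of one request per resource. The sketch below is not the agent's actual code; the resource dictionaries and the `fetch_metrics_batch` helper are hypothetical placeholders standing in for the call to Azure's regional metrics batch endpoint.

```python
# Illustrative sketch only, not the Checkmk agent implementation.
# "resources" are assumed to be dicts with "id", "type" and "location" keys,
# roughly mirroring what the Azure resource listing returns.
from collections import defaultdict
from typing import Iterable, Mapping, Sequence


def group_by_type_and_region(
    resources: Iterable[Mapping[str, str]],
) -> dict[tuple[str, str], list[str]]:
    """Group resource IDs by (resource type, region)."""
    groups: dict[tuple[str, str], list[str]] = defaultdict(list)
    for resource in resources:
        groups[(resource["type"], resource["location"])].append(resource["id"])
    return dict(groups)


def fetch_metrics_batch(
    resource_type: str, region: str, resource_ids: Sequence[str]
) -> dict:
    """Hypothetical placeholder for one bulk request to the Azure metrics API.

    With the old behaviour this would have been one HTTP request per resource;
    now it is one request per (type, region) group.
    """
    return {"type": resource_type, "region": region, "resources": list(resource_ids)}


def collect_all_metrics(resources: Iterable[Mapping[str, str]]) -> list[dict]:
    return [
        fetch_metrics_batch(resource_type, region, ids)
        for (resource_type, region), ids in group_by_type_and_region(resources).items()
    ]
```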
Werk 16869 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# azure: Don't fetch vNet gateway peerings from another subscription
key | value
---------- | ---
date | 2024-09-06T15:15:56+00:00
version | 2.3.0p16
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, the Azure agent would crash when monitoring vNet gateways that have peerings
from a different subscription.
The agent didn't report the crash, and as a consequence the affected vNet gateway
wasn't monitored.
Now, the agent does not monitor vNet gateway peerings from another subscription.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# azure: Don't fetch vNet gateway peerings from another subscription
key | value
---------- | ---
date | 2024-09-06T15:15:56+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, the Azure agent would crash when monitoring vNet gateways that have peerings
from a different subscription.
The agent didn't report the crash, and as a consequence the affected vNet gateway
wasn't monitored.
Now, the agent does not monitor vNet gateway peerings from another subscription.
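As a rough sketch of the filtering logic: Azure resource IDs start with `/subscriptions/<subscription-id>/...`, so peerings whose remote vNet lives in another subscription can be recognized and skipped. The peering dictionary layout below (`properties.remoteVirtualNetwork.id`) is an assumption for illustration, not the agent's actual data structure.

```python
# Minimal sketch, not the agent's actual code. The peering dict shape
# (properties.remoteVirtualNetwork.id) is assumed for illustration.

def subscription_of(resource_id: str) -> str:
    """Extract the subscription ID from an Azure resource ID of the form
    /subscriptions/<id>/resourceGroups/<group>/providers/..."""
    parts = resource_id.strip("/").split("/")
    return parts[parts.index("subscriptions") + 1]


def drop_foreign_peerings(peerings: list[dict], monitored_subscription: str) -> list[dict]:
    """Keep only peerings whose remote vNet belongs to the monitored subscription."""
    return [
        peering
        for peering in peerings
        if subscription_of(peering["properties"]["remoteVirtualNetwork"]["id"])
        == monitored_subscription
    ]
```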
Werk 17033 was adapted. The following is the new Werk; a diff is shown at the end of the message.
[//]: # (werk v2)
# if_fortigate: Show admin state in summary
key | value
---------- | ---
date | 2024-09-05T13:44:50+00:00
version | 2.3.0p16
class | feature
edition | cre
component | checks
level | 1
compatible | yes
With this werk, the admin state of the devices is shown in the summary.
------------------------------------<diff>-------------------------------------------
[//]: # (werk v2)
# if_fortigate: Show admin state in summary
key | value
---------- | ---
date | 2024-09-05T13:44:50+00:00
- version | 2.3.0p15
? ^
+ version | 2.3.0p16
? ^
class | feature
edition | cre
component | checks
level | 1
compatible | yes
With this werk, the admin state of the devices is shown in the summary.
[//]: # (werk v2)
# Fixed site matching for expected regular event console messages
key | value
---------- | ---
date | 2024-09-11T10:57:26+00:00
version | 2.4.0b1
class | fix
edition | cre
component | ec
level | 1
compatible | yes
Due to a regression in 2.2.0, the "Match site" option had no effect for
expected regular messages, i.e. it was effectively ignored in that case.
This has been fixed.
[//]: # (werk v2)
# Reduce API requests during gcp list-assets
key | value
---------- | ---
date | 2024-09-10T12:07:04+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
This werk is relevant to you if you've been monitoring GCP assets and have experienced exceeded quotas towards the Google API.
The special agent now only acquires data that is actually processed by the check plugin, which reduces the number of requests towards GCP.
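For context, the Cloud Asset API allows restricting a `ListAssets` call to specific asset types, which is the general mechanism for fetching only the data a consumer actually evaluates. The snippet below is a generic example of that mechanism; the project name and asset types are placeholders and do not reflect the exact types the special agent requests.

```python
# Generic example of limiting a Cloud Asset API listing to selected asset types.
# Project and asset types are placeholders, not the special agent's actual values.
from google.cloud import asset_v1


def list_selected_assets(project_id: str, asset_types: list[str]):
    client = asset_v1.AssetServiceClient()
    return client.list_assets(
        request={
            "parent": f"projects/{project_id}",
            "asset_types": asset_types,  # e.g. ["compute.googleapis.com/Instance"]
        }
    )


# Only asset types that are actually processed downstream are requested,
# instead of listing every asset in the project.
assets = list_selected_assets("my-project", ["compute.googleapis.com/Instance"])
```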
[//]: # (werk v2)
# azure: Don't fetch vNet gateway peerings from another subscription
key | value
---------- | ---
date | 2024-09-06T15:15:56+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Previously, the Azure agent would crash when monitoring vNet gateways that have peerings
from a different subscription.
The agent didn't report the crash, and as a consequence the affected vNet gateway
wasn't monitored.
Now, the agent does not monitor vNet gateway peerings from another subscription.
[//]: # (werk v2)
# azure: Fetch metrics in bulk
key | value
---------- | ---
date | 2024-09-06T15:03:25+00:00
version | 2.4.0b1
class | fix
edition | cre
component | checks
level | 1
compatible | yes
Until now, the Azure agent fetched metrics for each resource individually.
This resulted in a large number of requests to the Azure API.
After the changes to the Azure API rate limits, this started causing agent timeouts
in large environments.
Now, the Azure agent fetches metrics in bulk per resource type and region,
which significantly reduces the number of requests.
Since metrics are no longer fetched per resource, parallel execution has also been
removed, together with the `Force agent to run in single thread` option.
[//]: # (werk v2)
# Fixed missing hosts on remote sites
key | value
---------- | ---
date | 2024-09-09T07:25:03+00:00
version | 2.3.0p15
class | fix
edition | cme
component | wato
level | 1
compatible | yes
If a folder belonging to a customer contained hosts from several sites of the same customer, there was a risk that these hosts were not monitored at all.