Title: Avoid duplicate port allocations within the same site
Class: fix
Compatible: compat
Component: omd
Date: 1700140615
Edition: cre
Level: 1
Version: 2.3.0b1
When configuring ports via <tt>omd config</tt>, Checkmk checks whether the configured port is
already in use, both by the same site and by other sites. The check whether the port is already in
use by another process of the same site was broken, which could lead to duplicate port allocations
within a site.
Title: Fix ignored filter on export of views as PDF
Class: fix
Compatible: compat
Component: reporting
Date: 1700136679
Edition: cee
Level: 1
Version: 2.3.0b1
If filters were set manually in a view and the option "Export" -> "This view as
PDF" was used, the filters were ignored during PDF creation.
Title: logwatch_ec: tcp remote forwarding: create one spool file per service
Class: fix
Compatible: compat
Component: checks
Date: 1699863833
Edition: cre
Knowledge: doc
Level: 1
Version: 2.1.0p37
This werk affects you if you have a logwatch_ec check that forwards events to
a remote syslog host and if you activated the option "Create a separate check
for each logfile".
In this case all separate services shared one spool file. This led to the
problem that one event in the spool file was displayed as one event for each
separate service (although it was only sent out once, when the remote host was
reachable again).
Under some conditions events might have been dropped unnoticed, because the
spool file was overwritten by another logwatch service.
Now each logwatch service has its own spool file.
The spool files are automatically assigned to their logwatch service.
After all your logwatch_ec services have sent out their spool files, you may
manually check the following folder for <tt>spoolfile.*</tt> files:
<tt>./var/check_mk/logwatch_spool/<hostname></tt>
Any spool files remaining in this folder could not be assigned to a
logwatch service. If you still want them to be forwarded, move them to one of
the hash folders; otherwise they can be deleted:
<tt>./var/check_mk/logwatch_spool/<hostname>/<sha1_hash_of_item></tt>
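The manual check above can be sketched as a short shell snippet. This is only an illustrative sketch; the host name <tt>myhost</tt> is a placeholder for one of your monitored hosts, and the snippet assumes it is run from the site directory:

```shell
# Hypothetical host name -- replace with the host you are checking.
HOST=myhost
SPOOL_DIR="./var/check_mk/logwatch_spool/$HOST"

# Spool files sitting directly in the host folder (not in a hash
# subfolder) could not be assigned to any logwatch service; list
# them for manual review. Prints nothing if the folder is absent.
find "$SPOOL_DIR" -maxdepth 1 -name 'spoolfile.*' 2>/dev/null || true
```

If the listing is empty, no unassigned spool files remain and nothing needs to be done.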
Title: logwatch_ec: remove spool files after reading them
Class: fix
Compatible: compat
Component: checks
Date: 1698764921
Edition: cre
Level: 1
Version: 2.1.0p37
Before this fix, spool files were only removed when they were too old or when
there were too many of them.
A spool file that has been deleted after reading is recreated if an error
occurs while sending a message.
Title: Saving a form might produce an "Internal server error"
Class: fix
Compatible: compat
Component: multisite
Date: 1700062613
Edition: cre
Knowledge: doc
Level: 1
Version: 2.1.0p37
This is a follow-up to werk 15939. The limit parameter was increased even further.
Title: Ignore piggybacked host names starting with a period
Class: fix
Compatible: incomp
Component: core
Date: 1699602114
Edition: cre
Knowledge: undoc
Level: 1
State: unknown
Version: 2.2.0p15
We now skip piggybacked data for host names starting
with a period. Examples of such invalid names are ".",
".hostname", and ".hostname.domain.com".
Users must rename such hosts if they are to remain
in the monitoring.
Title: "Always up" hosts can always notify
Class: fix
Compatible: compat
Component: core
Date: 1699884551
Edition: cee
Level: 1
Version: 2.3.0b1
Do not postpone notifications for "always up" hosts.
The notification logic would wrongly assume that "always up" hosts may,
in fact, be down and erroneously postpone notifications. This has been
fixed; such hosts are never considered down.
Werk 15984 was deleted. The following Werk is no longer relevant.
Title: Introduce Saas edition werks
Class: feature
Compatible: compat
Component: packages
Date: 1689234864
Edition: cse
Knowledge: undoc
Level: 1
Version: 2.3.0b1
We can now write Saas edition werks.
Werk 16145 was deleted. The following Werk is no longer relevant.
Title: "Always up" hosts can always notify
Class: fix
Compatible: compat
Component: core
Date: 1699884551
Edition: cee
Level: 1
Version: 2.3.0b1
Do not postpone notifications for "always up" hosts.
The notification logic would wrongly assume that "always up" hosts may,
in fact, be down and erroneously postpone notifications. This has been
fixed; such hosts are never considered down.
Title: Change factory setting for "Lock user accounts after N logon failures"
Class: feature
Compatible: incomp
Component: wato
Date: 1700034155
Edition: cre
Level: 1
Version: 2.3.0b1
The factory setting for the rule "Lock user accounts after N logon failures" changes from `unset` to `10`.
If enabled, local user accounts are locked after N failed login attempts. (LDAP-connected users are not affected.)
Prior to this werk, a newly created site set this value to `10`, but resetting to the factory setting disabled the option.
If you disabled this setting via "Reset to default", it is now set to `10` again.