Subject: #1802 FIX Links in messages like "successfully sent X commands" are now working again
Message-ID: <54ace69b.FEyztMeP75ToQt0A%lm(a)mathias-kettner.de>
User-Agent: Heirloom mailx 12.5 6/20/10
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Module: check_mk
Branch: master
Commit: e06a613e23bc1c4d96d8ffe5a36cc41eb6314a15
URL: http://git.mathias-kettner.de/git/?p=check_mk.git;a=commit;h=e06a613e23bc1c…
Author: Lars Michelsen <lm(a)mathias-kettner.de>
Date: Wed Jan 7 08:44:31 2015 +0100
#1802 FIX Links in messages like "successfully sent X commands" are now working again
Werk #1800 introduced a problem where several HTML tags used in message
texts were no longer interpreted as intended. This change makes the links
work again and also re-allows <sup> and <p> tags in messages.
---
.werks/1802 | 12 ++++++++++++
ChangeLog | 1 +
web/htdocs/htmllib.py | 4 +++-
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/.werks/1802 b/.werks/1802
new file mode 100644
index 0000000..637c2e9
--- /dev/null
+++ b/.werks/1802
@@ -0,0 +1,12 @@
+Title: Links in messages like "successfully sent X commands" are now working again
+Level: 1
+Component: multisite
+Class: fix
+Compatible: compat
+State: unknown
+Version: 1.2.7i1
+Date: 1420616484
+
+Werk #1800 introduced a problem where several HTML tags used in message
+texts were no longer interpreted as intended. This change makes the links
+work again and also re-allows <sup> and <p> tags in messages.
diff --git a/ChangeLog b/ChangeLog
index 1f1eefb..993e04b 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -63,6 +63,7 @@
* 1799 FIX: Dashboards: Existing views added to dashboards now get a correct title / title_url
* 1800 FIX: Fixed umlauts and HTML tags in exception texts...
* 1796 FIX: Fix filtering in Multisite View BI Boxes...
+ * 1802 FIX: Links in messages like "successfully sent X commands" are now working again...
WATO:
* 1760 Added search form to manual checks page
diff --git a/web/htdocs/htmllib.py b/web/htdocs/htmllib.py
index 12d7912..4b29b9d 100644
--- a/web/htdocs/htmllib.py
+++ b/web/htdocs/htmllib.py
@@ -989,7 +989,9 @@ class html:
# <b>, <tt>, <i> to be part of the exception message. The tags
# are escaped first and then fixed again after attrencode.
msg = self.attrencode(obj)
- msg = re.sub(r'<(/?)(b|tt|i|br|pre)>', r'<\1\2>', msg)
+ msg = re.sub(r'<(/?)(b|tt|i|br|pre|a|sup|p)>', r'<\1\2>', msg)
+ # Also repair link definitions
+ msg = re.sub(r'<a href="(.*)">', r'<a href="\1">', msg)
if self.output_format == "html":
if self.mobile:
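The patch follows the escape-first, then selectively un-escape approach described in the comment above: the whole message is HTML-escaped and a fixed whitelist of harmless tags is restored afterwards. A minimal standalone sketch of that technique (assumptions: `escape()` stands in for the real `attrencode()`, and the regexes are written in entity form, which the mail rendering of the patch shows with its entities unescaped):

```python
import re

# Whitelist of simple tags re-allowed after escaping (per the patch: a, sup, p added)
ALLOWED_TAGS = ("b", "tt", "i", "br", "pre", "a", "sup", "p")

def escape(text):
    # Stand-in for html.attrencode(): escape all HTML special characters
    return (text.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace('"', "&quot;"))

def render_message(msg):
    msg = escape(msg)
    # Re-allow the whitelisted bare tags, e.g. &lt;b&gt; -> <b>
    msg = re.sub(r'&lt;(/?)(%s)&gt;' % "|".join(ALLOWED_TAGS), r'<\1\2>', msg)
    # Also repair link definitions whose href attribute was escaped
    msg = re.sub(r'&lt;a href=&quot;(.*?)&quot;&gt;', r'<a href="\1">', msg)
    return msg
```

Anything outside the whitelist, such as a `<script>` tag, stays escaped, which is the point of escaping first and un-escaping selectively.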
Module: check_mk
Branch: master
Commit: bb99da0d078225677896587db4785e4dd22e09af
URL: http://git.mathias-kettner.de/git/?p=check_mk.git;a=commit;h=bb99da0d078225…
Author: Mathias Kettner <mk(a)mathias-kettner.de>
Date: Mon Jan 5 15:15:04 2015 +0100
#1831 diskstat: detect multipath devices and handle them instead of the physical paths
The Linux <i>Disk IO</i> check now uses multipath information, if available,
drops the single checks for the various <tt>sd</tt><i>XY</i> devices of the
paths, and instead adds checks for the resulting multipath devices.
This drastically reduces the number of services and displays more useful
information at the same time.
---
.werks/1831 | 15 +++++++++++++++
ChangeLog | 1 +
checks/diskstat | 48 ++++++++++++++++++++++++++++++++++--------------
checks/multipath | 23 +++++++++++++++--------
4 files changed, 65 insertions(+), 22 deletions(-)
diff --git a/.werks/1831 b/.werks/1831
new file mode 100644
index 0000000..1d43113
--- /dev/null
+++ b/.werks/1831
@@ -0,0 +1,15 @@
+Title: diskstat: detect multipath devices and handle them instead of the physical paths
+Level: 2
+Component: checks
+Compatible: compat
+Version: 1.2.7i1
+Date: 1420467168
+Class: feature
+
+The Linux <i>Disk IO</i> check now uses multipath information, if available,
+drops the single checks for the various <tt>sd</tt><i>XY</i> devices of the
+paths, and instead adds checks for the resulting multipath devices.
+
+This drastically reduces the number of services and displays more useful
+information at the same time.
diff --git a/ChangeLog b/ChangeLog
index 29b760d..1f1eefb 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -22,6 +22,7 @@
* 1460 df_netscaler: new check to monitor filesystem usage on Citrix Netscaler devices
* 1820 mem.linux: new dedicated check for Linux memory management...
NOTE: Please refer to the migration notes!
+ * 1831 diskstat: detect multipath devices and handle them instead of the physical paths...
* 1457 FIX: logins: new check renamed from "users" check...
NOTE: Please refer to the migration notes!
* 1762 FIX: lnx_thermal: Now ignoring trip points with level 0...
diff --git a/checks/diskstat b/checks/diskstat
index 10a0caa..1758f74 100644
--- a/checks/diskstat
+++ b/checks/diskstat
@@ -113,27 +113,43 @@ def diskstat_parse_info(info):
return info_plain, nameinfo
-def diskstat_rewrite_device(nameinfo, linestart):
+def diskstat_rewrite_device(nameinfo, multipath_nameinfo, linestart):
node = linestart[0]
major, minor = map(int, linestart[1:3])
device = linestart[3]
- return nameinfo.get((node, major, minor), device)
+ if device in multipath_nameinfo:
+ return multipath_nameinfo[device]
+ else:
+ return nameinfo.get((node, major, minor), device)
def linux_diskstat_convert(info):
- info, nameinfo = diskstat_parse_info(info)
+ diskstat_info, multipath_info = info
+ info, nameinfo = diskstat_parse_info(diskstat_info)
+
+ # For multipath devices use the entries for dm-?? and rename
+ # them with their multipath UUID/alias - and drop the according
+ # sdXY that belong to the paths.
+ multipath_nameinfo = {}
+ skipped_devices = set([])
+ if multipath_info:
+ for uuid, multipath in multipath_info.items():
+ skipped_devices.update(multipath["paths"])
+ multipath_nameinfo[multipath["device"]] = uuid # map dm-8 to its alias
+
# The generic function takes the following values per line:
- # 0: devname
- # 1: read bytes counter
- # 2: write bytes counter
+ # 0: None or node name
+ # 1: devname
+ # 2: read bytes counter
+ # 3: write bytes counter
# Optional ones:
- # 3: number of reads
- # 4: number of writes
- # 5: timems
- # 6: read queue length *counters*
- # 7: write queue length *counters*
+ # 4: number of reads
+ # 5: number of writes
+ # 6: timems
+ # 7: read queue length *counters*
+ # 8: write queue length *counters*
rewritten = [
( l[0], # node name or None
- diskstat_rewrite_device(nameinfo, l[0:4]),
+ diskstat_rewrite_device(nameinfo, multipath_nameinfo, l[0:4]),
int(l[6]),
int(l[10]),
int(l[4]),
@@ -143,13 +159,16 @@ def linux_diskstat_convert(info):
]
# Remove device mapper devices without a translated name
- return [ line for line in rewritten if not line[1].startswith("dm-") ]
+ return [ line for line in rewritten
+ if not line[1].startswith("dm-")
+ and not line[1] in skipped_devices ]
def inventory_diskstat(info):
return inventory_diskstat_generic(linux_diskstat_convert(info))
def check_diskstat(item, params, info):
- this_time = int(info[0][1])
+ diskstat_info, multipath_info = info
+ this_time = int(diskstat_info[0][1])
return check_diskstat_generic(item, params, this_time, linux_diskstat_convert(info))
@@ -162,5 +181,6 @@ check_info["diskstat"] = {
'group' : 'disk_io',
"node_info" : True, # add first column with actual host name
'includes' : [ "diskstat.include" ],
+ 'extra_sections' : [ "multipath" ],
}
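The renaming and filtering that `linux_diskstat_convert()` performs can be illustrated in isolation. The sketch below uses hypothetical section data (a made-up UUID and device names); the real check parses agent output and carries full per-device counters, not just names:

```python
# Hypothetical multipath section: UUID -> device mapper device + physical paths
multipath_info = {
    "3600508b40006d7b80001a000003c0000": {
        "device": "dm-8",           # dm device that carries the aggregated I/O
        "paths": ["sda", "sdb"],    # physical path devices hidden behind it
    },
}

# Build the translation table and the set of path devices to drop,
# mirroring the loop over multipath_info.items() in the patch
multipath_nameinfo = {}
skipped_devices = set()
for uuid, multipath in multipath_info.items():
    skipped_devices.update(multipath["paths"])
    multipath_nameinfo[multipath["device"]] = uuid  # e.g. map "dm-8" to its UUID

def rewrite_device(device):
    # Rename known dm-* devices to their multipath UUID; keep others unchanged
    return multipath_nameinfo.get(device, device)

devices = ["sda", "sdb", "sdc", "dm-8", "dm-9"]
converted = [rewrite_device(d) for d in devices if d not in skipped_devices]
# Untranslated device mapper devices are still dropped, as before the patch
converted = [d for d in converted if not d.startswith("dm-")]
```

The result keeps `sdc` (a plain disk) and the renamed multipath device, while the two paths `sda`/`sdb` and the untranslated `dm-9` are dropped, which is exactly the service reduction the werk describes.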
diff --git a/checks/multipath b/checks/multipath
index 7aab5f9..6025113 100644
--- a/checks/multipath
+++ b/checks/multipath
@@ -158,13 +158,14 @@ def parse_multipath(info):
# 0: matching regex
# 1: matched regex-group id of UUID
# 2: matched regex-group id of alias (optional)
+ # 3: matched regex-group id of dm-device (optional)
reg_headers = [
- (get_regex(r"^[0-9a-z]{33}$"), 0, None), # 1. (should be included in 3.)
- (get_regex(r"^([^\s]+)\s\(([0-9A-Za-z_-]+)\)"), 2, 1), # 2.
- (get_regex(r"^[a-zA-Z0-9_]+$"), 0, None), # 3.
- (get_regex(r"^([0-9a-z]{33}|[0-9a-z]{49})\s?dm.+$"), 1, None), # 4.
- (get_regex(r"^[a-zA-Z0-9_]+dm-.+$"), 0, None), # 5. Remove this line in 1.2.0
- (get_regex(r"^([-a-zA-Z0-9_ ]+)\s?dm-[0-9]+.*$"), 1, None), # 6. and 7.
+ (get_regex(r"^[0-9a-z]{33}$"), 0, None, None), # 1. (should be included in 3.)
+ (get_regex(r"^([^\s]+)\s\(([0-9A-Za-z_-]+)\)"), 2, 1, None), # 2.
+ (get_regex(r"^[a-zA-Z0-9_]+$"), 0, None, None), # 3.
+ (get_regex(r"^([0-9a-z]{33}|[0-9a-z]{49})\s?(dm.[0-9]+).*$"), 1, None, 2), # 4.
+ (get_regex(r"^[a-zA-Z0-9_]+(dm-[0-9]+).*$"), 0, None, 1), # 5. Remove this line in 1.2.0
+ (get_regex(r"^([-a-zA-Z0-9_ ]+)\s?(dm-[0-9]+).*$"), 1, None, 2), # 6. and 7.
]
reg_prio = get_regex("[[ ]prio=")
@@ -201,17 +202,22 @@ def parse_multipath(info):
if line[0][0] not in [ '[', '`', '|', '\\' ] and not line[0].startswith("size="):
# Try to match header lines
matchobject = None
- for regex, uuid_pos, alias_pos in reg_headers:
+ for regex, uuid_pos, alias_pos, dm_pos in reg_headers:
matchobject = regex.search(l)
if matchobject:
- uuid = matchobject.group(uuid_pos)
+ uuid = matchobject.group(uuid_pos).strip()
if alias_pos:
alias = matchobject.group(alias_pos)
else:
alias = None
+ if dm_pos:
+ dm_device = matchobject.group(dm_pos)
+ else:
+ dm_device = None
+
break
# No data row and no matching header row
@@ -230,6 +236,7 @@ def parse_multipath(info):
group['uuid'] = uuid
group['state'] = None
group['numpaths'] = 0
+ group['device'] = dm_device
groups[uuid] = group
# If the device has an alias, then extract it
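The extended `reg_headers` tuples now also carry the group index of the dm device. A small sketch of how one header line is matched against that table (two of the regexes are copied from the patch; the sample lines are hypothetical multipath output):

```python
import re

# (regex, uuid group id, alias group id, dm-device group id) -- a subset of reg_headers
reg_headers = [
    (re.compile(r"^([^\s]+)\s\(([0-9A-Za-z_-]+)\)"), 2, 1, None),
    (re.compile(r"^([-a-zA-Z0-9_ ]+)\s?(dm-[0-9]+).*$"), 1, None, 2),
]

def parse_header(line):
    # Try each header pattern in order, as the check's inner loop does
    for regex, uuid_pos, alias_pos, dm_pos in reg_headers:
        m = regex.search(line)
        if m:
            uuid = m.group(uuid_pos).strip()
            alias = m.group(alias_pos) if alias_pos else None
            dm_device = m.group(dm_pos) if dm_pos else None
            return uuid, alias, dm_device
    return None
```

The captured `dm_device` is what later ends up in `group['device']` and lets diskstat translate `dm-N` entries back to their multipath UUID.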
Subject: New experimental feature for checks: use other sections
Message-ID: <54aa6b76.duYlKV0oaHQVBpik%mk(a)mathias-kettner.de>
Module: check_mk
Branch: master
Commit: 61b435c3c9651ad0c0275798f53d5a15c46c1def
URL: http://git.mathias-kettner.de/git/?p=check_mk.git;a=commit;h=61b435c3c9651a…
Author: Mathias Kettner <mk(a)mathias-kettner.de>
Date: Mon Jan 5 11:46:10 2015 +0100
New experimental feature for checks: use other sections
---
modules/automation.py | 2 +-
modules/check_mk.py | 15 +--------------
modules/check_mk_base.py | 40 +++++++++++++++++++++++++++++++++++-----
3 files changed, 37 insertions(+), 20 deletions(-)
diff --git a/modules/automation.py b/modules/automation.py
index 354708e..d7af29f 100644
--- a/modules/automation.py
+++ b/modules/automation.py
@@ -358,7 +358,7 @@ def automation_try_inventory_node(hostname, leave_no_tcp=False, with_snmp_scan=F
try:
exitcode = None
perfdata = []
- info = get_host_info(hostname, ipaddress, infotype)
+ info = get_info_for_check(hostname, ipaddress, infotype)
# Handle cases where agent does not output data
except MKAgentError, e:
exitcode = 3
diff --git a/modules/check_mk.py b/modules/check_mk.py
index f5b6e1c..fa19744 100755
--- a/modules/check_mk.py
+++ b/modules/check_mk.py
@@ -2859,20 +2859,7 @@ def make_inventory(checkname, hostnamelist, check_only=False, include_state=Fals
checkname_base = checkname.split('.')[0] # make e.g. 'lsi' from 'lsi.arrays'
try:
- info = get_realhost_info(hostname, ipaddress, checkname_base, inventory_max_cachefile_age, True)
- # Add information about nodes if check wants this
- if check_info[checkname]["node_info"]:
- if clusters_of(hostname):
- add_host = hostname
- else:
- add_host = None
- info = [ [add_host] + line for line in info ]
-
- # Convert with parse function if available
- if checkname_base in check_info: # parse function must be define for base check
- parse_function = check_info[checkname_base]["parse_function"]
- if parse_function:
- info = check_info[checkname_base]["parse_function"](info)
+ info = get_info_for_check(hostname, ipaddress, checkname_base, inventory_max_cachefile_age, True)
except MKAgentError, e:
# This special handling is needed for the inventory check. It needs special
diff --git a/modules/check_mk_base.py b/modules/check_mk_base.py
index 2770fdb..bad296b 100644
--- a/modules/check_mk_base.py
+++ b/modules/check_mk_base.py
@@ -278,7 +278,7 @@ def submit_check_mk_aggregation(hostname, status, output):
# checks usually use existing cache files, if check_mk is not misconfigured,
# and thus do no network activity at all...
-def get_host_info(hostname, ipaddress, checkname):
+def get_host_info(hostname, ipaddress, checkname, max_cachefile_age, ignore_check_interval=False):
# If the check want's the node info, we add an additional
# column (as the first column) with the name of the node
# or None (in case of non-clustered nodes). On problem arises,
@@ -299,7 +299,9 @@ def get_host_info(hostname, ipaddress, checkname):
# try the other nodes.
try:
ipaddress = lookup_ipaddress(node)
- new_info = get_realhost_info(node, ipaddress, checkname, cluster_max_cachefile_age)
+ new_info = get_realhost_info(node, ipaddress, checkname,
+ max_cachefile_age == None and cluster_max_cachefile_age or max_cache_age,
+ ignore_check_interval)
if new_info != None:
if add_nodeinfo:
new_info = [ [node] + line for line in new_info ]
@@ -323,9 +325,15 @@ def get_host_info(hostname, ipaddress, checkname):
raise MKAgentError(", ".join(exception_texts))
else:
- info = get_realhost_info(hostname, ipaddress, checkname, check_max_cachefile_age)
+ info = get_realhost_info(hostname, ipaddress, checkname,
+ max_cachefile_age == None and check_max_cachefile_age or max_cachefile_age,
+ ignore_check_interval)
if info != None and add_nodeinfo:
- info = [ [ None ] + line for line in info ]
+ if clusters_of(hostname):
+ add_host = hostname
+ else:
+ add_host = None
+ info = [ [add_host] + line for line in info ]
# Now some check types define a parse function. In that case the
# info is automatically being parsed by that function - on the fly.
@@ -1149,6 +1157,7 @@ def convert_check_info():
"default_levels_variable" : check_default_levels.get(check_type),
"node_info" : False,
"parse_function" : None,
+ "extra_sections" : [],
}
else:
# Check does already use new API. Make sure that all keys are present,
@@ -1160,6 +1169,7 @@ def convert_check_info():
info.setdefault("snmp_scan_function", None)
info.setdefault("default_levels_variable", None)
info.setdefault("node_info", False)
+ info.setdefault("extra_sections", [])
# Include files are related to the check file (= the basename),
# not to the (sub-)check. So we keep them in check_includes.
@@ -1284,11 +1294,12 @@ def do_all_checks_on_host(hostname, ipaddress, only_check_types = None):
if infotype in parsed_infos:
info = parsed_infos[infotype]
else:
- info = get_host_info(hostname, ipaddress, infotype)
+ info = get_info_for_check(hostname, ipaddress, infotype)
parsed_infos[infotype] = info
except MKSkipCheck, e:
continue
+
except MKSNMPError, e:
if str(e):
problems.append(str(e))
@@ -1399,6 +1410,25 @@ def do_all_checks_on_host(hostname, ipaddress, only_check_types = None):
return agent_version, num_success, error_sections, ", ".join(problems)
+# Collect information needed for one check. In case the check uses
+# extra sections (new feature since 1.2.7i1) only the main section
+# raises exceptions. Errors in extra sections are silently ignored
+# and the info is replaced with None.
+def get_info_for_check(hostname, ipaddress, infotype, max_cachefile_age=None, ignore_check_interval=False):
+
+ if infotype in check_info:
+ extra_sections = check_info[infotype]["extra_sections"]
+ if extra_sections:
+ info = [ get_host_info(hostname, ipaddress, infotype, max_cachefile_age, ignore_check_interval) ]
+ for es in extra_sections:
+ try:
+ info.append(get_host_info(hostname, ipaddress, es, max_cachefile_age, ignore_check_interval=False))
+ except:
+ info.append(None)
+ return info
+
+ return get_host_info(hostname, ipaddress, infotype, max_cachefile_age, ignore_check_interval)
+
def open_checkresult_file():
global checkresult_file_fd
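The dispatch logic of the new `get_info_for_check()` can be sketched standalone. Here `check_info` and `get_host_info` are mocked stand-ins (the real function talks to agents and honors cache ages); the point is the shape of the return value: a plain section for ordinary checks, and a list for checks with extra sections:

```python
# Hypothetical stand-ins for the real check registry and agent data fetch
check_info = {
    "diskstat": {"extra_sections": ["multipath"]},
    "uptime":   {"extra_sections": []},
}

def get_host_info(hostname, ipaddress, section):
    # Stand-in for the real agent/SNMP fetch; simulate a missing extra section
    if section == "multipath":
        raise IOError("section not available")
    return [["data-of", section]]

def get_info_for_check(hostname, ipaddress, infotype):
    extra_sections = check_info.get(infotype, {}).get("extra_sections", [])
    if extra_sections:
        # Only the main section may raise; extra sections degrade to None
        info = [get_host_info(hostname, ipaddress, infotype)]
        for es in extra_sections:
            try:
                info.append(get_host_info(hostname, ipaddress, es))
            except Exception:
                info.append(None)
        return info
    return get_host_info(hostname, ipaddress, infotype)
```

A check with extra sections thus receives a list — main section first, then one entry per extra section, with `None` where fetching failed — which is why `check_diskstat` above unpacks `diskstat_info, multipath_info = info` and tolerates `multipath_info` being empty.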