diff --git a/README.md b/README.md
index 121baa64..5478a263 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,14 @@
# IPFIXcol framework
+> **:warning: [IPFIXcol2](https://github.com/CESNET/ipfixcol2) has been released!**
+>
+> The next generation of the collector is more stable, up to 2x faster, and adds support
+> for new features (e.g. biflow, structured data types, etc.). The code
+> was completely rewritten and some plugins might not be available.
+>
+> Since the release of the new collector, this old framework is **not** supported anymore!
+> Please, consider upgrading to the [new release](https://github.com/CESNET/ipfixcol2).
+
-
+This section describes the parts of the XML configuration file. An example configuration for an
+email subprofile can be found in a section below.
+Keep in mind that profile and channel names must follow the same format as identifiers in the
+C language. In other words, a name may contain only letters (both uppercase and lowercase),
+digits, and underscores, and the first character must be a letter or an underscore.
+Consequently, names must match the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`.
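As a quick sanity check, the naming rule can be tested against the regular expression above. A minimal sketch in Python (the helper `is_valid_name` is our illustration, not part of IPFIXcol):

```python
import re

# Same pattern as in the text: first character a letter or underscore,
# followed by letters, digits, and underscores only.
NAME_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*")

def is_valid_name(name: str) -> bool:
    """Return True if a profile/channel name matches the required format."""
    return NAME_RE.fullmatch(name) is not None

print(is_valid_name("emails"))    # True
print(is_valid_name("_ch1"))      # True
print(is_valid_name("1channel"))  # False (starts with a digit)
print(is_valid_name("pop-3"))     # False (hyphen not allowed)
```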
+
+Warning: Using complicated filters and multiple nested profiles has a significant impact on
+the throughput of the collector!
+
+### Profile (\<profile\>)
+```xml
+<profile name="...">
+    <type>...</type>
+    <directory>...</directory>
+    <channelList>...</channelList>
+    <subprofileList>...</subprofileList>
+</profile>
```
+Each profile has a name attribute (i.e. `<profile name="...">`) for identification among other
+profiles. The attribute must be unique only among the group of profiles that belong to the same
+parent profile. Each profile definition must contain exactly one definition of each of the
+following elements:
+
+- `<type>` \
+  Profile type ("normal" or "shadow"). For a normal profile, IPFIXcol plugins should
+  store all valuable data (usually flow records and metadata). On the other hand,
+  for shadow profiles, the plugins should store only metadata. For example, in case of
+  the lnfstore plugin, only flows of normal profiles are stored; others are ignored.
+- `<directory>` \
+  The absolute path to a directory. All data records and metadata that belong to the profile
+  and its channels will be stored here. The directory MUST be unique for each profile!
+- `<channelList>` \
+  List of one or more channels (see the section below about channels).
+
+Optionally, each profile can contain at most one definition of the element:
+- `<subprofileList>` \
+  List of subprofiles that belong to the profile. The list can be empty.
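The cardinality rules above can be checked mechanically. A sketch using Python's `xml.etree.ElementTree` (the function `check_profile` is our illustration, not an IPFIXcol tool):

```python
import xml.etree.ElementTree as ET

def check_profile(profile: ET.Element) -> list:
    """Return a list of problems found in one <profile> element."""
    problems = []
    # Exactly one of each mandatory child element.
    for tag in ("type", "directory", "channelList"):
        if len(profile.findall(tag)) != 1:
            problems.append("expected exactly one <%s>" % tag)
    # At most one optional <subprofileList>.
    if len(profile.findall("subprofileList")) > 1:
        problems.append("at most one <subprofileList> allowed")
    return problems

doc = ET.fromstring(
    '<profile name="live">'
    '  <type>normal</type>'
    '  <directory>/some/directory/live/</directory>'
    '  <channelList/>'
    '</profile>'
)
print(check_profile(doc))  # [] - the profile satisfies the rules
```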
+
+### Channel (\<channel\>)
+```xml
+<channel name="...">
+    <sourceList>
+        <source>...</source>
+        <source>...</source>
+        <source>...</source>
+    </sourceList>
+    <filter>...</filter>
+</channel>
+```
+
+Each channel has a name attribute (i.e. `<channel name="...">`) for unique identification
+among other channels within a profile. Each channel must have exactly one definition of:
+- `<sourceList>` \
+  List of flow sources. In this case, a source of records should not be confused with
+  an IPFIX/NetFlow exporter. A source is basically a channel of the parent profile
+  from which this channel will receive flow records. Each source must be specified in
+  an element `<source>`. \
+  If the channel receives data from all parent channels, the list of channels can be replaced
+  with a single source: `<source>*</source>`. Channels in the "live"
+  profile always have to use this notation.
+
+Each channel within any "shadow" profile must receive data from all channels of its parent
+profile, i.e. it must always use only the '*' source! This is because, by the time queries
+over shadow profiles are evaluated (for example by fdistdump or other tools), the information
+about the parent channels to which each flow belonged has already been lost.
+
+Optionally, each channel may contain one:
+- `<filter>` \
+  A flow filter expression that will be applied to flow records received from the sources.
+  The records that satisfy the filter expression will be labeled with this
+  channel and the profile to which this channel belongs.
+  If the filter is not defined, the expression always evaluates as true.
+
+Warning: The user must always make sure that the intersection of records that belong to
+multiple channels of the same profile is empty! Otherwise, a record can be stored multiple
+times (in case of lnfstore) or added to the summary statistics of the profile multiple times
+(in case of profilestats).
+
+## Example configuration
+
+The following configuration is based on the hierarchy mentioned earlier, but a few parts have
+been simplified.
+
+```xml
+<profile name="live">
+    <type>normal</type>
+    <directory>/some/directory/live/</directory>
+
+    <channelList>
+        <channel name="ch1">
+            <sourceList><source>*</source></sourceList>
+            <filter>odid 10</filter>
+        </channel>
+        <channel name="ch2">
+            <sourceList><source>*</source></sourceList>
+            <filter>odid 20</filter>
+        </channel>
+    </channelList>
+
+    <subprofileList>
+        <profile name="emails">
+            <type>normal</type>
+            <directory>/some/directory/emails/</directory>
+
+            <channelList>
+                <channel name="pop3">
+                    <sourceList>
+                        <source>ch1</source>
+                        <source>ch2</source>
+                    </sourceList>
+                    <filter>port in [110, 995]</filter>
+                </channel>
+            </channelList>
+        </profile>
+    </subprofileList>
+</profile>
+```
+
+### Tips
+
+If you need to distinguish individual flow exporters, we highly recommend configuring each
+exporter to use a unique Observation Domain ID (ODID) (IPFIX only), configuring each channel of
+the "live" profile to represent one exporter, and using the filter keyword "odid" (see the
+filter syntax for more details and limitations). If the ODID method is not applicable in your
+situation, you can also use the "exporterip", "exporterport", etc. keywords. However, be aware
+that these do not make sense in a distributed collector architecture, because all flows are sent
+to the one active proxy collector and then redistributed by the proxy to the subcollectors that
+perform profiling. From the point of view of any subcollector, the proxy is the exporter, so
+these ODID replacements do not work as expected. The ODID, on the other hand, always works.
+
+To verify that your configuration file is ready to use, run the tool
+"ipfixcol-profiles-check". Use "-h" to show all available parameters.
+
+## Filter syntax
+
+The filter syntax is based on the well-known nfdump tool; note, however, that keywords must be
+written in lowercase. Any filter consists of one or more expressions (`expr`).
+Any number of `expr` can be linked together: `expr and expr`, `expr or expr`, `not expr`, `(expr)`.
+
+An expression primitive usually consists of a keyword (the name of an Information Element),
+an optional comparator, and a value. If the comparator is omitted, the equality
+operator `=` is used by default. Numeric values may use one of the supported scaling
+factors: k, m, g. Each step of the scale is a factor of 1000 (e.g. 1k = 1000).
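The scaling behaviour can be illustrated with a tiny parser. A sketch in Python (the helper `parse_scaled` is ours and simply applies the factor-of-1000 rule stated above):

```python
# Scaling factors from the filter syntax: each step is a factor of 1000.
SCALE = {"k": 1000, "m": 1000 ** 2, "g": 1000 ** 3}

def parse_scaled(value: str) -> int:
    """Expand a filter value such as '1k' into a plain number."""
    suffix = value[-1].lower()
    if suffix in SCALE:
        return int(value[:-1]) * SCALE[suffix]
    return int(value)

print(parse_scaled("1k"))   # 1000  -> 'packets > 1k' means packets > 1000
print(parse_scaled("2m"))   # 2000000
print(parse_scaled("150"))  # 150 (no scaling factor)
```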
+
+The following comparators (`comp`) are supported:
+- equals (`=`, `==` or `eq`)
+- less than (`<` or `lt`)
+- more than (`>` or `gt`)
+- like/binary and (`&`)
+
+Below is a list of the most frequently used filter primitives that are universally supported.
+If you cannot find the primitive you are looking for, try the corresponding *nfdump* expression
+or just use the name of an IPFIX Information Element. If you need to preserve compatibility with
+*fdistdump*, you have to use only nfdump expressions!
+
+- _IP version_ \
+ `ipv4` or `inet4` for IPv4 \
+ `ipv6` or `inet6` for IPv6
+
+- _Protocol_ \
+  `proto <protocol>` \
+  `proto <number>` \
+  where `<protocol>` is a known protocol such as tcp, udp, icmp, icmp6, etc., and `<number>`
+  is a valid protocol number: 6, 17, etc.
+
+- _IP address_ \
+  `[src|dst] ip <ip>` \
+  `[src|dst] host <ip>` \
+  with `<ip>` as any valid IPv4 or IPv6 address. To check whether an IP address is in a known
+  IP list, use: \
+  `[src|dst] ip in [ <iplist> ]` \
+  `[src|dst] host in [ <iplist> ]` \
+  where `<iplist>` is a space or comma separated list of individual `<ip>` addresses.
+
+  IP addresses, networks, ports, AS numbers etc. can be specifically selected by using
+  a direction qualifier, such as `src` or `dst`.
+
+- _Network_ \
+  `[src|dst] net a.b.c.d m.n.r.s` \
+  Selects the IPv4 network a.b.c.d with netmask m.n.r.s. \
+  \
+  `[src|dst] net <net>/<mask>` \
+  with `<net>` as a valid IPv4 or IPv6 network and `<mask>` as the number of mask bits. The
+  number of mask bits must match the appropriate address family, IPv4 or IPv6. Networks may be
+  abbreviated, such as 172.16/16, if they are unambiguous.
+
+- _Port_ \
+  `[src|dst] port [comp] <num>` \
+  with `<num>` as any valid port number. If *comp* is omitted, '=' is assumed. \
+  `[src|dst] port in [ <portlist> ]` \
+  A port can be compared against a known list, where `<portlist>` is a space or comma separated
+  list of individual port numbers.
+
+- _Flags_ \
+  `flags <tcpflags>` \
+  with `<tcpflags>` as a combination of: \
+  A - ACK \
+  S - SYN \
+  F - FIN \
+  R - Reset \
+  P - Push \
+  U - Urgent \
+  X - All flags on \
+  The ordering of the flags is not relevant. Flags not mentioned are treated as "don't care".
+  To get only those flows with just the SYN flag set, use the
+  syntax `flags S and not flags AFRPU`.
+
+- _Packets_ \
+  `packets [comp] <num> [scale]` \
+  To filter for records with a specific packet count. \
+  Example: `packets > 1k`
+
+- _Bytes_ \
+  `bytes [comp] <num> [scale]` \
+  To filter for records with a specific byte count. \
+  Example: `bytes 46` or `bytes > 100 and bytes < 200`
+
+- _Packets per second_ (calculated value) \
+  `pps [comp] <num> [scale]` \
+  To filter for flows with a specific number of packets per second.
+
+- _Duration_ (calculated value) \
+  `duration [comp] <num>` \
+  To filter for flows with a specific duration in milliseconds.
+
+- _Bits per second_ (calculated value) \
+  `bps [comp] <num> [scale]` \
+  To filter for flows with a specific number of bits per second.
+
+- _Bytes per packet_ (calculated value) \
+  `bpp [comp] <num> [scale]` \
+  To filter for flows with a specific number of bytes per packet.
+
+The following expressions are available only when processing live IPFIX records and therefore
+are not supported by fdistdump.
+
+- _Observation Domain ID (ODID)_ \
+  `odid [comp] <number>` \
+  To filter IPFIX records with a specific Observation Domain ID.
+
+- _Exporter IP_ \
+  `exporterip <ip>` \
+  `exporterip in [ <iplist> ]` \
+  To filter for exporters connected from the specified IP addresses.
+
+- _Exporter port_ \
+  `exporterport <port>` \
+  `exporterport in [ <portlist> ]` \
+  To filter for exporters connected from the specified port.
+
+- _Collector IP_ \
+  `collectorip <ip>` \
+  `collectorip in [ <iplist> ]` \
+  To filter for an input IP address of a running collector.
+
+- _Collector port_ \
+  `collectorport <port>` \
+  `collectorport in [ <portlist> ]` \
+  To filter for an input port of a running collector.
+
+Instead of the identifiers above, you can also use any IPFIX Information Element (IE) that is
+supported by IPFIXcol. These IEs can be easily added to the configuration file of the collector,
+so even Private Enterprise IEs can also be used for filtering. See the collector's manual page
+for more information. Just keep in mind that these identifiers are not supported by fdistdump
+right now.
+
+For example, IPFIX IE for the source port in the transport header is called "sourceTransportPort"
+and essentially corresponds to the filter expression "src port". Therefore, the expressions
+`sourceTransportPort 80` and `src port 80` represent the same filter.
+
+### Filter examples
+
+To dump all records of host 192.168.1.2: \
+`ip 192.168.1.2`
+
+To dump all records of network 172.16.0.0/16: \
+`net 172.16.0.0/16`
+
+To dump all port 80 IPv6 connections to any web server: \
+`inet6 and proto tcp and ( src port > 1024 and dst port 80 )`
+
+### Use-case example
+
+Let's say that we would like to filter only POP3 flows. We know that POP3 communication uses
+port 110 (unencrypted) or 995 (encrypted). So we can write: \
+`sourceTransportPort == 110 or sourceTransportPort == 995 or destinationTransportPort == 110 or destinationTransportPort == 995`
+
+Instead of IPFIX IEs, we can use universal identifiers: \
+`src port 110 or src port 995 or dst port 110 or dst port 995`
-* **profile** - Profile definition with options, channels and subprofiles.
+Source and destination ports can be merged: \
+`port 110 or port 995`
- * **type** - Specifies the type of a profile - normal/shadow. _normal_ profile means that IPFIXcol plugins should store all valuable data. _shadow_ profile means that IPFIXcol plugins should store only statistics.
- * **directory** - Directory for data store of valuable data and statistics. Must be unique for each profile.
- * **channelList** - List of channels that belong to the profile. At least one channel must be specified. A number of channels are unlimited.
- * **subprofileList** - List of subprofiles that belong to the profile. This item is optional. A number of subprofiles are unlimited.
+This expression can be simplified further: \
+`port in [110, 995]`
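The equivalence of these formulations can be spot-checked programmatically. A small Python sketch (our illustration using plain predicates, not the real filter engine):

```python
def f_long(src, dst):
    # sourceTransportPort == 110 or ... or destinationTransportPort == 995
    return src == 110 or src == 995 or dst == 110 or dst == 995

def f_merged(src, dst):
    # 'port 110 or port 995' (port matches src or dst)
    return src in (110, 995) or dst in (110, 995)

def f_list(src, dst):
    # 'port in [110, 995]'
    return any(p in (110, 995) for p in (src, dst))

# Check agreement over a few sample (src, dst) port pairs.
samples = [(110, 5000), (5000, 995), (80, 443), (995, 110)]
for src, dst in samples:
    assert f_long(src, dst) == f_merged(src, dst) == f_list(src, dst)
print("all formulations agree")
```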
-* **channel** - Channel structure for profile's data filtering.
- * **sourceList** - List of sources from which channel will receive data. Sources are channels from parent's profile (except top level channels). If a profile receive data from all parent's channels only one source with '\*' can by used. _shadow_ profiles must always use only '\*' source!
- * **filter** - Filter applied on data records, specifying whether it belongs to the profile. It uses the same syntax as filtering intermediate plugin. Except data fields, profile filter can contain elements from IP and IPFIX header. Supported fields are: odid, srcaddr, dstaddr, srcport, dstport.
+All examples above represent the same filter.
[Back to Top](#top)
diff --git a/plugins/intermediate/profiler/configure.ac b/plugins/intermediate/profiler/configure.ac
index 094c9693..cf278b8f 100644
--- a/plugins/intermediate/profiler/configure.ac
+++ b/plugins/intermediate/profiler/configure.ac
@@ -38,7 +38,7 @@
AC_PREREQ([2.60])
# Process this file with autoconf to produce a configure script.
-AC_INIT([ipfixcol-profiler-inter], [0.0.6])
+AC_INIT([ipfixcol-profiler-inter], [0.0.7])
AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability])
LT_PREREQ([2.2])
LT_INIT([disable-static])
diff --git a/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk b/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
index 27e1112c..34d8616e 100644
--- a/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
+++ b/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
@@ -8,11 +8,19 @@
version="5.0" xml:lang="en">
- 2015
+ 2015-2017CESNET, z.s.p.o.
- 14 January 2015
+ 19 October 2017
+
+
+ Lukas
+ Hutak
+
+ lukas.hutak@cesnet.cz
+ developer
+ Michal
@@ -28,171 +36,685 @@
ipfixcol-profiler-inter1
- profiler plugin for IPFIXcol.
+ Profiler plugin for IPFIXcol.ipfixcol-profiler-inter
- profiler plugin for IPFIXcol.
+ Profiler plugin for IPFIXcol.Description
- The ipfix-profiler-inter plugin is a part of IPFIXcol (IPFIX collector). It profiles IPFIX data records and fills in metadata information according to given
- set of profiles and their channels.
+ The ipfix-profiler-inter plugin is an intermediate plugin for IPFIXcol
+ (IPFIX collector). It profiles IPFIX data records and fills in metadata information
+ according to a given set of profiles and their channels.
- Configuration
- The collector must be configured to use profiler plugin in startup.xml configuration.
- The configuration specifies which plugins are used by the collector to process data and provides configuration for the plugins themselves.
+ Introduction to profiling
+
+ The goal of flow profiling is multi-label classification (based on a set of rules) into
+ user-defined groups. These labels can be used for further flow processing. The basic
+ terminology includes profiles and channels.
+
+
+ A profile is a view that represents a subset of data records received by a collector.
+ Consequently, this allows surfacing only the records that a user needs to see. Each profile
+ contains one or more channels, where each channel is represented by a filter and sources of
+ flow records. If a flow satisfies a condition of any channel of a profile, then the flow
+ will be labeled with the profile and the channel. A flow can carry any number of labels.
+ In other words, it can belong to multiple channels/profiles at the same time.
+
+
+ For example, let us consider that you store all flows and, in addition, you want to store
+ only flows related to email communications (POP3, IMAP, SMTP). To do this, we can create
+ a profile "emails" with channels "pop3", "imap" and "smtp". When a flow with POP3
+ communication (port 110 or 995) is classified, it will meet the condition of the "pop3"
+ channel and will be labeled as the flow that belongs to the profile "emails" and the
+ channel "pop3".
+
+
+ Example of a profile hierarchy:
- startup.xml profiler example
-
- ...
- /path/to/profiles.xml
-
- ...
-
-
-
- ...
-
- ]]>
+
+
+ The profiles can be nested to create a tree hierarchy (as shown above). The sources of
+ a channel in a profile can only be channels of the profile's direct parent.
+ For example, the channel "http" in the profile "office" can
+ use only the "net1", "net2" or "net3" channels as sources. The exception is the highest
+ level, i.e. the "live" profile. This profile must always be present, must have exactly
+ this name, and its channels will receive all flow records intended for profiling.
+
+
+ How does flow profiling work? In a nutshell, if a record satisfies a filter of a channel,
+ it will be labeled with the channel and the profile to which the channel belongs and will
+ also be sent for evaluation to all subscribers of the channel.
+ For example, let us consider the tree hierarchy above. All flow records will be always sent
+ to all channels of "live" profile as mentioned earlier. If a flow record satisfies the
+ filter of the channel "ch1", the record will be labeled with the profile "live" and the
+ channel "ch1". Because the flow belongs to the channel "ch1" it will be also sent for
+ evaluation to all subscribers of this channel i.e. to the channels of the profiles "emails"
+ and "buildings" that have this channel ("ch1") in their source list.
+ If the record doesn't satisfy the filter, it will not be distributed to the subscribers.
+ However, if the record satisfies the channel "ch2" and the channels of the profiles
+ "emails" and "buildings" are also subscribers of the channel "ch2", the record will be sent
+ to them too. Thus, a record can reach a channel in different ways, but it can be labeled
+ only once.
+
+
+ For now, the following plugins support profiling provided by this plugin:
+
+
+
+
+ ipfixcol-lnfstore-output1
+ - Convert and store IPFIX records into NfDump files. For more information see the manual
+ page of the plugin.
+
+
+
+
+ ipfixcol-profilestats-inter1
+ - Create and update RRD statistics per profile and channel. For more information
+ see manual page of the plugin.
+
+
+
+
-
+
+ Plugin configuration
+
+ The collector must be configured to use the profiler plugin in the startup.xml configuration.
+ The profiler plugin must be placed into the intermediate section before any other plugins
+ that use profiling results. Otherwise, no profiling information will be available for these
+ plugins.
+
+
+
+ ...
+ /path/to/profiles.xml
+
+...
+
+ ...
+
+
+ ...
+
+]]>
+
+
+ The plugin does not accept any parameters, but the Collecting process must define parameter
+ ]]> with an absolute path to the profiling configuration.
+
+
+
+
+ Structure of the profiling configuration
+
+ This section describes parts of the XML configuration file. Example configuration for an
+ email subprofile can be found in a section below.
+
+
+ Keep in mind that profile and channel names must match a format that corresponds to the
+ variables in the C language. In other words, a name can have letters (both uppercase and
+ lowercase), digits and underscore only. The first letter of a name should be either
+ a letter or an underscore. Consequently, the names must match regular expression
+ .
+
+
+ Warning: Using complicated filters and multiple nested profiles
+ have a significant impact on the throughput of the collector!
+
+
+
+ Profile (]]>)
+
+
+ ...
+ ...
+ ...
+ ...
+
+]]>
+
+
+ Each profile has a name attribute (i.e. ]]>) for identification among
+ other profiles. The attribute must be unique only among the group of profiles that
+ belong to the same parent profile. Each profile definition must contain exactly
+ one definition of each of the following elements:
+
- profiles
-
- path to the file with profiles specification
-
+ ]]>
+
+
+ Profile type ("normal" or "shadow"). For a normal profile, IPFIXcol plugins
+ should store all valuable data (usually flow records and metadata). On the other
+ hand, for shadow profiles, the plugins should store only metadata. For example,
+ in case of the lnfstore plugin, only flows of normal profiles are stored; others
+ are ignored.
+
+
+
+
+
+ ]]>
+
+
+ The absolute path to a directory. All data records and metadata that belong to
+ the profile and its channels will be stored here. The directory MUST be unique
+ for each profile!
+
+
+
+
+
+ ]]>
+
+
+ List of one or more channels (see the section below about channels).
+
+
-
+
+ Optionally, each profile can contain up to one definition of an element:
+
+
+
+ ]]>
+
+
+ List of subprofiles that belong to the profile. The list can be empty.
+
+
+
+
+
-profile.xml profiler example
-
+
+ Channel (]]>)
+
- normal
- /some/directory/
-
-
-
-
- *
-
- ipVersion = 4
-
-
-
- *
-
- odid != 5
-
-
-
-
-
- normal
- /some/directory/p1/
-
-
-
-
- ch1
- ch2
-
- sourceIPv4Address = 192.168.0.0/16
-
-
-
- ch1
-
- sourceTransportPort == 25
-
-
-
-
-
-
-
+
+
+ ...
+ ...
+ ...
+
+ ...
+
]]>
-
-
-
+
+
+ Each channel has a name attribute (i.e. ]]>) for unique identification
+ among other channels within a profile. Each channel must have exactly one definition
+ of:
+
- profile
-
- Profile definition with options, channels and subprofiles.
-
-
-
- type
-
- Specifies the type of a profile - normal/shadow. normal profile means that IPFIXcol plugins should store all valuable data. shadow profile means that IPFIXcol plugins should store only statistics.
-
-
-
-
- directory
-
- Directory for data store of valuable data and statistics. Must be unique for each profile.
-
-
-
-
- channelList
-
- List of channels that belong to the profile. At least one channel must be specified. A number of channels are unlimited.
-
-
-
-
- subprofileList
-
- List of subprofiles that belong to the profile. This item is optional. A number of subprofiles are unlimited.
-
-
-
-
-
+ ]]>
+
+
+ List of flow sources. In this case, a source of records should not be confused
+ with an IPFIX/NetFlow exporter. The source is basically a channel from a parent
+ profile from which this channel will receive flow records. Each source must be
+ specified in an element ]]>. If the channel receives data
+ from all parent channels, the list of channels can be replaced with only one
+ source: *]]>. Channels in the "live" profile always
+ have to use this notation.
+
+
-
+
+
+ Each channel within any "shadow" profile must receive data from all channels of its
+ parent profile, i.e. it must always use only the '*' source! This is because, by the
+ time queries over shadow profiles are evaluated (for example by fdistdump or other
+ tools), the information about the parent channels to which each flow belonged has
+ already been lost.
+
+
+ Optionally, each channel may contain one:
+
+
- channel
-
- Channel structure for profile's data filtering.
-
-
-
- sourceList
-
- List of sources from which channel will receive data. Sources are channels from parent's profile (except top level channels). If a profile receive data from all parent's channels only one source with '*' can by used. shadow profiles must always use only '*' source!
-
-
-
-
- filter
-
- Filter applied on data records, specifying whether it belongs to the profile. It uses the same syntax as ipfixcol-filter-inter1
- Except data fields, profile filter can contain elements from IP and IPFIX header. Supported fields are: odid, srcaddr, dstaddr, srcport, dstport
-
-
-
-
-
+ ]]>
+
+
+ A flow filter expression that will be applied to flow records received from
+ sources. The records that satisfy the specified filter expression will be
+ labeled with this channel and the profile to which this channel belongs.
+ If the filter is not defined, the expression is always evaluated as true.
+
+
-
+
+ Warning: The user must always make sure that the intersection of
+ records that belong to multiple channels of the same profile is empty! Otherwise, a
+ record can be stored multiple times (in case of lnfstore) or added to the summary
+ statistics of the profile multiple times (in case of profilestats).
+
+
+
+
+
+ Example configuration
+
+ The following configuration is based on the hierarchy mentioned earlier, but a few
+ parts have been simplified.
+
+
+
+ normal
+ /some/directory/live/
+
+
+ *
+ odid 10
+
+
+ *
+ odid 20
+
+
+
+
+
+ normal
+ /some/directory/emails/
+
+
+
+
+ ch1
+ ch2
+
+
+ port in [110, 995]
+
+
+
+
+
+
+
+
+
+]]>
+
+
+ Tips
+
+ If you need to distinguish individual flow exporters, we highly recommend configuring
+ each exporter to use a unique Observation Domain ID (ODID) (IPFIX only), configuring
+ each channel of the "live" profile to represent one exporter, and using the filter
+ keyword "odid" (see the filter syntax for more details and limitations). If the ODID
+ method is not applicable in your situation, you can also use the "exporterip",
+ "exporterport", etc. keywords. However, be aware that these do not make sense in
+ a distributed collector architecture, because all flows are sent to the one active
+ proxy collector and then redistributed by the proxy to subcollectors that perform
+ profiling. From the point of view of any subcollector, the proxy is the exporter,
+ so these ODID replacements do not work as expected. The ODID, on the other hand,
+ always works.
+
+
+ If you want to make sure that your configuration file is ready to use, you can use
+ a tool called ipfixcol-profiles-check. Use "-h" to show all available
+ parameters.
+
+
+
+
+
+ Filter syntax
+
+ The filter syntax is based on the well-known
+ nfdump1
+ tool; note, however, that keywords must be written in lowercase. Any filter consists
+ of one or more expressions expr.
+
+
+
+ Any number of expr can be linked together:
+ expr and expr, expr or expr,
+ not expr and ( expr ).
+
+
+
+ An expression primitive usually consists of a keyword (a name of an Information Element),
+ an optional comparator, and a value. If the comparator is omitted, the equality
+ operator = is used by default. Numeric values may use one of the following
+ supported scaling factors: k, m, g. The factor is 1000.
+
+
+
+ The following comparators comp are supported:
+
+
+
+
+
+ equals sign (=, == or eq)
+
+
+
+
+
+ less than ( or lt)
+
+
+
+
+
+ more than (]]> or gt)
+
+
+
+
+
+ like/binary and ()
+
+
+
+
+
+ Below is the list of the most frequently used filter primitives that are universally
+ supported. If you cannot find the primitive you are looking for, try to use the
+ corresponding
+ nfdump1
+ expression or just use the name of IPFIX Information Element. If you need to preserve
+ compatibility with
+ fdistdump1,
+ you have to use only nfdump expressions!
+
+
+
+
+ IP version
+
+
+ ipv4 or inet4 for IPv4ipv6 or inet6 for IPv6
+
+
+
+
+
+ Protocol
+
+
+ proto ]]>proto ]]>where ]]> is known protocol such as tcp,
+ udp, icmp, icmp6, etc. or a valid protocol number: 6, 17 etc.
+
+
+
+
+
+ IP address
+
+
+ ]]>]]>with ]]> as any valid IPv4 or IPv6 address.
+
+ To check if an IP address is in a known IP list, use: ]]]> ]]]>where ]]> is a space or comma separated list of individual ]]>.
+
+ IP addresses, networks, ports, AS number etc. can be specifically selected by using
+ a direction qualifier, such as src or dst.
+
+
+
+
+
+ Network
+
+
+ Select
+ the IPv4 network a.b.c.d with netmask m.n.r.s.
+
+ /]]>with
+ ]]> as a valid IPv4 or IPv6 network and
+ ]]> as mask bits. The number of mask bits must
+ match the appropriate address family in IPv4 or IPv6. Networks may be abbreviated
+ such as 172.16/16 if they are unambiguous.
+
+
+
+
+
+ Port
+
+
+ ]]>with
+ ]]> as any valid port number. If
+ comp is omitted, '=' is assumed.
+ ]]]>A port can
+ be compared against a known list, where
+ ]]> is a space or comma separated list of
+ individual port numbers.
+
+
+
+
+
+ Flags
+
+
+ ]]>with
+ ]]> as a combination of:
+
+
+ A - ACKS - SYNF - FINR - ResetP - PushU - UrgentX - All flags on
+
+
+ The ordering of the flags is not relevant. Flags not mentioned are treated
+ as don't care. In order to get those flows with only the SYN flag set, use the
+ syntax 'flags S and not flags AFRPU'.
+
+
+
+
+
+ Packets
+
+
+ [scale]]]>To
+ filter for records with a specific packet count.Example: 'packets > 1k'
+
+
+
+
+
+ Bytes
+
+
+ [scale]]]>To
+ filter for records with a specific byte count.Example:
+ 'bytes 46' or ' 100 and bytes < 200]]>'
+
+
+
+
+
+ Packets per second (calculated value)
+
+
+ To
+ filter for flows with specific packets per second.
+
+
+
+
+
+ Duration (calculated value)
+
+
+ To
+ filter for flows with specific duration in milliseconds.
+
+
+
+
+
+ Bits per second (calculated value)
+
+
+ To
+ filter for flows with specific bits per second.
+
+
+
+
+
+ Bytes per packet (calculated value)
+
+
+ To
+ filter for flows with specific bytes per packet.
+
+
+
+
+
+
+ The following expressions are available only when processing live IPFIX records and
+ therefore are not supported by fdistdump.
+
+
+
+
+ Observation Domain ID (ODID)
+
+
+ ]]> ]]]>To
+ filter IPFIX records with a specific Observation Domain ID.
+
+
+
+
+
+ Exporter IP
+
+
+ ]]> ]]]>To
+ filter for exporters connected with specified IP addresses.
+
+
+
+
+
+ Exporter port
+
+
+ ]]> ]]]>To
+ filter for exporters connected with specified port.
+
+
+
+
+
+ Collector IP
+
+
+ ]]> ]]]>To
+ filter for an input IP address of a running collector.
+
+
+
+
+
+ Collector port
+
+
+ ]]> ]]]>To
+ filter for an input port of a running collector.
+
+
+
+
+
+
+ Instead of the identifiers above you can also use any IPFIX Information Element (IE) that is
+ supported by IPFIXcol. These IEs can be easily added to the configuration file of the
+ collector, so even Private Enterprise IEs can also be used for filtering. See its manual
+ page for more information. Just keep in mind that these identifiers are not supported by
+ fdistdump right now.
+
+
+ For example, IPFIX IE for the source port in the transport header is called
+ "sourceTransportPort" and essentially corresponds to the filter expression "src port".
+ Therefore, the expressions sourceTransportPort 80 and
+ src port 80 represent the same filter.
+
+
+
+ Filter examples
+
+ To dump all records of host 192.168.1.2:ip 192.168.1.2
+
+
+
+ To dump all records of network 172.16.0.0/16:net 172.16.0.0/16
+
+
+
+ To dump all port 80 IPv6 connections to any web server:inet6 and proto tcp and ( src port > 1024 and dst port 80 )
+
+
+
+
+ Use-case example
+
+ Let's say that we would like to filter only POP3 flows. We know that POP3 communication
+ uses port 110 (unencrypted) or 995 (encrypted). So we can write:sourceTransportPort == 110 or sourceTransportPort == 995 or destinationTransportPort == 110 or destinationTransportPort == 995
+
+
+
+ Instead of IPFIX IEs, we can use universal identifiers:src port 110 or src port 995 or dst port 110 or dst port 995
+
+
+
+ Source and destination port can be merged:port 110 or port 995
+
+
+
+ This expression can be simplified further:port in [110, 995]
+
+
+
+ All examples above represent the same filter.
+
+
@@ -202,7 +724,9 @@
- ipfixcol1
+ ipfixcol1,
+ ipfixcol-lnfstore-output1,
+ ipfixcol-profilestats-inter1Man pages
diff --git a/plugins/intermediate/profiler/m4/lbr_set_distro.m4 b/plugins/intermediate/profiler/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/intermediate/profiler/m4/lbr_set_distro.m4
+++ b/plugins/intermediate/profiler/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/intermediate/stats/m4/lbr_set_distro.m4 b/plugins/intermediate/stats/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/intermediate/stats/m4/lbr_set_distro.m4
+++ b/plugins/intermediate/stats/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/intermediate/uid/m4/lbr_set_distro.m4 b/plugins/intermediate/uid/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/intermediate/uid/m4/lbr_set_distro.m4
+++ b/plugins/intermediate/uid/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/fastbit/config_struct.h b/plugins/storage/fastbit/config_struct.h
index d7a4b8ae..b0a7f845 100644
--- a/plugins/storage/fastbit/config_struct.h
+++ b/plugins/storage/fastbit/config_struct.h
@@ -110,8 +110,12 @@ struct fastbit_config {
/* size of buffer (number of values)*/
int buff_size;
- /* semaphore for index building thread */
- sem_t sem;
+ /* Handler for the index thread */
+ pthread_t index_thread;
+
+ /* Mutex for index building thread */
+ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
+ pthread_cond_t mutex_cond = PTHREAD_COND_INITIALIZER;
};
#endif /* CONFIG_STRUCT_H_ */
diff --git a/plugins/storage/fastbit/fastbit.cpp b/plugins/storage/fastbit/fastbit.cpp
index 808dbcc8..fb95fefa 100644
--- a/plugins/storage/fastbit/fastbit.cpp
+++ b/plugins/storage/fastbit/fastbit.cpp
@@ -46,7 +46,6 @@ extern "C" {
}
#include
-#include
#include
#include
#include
@@ -67,6 +66,8 @@ extern "C" {
#include "fastbit_element.h"
#include "config_struct.h"
+volatile bool terminate = false;
+
void ipv6_addr_non_canonical(char *str, const struct in6_addr *addr)
{
sprintf(str, "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x",
@@ -137,11 +138,30 @@ void *reorder_index(void *config)
std::string dir;
ibis::part *reorder_part;
ibis::table::stringArray ibis_columns;
+ bool no_dirs = true;
+
+ while (!terminate || !no_dirs) {
+
+ /* Sleep until there is a work to do */
+ pthread_mutex_lock(&conf->mutex);
+ while(!terminate && (*conf->dirs).empty()) {
+ pthread_cond_wait(&conf->mutex_cond, &conf->mutex);
+ }
+
+ /* Nothing to do, start again and check termination flag */
+ if ((*conf->dirs).empty()) {
+ pthread_mutex_unlock(&conf->mutex);
+ no_dirs = true;
+ continue;
+ }
- sem_wait(&(conf->sem));
+ /* Get the dir to process */
+ no_dirs = false;
+ dir = (*conf->dirs).back();
+ (*conf->dirs).pop_back();
+
+ pthread_mutex_unlock(&conf->mutex);
- for (unsigned int i = 0; i < conf->dirs->size(); i++) {
- dir = (*conf->dirs)[i];
/* Reorder partitions */
if (conf->reorder) {
MSG_DEBUG(msg_module, "Reordering: %s", dir.c_str());
@@ -173,7 +193,6 @@ void *reorder_index(void *config)
ibis::fileManager::instance().flushDir(dir.c_str());
}
- sem_post(&(conf->sem));
return NULL;
}
@@ -232,18 +251,16 @@ void update_window_name(struct fastbit_config *conf)
* @param conf Plugin configuration data structure
* @param exporter_ip_addr Exporter IP address, as String
* @param odid Observation domain ID
- * @param close Indicates whether plugin/thread should be closed after flushing all data
*/
void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint32_t odid,
- std::map *templates, bool close)
+ std::map *templates)
{
std::map::iterator table;
- int s;
std::stringstream ss;
- pthread_t index_thread;
std::map*>::iterator exporter_it;
std::map::iterator odid_it;
+ std::string flushed_path;
/* Check whether exporter is listed in data structure */
if ((exporter_it = conf->od_infos->find(exporter_ip_addr)) == conf->od_infos->end()) {
@@ -259,18 +276,17 @@ void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint3
std::string path = odid_it->second.path;
- sem_wait(&(conf->sem));
+	pthread_mutex_lock(&conf->mutex);
{
- conf->dirs->clear();
-
MSG_DEBUG(msg_module, "Flushing data to disk (exporter: %s, ODID: %u)",
odid_it->second.exporter_ip_addr.c_str(), odid);
MSG_DEBUG(msg_module, " > Exported: %u", odid_it->second.flow_watch.exported_flows());
MSG_DEBUG(msg_module, " > Received: %u", odid_it->second.flow_watch.received_flows());
for (table = templates->begin(); table != templates->end(); table++) {
- conf->dirs->push_back(path + ((*table).second)->name() + "/");
- (*table).second->flush(path);
+ if ((*table).second->flush(path, flushed_path) == 0) {
+ conf->dirs->push_back(flushed_path);
+ }
(*table).second->reset_rows();
}
@@ -280,30 +296,16 @@ void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint3
odid_it->second.flow_watch.reset_state();
}
- sem_post(&(conf->sem));
-
- if ((s = pthread_create(&index_thread, NULL, reorder_index, conf)) != 0) {
- MSG_ERROR(msg_module, "pthread_create");
- }
-
- if (close) {
- if ((s = pthread_join(index_thread, NULL)) != 0) {
- MSG_ERROR(msg_module, "pthread_join");
- }
- } else {
- if ((s = pthread_detach(index_thread)) != 0) {
- MSG_ERROR(msg_module, "pthread_detach");
- }
- }
+ pthread_mutex_unlock(&conf->mutex);
+ pthread_cond_signal(&conf->mutex_cond);
}
/**
* \brief Flushes the data for *all* exporters and ODIDs
*
* @param conf Plugin configuration data structure
- * @param close Indicates whether plugin/thread should be closed after flushing all data
*/
-void flush_all_data(struct fastbit_config *conf, bool close)
+void flush_all_data(struct fastbit_config *conf)
{
std::map*> *od_infos = conf->od_infos;
std::map*>::iterator exporter_it;
@@ -312,7 +314,7 @@ void flush_all_data(struct fastbit_config *conf, bool close)
/* Iterate over all exporters and ODIDs and flush data */
for (exporter_it = od_infos->begin(); exporter_it != od_infos->end(); ++exporter_it) {
for (odid_it = exporter_it->second->begin(); odid_it != exporter_it->second->end(); ++odid_it) {
- flush_data(conf, exporter_it->first, odid_it->first, &(odid_it->second.template_info), close);
+ flush_data(conf, exporter_it->first, odid_it->first, &(odid_it->second.template_info));
}
}
}
@@ -429,11 +431,6 @@ int process_startup_xml(char *params, struct fastbit_config *c)
c->window_dir = c->prefix + "/";
}
-
- if (sem_init(&(c->sem), 0, 1)) {
- MSG_ERROR(msg_module, "Semaphore initialization error");
- return 1;
- }
} else {
return 1;
}
@@ -481,6 +478,12 @@ int storage_init(char *params, void **config)
/* On startup we expect to write to new directory */
c->new_dir = true;
+
+ /* Create index thread */
+ if (pthread_create(&c->index_thread, NULL, reorder_index, c) != 0) {
+ MSG_ERROR(msg_module, "pthread_create");
+ }
+
return 0;
}
@@ -578,7 +581,7 @@ int store_packet(void *config, const struct ipfix_message *ipfix_msg,
old_templates->insert(std::pair(table->first, table->second));
/* Flush data */
- flush_data(conf, exporter_ip_addr, odid, old_templates, false);
+ flush_data(conf, exporter_ip_addr, odid, old_templates);
/* Remove rewritten template */
delete table->second;
@@ -613,7 +616,7 @@ int store_packet(void *config, const struct ipfix_message *ipfix_msg,
if (flush_records || flush_time) {
/* Flush data for all exporters and ODIDs */
- flush_all_data(conf, false);
+ flush_all_data(conf);
/* Time management differs between flush policies (records vs. time) */
if (flush_records) {
@@ -678,7 +681,7 @@ int storage_close(void **config)
for (exporter_it = od_infos->begin(); exporter_it != od_infos->end(); ++exporter_it) {
for (odid_it = exporter_it->second->begin(); odid_it != exporter_it->second->end(); ++odid_it) {
templates = &(odid_it->second.template_info);
- flush_data(conf, exporter_it->first, odid_it->first, templates, true);
+ flush_data(conf, exporter_it->first, odid_it->first, templates);
/* Free templates */
for (table = templates->begin(); table != templates->end(); table++) {
@@ -689,6 +692,16 @@ int storage_close(void **config)
delete (*exporter_it).second;
}
+ /* Tell index thread to terminate */
+ terminate = true;
+ pthread_cond_signal(&conf->mutex_cond);
+
+ MSG_INFO(msg_module, "Waiting for the index thread to finish");
+ if (pthread_join(conf->index_thread, NULL) != 0) {
+ MSG_ERROR(msg_module, "pthread_join");
+ }
+ MSG_INFO(msg_module, "Index thread finished");
+
/* Free config structure */
delete od_infos;
delete conf->index_en_id;
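The fastbit changes above replace the semaphore and per-flush threads with one long-lived index thread fed through a mutex and condition variable. A minimal sketch of the same single-worker producer/consumer shape (a Python stand-in, not the plugin's actual code):

```python
import threading

# Shared state, analogous to conf->dirs, terminate, conf->mutex, conf->mutex_cond
dirs, terminate = [], False
lock = threading.Lock()
cond = threading.Condition(lock)
processed = []

def index_worker():
    """Mirrors reorder_index(): sleep until work or termination, drain queue."""
    while True:
        with cond:
            while not terminate and not dirs:
                cond.wait()
            if not dirs:        # woken for termination with no work left
                return
            d = dirs.pop()      # take one directory under the lock
        processed.append(d)     # heavy work (reorder/index) happens unlocked

t = threading.Thread(target=index_worker)
t.start()

# Producer side, analogous to flush_data(): enqueue and signal
with cond:
    dirs.append("window1/")
    cond.notify()

# Shutdown, analogous to storage_close(): set flag, signal, join
with cond:
    terminate = True
    cond.notify()
t.join()
```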
diff --git a/plugins/storage/fastbit/fastbit_table.cpp b/plugins/storage/fastbit/fastbit_table.cpp
index 60073f7d..9e05ce3a 100644
--- a/plugins/storage/fastbit/fastbit_table.cpp
+++ b/plugins/storage/fastbit/fastbit_table.cpp
@@ -263,17 +263,17 @@ int template_table::store(ipfix_data_set *data_set, std::string path, bool new_d
return record_cnt;
}
-void template_table::flush(std::string path)
+int template_table::flush(std::string path, std::string &flushed_path)
{
/* Check whether there is something to flush */
if (_rows_count <= 0) {
- return;
+ return -1;
}
/* Check directory */
_rows_in_window += _rows_count;
if (this->dir_check(path + _name, this->_new_dir) != 0) {
- return;
+ return -2;
}
/* Flush data */
@@ -286,11 +286,15 @@ void template_table::flush(std::string path)
_rows_count = 0;
_rows_in_window = 0;
+ flushed_path = path + _name;
+
/* Data on disk is consistent; try to go back to original name */
if (this->_orig_name[0] != '\0') {
strcpy(this->_name, this->_orig_name);
this->_orig_name[0] = '\0';
}
+
+ return 0;
}
int template_table::parse_template(struct ipfix_template *tmp, struct fastbit_config *config)
diff --git a/plugins/storage/fastbit/fastbit_table.h b/plugins/storage/fastbit/fastbit_table.h
index ff97dc4c..056c5e2b 100644
--- a/plugins/storage/fastbit/fastbit_table.h
+++ b/plugins/storage/fastbit/fastbit_table.h
@@ -128,9 +128,11 @@ class template_table
/**
* \brief Flush data to disk and clean memory
*
- * @param path path to direcotry where should be data flushed
+ * @param path path to directory where should be data flushed
+ * @param flushed_path path to directory where data was actually written
+ * @return 0 on success, negative value otherwise
*/
- void flush(std::string path);
+ int flush(std::string path, std::string &flushed_path);
time_t get_first_transmission() {
return _first_transmission;
diff --git a/plugins/storage/fastbit/m4/lbr_set_distro.m4 b/plugins/storage/fastbit/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/fastbit/m4/lbr_set_distro.m4
+++ b/plugins/storage/fastbit/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4 b/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4
+++ b/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/json/Makefile.am b/plugins/storage/json/Makefile.am
index a9cf1886..a94ac2ca 100644
--- a/plugins/storage/json/Makefile.am
+++ b/plugins/storage/json/Makefile.am
@@ -21,7 +21,8 @@ ipfixcol_json_output_la_SOURCES = \
Sender.cpp Sender.h \
Printer.cpp Printer.h \
Server.cpp Server.h \
- File.cpp File.h
+ File.cpp File.h \
+ branchlut2.h
if NEED_KAFKA
ipfixcol_json_output_la_SOURCES += Kafka.cpp Kafka.h
diff --git a/plugins/storage/json/README.md b/plugins/storage/json/README.md
index d61ae2aa..3cba6837 100644
--- a/plugins/storage/json/README.md
+++ b/plugins/storage/json/README.md
@@ -46,7 +46,7 @@ Default plugin configuration in **internalcfg.xml**
Or as `ipfixconf` output:
```
- Plugin type Name/Format Process/Thread File
+ Plugin type Name/Format Process/Thread File
----------------------------------------------------------------------------
storage json json /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
```
@@ -59,11 +59,13 @@ Here is an example of configuration in **startup.xml**:
jsonno
+ noformattedformattedformattedyesyes
+ noipfix.
+
+ odid
+
+ Add source ODID to the output (yes/no) [default == no].
+
+
+
tcpFlagsConvert TCP flags to formatted style of dots and letters (formatted) or to a number (raw) [default == raw].
-
+
timestamp
@@ -172,6 +181,13 @@
+
+ detailedInfo
+
+ Add detailed info about the IPFIX message (export time, sequence number, ...) to each record under "ipfixcol." prefix. (yes/no) [default == no].
+
+
+
prefix
@@ -250,7 +266,7 @@
timeWindow
- Specifies the time interval in seconds to rotate files [default == 300].
+ Specifies the time interval in seconds to rotate files, minimum is 60 [default == 300].
diff --git a/plugins/storage/json/json.cpp b/plugins/storage/json/json.cpp
index 76c5f460..ec55416a 100644
--- a/plugins/storage/json/json.cpp
+++ b/plugins/storage/json/json.cpp
@@ -63,23 +63,28 @@ IPFIXCOL_API_VERSION;
static const char *msg_module = "json_storage";
-void process_startup_xml(struct json_conf *conf, char *params)
+void process_startup_xml(struct json_conf *conf, char *params)
{
pugi::xml_document doc;
pugi::xml_parse_result result = doc.load(params);
-
+
if (!result) {
throw std::invalid_argument(std::string("Error when parsing parameters: ") + result.description());
}
/* Get configuration */
pugi::xpath_node ie = doc.select_single_node("fileWriter");
-
+
/* Check metadata processing */
std::string meta = ie.node().child_value("metadata");
conf->metadata = (strcasecmp(meta.c_str(), "yes") == 0 || meta == "1" ||
strcasecmp(meta.c_str(), "true") == 0);
+ /* Check ODID processing */
+ std::string odid = ie.node().child_value("odid");
+ conf->odid = (strcasecmp(odid.c_str(), "yes") == 0 || odid == "1" ||
+ strcasecmp(odid.c_str(), "true") == 0);
+
/* Format of TCP flags */
std::string tcpFlags = ie.node().child_value("tcpFlags");
conf->tcpFlags = (strcasecmp(tcpFlags.c_str(), "formated") == 0) ||
@@ -110,6 +115,14 @@ void process_startup_xml(struct json_conf *conf, char *params)
conf->whiteSpaces = false;
}
+ /* Detailed information in records */
+ std::string detailedInfo = ie.node().child_value("detailedInfo");
+ conf->detailedInfo = false;
+ if (strcasecmp(detailedInfo.c_str(), "true") == 0 || detailedInfo == "1" ||
+ strcasecmp(detailedInfo.c_str(), "yes") == 0) {
+ conf->detailedInfo = true;
+ }
+
/* Prefix for IPFIX elements */
	/* Set default prefix */
conf->prefix = "ipfix.";
@@ -153,21 +166,21 @@ void process_startup_xml(struct json_conf *conf, char *params)
/* Plugin initialization */
extern "C"
int storage_init (char *params, void **config)
-{
+{
struct json_conf *conf;
try {
/* Create configuration */
conf = new struct json_conf;
-
+
/* Create storage */
conf->storage = new Storage();
/* Process params */
process_startup_xml(conf, params);
-
+
/* Configure metadata processing */
conf->storage->setMetadataProcessing(conf->metadata);
-
+
/* Save configuration */
*config = conf;
} catch (std::exception &e) {
@@ -180,7 +193,7 @@ int storage_init (char *params, void **config)
return 1;
}
-
+
MSG_DEBUG(msg_module, "initialized");
return 0;
}
@@ -192,7 +205,7 @@ int store_packet (void *config, const struct ipfix_message *ipfix_msg,
{
(void) template_mgr;
struct json_conf *conf = (struct json_conf *) config;
-
+
conf->storage->storeDataSets(ipfix_msg, conf);
return 0;
}
@@ -209,15 +222,15 @@ int storage_close (void **config)
{
MSG_DEBUG(msg_module, "CLOSING");
struct json_conf *conf = (struct json_conf *) *config;
-
+
/* Destroy storage */
delete conf->storage;
-
+
/* Destroy configuration */
delete conf;
-
+
*config = NULL;
-
+
return 0;
}
diff --git a/plugins/storage/json/json.h b/plugins/storage/json/json.h
index 0ffca9c5..74b12bd3 100644
--- a/plugins/storage/json/json.h
+++ b/plugins/storage/json/json.h
@@ -56,12 +56,14 @@ class Storage;
*/
struct json_conf {
bool metadata;
+ bool odid; /**< Add ODID to json output */
Storage *storage;
bool tcpFlags; /**< TCP flags format - true(formatted), false(RAW) */
bool timestamp; /**< timestamp format - true(formatted), false(UNIX) */
bool protocol; /**< protocol format - true(RAW), false(formatted) */
bool ignoreUnknown; /**< Ignore unknown elements */
bool whiteSpaces; /**< Convert white spaces in strings (do not skip) */
+	bool detailedInfo;   /**< Add detailed info about the IPFIX message to each record */
std::string prefix; /**< Prefix for IPFIX elements */
};
diff --git a/plugins/storage/json/m4/lbr_set_distro.m4 b/plugins/storage/json/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/json/m4/lbr_set_distro.m4
+++ b/plugins/storage/json/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/lnfstore/configure.ac b/plugins/storage/lnfstore/configure.ac
index 6f38b903..021ccd32 100644
--- a/plugins/storage/lnfstore/configure.ac
+++ b/plugins/storage/lnfstore/configure.ac
@@ -1,6 +1,6 @@
AC_PREREQ([2.60])
# Process this file with autoconf to produce a configure script.
-AC_INIT([ipfixcol-lnfstore-output], [0.3.3])
+AC_INIT([ipfixcol-lnfstore-output], [0.3.4])
AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability])
LT_PREREQ([2.2])
LT_INIT([dlopen disable-static])
diff --git a/plugins/storage/lnfstore/m4/lbr_set_distro.m4 b/plugins/storage/lnfstore/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/lnfstore/m4/lbr_set_distro.m4
+++ b/plugins/storage/lnfstore/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/nfdump/m4/lbr_set_distro.m4 b/plugins/storage/nfdump/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/nfdump/m4/lbr_set_distro.m4
+++ b/plugins/storage/nfdump/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/postgres/m4/lbr_set_distro.m4 b/plugins/storage/postgres/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/postgres/m4/lbr_set_distro.m4
+++ b/plugins/storage/postgres/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/statistics/m4/lbr_set_distro.m4 b/plugins/storage/statistics/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/statistics/m4/lbr_set_distro.m4
+++ b/plugins/storage/statistics/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/unirec/configure.ac b/plugins/storage/unirec/configure.ac
index c9784fc4..66fe4c9a 100644
--- a/plugins/storage/unirec/configure.ac
+++ b/plugins/storage/unirec/configure.ac
@@ -38,7 +38,7 @@
AC_PREREQ([2.60])
# Process this file with autoconf to produce a configure script.
-AC_INIT([ipfixcol-unirec-output], [0.2.12])
+AC_INIT([ipfixcol-unirec-output], [0.2.15])
AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability])
LT_PREREQ([2.2])
LT_INIT([disable-static])
diff --git a/plugins/storage/unirec/m4/lbr_set_distro.m4 b/plugins/storage/unirec/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/plugins/storage/unirec/m4/lbr_set_distro.m4
+++ b/plugins/storage/unirec/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/plugins/storage/unirec/unirec-elements.txt b/plugins/storage/unirec/unirec-elements.txt
index 8c820ae7..76245224 100644
--- a/plugins/storage/unirec/unirec-elements.txt
+++ b/plugins/storage/unirec/unirec-elements.txt
@@ -30,23 +30,17 @@ HB_TYPE uint8 1 e8057id700 TLS conten
HB_DIR uint8 1 e8057id701 Heartbeat request/response byte
HB_SIZE_MSG uint16 2 e8057id702 Heartbeat message size
HB_SIZE_PAYLOAD uint16 2 e8057id703 Heartbeat payload size
-HTTP_REQUEST_METHOD_ID uint32 4 e16982id500 HTTP request method id
-HTTP_REQUEST_HOST string -1 e16982id501 HTTP request host
-HTTP_REQUEST_URL string -1 e16982id502 HTTP request url
-HTTP_REQUEST_AGENT_ID uint32 4 e16982id503 HTTP request agent id
-HTTP_REQUEST_AGENT string -1 e16982id504 HTTP request agent
-HTTP_REQUEST_REFERER string -1 e16982id505 HTTP referer
-HTTP_RESPONSE_STATUS_CODE uint32 4 e16982id506 HTTP response status code
-HTTP_RESPONSE_CONTENT_TYPE string -1 e16982id507 HTTP response content type
-HTTP_SDM_REQUEST_METHOD_ID uint32 4 e8057id800 Used method
-HTTP_SDM_REQUEST_HOST string -1 e8057id801 Host
-HTTP_SDM_REQUEST_URL string -1 e8057id802 URL
-HTTP_SDM_REQUEST_REFERER string -1 e8057id803 Referer
-HTTP_SDM_REQUEST_AGENT string -1 e8057id804 User-agent
-HTTP_SDM_REQUEST_RANGE bytes -1 e8057id821 Range
-HTTP_SDM_RESPONSE_STATUS_CODE uint32 4 e8057id805 Status coce converted into integer
-HTTP_SDM_RESPONSE_CONTENT_TYPE string -1 e8057id806 Content-type
-HTTP_SDM_RESPONSE_TIME uint64 8 e8057id807 Application response time
+# HTTP elements from Flowmon HTTP plugin in MUNI PEN, and CESNET sdm-http and sdm-https plugins in CESNET PEN
+HTTP_REQUEST_METHOD_ID uint32 4 e16982id500,e8057id800 HTTP request method id
+HTTP_REQUEST_HOST string -1 e16982id501,e8057id801,e8057id808 HTTP(S) request host
+HTTP_REQUEST_URL string -1 e16982id502,e8057id802 HTTP request url
+HTTP_REQUEST_AGENT_ID uint32 4 e16982id503 HTTP request agent id
+HTTP_REQUEST_AGENT string -1 e16982id504,e8057id804 HTTP request agent
+HTTP_REQUEST_REFERER string -1 e16982id505,e8057id803 HTTP referer
+HTTP_RESPONSE_STATUS_CODE uint32 4 e16982id506,e8057id805 HTTP response status code
+HTTP_RESPONSE_CONTENT_TYPE string -1 e16982id507,e8057id806 HTTP response content type
+HTTP_REQUEST_RANGE bytes -1 e8057id821 HTTP range
+HTTP_RESPONSE_TIME uint64 8 e8057id807,e8057id809 HTTP(S) application response time
IPV6_TUN_TYPE uint8 1 e16982id405 IPv6 tunnel type
SMTP_COMMAND_FLAGS uint32 4 e8057id810 SMTP command flags
SMTP_MAIL_CMD_COUNT uint32 4 e8057id811 SMTP MAIL command count
@@ -89,4 +83,4 @@ SIP_VIA string -1 e8057id105 SIP VIA
SIP_USER_AGENT string -1 e8057id106 SIP user agent
SIP_REQUEST_URI string -1 e8057id107 SIP request uri
SIP_CSEQ string -1 e8057id108 SIP CSeq
-VENOM uint8 1 e8057id1001 Venom rootkit detection
\ No newline at end of file
+VENOM uint8 1 e8057id1001 Venom rootkit detection
diff --git a/plugins/storage/unirec/unirec.c b/plugins/storage/unirec/unirec.c
index 134ef9bd..7cadeea9 100644
--- a/plugins/storage/unirec/unirec.c
+++ b/plugins/storage/unirec/unirec.c
@@ -364,7 +364,8 @@ static uint16_t process_record(char *data_record, struct ipfix_template *templat
break;
case UNIREC_FIELD_DBF:
// Handle DIR_BIT_FIELD
- *(uint8_t*)(conf->ifc[i].buffer + matchField->offset_ar[i]) = ((*(uint16_t*)(data_record + offset + size_length)) >> 8) & 0x1;
+ // Just read the least significant byte directly and use only the least significant bit
+ *(uint8_t*)(conf->ifc[i].buffer + matchField->offset_ar[i]) = (*(uint8_t*)(data_record + offset + (length - 1))) & 0x1;
break;
case UNIREC_FIELD_LBF:
// Handle LINK_BIT_FIELD, is BIG ENDIAN but we are using only LSB
@@ -393,8 +394,12 @@ static uint16_t process_record(char *data_record, struct ipfix_template *templat
}
} else {
// Dynamic element
- matchField->valueSize = length;
matchField->value = (void*) (data_record + offset + size_length);
+ if (matchField->unirec_type == 0) { // string value should be trimmed
+ matchField->valueSize = strnlen(data_record + offset + size_length, length);
+ } else {
+ matchField->valueSize = length;
+ }
matchField->valueFilled = 1;
// Fill required count for Unirec where this element is required
for (int i = 0; i < conf->ifc_count; i++) {
@@ -638,8 +643,8 @@ static int8_t getUnirecFieldTypeFromIpfixId(ipfixElement ipfix_el)
} else if (en == 0 && (id == 152 || id == 153)) {
// Timestamps
return UNIREC_FIELD_TS;
- } else if (en == 0 && id == 10) {
- // DIR_BIT_FIELD
+ } else if (en == 0 && (id == 10 || id == 14)) {
+ // DIR_BIT_FIELD (in/out interface numbers)
return UNIREC_FIELD_DBF;
} else if (en == 0 && id == 405) {
// LINK_BIT_FIELD
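The DIR_BIT_FIELD fix above relies on IPFIX encoding integers in network byte order (big endian): for a field of any length, the least significant bit lives in the last byte, so reading byte `[length - 1]` and masking with `0x1` works regardless of field width, unlike the fixed 16-bit shifted read it replaces. A sketch with a hypothetical helper:

```python
# Extract the direction bit from a big-endian (network byte order) field of
# any length: the least significant bit is always in the LAST byte.
def dir_bit(field: bytes) -> int:
    return field[-1] & 0x1

assert dir_bit(bytes.fromhex("01")) == 1        # uint8 encoding
assert dir_bit(bytes.fromhex("0001")) == 1      # uint16 encoding
assert dir_bit(bytes.fromhex("00000000")) == 0  # uint32 encoding
```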
diff --git a/tools/fbitconvert/m4/lbr_set_distro.m4 b/tools/fbitconvert/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/tools/fbitconvert/m4/lbr_set_distro.m4
+++ b/tools/fbitconvert/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/tools/fbitdump/changelog.md b/tools/fbitdump/changelog.md
index 4c544f82..099e806f 100644
--- a/tools/fbitdump/changelog.md
+++ b/tools/fbitdump/changelog.md
@@ -1,5 +1,12 @@
**Future release:**
+**Version 0.4.4:**
+* Fixed blob hex output
+* Fixed TCP flags example in man page
+
+**Version 0.4.3:**
+* Fixed configuration for CESNET SIP plugin
+
**Version 0.4.2:**
* Fixed markdown syntax
* Support DocBook XSL Stylesheets v1.79
diff --git a/tools/fbitdump/configure.ac b/tools/fbitdump/configure.ac
index fe561344..8e5d28e0 100644
--- a/tools/fbitdump/configure.ac
+++ b/tools/fbitdump/configure.ac
@@ -39,7 +39,7 @@
AC_PREREQ([2.60])
# Process this file with autoconf to produce a configure script.
-AC_INIT([fbitdump], [0.4.2])
+AC_INIT([fbitdump], [0.4.4])
LT_INIT([dlopen disable-static])
AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability])
diff --git a/tools/fbitdump/fbitdump.xml.template b/tools/fbitdump/fbitdump.xml.template
index 5dd3e2eb..c7c0cc2a 100644
--- a/tools/fbitdump/fbitdump.xml.template
+++ b/tools/fbitdump/fbitdump.xml.template
@@ -1267,84 +1267,84 @@
- SIP Method
- %csipm
+ SIP Msg Type
+ %csipmt9-
- e8057id830
+ e8057id100SIP Status Code
- %csips
+ %csipsc3-
- e8057id831
+ e8057id101
- SIP Request URI
- %csipu
+ SIP Call ID
+ %csipci64-
- e8057id832
+ e8057id102
- SIP From
- %csipf
+ SIP Calling Party
+ %csipsrc64-
- e8057id833
+ e8057id103
- SIP To
- %csipt
+ SIP Called Party
+ %csipdst64-
- e8057id834
+ e8057id104
- SIP Contact
- %csipc
+ SIP Via
+ %csipv64-
- e8057id835
+ e8057id105
- SIP Via
- %csipv
+ SIP User Agent
+ %csipua64-
- e8057id836
+ e8057id106
- SIP Route
- %csipr
+ SIP Request URI
+ %csipru64-
- e8057id837
+ e8057id107
- SIP Record Route
- %csiprr
+ SIP Cseq
+ %csipcseq64-
- e8057id838
+ e8057id108
@@ -1702,7 +1702,7 @@
sip4-cesnet
- %ts %td %pr %sp %dp %sa4 %da4 %pkt %byt %fl %csipm %csipc %csipu %csipf %csipt %csipc %csipv %csipr %csiprr
+ %ts %td %pr %sp %dp %sa4 %da4 %pkt %byt %fl %csipmt %csipsc %csipci %csipsrc %csipdst %csipv %csipua %csipru %csipcseqvoip
@@ -1745,6 +1745,11 @@
@pkgdatadir@/plugins/sip_method.so1
+
+ sip_msg_type
+ @pkgdatadir@/plugins/sip_msg_type.so
+ 1
+ dns_rcode@pkgdatadir@/plugins/dns_rcode.so
diff --git a/tools/fbitdump/m4/lbr_set_distro.m4 b/tools/fbitdump/m4/lbr_set_distro.m4
index 0de818ac..a2f9878c 100644
--- a/tools/fbitdump/m4/lbr_set_distro.m4
+++ b/tools/fbitdump/m4/lbr_set_distro.m4
@@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO],
# Autodetect current distribution
if test -f /etc/redhat-release; then
DISTRO=redhat
-elif test -f /etc/SuSE-release; then
+elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then
DISTRO=suse
elif test -f /etc/mandrake-release; then
DISTRO='mandrake'
diff --git a/tools/fbitdump/man/fbitdump.dbk b/tools/fbitdump/man/fbitdump.dbk
index 21f349e4..5a0a64af 100644
--- a/tools/fbitdump/man/fbitdump.dbk
+++ b/tools/fbitdump/man/fbitdump.dbk
@@ -366,9 +366,9 @@
%column is an element alias prefixed with %. %column element can be a computed value, that is a value that is not directly stored.
Alternately it is possible to specify the element name in e[x]id[y] format, where [x] means enterprise number and [y] element ID.
It is also possible to specify a group of columns (e.g., %port for source and destination port)
- It is also possible to mask %column with binary and (&) or binary or (|). Following example select all flows with SYN flag:
+ It is also possible to mask %column with binary and (&) or binary or (|). Following example selects all flows with SYN flag:
- %flg | S > 0
+ %flg & S > 0
+
+ cmp is one of '=', '==', '<', '>', '<=', '>=', '!='.
When filtering by string, '%' mark can be automatically inserted to the end or/and beginning of string value according to cmp:
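The man-page fix above changes the SYN example from `|` to `&`: ORing a non-zero flag constant into `%flg` is always greater than zero, so the old form matched every flow, while ANDing keeps only flows whose flag bits include SYN. A sketch with standard TCP flag bit values (SYN = 0x02, ACK = 0x10; the flag encoding is an assumption for illustration):

```python
# Keep only flows whose TCP flags include SYN, mirroring "%flg & S > 0"
SYN, ACK = 0x02, 0x10
flows = [SYN, SYN | ACK, ACK]                   # SYN, SYN+ACK, pure ACK
syn_flows = [f for f in flows if (f & SYN) > 0]  # masks out non-SYN flows
```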
diff --git a/tools/fbitdump/src/Values.cpp b/tools/fbitdump/src/Values.cpp
index 9f0108b3..1cca02b9 100644
--- a/tools/fbitdump/src/Values.cpp
+++ b/tools/fbitdump/src/Values.cpp
@@ -145,7 +145,7 @@ std::string Values::toString(bool plainNumbers) const
/* Convert the value to hexa */
for (uint64_t i=0; i < this->opaque.size(); i++) {
- ss << std::setw(2) << static_cast((this->opaque.address()[i]));
+ ss << std::setw(2) << static_cast((uint8_t)(this->opaque.address()[i]));
}
valStr = ss.str();
diff --git a/tools/fbitdump/src/plugins/Makefile.am b/tools/fbitdump/src/plugins/Makefile.am
index 90a27752..1dcc5ab2 100644
--- a/tools/fbitdump/src/plugins/Makefile.am
+++ b/tools/fbitdump/src/plugins/Makefile.am
@@ -2,7 +2,7 @@ ACLOCAL_AMFLAGS = -I m4
pluginsdir = $(datadir)/fbitdump/plugins
-plugins_LTLIBRARIES = httprt.la http_status_code.la sip_method.la dns_rcode.la tls_csuites.la tls_version.la tls_csuites_array.la voip_type.la voip_rtpcodec.la smtp_statuscode.la smtp_command.la mac.la multiplier.la
+plugins_LTLIBRARIES = httprt.la http_status_code.la sip_method.la sip_msg_type.la dns_rcode.la tls_csuites.la tls_version.la tls_csuites_array.la voip_type.la voip_rtpcodec.la smtp_statuscode.la smtp_command.la mac.la multiplier.la
httprt_la_SOURCES= httprt.c
httprt_la_LDFLAGS= -shared -module -avoid-version
@@ -13,6 +13,9 @@ http_status_code_la_LDFLAGS= -shared -module -avoid-version
sip_method_la_SOURCES= sip_method.c
sip_method_la_LDFLAGS= -shared -module -avoid-version
+sip_msg_type_la_SOURCES= sip_msg_type.c
+sip_msg_type_la_LDFLAGS= -shared -module -avoid-version
+
dns_rcode_la_SOURCES= dns_rcode.c
dns_rcode_la_LDFLAGS= -shared -module -avoid-version
diff --git a/tools/fbitdump/src/plugins/sip_msg_type.c b/tools/fbitdump/src/plugins/sip_msg_type.c
new file mode 100644
index 00000000..f8586bc4
--- /dev/null
+++ b/tools/fbitdump/src/plugins/sip_msg_type.c
@@ -0,0 +1,89 @@
+#define _GNU_SOURCE
+#include
+#include
+#include "plugin_header.h"
+
+typedef struct msg_type_s {
+ int code;
+ char *name;
+} msg_type_t;
+
+static const msg_type_t msg_types[] = {
+ { 0, "Invalid" },
+ { 1, "Invite" },
+ { 2, "Ack" },
+ { 3, "Cancel" },
+ { 4, "Bye" },
+ { 5, "Register" },
+ { 6, "Options" },
+ { 7, "Publish" },
+ { 8, "Notify" },
+ { 9, "Info" },
+ { 10, "Subscribe" },
+ { 99, "Status" },
+ { 100, "Trying" },
+ { 101, "Dial Established" },
+ { 180, "Ringing" },
+ { 183, "Session Progress" },
+ { 200, "OK" },
+ { 400, "Bad Request" },
+ { 401, "Unauthorized" },
+ { 403, "Forbidden" },
+ { 404, "Not Found" },
+ { 407, "Proxy Auth Required" },
+ { 486, "Busy Here" },
+ { 487, "Request Canceled" },
+ { 500, "Internal Error" },
+ { 603, "Decline" },
+ { 999, "Undefined" }
+};
+
+#define MSG_CNT (sizeof(msg_types) / sizeof(msg_types[0]))
+
+char *info()
+{
+ return \
+"Converts SIP message type description to code and vice versa.\n \
+e.g. \"Ringing\" -> 180";
+}
+
+void format(const plugin_arg_t * arg, int plain_numbers, char out[PLUGIN_BUFFER_SIZE], void *conf)
+{
+ char *str = NULL;
+ char num[15];
+ int i, size = MSG_CNT;
+
+ for (i = 0; i < size; ++i) {
+ if (msg_types[i].code == arg->val[0].uint32) {
+ str = msg_types[i].name;
+ break;
+ }
+ }
+
+ if (str == NULL) {
+ snprintf(num, sizeof(num), "%u", arg->val[0].uint32);
+ str = num;
+ }
+
+ snprintf(out, PLUGIN_BUFFER_SIZE, "%s", str);
+}
+
+void parse(char *input, char out[PLUGIN_BUFFER_SIZE], void *conf)
+{
+ int code, i, size = MSG_CNT;
+
+ for (i = 0; i < size; ++i) {
+ if (!strcasecmp(input, msg_types[i].name)) {
+ code = msg_types[i].code;
+ break;
+ }
+ }
+
+ // Return empty string if SIP message type description was not found
+ if (i == size) {
+ snprintf(out, PLUGIN_BUFFER_SIZE, "%s", "");
+ return;
+ }
+
+ snprintf(out, PLUGIN_BUFFER_SIZE, "%d", code);
+}
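A hypothetical Python mirror of the plugin's `format()`/`parse()` pair above, showing the intended round trip: unknown codes are printed as the bare number, and unknown names parse to an empty string (the table below is a truncated stand-in for the plugin's full `msg_types` array):

```python
# Truncated stand-in for the plugin's msg_types[] lookup table
MSG_TYPES = {1: "Invite", 4: "Bye", 180: "Ringing", 200: "OK"}

def format_code(code: int) -> str:
    """Code -> name; falls back to the numeric string, like format()."""
    return MSG_TYPES.get(code, str(code))

def parse_name(name: str) -> str:
    """Name -> code string, case-insensitively; empty string if unknown,
    like parse()."""
    for code, text in MSG_TYPES.items():
        if text.lower() == name.lower():
            return str(code)
    return ""
```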
diff --git a/tools/fbitdump/src/typedefs.h b/tools/fbitdump/src/typedefs.h
index 57eca2c9..74803390 100644
--- a/tools/fbitdump/src/typedefs.h
+++ b/tools/fbitdump/src/typedefs.h
@@ -46,7 +46,7 @@
#include