diff --git a/README.md b/README.md
index 121baa64..5478a263 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,14 @@
 # IPFIXcol framework
+> **:warning: [IPFIXcol2](https://github.com/CESNET/ipfixcol2) has been released!**
+>
+> The next generation of the collector is more stable, up to 2x faster, and adds support
+> for new features (e.g. biflow, structured data types, etc.). The code
+> was completely rewritten and some plugins might not be available.
+>
+> Since the release of the new collector, this old framework is **not** supported anymore!
+> Please, consider upgrading to the [new release](https://github.com/CESNET/ipfixcol2).
+
-
+This section describes parts of the XML configuration file. An example configuration for an email
+subprofile can be found in a section below.
+Keep in mind that profile and channel names must match a format that corresponds to identifiers
+in the C language. In other words, a name can contain only letters (both uppercase and lowercase),
+digits and underscores. The first character of a name must be either a letter or an underscore.
+Consequently, the names must match the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`.
+
+Warning: Using complicated filters and multiple nested profiles has a significant impact on
+the throughput of the collector!
+
+### Profile (\<profile\>)
+```xml
+<profile name="PROFILE_NAME">
+    <type>...</type>
+    <directory>...</directory>
+    <channelList>...</channelList>
+    <subprofileList>...</subprofileList>
+</profile>
+```
+Each profile has a name attribute (i.e. `name`) for identification among other profiles.
+The attribute must be unique only among the group of profiles that belong to a common
+parent profile. The definition of each profile must contain exactly one instance of each of
+the following elements:
+
+- `<type>` \
+  Profile type ("normal" or "shadow"). A normal profile means that IPFIXcol plugins should
+  store all valuable data (usually flow records and metadata). On the other hand,
+  for shadow profiles, the plugins should store only metadata. For example, in the case of
+  the lnfstore plugin, only flows of normal profiles are stored; others are ignored.
+- `<directory>` \
+  The absolute path to a directory. All data records and metadata that belong to the profile
+  and its channels will be stored here. The directory MUST be unique for each profile!
+- `<channelList>` \
+  List of one or more channels (see the section below about channels).
+
+Optionally, each profile can contain at most one definition of the element:
+- `<subprofileList>` \
+  List of subprofiles that belong to the profile. The list can be empty.
+
+### Channel \<channel\>
+```xml
+<channel name="CHANNEL_NAME">
+    <sourceList>
+        <source>...</source>
+        <source>...</source>
+    </sourceList>
+    <filter>...</filter>
+</channel>
+```
+
+Each channel has a name attribute (i.e. `name`) for unique identification amongst other
+channels within a profile. Each channel must have exactly one definition of:
+- `<sourceList>` \
+  List of flow sources. In this case, a source of records should not be confused with
+  an IPFIX/NetFlow exporter. A source is basically a channel from the parent profile
+  from which this channel will receive flow records. Each source must be specified in
+  a `<source>` element. \
+  If the channel receives data from all parent channels, the list of channels can be replaced
+  with a single source: `<source>*</source>`. Channels in the "live"
+  profile always have to use this notation.
+
+Each channel within any "shadow" profile must receive data from all channels of its parent
+profile, i.e. it must always use only the '*' source! This is because, during later evaluation
+of queries over shadow profiles (for example by fdistdump or other tools), the information about
+the parent channels to which each flow belonged is no longer available.
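+
+For illustration, a minimal sketch of a shadow subprofile that follows this rule could look as
+follows (the profile name, channel name, and directory are only illustrative placeholders, not
+part of the original examples; the filter is omitted, so the channel matches all records):
+```xml
+<profile name="stats">
+    <type>shadow</type>
+    <directory>/some/directory/stats/</directory>
+    <channelList>
+        <channel name="all">
+            <sourceList>
+                <source>*</source>
+            </sourceList>
+        </channel>
+    </channelList>
+</profile>
+```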
+
+Optionally, each channel may contain at most one:
+- `<filter>` \
+  A flow filter expression that will be applied to flow records received from sources.
+  The records that satisfy the specified filter expression will be labeled with this
+  channel and the profile to which this channel belongs.
+  If the filter is not defined, the expression is always evaluated as true.
+
+Warning: The user must always make sure that the intersection of records that belong to multiple
+channels of the same profile is always empty! Otherwise, a record can be stored multiple
+times (in the case of lnfstore) or added to the summary statistics of the profile multiple times
+(in the case of profilestats).
+
+## Example configuration
+
+The following configuration is based on the hierarchy mentioned earlier, but a few parts have
+been simplified.
+
+```xml
+<profile name="live">
+    <type>normal</type>
+    <directory>/some/directory/live/</directory>
+    <channelList>
+        <channel name="ch1">
+            <sourceList><source>*</source></sourceList>
+            <filter>odid 10</filter>
+        </channel>
+        <channel name="ch2">
+            <sourceList><source>*</source></sourceList>
+            <filter>odid 20</filter>
+        </channel>
+    </channelList>
+
+    <subprofileList>
+        <profile name="emails">
+            <type>normal</type>
+            <directory>/some/directory/emails/</directory>
+            <channelList>
+                <channel name="pop3">
+                    <sourceList>
+                        <source>ch1</source>
+                        <source>ch2</source>
+                    </sourceList>
+                    <filter>port in [110, 995]</filter>
+                </channel>
+            </channelList>
+        </profile>
+    </subprofileList>
+</profile>
+```
+
+### Tips
+
+If you need to distinguish individual flow exporters, we highly recommend configuring each
+exporter to use a unique Observation Domain ID (ODID) (IPFIX only), configuring each channel of
+the "live" profile to represent one exporter, and using the filter keyword "odid" (see the filter
+syntax for more details and limitations). If the ODID method is not applicable in your situation,
+you can also use the "exporterip", "exporterport", etc. keywords, but be aware that this doesn't
+make sense in the case of the distributed collector architecture, because all flows are sent to
+the one active proxy collector and then redistributed by the proxy to subcollectors that perform
+profiling. From the point of view of any subcollector, the proxy is the exporter; therefore, these
+ODID replacements don't work as expected. On the other hand, the ODID always works.
+
+If you want to make sure that your configuration file is ready to use, you can use a tool called
+"ipfixcol-profiles-check". Use "-h" to show all available parameters.
+
+## Filter syntax
+
+The filter syntax is based on the well-known nfdump tool; however, keywords must be written in
+lowercase letters. Any filter consists of one or more expressions `expr`.
+Any number of `expr` can be linked together: `expr and expr`, `expr or expr`, `not expr`, `(expr)`.
+
+An expression primitive usually consists of a keyword (a name of an Information Element),
+an optional comparator, and a value. If the comparator is omitted, the equality
+operator `=` is used by default. Numeric values can be scaled using one of the supported scaling
+factors: k, m, g. The scaling factor is 1000 (e.g. 1k = 1000).
+
+The following comparators `comp` are supported:
+- equals sign (`=`, `==` or `eq`)
+- less than (`<` or `lt`)
+- more than (`>` or `gt`)
+- like/binary and (`&`)
+
+Below is a list of the most frequently used filter primitives that are universally supported.
+If you cannot find the primitive you are looking for, try to use the corresponding *nfdump*
+expression or just use the name of an IPFIX Information Element. If you need to preserve
+compatibility with *fdistdump*, you have to use only nfdump expressions!
+
+- _IP version_ \
+  `ipv4` or `inet4` for IPv4 \
+  `ipv6` or `inet6` for IPv6
+
+- _Protocol_ \
+  `proto <protocol>` \
+  `proto <protocol_number>` \
+  where `<protocol>` is a known protocol such as tcp, udp, icmp, icmp6, etc., and
+  `<protocol_number>` is a valid protocol number: 6, 17, etc.
+
+- _IP address_ \
+  `[src|dst] ip <ipaddr>` \
+  `[src|dst] host <ipaddr>` \
+  with `<ipaddr>` as any valid IPv4 or IPv6 address.
+  To check if an IP address is in a known IP list, use: \
+  `[src|dst] ip in [ <iplist> ]` \
+  `[src|dst] host in [ <iplist> ]` \
+  where `<iplist>` is a space or comma separated list of individual `<ipaddr>` addresses.
+
+  IP addresses, networks, ports, AS numbers, etc. can be specifically selected by using
+  a direction qualifier, such as `src` or `dst`.
+
+- _Network_ \
+  `[src|dst] net a.b.c.d m.n.r.s` \
+  Select the IPv4 network a.b.c.d with netmask m.n.r.s. \
+  \
+  `[src|dst] net <net>/<num>` \
+  with `<net>` as a valid IPv4 or IPv6 network and `<num>` as mask bits. The number of mask bits
+  must match the appropriate address family, IPv4 or IPv6. Networks may be abbreviated, such
+  as 172.16/16, if they are unambiguous.
+
+- _Port_ \
+  `[src|dst] port [comp] <num>` \
+  with `<num>` as any valid port number. If *comp* is omitted, '=' is assumed. \
+  `[src|dst] port in [ <portlist> ]` \
+  A port can be compared against a known list, where `<portlist>` is a space or comma separated
+  list of individual port numbers.
+
+- _Flags_ \
+  `flags <tcpflags>` \
+  with `<tcpflags>` as a combination of: \
+  A - ACK \
+  S - SYN \
+  F - FIN \
+  R - Reset \
+  P - Push \
+  U - Urgent \
+  X - All flags on \
+  The ordering of the flags is not relevant. Flags not mentioned are treated as don't care.
+  In order to get those flows with only the SYN flag set, use the
+  syntax `flags S and not flags AFRPU`.
+
+- _Packets_ \
+  `packets [comp] <num> [scale]` \
+  To filter for records with a specific packet count. \
+  Example: `packets > 1k`
+
+- _Bytes_ \
+  `bytes [comp] <num> [scale]` \
+  To filter for records with a specific byte count. \
+  Example: `bytes 46` or `bytes > 100 and bytes < 200`
+
+- _Packets per second_ (calculated value) \
+  `pps [comp] num [scale]` \
+  To filter for flows with a specific number of packets per second.
+
+- _Duration_ (calculated value) \
+  `duration [comp] num` \
+  To filter for flows with a specific duration in milliseconds.
+
+- _Bits per second_ (calculated value) \
+  `bps [comp] num [scale]` \
+  To filter for flows with a specific number of bits per second.
+
+- _Bytes per packet_ (calculated value) \
+  `bpp [comp] num [scale]` \
+  To filter for flows with a specific number of bytes per packet.
+
+The following expressions are available only for processing live IPFIX records and therefore are
+not supported by fdistdump.
+
+- _Observation Domain ID (ODID)_ \
+  `odid [comp] <odid>` \
+  To filter IPFIX records with a specific Observation Domain ID.
+
+- _Exporter IP_ \
+  `exporterip <ipaddr>` \
+  `exporterip in [ <iplist> ]` \
+  To filter for exporters connected with specified IP addresses.
+
+- _Exporter port_ \
+  `exporterport <num>` \
+  `exporterport in [ <portlist> ]` \
+  To filter for exporters connected with a specified port.
+
+- _Collector IP_ \
+  `collectorip <ipaddr>` \
+  `collectorip in [ <iplist> ]` \
+  To filter for an input IP address of a running collector.
+
+- _Collector port_ \
+  `collectorport <num>` \
+  `collectorport in [ <portlist> ]` \
+  To filter for an input port of a running collector.
+
+Instead of the identifiers above, you can also use any IPFIX Information Element (IE) that is
+supported by IPFIXcol. These IEs can be easily added to the configuration file of the collector,
+so even Private Enterprise IEs can be used for filtering. See the collector's manual page for
+more information. Just keep in mind that these identifiers are not supported by fdistdump right
+now.
+
+For example, the IPFIX IE for the source port in the transport header is called
+"sourceTransportPort" and essentially corresponds to the filter expression "src port".
+Therefore, the expressions `sourceTransportPort 80` and `src port 80` represent the same filter.
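+
+As an additional illustration (this pairing is an assumption, not from the original text: it
+presumes that the `bytes` primitive corresponds to the standard IANA IE "octetDeltaCount"), the
+following two expressions should represent the same filter: \
+`octetDeltaCount > 1m and dst port 443` \
+`bytes > 1m and dst port 443`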
+
+### Filter examples
+
+To dump all records of host 192.168.1.2: \
+`ip 192.168.1.2`
+
+To dump all records of network 172.16.0.0/16: \
+`net 172.16.0.0/16`
+
+To dump all port 80 IPv6 connections to any web server: \
+`inet6 and proto tcp and ( src port > 1024 and dst port 80 )`
+
+### Use-case example
+
+Let's say that we would like to filter only POP3 flows. We know that the ports of POP3
+communication are 110 (unencrypted) or 995 (encrypted). So we can write: \
+`sourceTransportPort == 110 or sourceTransportPort == 995 or destinationTransportPort == 110 or destinationTransportPort == 995`
+
+Instead of IPFIX IEs, we can use universal identifiers: \
+`src port 110 or src port 995 or dst port 110 or dst port 995`
-* **profile** - Profile definition with options, channels and subprofiles.
+Source and destination ports can be merged: \
+`port 110 or port 995`
-  * **type** - Specifies the type of a profile - normal/shadow. _normal_ profile means that IPFIXcol plugins should store all valuable data. _shadow_ profile means that IPFIXcol plugins should store only statistics.
-  * **directory** - Directory for data store of valuable data and statistics. Must be unique for each profile.
-  * **channelList** - List of channels that belong to the profile. At least one channel must be specified. A number of channels are unlimited.
-  * **subprofileList** - List of subprofiles that belong to the profile. This item is optional. A number of subprofiles are unlimited.
+This expression can be simplified even further: \
+`port in [110, 995]`
-* **channel** - Channel structure for profile's data filtering.
-  * **sourceList** - List of sources from which channel will receive data. Sources are channels from parent's profile (except top level channels). If a profile receive data from all parent's channels only one source with '\*' can by used. _shadow_ profiles must always use only '\*' source!
-  * **filter** - Filter applied on data records, specifying whether it belongs to the profile. It uses the same syntax as filtering intermediate plugin. Except data fields, profile filter can contain elements from IP and IPFIX header. Supported fields are: odid, srcaddr, dstaddr, srcport, dstport.
+All examples above represent the same filter.
 
 [Back to Top](#top)
diff --git a/plugins/intermediate/profiler/configure.ac b/plugins/intermediate/profiler/configure.ac
index 094c9693..cf278b8f 100644
--- a/plugins/intermediate/profiler/configure.ac
+++ b/plugins/intermediate/profiler/configure.ac
@@ -38,7 +38,7 @@ AC_PREREQ([2.60])
 # Process this file with autoconf to produce a configure script.
-AC_INIT([ipfixcol-profiler-inter], [0.0.6])
+AC_INIT([ipfixcol-profiler-inter], [0.0.7])
 AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability])
 LT_PREREQ([2.2])
 LT_INIT([disable-static])
diff --git a/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk b/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
index 27e1112c..34d8616e 100644
--- a/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
+++ b/plugins/intermediate/profiler/ipfixcol-profiler-inter.dbk
@@ -8,11 +8,19 @@ version="5.0" xml:lang="en">
-  2015
+  2015-2017
   CESNET, z.s.p.o.
-  14 January 2015
+  19 October 2017
+
+  Lukas
+  Hutak
+  lukas.hutak@cesnet.cz
+  developer
+
   Michal
@@ -28,171 +36,685 @@
   ipfixcol-profiler-inter
   1
-  profiler plugin for IPFIXcol.
+  Profiler plugin for IPFIXcol.
   ipfixcol-profiler-inter
-  profiler plugin for IPFIXcol.
+  Profiler plugin for IPFIXcol.
   Description
-  The ipfix-profiler-inter plugin is a part of IPFIXcol (IPFIX collector). It profiles
-  IPFIX data records and fills in metadata information according to given
-  set of profiles and their channels.
+  The ipfix-profiler-inter plugin is an intermediate plugin for IPFIXcol
+  (IPFIX collector). It profiles IPFIX data records and fills in metadata information
+  according to a given set of profiles and their channels.
-  Configuration
-  The collector must be configured to use profiler plugin in startup.xml configuration.
-  The configuration specifies which plugins are used by the collector to process data and provides configuration for the plugins themselves.
+  Introduction to profiling
+
+  The goal of flow profiling is multi-label classification (based on a set of rules) into
+  user-defined groups. These labels can be used for further flow processing. The basic
+  terminology includes profiles and channels.
+
+  A profile is a view that represents a subset of data records received by a collector.
+  Consequently, this allows surfacing only the records that a user needs to see. Each profile
+  contains one or more channels, where each channel is represented by a filter and sources of
+  flow records. If a flow satisfies a condition of any channel of a profile, then the flow
+  will be labeled with the profile and the channel. A flow can have any number of labels.
+  In other words, it can belong to multiple channels/profiles at the same time.
+
+  For example, let us consider that you store all flows and, in addition, you want to store
+  only flows related to email communications (POP3, IMAP, SMTP). To do this, you can create
+  a profile "emails" with channels "pop3", "imap" and "smtp". When a flow with POP3
+  communication (port 110 or 995) is classified, it will meet the condition of the "pop3"
+  channel and will be labeled as a flow that belongs to the profile "emails" and the
+  channel "pop3".
+
+  Example of a profile hierarchy:
-  startup.xml profiler example
-
-  ...
-  /path/to/profiles.xml
-
-  ...
-
-  ...
-
-  ]]>
+
+  live (channels: ch1, ch2)
+  ├── emails (channels: pop3, smtp, imap)
+  └── buildings (channels: net1, net2, net3)
+      └── office (channels: http, ...)
+
+  The profiles can be nested to create a tree hierarchy (as shown above). A flow source of
+  a channel in a profile can only be one or more channels of the direct parent of the profile
+  to which the channel belongs. For example, the channel "http" in the profile "office" can
+  use only the "net1", "net2" or "net3" channels as sources. The exception is the highest
+  level, i.e. the "live" profile. This profile must always be present, must have exactly this
+  name, and its channels will receive all flow records intended for profiling.
+
+  How does flow profiling work? In a nutshell, if a record satisfies a filter of a channel,
+  it will be labeled with the channel and the profile to which the channel belongs and will
+  also be sent for evaluation to all subscribers of the channel.
+  For example, let us consider the tree hierarchy above. All flow records will always be sent
+  to all channels of the "live" profile, as mentioned earlier. If a flow record satisfies the
+  filter of the channel "ch1", the record will be labeled with the profile "live" and the
+  channel "ch1". Because the flow belongs to the channel "ch1", it will also be sent for
+  evaluation to all subscribers of this channel, i.e. to the channels of the profiles "emails"
+  and "buildings" that have this channel ("ch1") in their source list.
+  If the record doesn't satisfy the filter, it will not be distributed to the subscribers.
+  However, if the record satisfies the channel "ch2" and the channels of the profiles
+  "emails" and "buildings" are also subscribers of the channel "ch2", the record will be sent
+  to them too. Thus, a record can reach a channel in different ways, but it is labeled only
+  once.
+
+  For now, the following plugins support profiling provided by this plugin:
+
+  ipfixcol-lnfstore-output1
+  - Converts and stores IPFIX records into NfDump files. For more information, see the manual
+  page of the plugin.
+
+  ipfixcol-profilestats-inter1
+  - Creates and updates RRD statistics per profile and channel. For more information,
+  see the manual page of the plugin.
+
-
+  Plugin configuration
+
+  The collector must be configured to use the profiler plugin in the startup.xml
+  configuration. The profiler plugin must be placed into the intermediate section before any
+  other plugins that use profiling results. Otherwise, no profiling information will be
+  available for these plugins.
+
+  <collectingProcess>
+      ...
+      <profiles>/path/to/profiles.xml</profiles>
+  </collectingProcess>
+  ...
+  <intermediatePlugins>
+      ...
+      <profiler>
+      </profiler>
+      ...
+  </intermediatePlugins>
+
+  The plugin does not accept any parameters, but the collecting process must define the
+  parameter <profiles> with an absolute path to the profiling configuration.
+
+  Structure of the profiling configuration
+
+  This section describes parts of the XML configuration file. An example configuration for an
+  email subprofile can be found in a section below.
+
+  Keep in mind that profile and channel names must match a format that corresponds to
+  identifiers in the C language. In other words, a name can contain only letters (both
+  uppercase and lowercase), digits and underscores. The first character of a name must be
+  either a letter or an underscore. Consequently, the names must match the regular expression
+  [a-zA-Z_][a-zA-Z0-9_]*.
+
+  Warning: Using complicated filters and multiple nested profiles
+  has a significant impact on the throughput of the collector!
+
+  Profile (<profile>)
+
+  <profile name="PROFILE_NAME">
+      <type>...</type>
+      <directory>...</directory>
+      <channelList>...</channelList>
+      <subprofileList>...</subprofileList>
+  </profile>
+
+  Each profile has a name attribute (i.e. name) for identification among
+  other profiles. The attribute must be unique only among the group of profiles that
+  belong to a common parent profile. The definition of each profile must contain exactly
+  one instance of each of the following elements:
+
-  profiles
-  path to the file with profiles specification
-
+  <type>
+
+  Profile type ("normal" or "shadow"). A normal profile means that IPFIXcol plugins
+  should store all valuable data (usually flow records and metadata). On the other
+  hand, for shadow profiles, the plugins should store only metadata. For example,
+  in the case of the lnfstore plugin, only flows of normal profiles are stored; others
+  are ignored.
+
+  <directory>
+
+  The absolute path to a directory. All data records and metadata that belong to
+  the profile and its channels will be stored here. The directory MUST be unique
+  for each profile!
+
+  <channelList>
+
+  List of one or more channels (see the section below about channels).
+
-
+  Optionally, each profile can contain at most one definition of the element:
+
+  <subprofileList>
+
+  List of subprofiles that belong to the profile. The list can be empty.
+
-  profile.xml profiler example
-
+  Channel (<channel>)
+
-  normal
-  /some/directory/
-
-  *
-  ipVersion = 4
-
-  *
-  odid != 5
-
-  normal
-  /some/directory/p1/
-
-  ch1
-  ch2
-  sourceIPv4Address = 192.168.0.0/16
-
-  ch1
-  sourceTransportPort == 25
-
+  <channel name="CHANNEL_NAME">
+      <sourceList>
+          <source>...</source>
+          <source>...</source>
+          <source>...</source>
+      </sourceList>
+      <filter>...</filter>
+  </channel>
+
+  Each channel has a name attribute (i.e. name) for unique identification
+  amongst other channels within a profile. Each channel must have exactly one definition
+  of:
+
+  <sourceList>
+
+  List of flow sources. In this case, a source of records should not be confused
+  with an IPFIX/NetFlow exporter. A source is basically a channel from the parent
+  profile from which this channel will receive flow records. Each source must be
+  specified in a <source> element. If the channel receives data
+  from all parent channels, the list of channels can be replaced with a single
+  source: <source>*</source>. Channels in the "live" profile always
+  have to use this notation.
+
+  Each channel within any "shadow" profile must receive data from all channels of its parent
+  profile, i.e. it must always use only the '*' source! This is because, during later
+  evaluation of queries over shadow profiles (for example by fdistdump or other tools),
+  the information about the parent channels to which each flow belonged is no longer
+  available.
+
+  Optionally, each channel may contain at most one:
+
+  <filter>
+
+  A flow filter expression that will be applied to flow records received from
+  sources. The records that satisfy the specified filter expression will be
+  labeled with this channel and the profile to which this channel belongs.
+  If the filter is not defined, the expression is always evaluated as true.
+
-
+  Warning: The user must always make sure that the intersection of records
+  that belong to multiple channels of the same profile is always empty! Otherwise, a
+  record can be stored multiple times (in the case of lnfstore) or added to the summary
+  statistics of the profile multiple times (in the case of profilestats).
+
+  Example configuration
+
+  The following configuration is based on the hierarchy mentioned earlier, but a few parts
+  have been simplified.
+ + + + normal + /some/directory/live/ + + + * + odid 10 + + + * + odid 20 + + + + + + normal + /some/directory/emails/ + + + + + ch1 + ch2 + + + port in [110, 995] + + + + + + + + + +]]> + + + Tips + + If you need to distinguish individual flow exporters, we highly recommend configuring + each exporter to use unique Observation Domain ID (ODID) (IPFIX only), configure each + channel of the "live" profile to represent one exporter and use filter keyword "odid" + (see the filter syntax for more details and limitations). If the ODID method is not + applicable in your situation, you can also use "exporterip", "exporterport", etc. + keywords, but be aware, this doesn't make sense in case of the distributed collector + architecture because all flows are sent to the one active proxy collector and then + redistributed by the proxy to subcollectors that perform profiling. From the point of + view of any subcollector the proxy is the exporter, therefore these ODID replacements + don't work as expected. On the other hand, the ODID always works. + + + If you want to make sure that your configuration file is ready to use, you can use + a tool called ipfixcol-profiles-check. Use "-h" to show all available + parameters. + + + + + + Filter syntax + + The filter syntax is based on the well-known + nfdump1 + tool. Although keywords must be written with lowercase letters. Any filter consists of one + or more expressions expr. + + + + Any number of expr can be linked together: + expr and expr, expr or expr, + not expr and ( expr ). + + + + An expression primitive usually consists of a keyword (a name of an Information Element), + optional comparator, and a value. By default, if the comparator is omitted, equality + operator = will be used. Numeric values can use scaling of following + supported scaling factor: k, m, g. The factor is 1000. + + + + Following comparators comp are supported: + + + + + + equals sign (=, == or eq) + + + + + + less than ( or lt) + + + + + + more than (]]> or gt) + + + + + + like/binary and () + + + + + + Below is the list of the most frequently used filter primitives that are universally + supported. If you cannot find the primitive you are looking for, try to use the + corresponding + nfdump1 + expression or just use the name of IPFIX Information Element. If you need to preserve + compatibility with + fdistdump1, + you have to use only nfdump expressions! + + + + + IP version + + + ipv4 or inet4 for IPv4ipv6 or inet6 for IPv6 + + + + + + Protocol + + + proto ]]>proto ]]>where ]]> is known protocol such as tcp, + udp, icmp, icmp6, etc. or a valid protocol number: 6, 17 etc. + + + + + + IP address + + + ]]>]]>with ]]> as any valid IPv4 or IPv6 address. + + To check if an IP address is in a known IP list, use: ]]]> ]]]>where ]]> is a space or comma separated list of individual ]]>. + + IP addresses, networks, ports, AS number etc. can be specifically selected by using + a direction qualifier, such as src or dst. + + + + + + Network + + + Select + the IPv4 network a.b.c.d with netmask m.n.r.s. + + /]]>with + ]]> as a valid IPv4 or IPv6 network and + ]]> as mask bits. The number of mask bits must + match the appropriate address family in IPv4 or IPv6. Networks may be abbreviated + such as 172.16/16 if they are unambiguous. + + + + + + Port + + + ]]>with + ]]> as any valid port number. If + comp is omitted, '=' is assumed. + ]]]>A port can + be compared against a know list, where + ]]> is a space or comma separated list of + individual port numbers. 
+ + + + + + Flags + + + ]]>with + ]]> as a combination of: + + + A - ACKS - SYNF - FINR - ResetP - PushU - UrgentX - All flags on + + + The ordering of the flags is not relevant. Flags not mentioned are treated + as don't care. In order to get those flows with only the SYN flag set, use the + syntax 'flags S and not flags AFRPU'. + + + + + + Packets + + + [scale]]]>To + filter for records with a specific packet count.Example: 'packets > 1k' + + + + + + Bytes + + + [scale]]]>To + filter for records with a specific byte count.Example: + 'bytes 46' or ' 100 and bytes < 200]]>' + + + + + + Packets per second (calculated value) + + + To + filter for flows with specific packets per second. + + + + + + Duration (calculated value) + + + To + filter for flows with specific duration in milliseconds. + + + + + + Bits per second (calculated value) + + + To + filter for flows with specific bits per second. + + + + + + Bytes per packet (calculated value) + + + To + filter for flows with specific bytes per packet. + + + + + + + Following expressions are available only for processing live IPFIX records and therefore + are not supported by fdistdump. + + + + + Observation Domain ID (ODID) + + + ]]> ]]]>To + filter IPFIX records with a specific Observation Domain ID. + + + + + + Exporter IP + + + ]]> ]]]>To + filter for exporters connected with specified IP addresses. + + + + + + Exporter port + + + ]]> ]]]>To + filter for exporters connected with specified port. + + + + + + Collector IP + + + ]]> ]]]>To + filter for an input IP address of a running collector. + + + + + + Collector port + + + ]]> ]]]>To + filter for an input port of a running collector. + + + + + + + Instead of the identifiers above you can also use any IPFIX Information Element (IE) that is + supported by IPFIXcol. These IEs can be easily added to the configuration file of the + collector so even Private Enterprise IEs can be also used for filtering. See its manual + page for more information. Just keep in mind that these identifiers are not supported by + fdistdump right now. + + + For example, IPFIX IE for the source port in the transport header is called + "sourceTransportPort" and essentially corresponds to the filter expression "src port". + Therefore, the expressions sourceTransportPort 80 and + src port 80 represent the same filter. + + + + Filter examples + + To dump all records of host 192.168.1.2:ip 192.168.1.2 + + + + To dump all record of network 172.16.0.0/16:net 172.16.0.0/16 + + + + To dump all port 80 IPv6 connections to any web server:inet6 and proto tcp and ( src port > 1024 and dst port 80 ) + + + + + Use-case example + + Let's say that we would like to filter only POP3 flows. We know that ports of POP3 communication +are 110 (unencrypted) or 995 (encrypted). So we can write:sourceTransportPort == 110or sourceTransportPort == 995or destinationTransportPort == 110or destinationTransportPort == 995 + + + + Instead of IPFIX IEs, we can use universal identifiers:src port 110 or src port 995 or dst port 110 or dst port 995 + + + + Source and destination port can be merged:port 110 or port 995 + + + + This expression still can be simplified:port in [110, 995] + + + + All examples above represent the same filter. 
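+
+  Live-only keywords can also be combined with the regular primitives described above. As an
+  illustrative example (this particular combination is an assumption, not part of the original
+  use case), the filter odid 10 and port in [110, 995] selects the POP3
+  flows received from the exporter with ODID 10. Because it uses "odid", such a filter cannot
+  be evaluated later by fdistdump.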
+ + @@ -202,7 +724,9 @@ - ipfixcol1 + ipfixcol1, + ipfixcol-lnfstore-output1, + ipfixcol-profilestats-inter1 Man pages diff --git a/plugins/intermediate/profiler/m4/lbr_set_distro.m4 b/plugins/intermediate/profiler/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/intermediate/profiler/m4/lbr_set_distro.m4 +++ b/plugins/intermediate/profiler/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/intermediate/stats/m4/lbr_set_distro.m4 b/plugins/intermediate/stats/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/intermediate/stats/m4/lbr_set_distro.m4 +++ b/plugins/intermediate/stats/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/intermediate/uid/m4/lbr_set_distro.m4 b/plugins/intermediate/uid/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/intermediate/uid/m4/lbr_set_distro.m4 +++ b/plugins/intermediate/uid/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/fastbit/config_struct.h b/plugins/storage/fastbit/config_struct.h index d7a4b8ae..b0a7f845 100644 --- a/plugins/storage/fastbit/config_struct.h +++ b/plugins/storage/fastbit/config_struct.h @@ -110,8 +110,12 @@ struct fastbit_config { /* size of buffer (number of values)*/ int buff_size; - /* semaphore for index building thread */ - sem_t sem; + /* Handler for the index thread */ + pthread_t index_thread; + + /* Mutex for index building thread */ + pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; + pthread_cond_t mutex_cond = PTHREAD_COND_INITIALIZER; }; #endif /* CONFIG_STRUCT_H_ */ diff --git a/plugins/storage/fastbit/fastbit.cpp b/plugins/storage/fastbit/fastbit.cpp index 808dbcc8..fb95fefa 100644 --- a/plugins/storage/fastbit/fastbit.cpp +++ b/plugins/storage/fastbit/fastbit.cpp @@ -46,7 +46,6 @@ extern "C" { } #include -#include #include #include #include @@ -67,6 +66,8 @@ extern "C" { #include "fastbit_element.h" #include "config_struct.h" +volatile bool terminate = false; + void ipv6_addr_non_canonical(char *str, const struct in6_addr *addr) { sprintf(str, "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x", @@ -137,11 +138,30 @@ void *reorder_index(void *config) std::string dir; ibis::part *reorder_part; ibis::table::stringArray ibis_columns; + bool no_dirs = true; + + while (!terminate || !no_dirs) { + + /* Sleep until there is a work to do */ + pthread_mutex_lock(&conf->mutex); + while(!terminate && (*conf->dirs).empty()) { + pthread_cond_wait(&conf->mutex_cond, &conf->mutex); + } + + /* Nothing to do, start again and check termination flag */ + if ((*conf->dirs).empty()) { + pthread_mutex_unlock(&conf->mutex); + no_dirs = true; + continue; + } - sem_wait(&(conf->sem)); + /* Get the dir to 
process */ + no_dirs = false; + dir = (*conf->dirs).back(); + (*conf->dirs).pop_back(); + + pthread_mutex_unlock(&conf->mutex); - for (unsigned int i = 0; i < conf->dirs->size(); i++) { - dir = (*conf->dirs)[i]; /* Reorder partitions */ if (conf->reorder) { MSG_DEBUG(msg_module, "Reordering: %s", dir.c_str()); @@ -173,7 +193,6 @@ void *reorder_index(void *config) ibis::fileManager::instance().flushDir(dir.c_str()); } - sem_post(&(conf->sem)); return NULL; } @@ -232,18 +251,16 @@ void update_window_name(struct fastbit_config *conf) * @param conf Plugin configuration data structure * @param exporter_ip_addr Exporter IP address, as String * @param odid Observation domain ID - * @param close Indicates whether plugin/thread should be closed after flushing all data */ void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint32_t odid, - std::map *templates, bool close) + std::map *templates) { std::map::iterator table; - int s; std::stringstream ss; - pthread_t index_thread; std::map*>::iterator exporter_it; std::map::iterator odid_it; + std::string flushed_path; /* Check whether exporter is listed in data structure */ if ((exporter_it = conf->od_infos->find(exporter_ip_addr)) == conf->od_infos->end()) { @@ -259,18 +276,17 @@ void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint3 std::string path = odid_it->second.path; - sem_wait(&(conf->sem)); + pthread_mutex_lock(&conf->mutex);; { - conf->dirs->clear(); - MSG_DEBUG(msg_module, "Flushing data to disk (exporter: %s, ODID: %u)", odid_it->second.exporter_ip_addr.c_str(), odid); MSG_DEBUG(msg_module, " > Exported: %u", odid_it->second.flow_watch.exported_flows()); MSG_DEBUG(msg_module, " > Received: %u", odid_it->second.flow_watch.received_flows()); for (table = templates->begin(); table != templates->end(); table++) { - conf->dirs->push_back(path + ((*table).second)->name() + "/"); - (*table).second->flush(path); + if ((*table).second->flush(path, flushed_path) == 0) { + conf->dirs->push_back(flushed_path); + } (*table).second->reset_rows(); } @@ -280,30 +296,16 @@ void flush_data(struct fastbit_config *conf, std::string exporter_ip_addr, uint3 odid_it->second.flow_watch.reset_state(); } - sem_post(&(conf->sem)); - - if ((s = pthread_create(&index_thread, NULL, reorder_index, conf)) != 0) { - MSG_ERROR(msg_module, "pthread_create"); - } - - if (close) { - if ((s = pthread_join(index_thread, NULL)) != 0) { - MSG_ERROR(msg_module, "pthread_join"); - } - } else { - if ((s = pthread_detach(index_thread)) != 0) { - MSG_ERROR(msg_module, "pthread_detach"); - } - } + pthread_mutex_unlock(&conf->mutex); + pthread_cond_signal(&conf->mutex_cond); } /** * \brief Flushes the data for *all* exporters and ODIDs * * @param conf Plugin configuration data structure - * @param close Indicates whether plugin/thread should be closed after flushing all data */ -void flush_all_data(struct fastbit_config *conf, bool close) +void flush_all_data(struct fastbit_config *conf) { std::map*> *od_infos = conf->od_infos; std::map*>::iterator exporter_it; @@ -312,7 +314,7 @@ void flush_all_data(struct fastbit_config *conf, bool close) /* Iterate over all exporters and ODIDs and flush data */ for (exporter_it = od_infos->begin(); exporter_it != od_infos->end(); ++exporter_it) { for (odid_it = exporter_it->second->begin(); odid_it != exporter_it->second->end(); ++odid_it) { - flush_data(conf, exporter_it->first, odid_it->first, &(odid_it->second.template_info), close); + flush_data(conf, exporter_it->first, odid_it->first, 
&(odid_it->second.template_info)); } } } @@ -429,11 +431,6 @@ int process_startup_xml(char *params, struct fastbit_config *c) c->window_dir = c->prefix + "/"; } - - if (sem_init(&(c->sem), 0, 1)) { - MSG_ERROR(msg_module, "Semaphore initialization error"); - return 1; - } } else { return 1; } @@ -481,6 +478,12 @@ int storage_init(char *params, void **config) /* On startup we expect to write to new directory */ c->new_dir = true; + + /* Create index thread */ + if (pthread_create(&c->index_thread, NULL, reorder_index, c) != 0) { + MSG_ERROR(msg_module, "pthread_create"); + } + return 0; } @@ -578,7 +581,7 @@ int store_packet(void *config, const struct ipfix_message *ipfix_msg, old_templates->insert(std::pair(table->first, table->second)); /* Flush data */ - flush_data(conf, exporter_ip_addr, odid, old_templates, false); + flush_data(conf, exporter_ip_addr, odid, old_templates); /* Remove rewritten template */ delete table->second; @@ -613,7 +616,7 @@ int store_packet(void *config, const struct ipfix_message *ipfix_msg, if (flush_records || flush_time) { /* Flush data for all exporters and ODIDs */ - flush_all_data(conf, false); + flush_all_data(conf); /* Time management differs between flush policies (records vs. time) */ if (flush_records) { @@ -678,7 +681,7 @@ int storage_close(void **config) for (exporter_it = od_infos->begin(); exporter_it != od_infos->end(); ++exporter_it) { for (odid_it = exporter_it->second->begin(); odid_it != exporter_it->second->end(); ++odid_it) { templates = &(odid_it->second.template_info); - flush_data(conf, exporter_it->first, odid_it->first, templates, true); + flush_data(conf, exporter_it->first, odid_it->first, templates); /* Free templates */ for (table = templates->begin(); table != templates->end(); table++) { @@ -689,6 +692,16 @@ int storage_close(void **config) delete (*exporter_it).second; } + /* Tell index thread to terminate */ + terminate = true; + pthread_cond_signal(&conf->mutex_cond); + + MSG_INFO(msg_module, "Waiting for the index thread to finish"); + if (pthread_join(conf->index_thread, NULL) != 0) { + MSG_ERROR(msg_module, "pthread_join"); + } + MSG_INFO(msg_module, "Index thread finished"); + /* Free config structure */ delete od_infos; delete conf->index_en_id; diff --git a/plugins/storage/fastbit/fastbit_table.cpp b/plugins/storage/fastbit/fastbit_table.cpp index 60073f7d..9e05ce3a 100644 --- a/plugins/storage/fastbit/fastbit_table.cpp +++ b/plugins/storage/fastbit/fastbit_table.cpp @@ -263,17 +263,17 @@ int template_table::store(ipfix_data_set *data_set, std::string path, bool new_d return record_cnt; } -void template_table::flush(std::string path) +int template_table::flush(std::string path, std::string &flushed_path) { /* Check whether there is something to flush */ if (_rows_count <= 0) { - return; + return -1; } /* Check directory */ _rows_in_window += _rows_count; if (this->dir_check(path + _name, this->_new_dir) != 0) { - return; + return -2; } /* Flush data */ @@ -286,11 +286,15 @@ void template_table::flush(std::string path) _rows_count = 0; _rows_in_window = 0; + flushed_path = path + _name; + /* Data on disk is consistent; try to go back to original name */ if (this->_orig_name[0] != '\0') { strcpy(this->_name, this->_orig_name); this->_orig_name[0] = '\0'; } + + return 0; } int template_table::parse_template(struct ipfix_template *tmp, struct fastbit_config *config) diff --git a/plugins/storage/fastbit/fastbit_table.h b/plugins/storage/fastbit/fastbit_table.h index ff97dc4c..056c5e2b 100644 --- 
a/plugins/storage/fastbit/fastbit_table.h +++ b/plugins/storage/fastbit/fastbit_table.h @@ -128,9 +128,11 @@ class template_table /** * \brief Flush data to disk and clean memory * - * @param path path to direcotry where should be data flushed + * @param path path to directory where should be data flushed + * @param flushed_path path to directory where data was actually written + * @return 0 on success, negative value otherwise */ - void flush(std::string path); + int flush(std::string path, std::string &flushed_path); time_t get_first_transmission() { return _first_transmission; diff --git a/plugins/storage/fastbit/m4/lbr_set_distro.m4 b/plugins/storage/fastbit/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/fastbit/m4/lbr_set_distro.m4 +++ b/plugins/storage/fastbit/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4 b/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4 +++ b/plugins/storage/fastbit_compression/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/json/Makefile.am b/plugins/storage/json/Makefile.am index a9cf1886..a94ac2ca 100644 --- a/plugins/storage/json/Makefile.am +++ b/plugins/storage/json/Makefile.am @@ -21,7 +21,8 @@ ipfixcol_json_output_la_SOURCES = \ Sender.cpp Sender.h \ Printer.cpp Printer.h \ Server.cpp Server.h \ - File.cpp File.h + File.cpp File.h \ + branchlut2.h if NEED_KAFKA ipfixcol_json_output_la_SOURCES += Kafka.cpp Kafka.h diff --git a/plugins/storage/json/README.md b/plugins/storage/json/README.md index d61ae2aa..3cba6837 100644 --- a/plugins/storage/json/README.md +++ b/plugins/storage/json/README.md @@ -46,7 +46,7 @@ Default plugin configuration in **internalcfg.xml** Or as `ipfixconf` output: ``` - Plugin type Name/Format Process/Thread File + Plugin type Name/Format Process/Thread File ---------------------------------------------------------------------------- storage json json /usr/share/ipfixcol/plugins/ipfixcol-json-output.so ``` @@ -59,11 +59,13 @@ Here is an example of configuration in **startup.xml**: json no + no formatted formatted formatted yes yes + no ipfix. @@ -104,11 +106,13 @@ Here is an example of configuration in **startup.xml**: ``` * **metadata** - Add record metadata to the output (yes/no) [default == no]. +* **odid** - Add source ODID to the output (yes/no) [default == no]. * **tcpFlags** - Convert TCP flags to formatted style of dots and letters (formatted) or to a number (raw) [default == raw]. * **timestamp** - Convert time to formatted style (formatted) or to a number (unix) [default == unix]. * **protocol** - Convert protocol identification to formatted style (formatted) or to a number (raw) [default == formatted]. * **ignoreUnknown** - Skip elements with unknown semantics (yes/no). 
Data of unknown elements are formatted as unsigned integer (1, 2, 4, 8 bytes length) or binary values. Names will have format 'eXXidYY' where XX is enterprise number and YY is element ID [default == yes]. * **nonPrintableChar** - Convert non-printable characters (control characters, tab, newline, etc.) from IPFIX fields with data type of a string. (yes/no) [default == yes]. +* **detailedInfo** - Add detailed info about the IPFIX message (export time, sequence number, ...) to each record under "ipfixcol." prefix. (yes/no) [default == no]. * **prefix** - Prefix of the IPFIX element names. [default == ipfix.]. * **output** - Specifies JSON data processor. Multiple outputs are supported. @@ -122,7 +126,7 @@ Here is an example of configuration in **startup.xml**: * **path** - The path specifies storage directory for data collected by JSON plugin. Path can contain format specifier for day, month, etc. This allows you to create directory hierarchy based on format specifiers. See "strftime" for conversion specification. * **prefix** - Specifies name prefix for output files. * **dumpInterval** - * **timeWindow** - Specifies the time interval in seconds to rotate files [default == 300]. + * **timeWindow** - Specifies the time interval in seconds to rotate files, minimum is 60 [default == 300]. * **timeAlignment** - Align file rotation with next N minute interval [default == yes]. * **output : server** - Sends data over the network to connected clients. * **port** - Local port number. diff --git a/plugins/storage/json/Storage.cpp b/plugins/storage/json/Storage.cpp index c3adc9d9..a202d567 100644 --- a/plugins/storage/json/Storage.cpp +++ b/plugins/storage/json/Storage.cpp @@ -56,6 +56,7 @@ extern "C" { #include #include #include +#include "branchlut2.h" static const char *msg_module = "json_storage"; @@ -102,7 +103,7 @@ void Storage::storeDataSets(const ipfix_message* ipfix_msg, struct json_conf * c { /* Iterate through all data records */ for (int i = 0; i < ipfix_msg->data_records_count; ++i) { - storeDataRecord(&(ipfix_msg->metadata[i]), config); + storeDataRecord(&(ipfix_msg->metadata[i]), ipfix_msg, config); } } @@ -115,16 +116,16 @@ uint16_t Storage::realLength(uint16_t length, uint8_t *data_record, uint16_t &of if (length != VAR_IE_LENGTH) { return length; } - + /* Variable length */ length = static_cast(read8(data_record + offset)); offset++; - + if (length == 255) { length = ntohs(read16(data_record + offset)); offset += 2; } - + return length; } @@ -185,12 +186,14 @@ const char* Storage::rawName(uint32_t en, uint16_t id) const /** * \brief Store data record */ -void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) +void Storage::storeDataRecord(struct metadata *mdata, const struct ipfix_message *ipfix_msg, struct json_conf *config) { const char *element_name = NULL; ELEMENT_TYPE element_type; offset = 0; + uint16_t trans_len = 0; + const char *trans_str = NULL; record.clear(); STR_APPEND(record, "{\"@type\": \"ipfix.entry\", "); @@ -204,12 +207,12 @@ void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) id = templ->fields[index].ie.id; length = templ->fields[index].ie.length; enterprise = 0; - + if (id & 0x8000) { id &= 0x7fff; enterprise = templ->fields[++index].enterprise_number; } - + /* Get element informations */ const ipfix_element_t * element = get_element_by_id(id, enterprise); if (element != NULL) { @@ -240,23 +243,28 @@ void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) case ET_UNSIGNED_8: case 
ET_UNSIGNED_16: case ET_UNSIGNED_32: - case ET_UNSIGNED_64: - record += translator.toUnsigned(&length, data_record, offset, + case ET_UNSIGNED_64:{ + trans_str = translator.toUnsigned(length, &trans_len, data_record, offset, element, config); + record.append(trans_str, trans_len); + } break; case ET_SIGNED_8: case ET_SIGNED_16: case ET_SIGNED_32: case ET_SIGNED_64: - record += translator.toSigned(&length, data_record, offset); + trans_str = translator.toSigned(length, &trans_len, data_record, offset); + record.append(trans_str, trans_len); break; case ET_FLOAT_32: case ET_FLOAT_64: - record += translator.toFloat(&length, data_record, offset); + trans_str = translator.toFloat(length, &trans_len, data_record, offset); + record.append(trans_str, trans_len); break; case ET_IPV4_ADDRESS: record += '"'; - record += translator.formatIPv4(read32(data_record + offset)); + trans_str = translator.formatIPv4(read32(data_record + offset), &trans_len); + record.append(trans_str, trans_len); record += '"'; break; case ET_IPV6_ADDRESS: @@ -293,7 +301,7 @@ void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) config); break; case ET_BOOLEAN: - case ET_UNASSIGNED: + case ET_UNASSIGNED: default: readRawData(length, data_record, offset); break; @@ -302,14 +310,50 @@ void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) offset += length; added++; } - + /* Store metadata */ if (processMetadata) { - STR_APPEND(record, ", \"ipfix.metadata\": {"); + STR_APPEND(record, ", \""); + record += config->prefix; + STR_APPEND(record, "metadata\": {"); storeMetadata(mdata); STR_APPEND(record, "}"); } - + + /* Store ODID */ + if (config->odid) { + /* Temporary buffer for the ODID, must be as big as UINT_MAX converted to string */ + char odid_buf[sizeof("4294967295")]; + + STR_APPEND(record, ", \""); + record += config->prefix; + STR_APPEND(record, "odid\": "); + /* Convert ODID efficiently */ + char *odid_buf_pos = u32toa_branchlut2(ipfix_msg->input_info->odid, odid_buf); + record.append(odid_buf, odid_buf_pos - odid_buf); + } + + /* Store Detailed Information */ + if (config->detailedInfo) { + char conv_buf[sizeof("4294967295")], *conv_buf_pos = NULL; + + STR_APPEND(record, ", \"ipfixcol.packet_length\": "); + conv_buf_pos = u32toa_branchlut2(ntohs(ipfix_msg->pkt_header->length), conv_buf); + record.append(conv_buf, conv_buf_pos - conv_buf); + + STR_APPEND(record, ", \"ipfixcol.export_time\": "); + conv_buf_pos = u32toa_branchlut2(ntohl(ipfix_msg->pkt_header->export_time), conv_buf); + record.append(conv_buf, conv_buf_pos - conv_buf); + + STR_APPEND(record, ", \"ipfixcol.sequence_number\": "); + conv_buf_pos = u32toa_branchlut2(ntohl(ipfix_msg->pkt_header->sequence_number), conv_buf); + record.append(conv_buf, conv_buf_pos - conv_buf); + + STR_APPEND(record, ", \"ipfixcol.template_id\": "); + conv_buf_pos = u32toa_branchlut2(templ->original_id, conv_buf); + record.append(conv_buf, conv_buf_pos - conv_buf); + } + STR_APPEND(record, "}\n"); sendData(); } @@ -320,7 +364,7 @@ void Storage::storeDataRecord(struct metadata *mdata, struct json_conf * config) void Storage::storeMetadata(metadata* mdata) { std::stringstream ss; - + /* Geolocation info */ ss << "\"srcAS\": \"" << mdata->srcAS << "\", "; ss << "\"dstAS\": \"" << mdata->dstAS << "\", "; @@ -331,7 +375,7 @@ void Storage::storeMetadata(metadata* mdata) record += ss.str(); - + /* Profiles */ STR_APPEND(record, "\"profiles\": ["); if (mdata->channels) { diff --git a/plugins/storage/json/Storage.h 
b/plugins/storage/json/Storage.h index 706b3d8c..5e3cf5d0 100644 --- a/plugins/storage/json/Storage.h +++ b/plugins/storage/json/Storage.h @@ -64,7 +64,7 @@ class Storage { */ Storage(); ~Storage(); - + /** * \brief Add new output processor * \param[in] output @@ -72,19 +72,19 @@ class Storage { void addOutput(Output *output) { outputs.push_back(output); } bool hasSomeOutput() { return !outputs.empty(); } - + /** * \brief Store IPFIX message - * + * * @param msg IPFIX message */ void storeDataSets(const struct ipfix_message *msg, struct json_conf * config); - + /** * \brief Set metadata processing enabled/disabled - * - * @param enabled - */ + * + * @param enabled + */ void setMetadataProcessing(bool enabled) { processMetadata = enabled; } /** @@ -93,64 +93,64 @@ class Storage { * \param enabled */ void setPrintOnly(bool enabled) { printOnly = enabled; } - + private: Translator translator; /**< number -> string translator */ - + /** * \brief Get real field length - * + * * @param length length from template * @param data data record * @param offset field offset * @return real length; */ uint16_t realLength(uint16_t length, uint8_t *data, uint16_t &offset) const; - + /** * \brief Read string field - * + * * @param length length from template * @param data data record * @param offset field offset */ void readString(uint16_t &length, uint8_t *data, uint16_t &offset); - + /** * \brief Read raw data from record on given offset - * + * * @param field_len length from template * @param data data record * @param offset field offset (will be changed) */ void readRawData(uint16_t &length, uint8_t *data, uint16_t &offset); - + /** * \brief Store data record - * + * * @param mdata Data record's metadata */ - void storeDataRecord(struct metadata *mdata, struct json_conf * config); + void storeDataRecord(struct metadata *mdata, const struct ipfix_message *ipfix_msg, struct json_conf *config); /** * \brief Store metadata - * + * * @param mdata Data record's metadata */ void storeMetadata(struct metadata *mdata); - + /** * \brief Create raw name for unknown elements */ const char* rawName(uint32_t en, uint16_t id) const; - - + + /** * \brief Send JSON data to output processors */ void sendData() const; - + bool processMetadata{false}; /**< Metadata processing enabled */ bool printOnly{false}; uint8_t addr6[IPV6_LEN]; diff --git a/plugins/storage/json/Translator.cpp b/plugins/storage/json/Translator.cpp index a0b4c0e9..9ff6dc6e 100644 --- a/plugins/storage/json/Translator.cpp +++ b/plugins/storage/json/Translator.cpp @@ -44,6 +44,8 @@ #include #include "Storage.h" +// #include "itostr.h" +#include "branchlut2.h" /** * \brief Format flags 16bits @@ -86,11 +88,21 @@ Translator::~Translator() } /** - * \brief Format IPv6 + * \brief Format IPv4 */ -const char *Translator::formatIPv4(uint32_t addr) +const char *Translator::formatIPv4(uint32_t addr, uint16_t *ret_len) { - inet_ntop(AF_INET, &addr, buffer, INET_ADDRSTRLEN); + char *ret = NULL; + + ret = u32toa_branchlut2(((uint8_t *) &addr)[0], buffer); + ret++[0] = '.'; + ret = u32toa_branchlut2(((uint8_t *) &addr)[1], ret); + ret++[0] = '.'; + ret = u32toa_branchlut2(((uint8_t *) &addr)[2], ret); + ret++[0] = '.'; + ret = u32toa_branchlut2(((uint8_t *) &addr)[3], ret); + + *ret_len = ret - buffer; return buffer; } @@ -126,9 +138,15 @@ const char *Translator::formatProtocol(uint8_t proto) * \brief Format timestamp */ const char *Translator::formatTimestamp(uint64_t tstamp, t_units units, struct json_conf * config) -{ - if(config->timestamp) { +{ + /* Convert to 
host byte order. Seconds are stored in uint32_t */ + if (units == t_units::SEC) { + tstamp = ntohl(tstamp); + } else { tstamp = be64toh(tstamp); + } + + if(config->timestamp) { /* Convert to milliseconds */ switch (units) { @@ -153,7 +171,6 @@ const char *Translator::formatTimestamp(uint64_t tstamp, t_units units, struct j /* append miliseconds */ sprintf(&(buffer[20]), ".%03u\"", (const unsigned int) msec); } else { - tstamp = be64toh(tstamp); sprintf(buffer, "%" PRIu64 , tstamp); } @@ -163,79 +180,97 @@ const char *Translator::formatTimestamp(uint64_t tstamp, t_units units, struct j /** * \brief Conversion of unsigned int */ -const char *Translator::toUnsigned(uint16_t *length, uint8_t *data_record, +const char *Translator::toUnsigned(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset, const ipfix_element_t * element, struct json_conf * config) { - if(*length == BYTE1) { + // Calculate the length of returned string here to avoid unnecessary strlen later + const char *ret = NULL; + *ret_len = 0; + + if(length == BYTE1) { // 1 byte if(element->en == 0 && element->id == 6 && config->tcpFlags) { // Formated TCP flags - return formatFlags8(read8(data_record + offset)); + ret = formatFlags8(read8(data_record + offset)); + *ret_len = 8; + return ret; } else if (element->en == 0 && element->id == 4 && !config->protocol) { // Formated protocol identification (TCP, UDP, ICMP,...) - return (formatProtocol(read8(data_record + offset))); + ret = (formatProtocol(read8(data_record + offset))); + *ret_len = strlen(ret); + return ret; } else { // Other elements - snprintf(buffer, BUFF_SIZE, "%" PRIu8, read8(data_record + offset)); + ret = u32toa_branchlut2(read8(data_record + offset), buffer); } - } else if(*length == BYTE2) { + } else if(length == BYTE2) { // 2 bytes if (element->en == 0 && element->id == 6 && config->tcpFlags) { // Formated TCP flags - return formatFlags16(read16(data_record + offset)); + ret = formatFlags16(read16(data_record + offset)); + *ret_len = 8; + return ret; } else { // Other elements - snprintf(buffer, BUFF_SIZE, "%" PRIu16, ntohs(read16(data_record + offset))); + ret = u32toa_branchlut2(ntohs(read16(data_record + offset)), buffer); } - } else if(*length == BYTE4) { + } else if(length == BYTE4) { // 4 bytes - snprintf(buffer, BUFF_SIZE, "%" PRIu32, ntohl(read32(data_record + offset))); - } else if(*length == BYTE8) { + ret = u32toa_branchlut2(ntohl(read32(data_record + offset)), buffer); + } else if(length == BYTE8) { // 8 bytes - snprintf(buffer, BUFF_SIZE, "%" PRIu64, be64toh(read64(data_record + offset))); + ret = u64toa_branchlut2(be64toh(read64(data_record + offset)), buffer); } else { // Other sizes - snprintf(buffer, BUFF_SIZE, "%s", "\"unknown\""); + *ret_len = snprintf(buffer, BUFF_SIZE, "%s", "\"unknown\""); + return buffer; } + *ret_len = ret - buffer; return buffer; } /** * \brief Conversion of signed int */ -const char *Translator::toSigned(uint16_t *length, uint8_t *data_record, uint16_t offset) +const char *Translator::toSigned(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset) { - if(*length == BYTE1) { + // Calculate the length of returned string here to avoid unnecessary strlen later + const char *ret = NULL; + *ret_len = 0; + + if(length == BYTE1) { // 1 byte - snprintf(buffer, BUFF_SIZE, "%" PRId8, (int8_t) read8(data_record + offset)); - } else if(*length == BYTE2) { + ret = i32toa_branchlut2(read8(data_record + offset), buffer); + } else if(length == BYTE2) { // 2 bytes - snprintf(buffer, BUFF_SIZE, "%" 
PRId16, (int16_t) ntohs(read16(data_record + offset))); - } else if(*length == BYTE4) { + ret = i32toa_branchlut2(ntohs(read16(data_record + offset)), buffer); + } else if(length == BYTE4) { // 4 bytes - snprintf(buffer, BUFF_SIZE, "%" PRId32, (int32_t) ntohl(read32(data_record + offset))); - } else if(*length == BYTE8) { + ret = i32toa_branchlut2(ntohl(read32(data_record + offset)), buffer); + } else if(length == BYTE8) { // 8 bytes - snprintf(buffer, BUFF_SIZE, "%" PRId64, (int64_t) be64toh(read64(data_record + offset))); + ret = i64toa_branchlut2(be64toh(read64(data_record + offset)), buffer); } else { - snprintf(buffer, BUFF_SIZE, "\"unknown\""); + *ret_len = snprintf(buffer, BUFF_SIZE, "\"unknown\""); + return buffer; } + *ret_len = ret - buffer; return buffer; } /** * \brief Conversion of float */ -const char *Translator::toFloat(uint16_t *length, uint8_t *data_record, uint16_t offset) -{ - if(*length == BYTE4) - snprintf(buffer, BUFF_SIZE, "%f", (float) ntohl(read32(data_record + offset))); - else if(*length == BYTE8) - snprintf(buffer, BUFF_SIZE, "%lf", (double) be64toh(read64(data_record + offset))); +const char *Translator::toFloat(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset) +{ + if(length == BYTE4) + *ret_len = snprintf(buffer, BUFF_SIZE, "%f", (float) ntohl(read32(data_record + offset))); + else if(length == BYTE8) + *ret_len = snprintf(buffer, BUFF_SIZE, "%lf", (double) be64toh(read64(data_record + offset))); else - snprintf(buffer, BUFF_SIZE, "\"unknown\""); + *ret_len = snprintf(buffer, BUFF_SIZE, "\"unknown\""); return buffer; } diff --git a/plugins/storage/json/Translator.h b/plugins/storage/json/Translator.h index 749f7272..840cc1b5 100644 --- a/plugins/storage/json/Translator.h +++ b/plugins/storage/json/Translator.h @@ -78,9 +78,10 @@ class Translator { * \brief Format IPv4 address into dotted format * * @param addr address + * @param ret_len pointer to return length of generated string * @return formatted address */ - const char *formatIPv4(uint32_t addr); + const char *formatIPv4(uint32_t addr, uint16_t *ret_len); /** * \brief Format IPv6 address @@ -135,33 +136,36 @@ class Translator { * \brief Checks, if real length of record is the same as its data type. If not, converts to real length. * * @param length length of record + * @param ret_len pointer to return length of generated string * @param data_record pointer to head of data record * @param offset offset since head of data record * @param element pointer to actuall element in record * @param config pointer to configuration structure * @return value of Unsigned int */ - const char *toUnsigned(uint16_t *length, uint8_t *data_record, uint16_t offset, const ipfix_element_t * element, struct json_conf * config); + const char *toUnsigned(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset, const ipfix_element_t * element, struct json_conf * config); /** * \brief Checks, if real length of record is the same as its data type. If not, converts to real length. * * @param length length of record + * @param ret_len pointer to return length of generated string * @param data_record pointer to head of data record * @param offset offset since head of data record * @return value of Signed int */ - const char *toSigned(uint16_t *length, uint8_t *data_record, uint16_t offset); + const char *toSigned(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset); /** * \brief Checks, if real length of record is the same as its data type. 
If not, converts to real length. * * @param length length of record + * @param ret_len pointer to return length of generated string * @param data_record pointer to head of data record * @param offset offset since head of data record * @return value of Float */ - const char *toFloat(uint16_t *length, uint8_t *data_record, uint16_t offset); + const char *toFloat(uint16_t length, uint16_t *ret_len, uint8_t *data_record, uint16_t offset); /** * \brief Convert string to JSON format diff --git a/plugins/storage/json/branchlut2.h b/plugins/storage/json/branchlut2.h new file mode 100644 index 00000000..09a22273 --- /dev/null +++ b/plugins/storage/json/branchlut2.h @@ -0,0 +1,104 @@ +#ifndef BRANCHLUT_H +#define BRANCHLUT_H + +/** + * Code taken from https://github.com/miloyip/itoa-benchmark + */ + +#include <stdint.h> + +const char gDigitsLut[200] = { + '0','0','0','1','0','2','0','3','0','4','0','5','0','6','0','7','0','8','0','9', + '1','0','1','1','1','2','1','3','1','4','1','5','1','6','1','7','1','8','1','9', + '2','0','2','1','2','2','2','3','2','4','2','5','2','6','2','7','2','8','2','9', + '3','0','3','1','3','2','3','3','3','4','3','5','3','6','3','7','3','8','3','9', + '4','0','4','1','4','2','4','3','4','4','4','5','4','6','4','7','4','8','4','9', + '5','0','5','1','5','2','5','3','5','4','5','5','5','6','5','7','5','8','5','9', + '6','0','6','1','6','2','6','3','6','4','6','5','6','6','6','7','6','8','6','9', + '7','0','7','1','7','2','7','3','7','4','7','5','7','6','7','7','7','8','7','9', + '8','0','8','1','8','2','8','3','8','4','8','5','8','6','8','7','8','8','8','9', + '9','0','9','1','9','2','9','3','9','4','9','5','9','6','9','7','9','8','9','9' +}; + +#define BEGIN2(n) \ + do { \ + int t = (n); \ + if(t < 10) *p++ = '0' + t; \ + else { \ + t *= 2; \ + *p++ = gDigitsLut[t]; \ + *p++ = gDigitsLut[t + 1]; \ + } \ + } while(0) +#define MIDDLE2(n) \ + do { \ + int t = (n) * 2; \ + *p++ = gDigitsLut[t]; \ + *p++ = gDigitsLut[t + 1]; \ + } while(0) +#define BEGIN4(n) \ + do { \ + int t4 = (n); \ + if(t4 < 100) BEGIN2(t4); \ + else { BEGIN2(t4 / 100); MIDDLE2(t4 % 100); } \ + } while(0) +#define MIDDLE4(n) \ + do { \ + int t4 = (n); \ + MIDDLE2(t4 / 100); MIDDLE2(t4 % 100); \ + } while(0) +#define BEGIN8(n) \ + do { \ + uint32_t t8 = (n); \ + if(t8 < 10000) BEGIN4(t8); \ + else { BEGIN4(t8 / 10000); MIDDLE4(t8 % 10000); } \ + } while(0) +#define MIDDLE8(n) \ + do { \ + uint32_t t8 = (n); \ + MIDDLE4(t8 / 10000); MIDDLE4(t8 % 10000); \ + } while(0) +#define MIDDLE16(n) \ + do { \ + uint64_t t16 = (n); \ + MIDDLE8(t16 / 100000000); MIDDLE8(t16 % 100000000); \ + } while(0) + +static inline char * +u32toa_branchlut2(uint32_t x, char* p) +{ + if(x < 100000000) BEGIN8(x); + else { BEGIN2(x / 100000000); MIDDLE8(x % 100000000); } + *p = 0; + return p; +} + +static inline char * +i32toa_branchlut2(int32_t x, char* p) +{ + uint64_t t; + if(x >= 0) t = x; + else *p++ = '-', t = -uint32_t(x); + return u32toa_branchlut2(t, p); +} + +static inline char * +u64toa_branchlut2(uint64_t x, char* p) +{ + if(x < 100000000) BEGIN8(x); + else if(x < 10000000000000000) { BEGIN8(x / 100000000); MIDDLE8(x % 100000000); } + else { BEGIN4(x / 10000000000000000); MIDDLE16(x % 10000000000000000); } + *p = 0; + return p; +} + +static inline char * +i64toa_branchlut2(int64_t x, char* p) +{ + uint64_t t; + if(x >= 0) t = x; + else *p++ = '-', t = -uint64_t(x); + return u64toa_branchlut2(t, p); +} + +#endif /* BRANCHLUT_H */ diff --git a/plugins/storage/json/configure.ac b/plugins/storage/json/configure.ac index
fc10ea04..8295e175 100644 --- a/plugins/storage/json/configure.ac +++ b/plugins/storage/json/configure.ac @@ -38,7 +38,7 @@ AC_PREREQ([2.60]) # Process this file with autoconf to produce a configure script. -AC_INIT([ipfixcol-json-output], [1.2.3]) +AC_INIT([ipfixcol-json-output], [1.2.6]) AC_CONFIG_MACRO_DIR([m4]) AC_CONFIG_SRCDIR([json.cpp]) diff --git a/plugins/storage/json/ipfixcol-json-output.dbk b/plugins/storage/json/ipfixcol-json-output.dbk index c5bafda8..b4ce98df 100644 --- a/plugins/storage/json/ipfixcol-json-output.dbk +++ b/plugins/storage/json/ipfixcol-json-output.dbk @@ -1,7 +1,7 @@ -ipfixcol-json-output JSON output plugin for IPFIXcol. - + Description @@ -64,8 +64,8 @@ Configuration - The collector must be configured to use json output plugin in startup.xml configuration (/etc/ipfixcol/startup.xml). - The configuration specifies which plugins (destinations) are used by the collector to store data and provides configuration for the plugins themselves. + The collector must be configured to use json output plugin in startup.xml configuration (/etc/ipfixcol/startup.xml). + The configuration specifies which plugins (destinations) are used by the collector to store data and provides configuration for the plugins themselves. startup.xml json example @@ -75,11 +75,13 @@ json no + no formatted formatted formatted yes yes + no ipfix. @@ -137,13 +139,20 @@ + + odid + + Add source ODID to the output (yes/no) [default == no]. + + + tcpFlags Convert TCP flags to formatted style of dots and letters (formatted) or to a number (raw) [default == raw]. - + timestamp @@ -172,6 +181,13 @@ + + detailedInfo + + Add detailed info about the IPFIX message (export time, sequence number, ...) to each record under "ipfixcol." prefix. (yes/no) [default == no]. + + + prefix @@ -250,7 +266,7 @@ timeWindow - Specifies the time interval in seconds to rotate files [default == 300]. + Specifies the time interval in seconds to rotate files, minimum is 60 [default == 300]. 
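The Translator changes above replace `snprintf`-based number formatting with the branch-LUT conversion from `branchlut2.h` and make every formatter (`formatIPv4`, `toUnsigned`, `toSigned`, `toFloat`) report the length of the generated string, so the caller no longer needs `strlen` on the hot path. The sketch below is a simplified, self-contained illustration of the same two-digits-per-lookup idea; names such as `u32toa_sketch` and `kPairs` are illustrative, not the plugin's API, which uses `u32toa_branchlut2` and friends exactly as added in the diff.

```cpp
#include <cstdint>
#include <cstdio>

// 100 digit pairs "00".."99"; one table lookup emits two digits at once.
static const char kPairs[] =
    "00010203040506070809101112131415161718192021222324"
    "25262728293031323334353637383940414243444546474849"
    "50515253545556575859606162636465666768697071727374"
    "75767778798081828384858687888990919293949596979899";

// Simplified LUT itoa: writes x into p and returns the position past the
// last digit, so the caller gets the length as (end - begin) without strlen().
static char *u32toa_sketch(uint32_t x, char *p)
{
    char tmp[10];
    int n = 0;
    while (x >= 100) {              // two digits per division
        const char *d = &kPairs[(x % 100) * 2];
        tmp[n++] = d[1];
        tmp[n++] = d[0];
        x /= 100;
    }
    if (x >= 10) {                  // leading pair
        const char *d = &kPairs[x * 2];
        tmp[n++] = d[1];
        tmp[n++] = d[0];
    } else {                        // single leading digit
        tmp[n++] = static_cast<char>('0' + x);
    }
    while (n > 0) *p++ = tmp[--n];  // digits were collected in reverse
    *p = '\0';
    return p;
}

int main()
{
    // Same per-octet pattern as the new Translator::formatIPv4().
    char buf[16];
    char *end = u32toa_sketch(192, buf);
    *end++ = '.'; end = u32toa_sketch(168, end);
    *end++ = '.'; end = u32toa_sketch(0, end);
    *end++ = '.'; end = u32toa_sketch(1, end);
    std::printf("%s (len %td)\n", buf, end - buf);  // 192.168.0.1 (len 11)
}
```

Returning the end pointer instead of a NUL-terminated buffer alone is also what lets the new `toUnsigned`/`toSigned` fill `ret_len` without a trailing `strlen` call.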
diff --git a/plugins/storage/json/json.cpp b/plugins/storage/json/json.cpp index 76c5f460..ec55416a 100644 --- a/plugins/storage/json/json.cpp +++ b/plugins/storage/json/json.cpp @@ -63,23 +63,28 @@ IPFIXCOL_API_VERSION; static const char *msg_module = "json_storage"; -void process_startup_xml(struct json_conf *conf, char *params) +void process_startup_xml(struct json_conf *conf, char *params) { pugi::xml_document doc; pugi::xml_parse_result result = doc.load(params); - + if (!result) { throw std::invalid_argument(std::string("Error when parsing parameters: ") + result.description()); } /* Get configuration */ pugi::xpath_node ie = doc.select_single_node("fileWriter"); - + /* Check metadata processing */ std::string meta = ie.node().child_value("metadata"); conf->metadata = (strcasecmp(meta.c_str(), "yes") == 0 || meta == "1" || strcasecmp(meta.c_str(), "true") == 0); + /* Check ODID processing */ + std::string odid = ie.node().child_value("odid"); + conf->odid = (strcasecmp(odid.c_str(), "yes") == 0 || odid == "1" || + strcasecmp(odid.c_str(), "true") == 0); + /* Format of TCP flags */ std::string tcpFlags = ie.node().child_value("tcpFlags"); conf->tcpFlags = (strcasecmp(tcpFlags.c_str(), "formated") == 0) || @@ -110,6 +115,14 @@ void process_startup_xml(struct json_conf *conf, char *params) conf->whiteSpaces = false; } + /* Detailed information in records */ + std::string detailedInfo = ie.node().child_value("detailedInfo"); + conf->detailedInfo = false; + if (strcasecmp(detailedInfo.c_str(), "true") == 0 || detailedInfo == "1" || + strcasecmp(detailedInfo.c_str(), "yes") == 0) { + conf->detailedInfo = true; + } + /* Prefix for IPFIX elements */ /* Set default rpefix */ conf->prefix = "ipfix."; @@ -153,21 +166,21 @@ void process_startup_xml(struct json_conf *conf, char *params) /* plugin inicialization */ extern "C" int storage_init (char *params, void **config) -{ +{ struct json_conf *conf; try { /* Create configuration */ conf = new struct json_conf; - + /* Create storage */ conf->storage = new Storage(); /* Process params */ process_startup_xml(conf, params); - + /* Configure metadata processing */ conf->storage->setMetadataProcessing(conf->metadata); - + /* Save configuration */ *config = conf; } catch (std::exception &e) { @@ -180,7 +193,7 @@ int storage_init (char *params, void **config) return 1; } - + MSG_DEBUG(msg_module, "initialized"); return 0; } @@ -192,7 +205,7 @@ int store_packet (void *config, const struct ipfix_message *ipfix_msg, { (void) template_mgr; struct json_conf *conf = (struct json_conf *) config; - + conf->storage->storeDataSets(ipfix_msg, conf); return 0; } @@ -209,15 +222,15 @@ int storage_close (void **config) { MSG_DEBUG(msg_module, "CLOSING"); struct json_conf *conf = (struct json_conf *) *config; - + /* Destroy storage */ delete conf->storage; - + /* Destroy configuration */ delete conf; - + *config = NULL; - + return 0; } diff --git a/plugins/storage/json/json.h b/plugins/storage/json/json.h index 0ffca9c5..74b12bd3 100644 --- a/plugins/storage/json/json.h +++ b/plugins/storage/json/json.h @@ -56,12 +56,14 @@ class Storage; */ struct json_conf { bool metadata; + bool odid; /**< Add ODID to json output */ Storage *storage; bool tcpFlags; /**< TCP flags format - true(formatted), false(RAW) */ bool timestamp; /**< timestamp format - true(formatted), false(UNIX) */ bool protocol; /**< protocol format - true(RAW), false(formatted) */ bool ignoreUnknown; /**< Ignore unknown elements */ bool whiteSpaces; /**< Convert white spaces in strings (do not skip) */ + 
bool detailedInfo; /**< Add detailed information about the IPFIX message to each record */ std::string prefix; /**< Prefix for IPFIX elements */ }; diff --git a/plugins/storage/json/m4/lbr_set_distro.m4 b/plugins/storage/json/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/json/m4/lbr_set_distro.m4 +++ b/plugins/storage/json/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/lnfstore/configure.ac b/plugins/storage/lnfstore/configure.ac index 6f38b903..021ccd32 100644 --- a/plugins/storage/lnfstore/configure.ac +++ b/plugins/storage/lnfstore/configure.ac @@ -1,6 +1,6 @@ AC_PREREQ([2.60]) # Process this file with autoconf to produce a configure script. -AC_INIT([ipfixcol-lnfstore-output], [0.3.3]) +AC_INIT([ipfixcol-lnfstore-output], [0.3.4]) AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability]) LT_PREREQ([2.2]) LT_INIT([dlopen disable-static]) diff --git a/plugins/storage/lnfstore/m4/lbr_set_distro.m4 b/plugins/storage/lnfstore/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/lnfstore/m4/lbr_set_distro.m4 +++ b/plugins/storage/lnfstore/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/nfdump/m4/lbr_set_distro.m4 b/plugins/storage/nfdump/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/nfdump/m4/lbr_set_distro.m4 +++ b/plugins/storage/nfdump/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/postgres/m4/lbr_set_distro.m4 b/plugins/storage/postgres/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/postgres/m4/lbr_set_distro.m4 +++ b/plugins/storage/postgres/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/statistics/m4/lbr_set_distro.m4 b/plugins/storage/statistics/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/statistics/m4/lbr_set_distro.m4 +++ b/plugins/storage/statistics/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/unirec/configure.ac b/plugins/storage/unirec/configure.ac index c9784fc4..66fe4c9a 100644 --- a/plugins/storage/unirec/configure.ac +++ b/plugins/storage/unirec/configure.ac @@ -38,7 +38,7
@@ AC_PREREQ([2.60]) # Process this file with autoconf to produce a configure script. -AC_INIT([ipfixcol-unirec-output], [0.2.12]) +AC_INIT([ipfixcol-unirec-output], [0.2.15]) AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability]) LT_PREREQ([2.2]) LT_INIT([disable-static]) diff --git a/plugins/storage/unirec/m4/lbr_set_distro.m4 b/plugins/storage/unirec/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/plugins/storage/unirec/m4/lbr_set_distro.m4 +++ b/plugins/storage/unirec/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/plugins/storage/unirec/unirec-elements.txt b/plugins/storage/unirec/unirec-elements.txt index 8c820ae7..76245224 100644 --- a/plugins/storage/unirec/unirec-elements.txt +++ b/plugins/storage/unirec/unirec-elements.txt @@ -30,23 +30,17 @@ HB_TYPE uint8 1 e8057id700 TLS conten HB_DIR uint8 1 e8057id701 Heartbeat request/response byte HB_SIZE_MSG uint16 2 e8057id702 Heartbeat message size HB_SIZE_PAYLOAD uint16 2 e8057id703 Heartbeat payload size -HTTP_REQUEST_METHOD_ID uint32 4 e16982id500 HTTP request method id -HTTP_REQUEST_HOST string -1 e16982id501 HTTP request host -HTTP_REQUEST_URL string -1 e16982id502 HTTP request url -HTTP_REQUEST_AGENT_ID uint32 4 e16982id503 HTTP request agent id -HTTP_REQUEST_AGENT string -1 e16982id504 HTTP request agent -HTTP_REQUEST_REFERER string -1 e16982id505 HTTP referer -HTTP_RESPONSE_STATUS_CODE uint32 4 e16982id506 HTTP response status code -HTTP_RESPONSE_CONTENT_TYPE string -1 e16982id507 HTTP response content type -HTTP_SDM_REQUEST_METHOD_ID uint32 4 e8057id800 Used method -HTTP_SDM_REQUEST_HOST string -1 e8057id801 Host -HTTP_SDM_REQUEST_URL string -1 e8057id802 URL -HTTP_SDM_REQUEST_REFERER string -1 e8057id803 Referer -HTTP_SDM_REQUEST_AGENT string -1 e8057id804 User-agent -HTTP_SDM_REQUEST_RANGE bytes -1 e8057id821 Range -HTTP_SDM_RESPONSE_STATUS_CODE uint32 4 e8057id805 Status coce converted into integer -HTTP_SDM_RESPONSE_CONTENT_TYPE string -1 e8057id806 Content-type -HTTP_SDM_RESPONSE_TIME uint64 8 e8057id807 Application response time +# HTTP elements from Flowmon HTTP plugin in MUNI PEN, and CESNET sdm-http and sdm-https plugins in CESNET PEN +HTTP_REQUEST_METHOD_ID uint32 4 e16982id500,e8057id800 HTTP request method id +HTTP_REQUEST_HOST string -1 e16982id501,e8057id801,e8057id808 HTTP(S) request host +HTTP_REQUEST_URL string -1 e16982id502,e8057id802 HTTP request url +HTTP_REQUEST_AGENT_ID uint32 4 e16982id503 HTTP request agent id +HTTP_REQUEST_AGENT string -1 e16982id504,e8057id804 HTTP request agent +HTTP_REQUEST_REFERER string -1 e16982id505,e8057id803 HTTP referer +HTTP_RESPONSE_STATUS_CODE uint32 4 e16982id506,e8057id805 HTTP response status code +HTTP_RESPONSE_CONTENT_TYPE string -1 e16982id507,e8057id806 HTTP response content type +HTTP_REQUEST_RANGE bytes -1 e8057id821 HTTP range +HTTP_RESPONSE_TIME uint64 8 e8057id807,e8057id809 HTTP(S) application response time IPV6_TUN_TYPE uint8 1 e16982id405 IPv6 tunnel type SMTP_COMMAND_FLAGS uint32 4 e8057id810 SMTP command flags SMTP_MAIL_CMD_COUNT uint32 4 e8057id811 SMTP MAIL command count @@ -89,4 +83,4 @@ SIP_VIA string -1 e8057id105 SIP VIA SIP_USER_AGENT string -1 e8057id106 SIP user agent SIP_REQUEST_URI string -1 e8057id107 SIP request uri SIP_CSEQ string -1 e8057id108 SIP 
CSeq -VENOM uint8 1 e8057id1001 Venom rootkit detection \ No newline at end of file +VENOM uint8 1 e8057id1001 Venom rootkit detection diff --git a/plugins/storage/unirec/unirec.c b/plugins/storage/unirec/unirec.c index 134ef9bd..7cadeea9 100644 --- a/plugins/storage/unirec/unirec.c +++ b/plugins/storage/unirec/unirec.c @@ -364,7 +364,8 @@ static uint16_t process_record(char *data_record, struct ipfix_template *templat break; case UNIREC_FIELD_DBF: // Handle DIR_BIT_FIELD - *(uint8_t*)(conf->ifc[i].buffer + matchField->offset_ar[i]) = ((*(uint16_t*)(data_record + offset + size_length)) >> 8) & 0x1; + // Just read the least significant byte directly and use only the least significant bit + *(uint8_t*)(conf->ifc[i].buffer + matchField->offset_ar[i]) = (*(uint8_t*)(data_record + offset + (length - 1))) & 0x1; break; case UNIREC_FIELD_LBF: // Handle LINK_BIT_FIELD, is BIG ENDIAN but we are using only LSB @@ -393,8 +394,12 @@ static uint16_t process_record(char *data_record, struct ipfix_template *templat } } else { // Dynamic element - matchField->valueSize = length; matchField->value = (void*) (data_record + offset + size_length); + if (matchField->unirec_type == 0) { // string value should be trimmed + matchField->valueSize = strnlen(data_record + offset + size_length, length); + } else { + matchField->valueSize = length; + } matchField->valueFilled = 1; // Fill required count for Unirec where this element is required for (int i = 0; i < conf->ifc_count; i++) { @@ -638,8 +643,8 @@ static int8_t getUnirecFieldTypeFromIpfixId(ipfixElement ipfix_el) } else if (en == 0 && (id == 152 || id == 153)) { // Timestamps return UNIREC_FIELD_TS; - } else if (en == 0 && id == 10) { - // DIR_BIT_FIELD + } else if (en == 0 && (id == 10 || id == 14)) { + // DIR_BIT_FIELD (in/out interface numbers) return UNIREC_FIELD_DBF; } else if (en == 0 && id == 405) { // LINK_BIT_FIELD diff --git a/tools/fbitconvert/m4/lbr_set_distro.m4 b/tools/fbitconvert/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/tools/fbitconvert/m4/lbr_set_distro.m4 +++ b/tools/fbitconvert/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/tools/fbitdump/changelog.md b/tools/fbitdump/changelog.md index 4c544f82..099e806f 100644 --- a/tools/fbitdump/changelog.md +++ b/tools/fbitdump/changelog.md @@ -1,5 +1,12 @@ **Future release:** +**Version 0.4.4:** +* Fixed blob hex output +* Fixed TCP flags example in man page + +**Version 0.4.3:** +* Fixed configuration for CESNET SIP plugin + **Version 0.4.2:** * Fixed markdown syntax * Support DocBook XSL Stylesheets v1.79 diff --git a/tools/fbitdump/configure.ac b/tools/fbitdump/configure.ac index fe561344..8e5d28e0 100644 --- a/tools/fbitdump/configure.ac +++ b/tools/fbitdump/configure.ac @@ -39,7 +39,7 @@ AC_PREREQ([2.60]) # Process this file with autoconf to produce a configure script. 
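Two behavioral fixes in the unirec.c hunk above are easy to miss: DIR_BIT_FIELD is now read as the last byte of the big-endian field instead of shifting a 16-bit load, and dynamic string elements are trimmed with `strnlen` so that NUL padding inside the fixed-width IPFIX field does not leak into the UniRec value. Below is a minimal sketch of that trimming; `trimmed_value_size` is an illustrative name, not the plugin's function.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Sketch: a dynamic string element may occupy `field_len` bytes in the data
// record while the actual text is shorter and NUL-padded. strnlen() stops at
// the first NUL but never reads past field_len.
static size_t trimmed_value_size(const uint8_t *field, uint16_t field_len)
{
    return strnlen(reinterpret_cast<const char *>(field), field_len);
}

int main()
{
    const uint8_t field[8] = {'G', 'E', 'T', 0, 0, 0, 0, 0};
    // Before the fix, all 8 bytes became the value size; now only 3 do.
    std::printf("value size: %zu\n", trimmed_value_size(field, sizeof(field)));
}
```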
-AC_INIT([fbitdump], [0.4.2]) +AC_INIT([fbitdump], [0.4.4]) LT_INIT([dlopen disable-static]) AM_INIT_AUTOMAKE([-Wall -Werror foreign -Wno-portability]) diff --git a/tools/fbitdump/fbitdump.xml.template b/tools/fbitdump/fbitdump.xml.template index 5dd3e2eb..c7c0cc2a 100644 --- a/tools/fbitdump/fbitdump.xml.template +++ b/tools/fbitdump/fbitdump.xml.template @@ -1267,84 +1267,84 @@ - SIP Method - %csipm + SIP Msg Type + %csipmt 9 - - e8057id830 + e8057id100 SIP Status Code - %csips + %csipsc 3 - - e8057id831 + e8057id101 - SIP Request URI - %csipu + SIP Call ID + %csipci 64 - - e8057id832 + e8057id102 - SIP From - %csipf + SIP Calling Party + %csipsrc 64 - - e8057id833 + e8057id103 - SIP To - %csipt + SIP Called Party + %csipdst 64 - - e8057id834 + e8057id104 - SIP Contact - %csipc + SIP Via + %csipv 64 - - e8057id835 + e8057id105 - SIP Via - %csipv + SIP User Agent + %csipua 64 - - e8057id836 + e8057id106 - SIP Route - %csipr + SIP Request URI + %csipru 64 - - e8057id837 + e8057id107 - SIP Record Route - %csiprr + SIP Cseq + %csipcseq 64 - - e8057id838 + e8057id108 @@ -1702,7 +1702,7 @@ sip4-cesnet - %ts %td %pr %sp %dp %sa4 %da4 %pkt %byt %fl %csipm %csipc %csipu %csipf %csipt %csipc %csipv %csipr %csiprr + %ts %td %pr %sp %dp %sa4 %da4 %pkt %byt %fl %csipmt %csipsc %csipci %csipsrc %csipdst %csipv %csipua %csipru %csipcseq voip @@ -1745,6 +1745,11 @@ @pkgdatadir@/plugins/sip_method.so 1 + + sip_msg_type + @pkgdatadir@/plugins/sip_msg_type.so + 1 + dns_rcode @pkgdatadir@/plugins/dns_rcode.so diff --git a/tools/fbitdump/m4/lbr_set_distro.m4 b/tools/fbitdump/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/tools/fbitdump/m4/lbr_set_distro.m4 +++ b/tools/fbitdump/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/tools/fbitdump/man/fbitdump.dbk b/tools/fbitdump/man/fbitdump.dbk index 21f349e4..5a0a64af 100644 --- a/tools/fbitdump/man/fbitdump.dbk +++ b/tools/fbitdump/man/fbitdump.dbk @@ -366,9 +366,9 @@ %column is an element alias prefixed with %. %column element can be a computed value, that is a value that is not directly stored. Alternately it is possible to specify the element name in e[x]id[y] format, where [x] means enterprise number and [y] element ID. It is also possible to specify a group of columns (e.g., %port for source and destination port) - It is also possible to mask %column with binary and (&) or binary or (|). Following example select all flows with SYN flag: + It is also possible to mask %column with binary and (&) or binary or (|). Following example selects all flows with SYN flag: - %flg | S > 0 + %flg & S > 0 cmp is one of '=', '==', '<', '>', '<=', '>=', '!='. 
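The man page correction above (`%flg | S > 0` becoming `%flg & S > 0`) matters: binary AND isolates the SYN bit, whereas OR merges the mask into the value, so the old example matched every record. A tiny demonstration of the difference, assuming the usual TCP flag layout with SYN = 0x02:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const uint8_t SYN = 0x02;       // TCP SYN bit
    const uint8_t syn_ack = 0x12;   // flags of a SYN+ACK packet
    const uint8_t ack_only = 0x10;  // flags of a pure ACK packet

    // "%flg & S > 0": non-zero only when SYN is actually set.
    std::printf("AND: %d %d\n", (syn_ack & SYN) > 0, (ack_only & SYN) > 0); // 1 0

    // "%flg | S > 0" (the old, wrong example): OR re-inserts the SYN bit,
    // so the test is non-zero for every flow, SYN or not.
    std::printf("OR:  %d %d\n", (syn_ack | SYN) > 0, (ack_only | SYN) > 0); // 1 1
}
```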
When filtering by string, '%' mark can be automatically inserted to the end or/and beginning of string value according to cmp: diff --git a/tools/fbitdump/src/Values.cpp b/tools/fbitdump/src/Values.cpp index 9f0108b3..1cca02b9 100644 --- a/tools/fbitdump/src/Values.cpp +++ b/tools/fbitdump/src/Values.cpp @@ -145,7 +145,7 @@ std::string Values::toString(bool plainNumbers) const /* Convert the value to hexa */ for (uint64_t i=0; i < this->opaque.size(); i++) { - ss << std::setw(2) << static_cast((this->opaque.address()[i])); + ss << std::setw(2) << static_cast((uint8_t)(this->opaque.address()[i])); } valStr = ss.str(); diff --git a/tools/fbitdump/src/plugins/Makefile.am b/tools/fbitdump/src/plugins/Makefile.am index 90a27752..1dcc5ab2 100644 --- a/tools/fbitdump/src/plugins/Makefile.am +++ b/tools/fbitdump/src/plugins/Makefile.am @@ -2,7 +2,7 @@ ACLOCAL_AMFLAGS = -I m4 pluginsdir = $(datadir)/fbitdump/plugins -plugins_LTLIBRARIES = httprt.la http_status_code.la sip_method.la dns_rcode.la tls_csuites.la tls_version.la tls_csuites_array.la voip_type.la voip_rtpcodec.la smtp_statuscode.la smtp_command.la mac.la multiplier.la +plugins_LTLIBRARIES = httprt.la http_status_code.la sip_method.la sip_msg_type.la dns_rcode.la tls_csuites.la tls_version.la tls_csuites_array.la voip_type.la voip_rtpcodec.la smtp_statuscode.la smtp_command.la mac.la multiplier.la httprt_la_SOURCES= httprt.c httprt_la_LDFLAGS= -shared -module -avoid-version @@ -13,6 +13,9 @@ http_status_code_la_LDFLAGS= -shared -module -avoid-version sip_method_la_SOURCES= sip_method.c sip_method_la_LDFLAGS= -shared -module -avoid-version +sip_msg_type_la_SOURCES= sip_msg_type.c +sip_msg_type_la_LDFLAGS= -shared -module -avoid-version + dns_rcode_la_SOURCES= dns_rcode.c dns_rcode_la_LDFLAGS= -shared -module -avoid-version diff --git a/tools/fbitdump/src/plugins/sip_msg_type.c b/tools/fbitdump/src/plugins/sip_msg_type.c new file mode 100644 index 00000000..f8586bc4 --- /dev/null +++ b/tools/fbitdump/src/plugins/sip_msg_type.c @@ -0,0 +1,89 @@ +#define _GNU_SOURCE +#include <stdio.h> +#include <string.h> +#include "plugin_header.h" + +typedef struct msg_type_s { + int code; + char *name; +} msg_type_t; + +static const msg_type_t msg_types[] = { + { 0, "Invalid" }, + { 1, "Invite" }, + { 2, "Ack" }, + { 3, "Cancel" }, + { 4, "Bye" }, + { 5, "Register" }, + { 6, "Options" }, + { 7, "Publish" }, + { 8, "Notify" }, + { 9, "Info" }, + { 10, "Subscribe" }, + { 99, "Status" }, + { 100, "Trying" }, + { 101, "Dial Established" }, + { 180, "Ringing" }, + { 183, "Session Progress" }, + { 200, "OK" }, + { 400, "Bad Request" }, + { 401, "Unauthorized" }, + { 403, "Forbidden" }, + { 404, "Not Found" }, + { 407, "Proxy Auth Required" }, + { 486, "Busy Here" }, + { 487, "Request Canceled" }, + { 500, "Internal Error" }, + { 603, "Decline" }, + { 999, "Undefined" } +}; + +#define MSG_CNT (sizeof(msg_types) / sizeof(msg_types[0])) + +char *info() +{ + return \ +"Converts SIP message type description to code and vice versa.\n \ +e.g.
\"Ringing\" -> 180"; +} + +void format(const plugin_arg_t * arg, int plain_numbers, char out[PLUGIN_BUFFER_SIZE], void *conf) +{ + char *str = NULL; + char num[15]; + int i, size = MSG_CNT; + + for (i = 0; i < size; ++i) { + if (msg_types[i].code == arg->val[0].uint32) { + str = msg_types[i].name; + break; + } + } + + if (str == NULL) { + snprintf(num, sizeof(num), "%u", arg->val[0].uint32); + str = num; + } + + snprintf(out, PLUGIN_BUFFER_SIZE, "%s", str); +} + +void parse(char *input, char out[PLUGIN_BUFFER_SIZE], void *conf) +{ + int code, i, size = MSG_CNT; + + for (i = 0; i < size; ++i) { + if (!strcasecmp(input, msg_types[i].name)) { + code = msg_types[i].code; + break; + } + } + + // Return empty string if SIP message type description was not found + if (i == size) { + snprintf(out, PLUGIN_BUFFER_SIZE, "%s", ""); + return; + } + + snprintf(out, PLUGIN_BUFFER_SIZE, "%d", code); +} diff --git a/tools/fbitdump/src/typedefs.h b/tools/fbitdump/src/typedefs.h index 57eca2c9..74803390 100644 --- a/tools/fbitdump/src/typedefs.h +++ b/tools/fbitdump/src/typedefs.h @@ -46,7 +46,7 @@ #include #include #include -#include "fastbit/ibis.h" +#include /* Get defines from configure */ #ifdef HAVE_CONFIG_H diff --git a/tools/fbitexpire/m4/lbr_set_distro.m4 b/tools/fbitexpire/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/tools/fbitexpire/m4/lbr_set_distro.m4 +++ b/tools/fbitexpire/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/tools/fbitmerge/m4/lbr_set_distro.m4 b/tools/fbitmerge/m4/lbr_set_distro.m4 index 0de818ac..a2f9878c 100644 --- a/tools/fbitmerge/m4/lbr_set_distro.m4 +++ b/tools/fbitmerge/m4/lbr_set_distro.m4 @@ -26,7 +26,7 @@ AC_DEFUN([LBR_SET_DISTRO], # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' diff --git a/tools/fbitmerge/src/fbitmerge.cpp b/tools/fbitmerge/src/fbitmerge.cpp index bef5c36a..af4b5745 100644 --- a/tools/fbitmerge/src/fbitmerge.cpp +++ b/tools/fbitmerge/src/fbitmerge.cpp @@ -92,16 +92,14 @@ void usage() */ void remove_folder_tree(std::string dir_name) { - DIR *dir; - struct dirent *subdir; - - dir = opendir(dir_name.c_str()); + DIR *dir = opendir(dir_name.c_str()); if (dir == NULL) { std::cerr << "Error while initializing directory '" << dir_name << "'" << std::endl; return; } /* Go through all files and subfolders */ + struct dirent *subdir; while ((subdir = readdir(dir)) != NULL) { if (subdir->d_name[0] == '.') { continue; @@ -195,12 +193,12 @@ void merge_flows_stats(std::string first, std::string second) std::fstream file_f; std::fstream file_s; - file_f.open(first.c_str(), std::fstream::in); + file_f.open(first, std::fstream::in); if (!file_f.is_open()) { std::cerr << "Can't open file '" << first << "' for reading\n"; } - file_s.open(second.c_str(), std::fstream::in); + file_s.open(second, std::fstream::in); if (!file_s.is_open()) { std::cerr << "Can't open file '" << second << "' for reading\n"; file_f.close(); @@ -234,13 +232,13 @@ void merge_flows_stats(std::string first, std::string second) } /* Save data into second file */ - file_s.open(second.c_str(), 
std::fstream::out | std::fstream::trunc); + file_s.open(second, std::fstream::out | std::fstream::trunc); if (!file_s.is_open()) { std::cerr << "Cannot open file '" << second << "' for writing\n"; } else { - file_s << "Exported flows: " << exported_flows << std::endl; - file_s << "Received flows: " << received_flows << std::endl; - file_s << "Lost flows: " << lost_flows << std::endl; + file_s << "Exported flows: " << exported_flows << '\n'; + file_s << "Received flows: " << received_flows << '\n'; + file_s << "Lost flows: " << lost_flows << '\n'; file_s.close(); } } @@ -254,7 +252,7 @@ void merge_flows_stats(std::string first, std::string second) int merge_dirs(std::string src_dir, std::string dst_dir) { /* Table initialization */ - ibis::part part(dst_dir.c_str(), static_cast(0)); + ibis::part part(dst_dir.c_str(), nullptr); /* If there are no rows, we have nothing to do */ if (part.nRows() == 0) { @@ -271,7 +269,7 @@ int merge_dirs(std::string src_dir, std::string dst_dir) void scan_dir(std::string dir_name, std::string src_dir, DIRMAP *bigMap) { - ibis::part part(src_dir.c_str(), static_cast(0)); + ibis::part part(src_dir.c_str(), nullptr); if (part.nRows() == 0) { return; @@ -285,15 +283,15 @@ void scan_dir(std::string dir_name, std::string src_dir, DIRMAP *bigMap) int same_data(innerDirMap *first, innerDirMap *second) { - if ((*first).size() != (*second).size()) { + if (first->size() != second->size()) { return NOT_OK; } - for (innerDirMap::iterator it = (*first).begin(); it != (*first).end(); it++) { - if ((*second).find((*it).first) == (*second).end()) { + for (innerDirMap::iterator it = first->begin(); it != first->end(); it++) { + if (second->find(it->first) == second->end()) { return NOT_OK; } - if ((*it).second != (*second)[(*it).first]) { + if (it->second != (*second)[it->first]) { return NOT_OK; } } @@ -381,10 +379,10 @@ int merge_couple(std::string src_dir, std::string dst_dir, std::string work_dir) /* Iterate through whole dst_map and src_map and find folders with same data (and data types) */ for (DIRMAP::iterator dst_i = dst_map.begin(); dst_i != dst_map.end(); ++dst_i) { for (DIRMAP::iterator src_i = src_map.begin(); src_i != src_map.end(); ) { - if (same_data(&((*dst_i).second), &((*src_i).second)) == OK) { + if (same_data(&dst_i->second, &src_i->second) == OK) { /* If found, merge it */ - if (merge_dirs(src_dir_path + "/" + (*src_i).first, - dst_dir_path + "/" + (*dst_i).first) != OK) { + if (merge_dirs(src_dir_path + "/" + src_i->first, + dst_dir_path + "/" + dst_i->first) != OK) { closedir(sdir); closedir(ddir); return NOT_OK; @@ -405,21 +403,21 @@ int merge_couple(std::string src_dir, std::string dst_dir, std::string work_dir) char suffix = 'a'; /* Add suffix to the name */ - while (stat((dst_dir_path + "/" + (*src_i).first).c_str(), &st) == 0) { + while (stat((dst_dir_path + "/" + src_i->first).c_str(), &st) == 0) { if (suffix == 'z') { suffix = 'A'; } else if (suffix == 'Z') { /* \TODO do it better */ - std::cerr << "Not enough suffixes for folder '" << (*src_i).first << "'" << std::endl; + std::cerr << "Not enough suffixes for folder '" << src_i->first << "'" << std::endl; break; } else { suffix++; } } - if (rename((src_dir_path + "/" + (*src_i).first).c_str(), - (dst_dir_path + "/" + (*src_i).first).c_str()) != 0) { - std::cerr << "Cannot rename folder '" << (src_dir_path + "/" + (*src_i).first) << "'" << std::endl; + if (rename((src_dir_path + "/" + src_i->first).c_str(), + (dst_dir_path + "/" + src_i->first).c_str()) != 0) { + std::cerr << "Cannot rename folder 
'" << (src_dir_path + "/" + src_i->first) << "'" << std::endl; } } @@ -431,15 +429,15 @@ int merge_couple(std::string src_dir, std::string dst_dir, std::string work_dir) continue; } - if (same_data(&((*dst_i).second), &((*it).second)) == OK) { - if (merge_dirs(dst_dir_path + "/" + (*it).first, - dst_dir_path + "/" + (*dst_i).first) != OK) { + if (same_data(&dst_i->second, &it->second) == OK) { + if (merge_dirs(dst_dir_path + "/" + it->first, + dst_dir_path + "/" + dst_i->first) != OK) { closedir(sdir); closedir(ddir); return NOT_OK; } - remove_folder_tree((dst_dir_path + "/" + (*it).first).c_str()); + remove_folder_tree((dst_dir_path + "/" + it->first).c_str()); it = dst_map.erase(it); } @@ -450,8 +448,8 @@ int merge_couple(std::string src_dir, std::string dst_dir, std::string work_dir) } /* Finally merge flowsStats.txt files */ - merge_flows_stats(src_dir_path + "/" + "flowsStats.txt", - dst_dir_path + "/" + "flowsStats.txt"); + merge_flows_stats(src_dir_path + "/flowsStats.txt", + dst_dir_path + "/flowsStats.txt"); closedir(sdir); closedir(ddir); @@ -471,8 +469,7 @@ int merge_couple(std::string src_dir, std::string dst_dir, std::string work_dir) */ int merge_all(std::string work_dir, uint16_t key, std::string prefix) { - DIR *dir = NULL; - dir = opendir(work_dir.c_str()); + DIR *dir = opendir(work_dir.c_str()); if (dir == NULL) { std::cerr << "Error while initializing directory '" << work_dir << "'" << std::endl; return NOT_OK; @@ -503,8 +500,6 @@ int merge_all(std::string work_dir, uint16_t key, std::string prefix) std::map dir_map; std::map dir_map_max_mtime; struct dirent *subdir = NULL; - char key_str[size + 1]; - std::string full_subdir_path; while ((subdir = readdir(dir)) != NULL) { if (subdir->d_name[0] == '.') { @@ -512,12 +507,13 @@ int merge_all(std::string work_dir, uint16_t key, std::string prefix) } /* Get key value */ + char key_str[size + 1]; memset(key_str, 0, size + 1); memcpy(key_str, subdir->d_name + prefix.length(), size); uint32_t key_int = atoi(key_str); /* Get mtime */ - full_subdir_path = work_dir + "/" + subdir->d_name; + std::string full_subdir_path = work_dir + "/" + subdir->d_name; time_t dir_mtime = get_file_mtime(full_subdir_path); /* If it is the first occurrence of the key, store it in the map. @@ -545,7 +541,7 @@ int merge_all(std::string work_dir, uint16_t key, std::string prefix) /* Rename folders, if necessary - reset name values after key to 0. Also * update folder mtime. 
*/ - for (std::map::iterator i = dir_map.begin(); i != dir_map.end(); i++) { + for (std::map::iterator i = dir_map.begin(); i != dir_map.end(); ++i) { if (prefix.length() + size > i->second.length()) { std::cerr << "Error while preparing to rename folder '" << i->second << \ "': folder name shorther than expected" << std::endl; @@ -603,16 +599,14 @@ int merge_all(std::string work_dir, uint16_t key, std::string prefix) */ int move_prefixed_dirs(std::string base_dir, std::string work_dir, std::string prefix, int key) { - DIR *dir; - struct dirent *subdir; - - dir = opendir(work_dir.c_str()); + DIR *dir = opendir(work_dir.c_str()); if (dir == NULL) { std::cerr << "Error while initializing directory '" << work_dir << "'" << std::endl; return NOT_OK; } /* Cycle through all files subfolders */ + struct dirent *subdir; while ((subdir = readdir(dir)) != NULL) { if (subdir->d_name[0] == '.') { continue; @@ -696,7 +690,6 @@ int main(int argc, char *argv[]) int moveOnly = 0; std::string base_dir; - std::stringstream ss; std::string prefix; while ((option = getopt_long(argc, argv, OPTSTRING, long_opts, NULL)) != -1) { @@ -717,16 +710,10 @@ int main(int argc, char *argv[]) break; case 'b': - ss << optarg; - base_dir = ss.str(); - ss.str(std::string()); - ss.clear(); + base_dir = optarg; break; case 'p': - ss << optarg; - prefix = ss.str(); - ss.str(std::string()); - ss.clear(); + prefix = optarg; break; case 's': separated = 1; diff --git a/tools/profilesdaemon/m4/lbr_set_distro.m4 b/tools/profilesdaemon/m4/lbr_set_distro.m4 index 07e9879f..a2f9878c 100644 --- a/tools/profilesdaemon/m4/lbr_set_distro.m4 +++ b/tools/profilesdaemon/m4/lbr_set_distro.m4 @@ -10,20 +10,23 @@ # The user option always superseeds other settings. # # Currently the macro recognizes following distributions: +# # redhat -# debian -# mandrake # suse +# mandrake +# debian +# arch # # Author: Petr Velan -# Modified: 2012-05-03 +# Modified: 2015-06-12 # AC_DEFUN([LBR_SET_DISTRO], [m4_ifval([$1],[DISTRO=$1],[DISTRO="redhat"]) + # Autodetect current distribution if test -f /etc/redhat-release; then DISTRO=redhat -elif test -f /etc/SuSE-release; then +elif test -f /etc/SuSE-release -o -f /etc/SUSE-brand; then DISTRO=suse elif test -f /etc/mandrake-release; then DISTRO='mandrake' @@ -32,6 +35,7 @@ elif test -f /etc/debian_version; then elif test -f /etc/arch-release; then DISTRO=arch fi + # Check if distribution was specified manually AC_ARG_WITH([distro], AC_HELP_STRING([--with-distro=DISTRO],[Compile for specific Linux distribution]),
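The new `sip_msg_type.c` plugin above follows the usual fbitdump translation-plugin shape: one static code/name table, `format()` mapping a stored code to its name (falling back to the plain number), and `parse()` mapping a name back to a code (yielding an empty string for unknown input). The standalone sketch below reproduces that round trip outside the plugin API; the table is abbreviated and the function names are illustrative, while the real plugin works through `plugin_header.h` exactly as shown in the diff.

```cpp
#include <cstdio>
#include <strings.h>  // strcasecmp()

struct MsgType { int code; const char *name; };

// Abbreviated version of the plugin's msg_types[] table.
static const MsgType kTypes[] = {
    {1, "Invite"}, {4, "Bye"}, {180, "Ringing"}, {200, "OK"},
};
static const size_t kTypeCnt = sizeof(kTypes) / sizeof(kTypes[0]);

// code -> name; unknown codes fall back to the raw number, like format().
static void format_type(int code, char *out, size_t len)
{
    for (size_t i = 0; i < kTypeCnt; ++i) {
        if (kTypes[i].code == code) {
            std::snprintf(out, len, "%s", kTypes[i].name);
            return;
        }
    }
    std::snprintf(out, len, "%d", code);
}

// name -> code; unknown names yield an empty string, like parse().
static void parse_type(const char *name, char *out, size_t len)
{
    for (size_t i = 0; i < kTypeCnt; ++i) {
        if (strcasecmp(name, kTypes[i].name) == 0) {
            std::snprintf(out, len, "%d", kTypes[i].code);
            return;
        }
    }
    out[0] = '\0';
}

int main()
{
    char buf[32];
    format_type(180, buf, sizeof(buf));
    std::printf("format(180)    -> %s\n", buf);  // Ringing
    parse_type("ringing", buf, sizeof(buf));
    std::printf("parse(ringing) -> %s\n", buf);  // 180 (case-insensitive)
    format_type(302, buf, sizeof(buf));
    std::printf("format(302)    -> %s\n", buf);  // 302 (numeric fallback)
}
```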