Outputs

Set up output(s) and gather metrics

dnsmonster follows a pipeline architecture for each individual packet. After the capture and filter stages, each processed packet arrives at the output dispatcher. The dispatcher sends a copy of the packet to every output module that has been configured to produce output. For instance, if you specify --stdoutOutputType=1 and --fileOutputType=1 --fileOutputPath=/dev/stdout, you’ll see each processed packet twice in your stdout: once from the stdout output type, and once from the file output type, which happens to write to the same destination (/dev/stdout).

In general, each output has its own configuration section. You can see the sections with the “_output” suffix when running dnsmonster --help from the command line. The most important parameter for each output is its “Type”. Each output’s Type takes one of 5 values:

  • Type 0: Output is disabled. No output is generated for this module (this is the default).
  • Type 1: An output module configured as Type 1 will ignore “SkipDomains” and “AllowDomains” and will generate output for all the incoming processed packets. Note that the output type does not nullify input filters, since it is applied after capture and the early packet filters. Take a look at Filters and Masks to see the order in which filters are applied.
  • Type 2: An output module configured as Type 2 will ignore “AllowDomains” and only apply the “SkipDomains” logic to the incoming processed packets.
  • Type 3: An output module configured as Type 3 will ignore “SkipDomains” and only apply the “AllowDomains” logic to the incoming processed packets.
  • Type 4: An output module configured as Type 4 will apply both the “SkipDomains” and “AllowDomains” logic to the incoming processed packets.

Other than Type, each output module may require additional configuration parameters. For more information, refer to each module’s documentation.
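As a quick illustration of how the Type value interacts with the domain filters, here is a minimal hedged sketch that enables stdout output as Type 4, so both filter lists apply (the interface name and the --skipDomainsFile/--allowDomainsFile paths are illustrative placeholders):

$ dnsmonster --devName=lo --stdoutOutputType=4 --skipDomainsFile=/etc/dnsmonster/skipdomains.txt --allowDomainsFile=/etc/dnsmonster/allowdomains.txt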

Output Formats

dnsmonster supports multiple output formats:

  • json: the standard JSON output. The output looks like the sample below:
{"Timestamp":"2020-08-08T00:19:42.567768Z","DNS":{"Id":54443,"Response":true,"Opcode":0,"Authoritative":false,"Truncated":false,"RecursionDesired":true,"RecursionAvailable":true,"Zero":false,"AuthenticatedData":false,"CheckingDisabled":false,"Rcode":0,"Question":[{"Name":"imap.gmail.com.","Qtype":1,"Qclass":1}],"Answer":[{"Hdr":{"Name":"imap.gmail.com.","Rrtype":1,"Class":1,"Ttl":242,"Rdlength":4},"A":"172.217.194.108"},{"Hdr":{"Name":"imap.gmail.com.","Rrtype":1,"Class":1,"Ttl":242,"Rdlength":4},"A":"172.217.194.109"}],"Ns":null,"Extra":null},"IPVersion":4,"SrcIP":"1.1.1.1","DstIP":"2.2.2.2","Protocol":"udp","PacketLength":64}
  • csv: the CSV output. The fields and headers are not customizable at the moment. To get a custom output, please look at gotemplate.
Year,Month,Day,Hour,Minute,Second,Ns,Server,IpVersion,SrcIP,DstIP,Protocol,Qr,OpCode,Class,Type,ResponseCode,Question,Size,Edns0Present,DoBit,Id
2020,8,8,0,19,42,567768000,default,4,2050551041,2050598324,17,1,0,1,1,0,imap.gmail.com.,64,0,0,54443
  • csv_no_header: Looks exactly like the CSV output but without the header line at the beginning.
  • gotemplate: Customizable template to define your own formatting. Let’s look at an example with the same packet we’ve looked at using JSON and CSV:
$ dnsmonster --pcapFile input.pcap --stdoutOutputType=1 --stdoutOutputFormat=gotemplate --stdoutOutputGoTemplate="timestamp=\"{{.Timestamp}}\" id={{.DNS.Id}} question={{(index .DNS.Question 0).Name}}"
timestamp="2020-08-08 00:19:42.567735 +0000 UTC" id=54443 question=imap.gmail.com.

Take a look at the official docs for more info regarding text/template and your various options.

1 - Apache Kafka

Possibly the most versatile output supported by dnsmonster. The Kafka output allows you to connect to an endless list of supported sinks. It is the recommended output module for enterprise designs since it offers fault tolerance and can sustain outages of the sink. dnsmonster’s Kafka output supports compression, TLS, and multiple brokers. To provide multiple brokers, specify the broker option multiple times.
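For example, a hedged sketch pointing the Kafka output at two brokers (the broker addresses are placeholders; the flags mirror the [kafka_output] parameters below):

$ dnsmonster --devName=eth0 --kafkaOutputType=1 --kafkaOutputBroker=10.0.0.1:9092 --kafkaOutputBroker=10.0.0.2:9092 --kafkaOutputTopic=dnsmonster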

Configuration Parameters

[kafka_output]
; What should be written to kafka. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
KafkaOutputType = 0

; kafka broker address(es), example: 127.0.0.1:9092. Used if kafkaOutputType is not none
KafkaOutputBroker =

; Kafka topic for logging
KafkaOutputTopic = dnsmonster

; Minimum capacity of the cache array used to send data to Kafka
KafkaBatchSize = 1000

; Kafka connection timeout in seconds
KafkaTimeout = 3

; Interval between sending results to Kafka if Batch size is not filled
KafkaBatchDelay = 1s

; Compress Kafka connection
KafkaCompress = false

; Compression Type [gzip, snappy, lz4, zstd]. Default is snappy
KafkaCompressiontype = snappy

; Use TLS for kafka connection
KafkaSecure = false

; Path of CA certificate that signs Kafka broker certificate
KafkaCACertificatePath =

; Path of TLS certificate to present to broker
KafkaTLSCertificatePath =

; Path of TLS certificate key
KafkaTLSKeyPath =

2 - Parquet

The Parquet output module is designed to write dnsmonster logs to Parquet files.
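A minimal hedged sketch (the output directory is a placeholder; the flags mirror the [parquet_output] parameters below):

$ dnsmonster --pcapFile input.pcap --parquetOutputType=1 --parquetOutputPath=/var/lib/dnsmonster/parquet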

Configuration Parameters

[parquet_output]
; What should be written to parquet file. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
parquetoutputtype = 0

; Path to output folder. Used if parquetoutputtype is not none
parquetoutputpath =

; Number of records to write to parquet file before flushing
parquetflushbatchsize = 10000

; Number of workers to write to parquet file
parquetworkercount = 4

; Size of the write buffer in bytes
parquetwritebuffersize = 256000

3 - ClickHouse

ClickHouse is an analytical database engine developed by Yandex. It uses a column-oriented design, which makes it a good candidate to store hundreds of thousands of DNS queries per second with an extremely good compression ratio as well as fast retrieval of data.

Currently, dnsmonster’s implementation requires the table name to be set to DNS_LOG. An SQL schema file is provided by the repository under the clickhouse directory. The Grafana dashboard and configuration set provided by dnsmonster also corresponds with the ClickHouse schema and can be used to visualize the data.

Configuration parameters

  • --clickhouseAddress: Address of the ClickHouse database to save the results (default: localhost:9000)
  • --clickhouseUsername: Username to connect to the ClickHouse database (default: empty)
  • --clickhousePassword: Password to connect to the ClickHouse database (default: empty)
  • --clickhouseDatabase: Database to connect to the ClickHouse database (default: default)
  • --clickhouseDelay: Interval between sending results to ClickHouse (default: 1s)
  • --clickhouseDebug: Debug ClickHouse connection (default: false)
  • --clickhouseCompress: Compress ClickHouse connection (default: false)
  • --clickhouseSecure: Use TLS for ClickHouse connection (default: false)
  • --clickhouseSaveFullQuery: Save full packet query and response in JSON format. (default: false)
  • --clickhouseOutputType: ClickHouse output type. Options: (default: 0)
    • 0: Disable Output
    • 1: Enable Output without any filters
    • 2: Enable Output and apply skipdomains logic
    • 3: Enable Output and apply allowdomains logic
    • 4: Enable Output and apply both skip and allow domains logic
  • --clickhouseBatchSize: Minimum capacity of the cache array used to send data to clickhouse. Set close to the queries per second received to prevent allocations (default: 100000)
  • --clickhouseWorkers: Number of ClickHouse output Workers (default: 1)
  • --clickhouseWorkerChannelSize: Channel Size for each ClickHouse Worker (default: 100000)

Note: the general option --skipTLSVerification applies to this module as well.
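Putting the flags together, a minimal hedged sketch against a local ClickHouse server, assuming the provided DNS_LOG schema is already loaded:

$ dnsmonster --devName=eth0 --clickhouseOutputType=1 --clickhouseAddress=localhost:9000 --clickhouseBatchSize=100000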

Retention Policy

The default retention policy for the ClickHouse tables is set to 30 days. You can change this number by rebuilding the containers using ./autobuild.sh. Note that the TTL is evaluated against the packet’s capture date from the pcap file, not the insertion time. So while importing old pcap files, ClickHouse may automatically start removing the data as it is being written, and you won’t see any actual data in your Grafana. To fix that, change the TTL to a day older than the earliest packet inside the pcap file.

NOTE: to manually change the TTL, you need to directly connect to the ClickHouse server using the clickhouse-client binary and run the following SQL statements (this example changes it from 30 to 90 days):

ALTER TABLE DNS_LOG MODIFY TTL DnsDate + INTERVAL 90 DAY;

NOTE: The above command only changes TTL for the raw DNS log data, which is the majority of your capacity consumption. To make sure that you adjust the TTL for every single aggregation table, you can run the following:

ALTER TABLE DNS_LOG MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_DOMAIN_COUNT` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_DOMAIN_UNIQUE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_PROTOCOL` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_GENERAL_AGGREGATIONS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_EDNS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_OPCODE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_TYPE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_CLASS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_RESPONSECODE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_SRCIP_MASK` MODIFY TTL DnsDate + INTERVAL 90 DAY;

UPDATE: in the latest versions of ClickHouse, the .inner tables don’t have the same name as the corresponding aggregation views. To modify the TTL, you have to find the table names in UUID format using SHOW TABLES and repeat the ALTER command with those UUIDs.
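A hedged sketch of that process (the UUID-style name is a placeholder; take the real names from your own SHOW TABLES output):

SHOW TABLES FROM default;
ALTER TABLE `.inner_id.<uuid>` MODIFY TTL DnsDate + INTERVAL 90 DAY;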

SAMPLE in ClickHouse SELECT queries

By default, the main table created by the tables.sql file (DNS_LOG) has the ability to sample down a result as needed, since each DNS question has a semi-unique UUID associated with it. For more information about SAMPLE queries in ClickHouse, please check out the ClickHouse documentation.
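As a hedged example against the stock DNS_LOG table, the following reads roughly 10% of the rows to approximate the busiest domains:

SELECT Question, count(*) AS c FROM DNS_LOG SAMPLE 0.1 GROUP BY Question ORDER BY c DESC LIMIT 10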

Useful queries

  • List of unique domains visited over the past 24 hours
-- using domain_count table
SELECT DISTINCT Question FROM DNS_DOMAIN_COUNT WHERE t > Now() - toIntervalHour(24)

-- only the number
SELECT count(DISTINCT Question) FROM DNS_DOMAIN_COUNT WHERE t > Now() - toIntervalHour(24)

-- see memory usage of the above query in bytes
SELECT memory_usage FROM system.query_log WHERE query_kind='Select' AND arrayExists(x-> x='default.DNS_DOMAIN_COUNT', tables) ORDER BY event_time DESC LIMIT 1 format Vertical

-- you can also get the memory usage of a query by its query ID
SELECT sum(memory_usage) FROM system.query_log WHERE initial_query_id = '8de8fe3c-d46a-4a32-83da-4f4ba4dc49e5' format Vertical

4 - Elasticsearch/OpenSearch

Elasticsearch is a full-text search engine used widely across security tools. dnsmonster supports Elasticsearch 7.x out of the box. Support for 6.x and 8.x has not been tested.

There is also a fork of Elasticsearch called Opendistro, later renamed to OpenSearch. Both are compatible with Elasticsearch 7.10.x, so they should be supported as well.
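A minimal hedged sketch, assuming a local single-node instance (the flags mirror the [elastic_output] parameters below):

$ dnsmonster --devName=eth0 --elasticOutputType=1 --elasticOutputEndpoint=http://127.0.0.1:9200 --elasticOutputIndex=dnsmonster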

Configuration parameters

[elastic_output]
; What should be written to elastic. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
ElasticOutputType = 0

; elastic endpoint address, example: http://127.0.0.1:9200. Used if elasticOutputType is not none
ElasticOutputEndpoint =

; elastic index
ElasticOutputIndex = default

; Send data to Elastic in batch sizes
ElasticBatchSize = 1000

; Interval between sending results to Elastic if Batch size is not filled
ElasticBatchDelay = 1s

5 - InfluxDB

InfluxDB is a time series database used to store logs and metrics with a high ingestion rate.
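A minimal hedged sketch, assuming an InfluxDB 2.x instance (the token, bucket, and org values are placeholders; the flags mirror the [influx_output] parameters below):

$ dnsmonster --devName=eth0 --influxOutputType=1 --influxOutputServer=http://localhost:8086 --influxOutputToken=my-token --influxOutputBucket=dnsmonster --influxOutputOrg=dnsmonster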

Configuration options

[influx_output]
; What should be written to influx. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
InfluxOutputType = 0

; influx Server address, example: http://localhost:8086. Used if influxOutputType is not none
InfluxOutputServer =

; Influx Server Auth Token
InfluxOutputToken = dnsmonster

; Influx Server Bucket
InfluxOutputBucket = dnsmonster

; Influx Server Org
InfluxOutputOrg = dnsmonster

; Number of workers used to send data to Influx
InfluxOutputWorkers = 8

; Minimum capacity of the cache array used to send data to Influx
InfluxBatchSize = 1000

6 - Microsoft Sentinel

The Microsoft Sentinel output module is designed to send dnsmonster logs to Sentinel. In addition, this module supports sending the logs to any Log Analytics workspace, whether or not it is connected to Sentinel.

Please take a look at Microsoft’s official documentation to see how Customer ID and Shared key are obtained.
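A minimal hedged sketch (the customer ID and shared key are placeholders taken from your Log Analytics workspace; the flags mirror the [sentinel_output] parameters below):

$ dnsmonster --devName=eth0 --sentinelOutputType=1 --sentinelOutputCustomerId=<customer-id> --sentinelOutputSharedKey=<shared-key>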

Configuration Parameters

[sentinel_output]
; What should be written to Microsoft Sentinel. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
SentinelOutputType = 0

; Sentinel Shared Key, either the primary or secondary, can be found in Agents Management page under Log Analytics workspace
SentinelOutputSharedKey =

; Sentinel Customer Id. can be found in Agents Management page under Log Analytics workspace
SentinelOutputCustomerId =

; Sentinel Output LogType
SentinelOutputLogType = dnsmonster

; Sentinel Output Proxy in URI format
SentinelOutputProxy =

; Sentinel Batch Size
SentinelBatchSize = 100

; Interval between sending results to Sentinel if Batch size is not filled
SentinelBatchDelay = 1s

7 - Splunk HEC

Splunk HTTP Event Collector (HEC) is a widely used component of Splunk to ingest raw and JSON data. dnsmonster uses the JSON output to push the logs into a Splunk index. Various configurations are also supported. You can also use multiple HEC endpoints for load balancing and fault tolerance across multiple indexers. Note that the token and other settings are shared between the endpoints.
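For example, a hedged sketch spreading the logs across two HEC endpoints (the addresses and token are placeholders; the flags mirror the [splunk_output] parameters below):

$ dnsmonster --devName=eth0 --splunkOutputType=1 --splunkOutputEndpoint=http://10.0.0.1:8088 --splunkOutputEndpoint=http://10.0.0.2:8088 --splunkOutputToken=00000000-0000-0000-0000-000000000000 --splunkOutputIndex=dnsmonster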

Configuration Parameters

[splunk_output]
; What should be written to HEC. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
SplunkOutputType = 0

; splunk endpoint address, example: http://127.0.0.1:8088. Used if splunkOutputType is not none, can be specified multiple times for load balance and HA
SplunkOutputEndpoint =

; Splunk HEC Token
SplunkOutputToken = 00000000-0000-0000-0000-000000000000

; Splunk Output Index
SplunkOutputIndex = temp

; Splunk Output Proxy in URI format
SplunkOutputProxy =

; Splunk Output Source
SplunkOutputSource = dnsmonster

; Splunk Output Sourcetype
SplunkOutputSourceType = json

; Send data to HEC in batch sizes
SplunkBatchSize = 1000

; Interval between sending results to HEC if Batch size is not filled
SplunkBatchDelay = 1s

8 - Stdout, syslog or Log File

Stdout, syslog, and file are supported outputs for dnsmonster out of the box. They are especially useful if you have a SIEM agent reading the files as they come in. Note that dnsmonster does not handle log rotation or monitor disk capacity while writing to a file. You can use a tool like logrotate to perform cleanups on the log files. The signalling on log rotation (SIGHUP) has not been tested with dnsmonster.

The JSON schema used to send the logs can be configured to be compatible with Open Cybersecurity Schema Framework (OCSF) as well.

Currently, Syslog output is only supported on Linux.
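A minimal hedged sketch writing OCSF-compatible JSON to a file (the path is a placeholder; the flags mirror the [file_output] parameters below):

$ dnsmonster --devName=eth0 --fileOutputType=1 --fileOutputPath=/var/log/dnsmonster.json --fileOutputFormat=json-ocsf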

Configuration parameters

[file_output]
; What should be written to file. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
FileOutputType = 0

; Path to output file. Used if fileOutputType is not none
FileOutputPath =

; Output format for file. options: json, json-ocsf, csv, csv_no_header, gotemplate. note that the csv splits the datetime format into multiple fields
FileOutputFormat = json

; Go Template to format the output as needed
FileOutputGoTemplate = {{.}}

[stdout_output]
; What should be written to stdout. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
StdoutOutputType = 0

; Output format for stdout. options: json, csv, csv_no_header, gotemplate. note that the csv splits the datetime format into multiple fields
StdoutOutputFormat = json

; Go Template to format the output as needed
StdoutOutputGoTemplate = {{.}}

; Number of workers
StdoutOutputWorkerCount = 8

[syslog_output]
; What should be written to Syslog server. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
SyslogOutputType = 0

; Syslog endpoint address, example: udp://127.0.0.1:514, tcp://127.0.0.1:514. Used if syslogOutputType is not none
SyslogOutputEndpoint = udp://127.0.0.1:514

9 - VictoriaLogs

The VictoriaLogs output module is designed to send dnsmonster logs to VictoriaLogs.
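A minimal hedged sketch, reusing the endpoint example from the [victoria_output] section below (quoted because of the & characters in the URI):

$ dnsmonster --devName=eth0 --victoriaOutputType=1 --victoriaOutputEndpoint="http://localhost:9428/insert/jsonline?_msg_field=rcode_id&_time_field=time"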

Configuration Parameters

[victoria_output]
; Victoria Output Endpoint. example: http://localhost:9428/insert/jsonline?_msg_field=rcode_id&_time_field=time
victoriaoutputendpoint =

; What should be written to Victoria. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
victoriaoutputtype = 0

; Victoria Output Proxy in URI format
victoriaoutputproxy =

; Number of workers
victoriaoutputworkers = 8

; Victoria Batch Size
victoriabatchsize = 100

; Interval between sending results to Victoria if Batch size is not filled. Any value larger than zero takes precedence over Batch Size
victoriabatchdelay = 0s

10 - Zinc Search

The Zinc Search output module is designed to send dnsmonster logs to ZincSearch.
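A minimal hedged sketch (the endpoint and credentials are placeholders; the flags mirror the [zinc_output] parameters below):

$ dnsmonster --devName=eth0 --zincOutputType=1 --zincOutputEndpoint=http://127.0.0.1:9200/api/default/_bulk --zincOutputIndex=dnsmonster --zincOutputUsername=<username> --zincOutputPassword=<password>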

Configuration Parameters


[zinc_output]
; What should be written to zinc. options:
;	0: Disable Output
;	1: Enable Output without any filters
;	2: Enable Output and apply skipdomains logic
;	3: Enable Output and apply allowdomains logic
;	4: Enable Output and apply both skip and allow domains logic
zincoutputtype = 0

; index used to save data in Zinc
zincoutputindex = dnsmonster

; zinc endpoint address, example: http://127.0.0.1:9200/api/default/_bulk. Used if zincOutputType is not none
zincoutputendpoint =

; zinc username. Used if zincOutputType is not none
zincoutputusername =

; zinc password, example: password. Used if zincOutputType is not none
zincoutputpassword =

; Send data to Zinc in batch sizes
zincbatchsize = 1000

; Interval between sending results to Zinc if Batch size is not filled
zincbatchdelay = 1s

; Zinc request timeout
zinctimeout = 10s

11 - PostgreSQL

PostgreSQL is regarded as the world’s most advanced open source database. dnsmonster has experimental support for output to PostgreSQL and other compatible database engines (e.g., CockroachDB).
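A minimal hedged sketch using the URI format described below (credentials, host, and database name are placeholders):

$ dnsmonster --devName=eth0 --psqlOutputType=1 --psqlEndpoint="postgres://username:password@localhost:5432/dnsmonster?sslmode=disable"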

Configuration options


# [psql_output]
# What should be written to Psql. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--psqlOutputType=0

# Psql endpoint used. must be in uri format. example: postgres://username:password@hostname:port/database?sslmode=disable
--psqlEndpoint=

# Number of PSQL workers
--psqlWorkers=1

# Psql Batch Size
--psqlBatchSize=1

# Interval between sending results to Psql if Batch size is not filled. Any value larger than zero takes precedence over Batch Size
--psqlBatchDelay=0s

# Timeout for any INSERT operation before we consider them failed
--psqlBatchTimeout=5s

# Save full packet query and response in JSON format.
--psqlSaveFullQuery

12 - Metrics

Each enabled input and output comes with a set of metrics in order to monitor performance and troubleshoot your running instance. dnsmonster uses the go-metrics library which makes it easy to register metrics on the fly and in a modular way.

Currently, three metric outputs are supported:

  • stderr
  • statsd
  • prometheus
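As a hedged example, the sketch below exposes a Prometheus registry alongside stdout output (the flags mirror the [metric] parameters below):

$ dnsmonster --devName=eth0 --stdoutOutputType=1 --metricEndpointType=prometheus --metricPrometheusEndpoint=http://0.0.0.0:2112/metric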

Configuration parameters

[metric]
; Metric Endpoint Service. Choices: stderr, statsd, prometheus
MetricEndpointType = stderr

; Statsd endpoint. Example: 127.0.0.1:8125 
MetricStatsdAgent =

; Prometheus Registry endpoint. Example: http://0.0.0.0:2112/metric
MetricPrometheusEndpoint =

; Interval between sending results to Metric Endpoint
MetricFlushInterval = 10s