
Some Design Templates

All-In-One Test Environment


The diagram above shows an overview of the autobuild output. Running the autobuild script creates multiple containers:

  • a dnsmonster container per selected interface from the host to look at the raw traffic. The host’s interface list will be prompted when running, allowing you to select one or more interfaces.
  • a clickhouse container to collect dnsmonster’s outputs and save all the logs and data to their respective directories inside the host. Both paths will be prompted when running. The default tables and their TTLs will be implemented automatically.
  • a grafana container connecting back to clickhouse. It automatically sets up the connection to ClickHouse and sets up the builtin dashboards based on the default ClickHouse tables. Note that the Grafana container needs an internet connection to successfully set up the plugins. If you don’t have an internet connection, the dnsmonster and clickhouse containers will start working without any issues, and the error produced by Grafana can be ignored. A compose-style sketch of this stack is shown after this list.
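To make the moving parts concrete, below is a minimal docker-compose-style sketch of the same three-container stack. The image references, paths and ports here are illustrative assumptions; the actual autobuild script generates its own configuration and prompts for the real values.

version: "3"
services:
  dnsmonster:
    image: ghcr.io/mosajjal/dnsmonster:latest   # assumed image reference
    network_mode: host                          # sniffing a host interface requires host networking
    cap_add:
      - NET_ADMIN
    command: ["--devName=eth0", "--clickhouseAddress=127.0.0.1:9000", "--clickhouseOutputType=1"]
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    ports:
      - "9000:9000"
    volumes:
      - ./ch-data:/var/lib/clickhouse           # data path (prompted by the script)
      - ./ch-logs:/var/log/clickhouse-server    # log path (prompted by the script)
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"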

All-in-one Demo

[AIO demo recording]

1 - ClickHouse Cloud

Use dnsmonster with ClickHouse Cloud

ClickHouse Cloud is a Serverless ClickHouse offering by the ClickHouse team. In this small tutorial I’ll go through the steps of building your DNS monitoring with it. At the time of writing this post, ClickHouse Cloud is in preview and some of the features might change over time.

Create a ClickHouse Cluster

First, let’s create a ClickHouse instance by signing up and logging into the ClickHouse Cloud portal and clicking on “New Service” in the top right corner. You will be asked to provide a name and a region for your database. For the purpose of this tutorial, I will name the database dnsmonster and put it in the us-east-2 region. There’s a good chance that other parameters, such as size and number of servers, will be present when you define your cluster, but overall everything should look pretty much the same.

After clicking on create, you’ll see the connection settings for your instance. The default username to log in is default and the password is generated randomly. Save that password for later use, since the portal won’t show it again!

And that’s it! You have a fully managed ClickHouse cluster running in AWS. Now let’s create our tables and views using the credentials we just got.

Create and configure Tables

When you check out the dnsmonster repository from GitHub, there is a replicated table file with the table definitions suited for ClickHouse Cloud. Note that the “traditional” table design won’t work on ClickHouse Cloud, since the managed cluster won’t allow non-replicated tables. This policy has been put in place to ensure the high availability and integrity of the tables’ data. Download the .sql file and save it anywhere on your disk, for example /tmp/tables_replicated.sql. Now let’s use the clickhouse-client tool to create the tables.
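To make the replicated requirement concrete, here is an illustrative (and deliberately simplified, made-up) pair of definitions; the real schema in tables_replicated.sql swaps the MergeTree-family engines for their Replicated counterparts:

-- a traditional engine: rejected by ClickHouse Cloud
CREATE TABLE example_log (timestamp DateTime, Query String)
ENGINE = MergeTree ORDER BY timestamp;

-- the replicated equivalent: accepted by ClickHouse Cloud
CREATE TABLE example_log (timestamp DateTime, Query String)
ENGINE = ReplicatedMergeTree ORDER BY timestamp;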

clickhouse-client --host YOUR_INSTANCE_HOSTNAME --secure --port 9440 --password RANDOM_PASSWORD --multiquery < /tmp/tables_replicated.sql

Replace the all-caps variables with your server instance values, and this should create your primary tables. Everything should be in place for us to use dnsmonster. Now we can point the dnsmonster service to the ClickHouse instance and it should work without any issues.
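Before that, a quick sanity check (using the same placeholders) confirms the tables were created:

clickhouse-client --host YOUR_INSTANCE_HOSTNAME --secure --port 9440 --password RANDOM_PASSWORD --query "SHOW TABLES"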

dnsmonster --devName lo \
          --packetHandlerCount 8 \
--clickhouseAddress YOUR_INSTANCE_HOSTNAME:9440 \
          --clickhouseOutputType 1 \
          --clickhouseBatchSize 7000 \
          --clickhouseWorkers 16 \
          --clickhouseSecure \
          --clickhouseUsername default \
          --clickhousePassword "RANDOM_PASSWORD" \
          --clickhouseCompress \
          --serverName my_dnsmonster \
          --maskSize4 16 \
          --maskSize6 64

Compressing the ClickHouse INSERT connection (--clickhouseCompress) will make it efficient and fast; I’ve gotten better results by turning it on. Keep in mind that tweaking packetHandlerCount, as well as the number of ClickHouse workers, the batch size, etc., will have a major impact on the overall performance. In my tests, I’ve been able to exceed ~250,000 packets per second easily on my fibre connection. Keep in mind that you can substitute command line arguments with environment variables or a config file; refer to the Configuration section of the documents for more info.
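As a sketch of the environment-variable route, the names below are assumed to follow a DNSMONSTER_<UPPERCASED FLAG> convention; the Configuration section has the authoritative list:

# same invocation expressed as environment variables (names assumed; verify in the Configuration section)
export DNSMONSTER_DEVNAME=lo
export DNSMONSTER_CLICKHOUSEADDRESS=YOUR_INSTANCE_HOSTNAME:9440
export DNSMONSTER_CLICKHOUSEOUTPUTTYPE=1
export DNSMONSTER_CLICKHOUSESECURE=true
export DNSMONSTER_CLICKHOUSEUSERNAME=default
export DNSMONSTER_CLICKHOUSEPASSWORD="RANDOM_PASSWORD"
dnsmonster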

Configuring Grafana and dashboards

Now that the data is being pushed into ClickHouse, you can leverage Grafana with the pre-built dashboard to help you gain visibility over your data. Let’s start with running an instance of Grafana in a docker container.

docker run --name dnsmonster_grafana -p 3000:3000 grafana/grafana:8.4.3

Then browse to localhost:3000 with admin as both username and password, and install the ClickHouse plugin for Grafana. There are two choices in the Grafana store, and both of them should work fine out of the box; I’ve tested the Altinity plugin for ClickHouse, but there’s also an official ClickHouse Grafana plugin to choose from.
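If you’d rather skip the manual plugin install, the Grafana Docker image can install plugins at startup through its GF_INSTALL_PLUGINS environment variable; the ID below is the Altinity plugin’s:

docker run --name dnsmonster_grafana -p 3000:3000 \
  -e GF_INSTALL_PLUGINS=vertamedia-clickhouse-datasource \
  grafana/grafana:8.4.3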

After installing the plugin, you can add your ClickHouse server as a datasource using the same address, port and password you used to run dnsmonster. After connecting Grafana to ClickHouse, you can import the pre-built dashboard from here, either via the GUI or the CLI. Once your dashboard is imported, you can point it to your datasource and most panels should start showing data. Most, but not all.
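For the CLI route, here is a minimal sketch using Grafana’s HTTP API, assuming the dashboard JSON was saved as /tmp/dashboard.json and the default admin credentials are still in place:

# POST the dashboard JSON to Grafana (payload wrapping assumed; adjust to your export format)
curl -s -X POST http://admin:admin@localhost:3000/api/dashboards/db \
  -H "Content-Type: application/json" \
  -d "{\"dashboard\": $(cat /tmp/dashboard.json), \"overwrite\": true}"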

One final step to make sure everything is running smoothly is to create the dictionaries. Download the 4 dictionary files located here, either manually or by cloning the git repo. I’ll assume that they’re in your /tmp/ directory. Now let’s go back to clickhouse-client and quickly make that happen.

clickhouse-client --host YOUR_INSTANCE_HOSTNAME --secure --port 9440 --password RANDOM_PASSWORD
CREATE DICTIONARY dns_class (Id UInt64, Name String) PRIMARY KEY Id LAYOUT(FLAT()) SOURCE(HTTP(url "" format TSV)) LIFETIME(MIN 0 MAX 0);
CREATE DICTIONARY dns_opcode (Id UInt64, Name String) PRIMARY KEY Id LAYOUT(FLAT()) SOURCE(HTTP(url "" format TSV)) LIFETIME(MIN 0 MAX 0);
CREATE DICTIONARY dns_response (Id UInt64, Name String) PRIMARY KEY Id LAYOUT(FLAT()) SOURCE(HTTP(url "" format TSV)) LIFETIME(MIN 0 MAX 0);
CREATE DICTIONARY dns_type (Id UInt64, Name String) PRIMARY KEY Id LAYOUT(FLAT()) SOURCE(HTTP(url "" format TSV)) LIFETIME(MIN 0 MAX 0);
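To confirm the dictionaries resolve, a quick lookup should map a well-known ID to its name; for example, DNS query type 1 is A:

SELECT dictGetString('dns_type', 'Name', toUInt64(1)) AS type_name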

And that’s about it. With the above commands, the full stack of Grafana, ClickHouse and dnsmonster should work perfectly. No more managing ClickHouse clusters manually! You can also combine this with the Kubernetes tutorial and provide a cloud-native, serverless DNS monitoring platform at scale.

2 - Kubernetes

Use dnsmonster to monitor Kubernetes DNS traffic

In this guide, I’ll go through the steps to inject a custom configuration into Kubernetes’ coredns DNS server to provide a dnstap logger, and to set up a dnsmonster pod to receive the logs, process them, and send them to the intended outputs.

dnsmonster deployment

In order to make dnsmonster see the dnstap connection coming from the coredns pod, we need to create the dnsmonster Service inside the same namespace (kube-system or equivalent):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: dnsmonster-dnstap
  name: dnsmonster-dnstap
  namespace: kube-system
spec:
  # change the replica count to how many you might need to comfortably ingest the data
  replicas: 1
  selector:
    matchLabels:
      k8s-app: dnsmonster-dnstap
  template:
    metadata:
      labels:
        k8s-app: dnsmonster-dnstap
    spec:
      containers:
      - name: dnsm-dnstap
        # image reference assumed; replace with the dnsmonster image/tag you use
        image: ghcr.io/mosajjal/dnsmonster:latest
        args:
          # listen on all interfaces so coredns can reach us through the Service
          - "--dnstapSocket=tcp://0.0.0.0:7878"
          - "--stdoutOutputType=1"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 7878
---
apiVersion: v1
# as per above documentation, each service will have a unique IP address that won't change for the lifespan of the service
kind: Service
metadata:
  name: dnsmonster-dnstap
  namespace: kube-system
spec:
  selector:
    k8s-app: dnsmonster-dnstap
  ports:
  - name: dnsmonster-dnstap
    protocol: TCP
    port: 7878
    targetPort: 7878
EOF
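Before touching coredns, it’s worth checking that the pod is up and the Service got its cluster IP:

kubectl -n kube-system get pods -l k8s-app=dnsmonster-dnstap
kubectl -n kube-system get service dnsmonster-dnstap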

Now we can get the static IP assigned to the Service to use in the coredns custom ConfigMap. Note that since CoreDNS itself is providing DNS, it does not support an FQDN as a dnstap endpoint.

SVCIP=$(kubectl get service dnsmonster-dnstap --output go-template --template='{{.spec.clusterIP}}')

Locate and edit the coredns config

Let’s see whether we can view and manipulate the configuration inside the coredns pods. Using the command below, we can get a list of running coredns containers:

kubectl get pod --output yaml --all-namespaces | grep coredns

In the output of the above command, you should be able to see many objects associated with coredns, most notably coredns-custom. The coredns-custom ConfigMap allows us to customize the coredns configuration file and enable builtin plugins for it. Many cloud providers have built the coredns-custom ConfigMap into their offering. Take a look at the AKS, Oracle Cloud and DigitalOcean docs for more details.

In Amazon’s EKS, there’s no coredns-custom, so the configuration needs to be edited in the main configuration file instead. On top of that, EKS will keep overriding your configuration with the “default” value through a DNS add-on; that add-on needs to be disabled for any customization of the coredns configuration as well. Take a look at this issue for more information.
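On EKS, once the add-on is out of the way, the edit happens directly on the main ConfigMap; the dnstap directive goes inside the existing server block:

kubectl -n kube-system edit configmap coredns
# then add, inside the existing ".:53 { ... }" block:
#   dnstap tcp://$SVCIP:7878 full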

The below command has been tested on DigitalOcean managed Kubernetes:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.override: |
    dnstap tcp://$SVCIP:7878 full
EOF

After running the above command, you should see the logs inside your dnsmonster pod. As commented in the YAML definitions, customizing the configuration parameters should be fairly straightforward.
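To verify end to end, tail the dnsmonster pod and watch for parsed DNS records arriving over dnstap:

kubectl -n kube-system logs -l k8s-app=dnsmonster-dnstap -f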