Use an external database

By default, Deep Security Smart Check configures a database pod in your Kubernetes cluster. This is convenient for demonstration purposes, but for production, you should use an external database.

It is important for the database to be geographically close to the Deep Security Smart Check cluster. Network delays between Deep Security Smart Check and your database can cause the system to behave poorly.

Supported databases

You can configure Deep Security Smart Check with an external database running PostgreSQL 9.6, 10, 11, 12, or 13.
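If you are not sure which version your server is running, you can check from any psql session before installing:

```sql
SHOW server_version;
```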

Install Deep Security Smart Check with an external database

You will need the user ID and password for the master user on your database server, as well as the host name and port.

Add the snippet below to your overrides.yaml file before installing Deep Security Smart Check:

db:
  user: postgres
  password: password
  host: database.example.com
  port: 5432

Configure TLS for your database connection

By default, Deep Security Smart Check uses a secure connection to your database server. You will likely need to configure trust for the certificate authority that created your database server's TLS certificate.

To do this, first create a ConfigMap with the certificate authority's certificate:

$ kubectl create \
  configmap \
  dssc-db-trust \
  --from-file=ca=ca.pem

Then update the db section of your overrides.yaml file, adding a tls section that tells Deep Security Smart Check where to find the certificate to trust:

db:
  tls:
    ca:
      valueFrom:
        configMapKeyRef:
          name: dssc-db-trust
          key: ca

You can then install Deep Security Smart Check with these overrides.
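Putting the two snippets together, the complete db section of overrides.yaml looks like this (with the same placeholder values as above):

```yaml
db:
  user: postgres
  password: password
  host: database.example.com
  port: 5432
  tls:
    ca:
      valueFrom:
        configMapKeyRef:
          name: dssc-db-trust
          key: ca
```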

At this time, Deep Security Smart Check only supports database certificates with RSA keys.

Migrating to an external database

Deep Security Smart Check does not support migrating from the built-in database to an external database. You must re-install.

Install Deep Security Smart Check with RDS for PostgreSQL in EKS

If you don't have an EKS cluster, eksctl is a convenient tool to provision one. We suggest that you deploy RDS for PostgreSQL in the same VPC as the cluster.

To allow communication between the cluster and RDS, you need to allow inbound traffic from your cluster's security group in the RDS security group. If the cluster has multiple security groups, you may need to do this for each of them. You should also allow inbound traffic from RDS to the cluster. The default port is 5432, but yours may be configured differently.

For security reasons, do not grant public access to the RDS instance.

By default, TLS is enabled in RDS. Obtain the root and intermediate certificates and use them to create the ConfigMap that Deep Security Smart Check uses to connect to RDS, as described in Configure TLS for your database connection.

Install Deep Security Smart Check with Azure Database for PostgreSQL in AKS

Follow the Azure documentation to provision an AKS cluster using the Azure portal or the Azure CLI. We recommend selecting a virtual machine type that supports accelerated networking for the cluster nodes, as accelerated networking greatly improves network performance. The D/DSv2 and F/Fs virtual machine series support accelerated networking.

Follow the Azure documentation to create a database using either the Azure portal or the Azure CLI. To allow a secure, direct connection between the cluster and the database, enable VNet service endpoints and VNet rules in Azure Database for PostgreSQL.

If TLS connectivity needs to be enforced, obtain the root certificate and use it to create the ConfigMap that Deep Security Smart Check uses to connect to Azure Database for PostgreSQL, as described in Configure TLS for your database connection.

Troubleshooting

If there is a problem with your setup, any pods that rely on the database connection will get stuck trying to initialize.

$ kubectl get pods --field-selector='status.phase!=Running'
NAME                                  READY   STATUS     RESTARTS   AGE
auth-7d78dccff7-nfh97                 0/1     Init:0/1   0          4m26s
openscap-scan-ddc7b9d-jrhc7           0/2     Init:0/1   0          4m26s
registryviews-5f46786b46-m6x84        0/1     Init:0/1   0          4m25s
scan-568ffb49d7-dp2tt                 0/1     Init:0/1   0          4m25s
vulnerability-scan-7b7c59d6f8-d5ql9   0/1     Init:0/2   0          4m25s

Pick one of these pods and run kubectl describe pod on it, then look at the Events section for more information.

FailedMount: MountVolume.SetUp failed for volume "database-ca": configmap not found

In this example, the error shows that the dssc-db-trust ConfigMap does not exist.

$ kubectl describe pod auth-7d78dccff7-nfh97
...
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    7m13s                 default-scheduler  Successfully assigned default/auth-7d78dccff7-nfh97 to minikube
  Warning  FailedMount  61s (x11 over 7m13s)  kubelet, minikube  MountVolume.SetUp failed for volume "database-ca" : configmap "dssc-db-trust" not found
  Warning  FailedMount  36s (x3 over 5m10s)   kubelet, minikube  Unable to mount volumes for pod "auth-7d78dccff7-nfh97_default(ff34aa94-94fc-11e9-90aa-080027ce2867)": timeout expired waiting for volumes to attach or mount for pod "default"/"auth-7d78dccff7-nfh97". list of unmounted volumes=[database-ca]. list of unattached volumes=[database-ca]

Create the ConfigMap as described in Configure TLS for your database connection and ensure the overrides.yaml is using the correct value for the ConfigMap name.

If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install to pick up the change.

If you did not modify the overrides.yaml file, you can simply delete the stuck pods. Kubernetes will re-create the pods.

FailedMount: MountVolume.SetUp failed for volume "database-ca": non-existent config key

In this example, the error shows that the ca key does not exist in the dssc-db-trust ConfigMap.

$ kubectl describe pod auth-7d78dccff7-p79j2
...
Events:
  Type     Reason       Age               From               Message
  ----     ------       ----              ----               -------
  Normal   Scheduled    13s               default-scheduler  Successfully assigned default/auth-7d78dccff7-p79j2 to minikube
  Warning  FailedMount  6s (x5 over 13s)  kubelet, minikube  MountVolume.SetUp failed for volume "database-ca" : configmap references non-existent config key: ca

Create the ConfigMap as described in Configure TLS for your database connection and ensure the overrides.yaml is using the correct value for the ConfigMap name and key.

If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install to pick up the change.

If you did not modify the overrides.yaml file, you can simply delete the stuck pods. Kubernetes will re-create the pods.

No errors in Events

If there are no errors in the Events section of the kubectl describe pod output, check the logs of the db-init container in the pod.
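The db-init container writes one JSON object per log line, which can be tedious to scan by eye. A small helper like the following (a convenience sketch, not part of the product) extracts only the entries that carry an error field:

```python
import json

def extract_errors(lines):
    """Return (timestamp, message, error) tuples for JSON log lines with an 'error' field."""
    errors = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip plain-text lines mixed into the output
        if "error" in entry:
            errors.append((entry.get("timestamp", ""),
                           entry.get("message", ""),
                           entry["error"]))
    return errors

# A log line copied from the i/o timeout example later in this section:
sample = '{"component":"db-init","error":"dial tcp 192.168.19.226:5432: i/o timeout","message":"could not get database connection","severity":"info","timestamp":"2019-06-17T15:48:20Z"}'
for ts, msg, err in extract_errors([sample]):
    print(f"{ts}  {msg}: {err}")
```

Feed it the output of kubectl logs for the db-init container to surface only the failing operations.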

I/O timeout in db-init logs

In this example, the i/o timeout error indicates that the container was unable to reach the database server.

$ kubectl logs auth-5447fbfb7-gvrbh -c db-init
{"commit":"79d968b712cfba4407e2cdc6f848034435c04859","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-17T15:48:15Z"}
{"component":"db-init","error":"dial tcp 192.168.19.226:5432: i/o timeout","message":"could not get database connection","severity":"info","timestamp":"2019-06-17T15:48:20Z"}

Check that your database server allows connections from all nodes in your cluster. This may involve firewall rules, security groups, or other security controls depending on your cluster infrastructure.
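One way to test this is a plain TCP probe run from a pod inside the cluster, so it takes the same network path as Deep Security Smart Check. This sketch uses only the Python standard library; the host and port below are the placeholder values from the configuration section:

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, connection refusals, and DNS failures
        return False

# Example (placeholder values):
# can_connect("database.example.com", 5432)
```

A timeout from this probe mirrors the i/o timeout that db-init reports and usually points at a firewall or security group, while an immediate refusal usually points at the database server itself.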

Once you have resolved the network connectivity issue, Deep Security Smart Check should automatically recover and complete the initialization process.

Password authentication failed

In this example, the error logs show that the database server has rejected the username/password that you provided.

$ kubectl logs -f auth-66b5d948c-xht4t -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:22:08Z"}
Error: could not set up database: database not available: could not get database connection: pq: password authentication failed for user "postgres"

Check the username and password and update the overrides.yaml file if required.

If you modify the overrides.yaml file, you will need to use helm upgrade or helm delete --purge and helm install to pick up the change.

If you did not modify the overrides.yaml file, you can simply delete the stuck pods. Kubernetes will re-create the pods.

Certificate signed by unknown authority

In this example, the error logs show that a secure connection to the database could not be established because the server certificate was issued by a certificate authority that Deep Security Smart Check does not recognize.

$ kubectl logs auth-66b5d948c-t5jj9 -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:33:25Z"}
{"component":"db-init","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"database.example.com\")","message":"could not get database connection","severity":"info","timestamp":"2019-06-22T15:33:25Z"}

Obtain the certificate (or certificate bundle) for your certificate authority, then create the ConfigMap as described in Configure TLS for your database connection.

If you are using PostgreSQL on Amazon RDS, see Using SSL with a PostgreSQL DB Instance for more information and to get the certificate bundle you will need.
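Before creating the ConfigMap, it can help to confirm that the file you downloaded actually contains PEM-encoded certificates. A quick sanity check (a sketch, not an official tool) simply counts the PEM markers; a bundle such as the RDS one should contain more than one certificate:

```python
def count_pem_certs(pem_text):
    """Count the certificates in a PEM file by counting BEGIN/END marker pairs."""
    begins = pem_text.count("-----BEGIN CERTIFICATE-----")
    ends = pem_text.count("-----END CERTIFICATE-----")
    if begins != ends:
        raise ValueError("malformed PEM: unbalanced BEGIN/END markers")
    return begins

# Example (assumes the bundle was saved as ca.pem):
# with open("ca.pem") as f:
#     print(count_pem_certs(f.read()))
```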

If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install to pick up the change.

If you did not modify the overrides.yaml file, you can simply delete the stuck pods. Kubernetes will re-create the pods.

IP address not present in the server certificate

In this example, the error logs show that a secure connection to the database could not be established because Deep Security Smart Check has been configured to connect to the database using an IP address, but the server certificate does not include an entry for that address.

$ kubectl logs auth-66b5d948c-t5jj9 -c db-init
{"component":"db-init","error":"x509: cannot validate certificate for 192.168.2.54 because it doesn't contain any IP SANs","message":"could not get database connection","severity":"info","timestamp":"2019-07-11T11:29:45Z"}

There are two paths to resolving this issue: either configure Deep Security Smart Check to use a host name for the database server (the certificate must be valid for that host name), or re-create the certificate and ensure that the server's IP address is present in the Subject Alternative Name list.
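One way to tell which situation applies is to check whether the configured db.host value is an IP literal (which requires an IP entry in the certificate's Subject Alternative Name list) or a host name (which requires a DNS entry). Python's standard library can make that distinction:

```python
import ipaddress

def is_ip_literal(host):
    """Return True if host is an IPv4 or IPv6 literal rather than a host name."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

# is_ip_literal("192.168.2.54")         -> True: the certificate needs an IP SAN
# is_ip_literal("database.example.com") -> False: the certificate needs a DNS SAN
```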

SSL is not enabled on the server

In this example, the error logs show that the database server is not running TLS, but Deep Security Smart Check has been configured to use TLS.

$ kubectl logs -f auth-66b5d948c-9cp78 -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:26:42Z"}
{"component":"db-init","error":"pq: SSL is not enabled on the server","message":"could not get database connection","severity":"info","timestamp":"2019-06-22T15:26:42Z"}

Update your server configuration to use TLS.

If you cannot update your server configuration to use TLS, you can disable TLS in your overrides.yaml file. This option is less secure and could make it easier for your system to be compromised.

See the documentation in the values.yaml for more details on options for configuring TLS for the database connection.

Pods are running but the dashboard is failing to load

Network delays between your external database and the Deep Security Smart Check cluster could cause Deep Security Smart Check to be less responsive or to fail.

Reducing the network latency between the database and the cluster should resolve these problems.