Secret Management

Document created by gjsissons on May 11, 2017. Last modified by gjsissons on Jun 11, 2017.


Application architectures are changing. Developers used to know where their applications and secure credentials were stored (a server in a datacenter, for example), but in the brave new world of clouds, micro-service architectures built from Docker containers are fast becoming the new norm. In these containerized application environments, keeping sensitive credentials secure is becoming a major issue.


A Primer on Docker Containers

To simplify application lifecycle management and reduce deployment costs, cloud providers and enterprises increasingly use containers.  For those not familiar with the concept, container technologies partition modern Linux (and now even Windows) environments into separate namespaces.  The concept is similar to virtualization, but instead of each virtual machine having its own instance of an operating system, containers share a Linux kernel while having their own view of things like the process space, file systems, memory and even network interfaces.


From a practical standpoint, logging into a container is like having your own physical or virtual machine, but in reality you are potentially sharing a kernel with many other tenants. The advantage of containers is that they can use hardware much more efficiently. Each container need only store the "deltas" between what is in the base operating environment and your own container context.  Also, because containers are basically just collections of processes running on a Linux host, they are extremely lightweight. Unlike VMs, where it can take several minutes for a machine to be provisioned and boot up, containers can be started and stopped in seconds.


With the advent of Docker, an open source project that allows container images to be easily encoded in a standard format, the use of containers has exploded. Aside from their efficiency, developers can package up application components along with dependencies (like libraries, language run-times and environment settings) into "ready to run" Docker containers.  From an operations standpoint, containers can run on any Docker machine on premises or in most cloud services.  This portability avoids a whole slew of potential technical issues associated with moving applications between environments, and it greatly accelerates progression from development to test to production.
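For instance, a minimal Dockerfile packages application content and its dependencies into a ready-to-run image. The base image and paths below are illustrative, not from a real project:

```dockerfile
# Illustrative only: start from a base image that already contains the
# Apache httpd run-time, then layer the application content on top.
FROM httpd:2.4
# Copy static application content into the image.
COPY ./htdocs/ /usr/local/apache2/htdocs/
# Document the port the web server listens on.
EXPOSE 80
```

Anyone with Docker installed can build and run the resulting image, which is what makes the format so portable.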



What does this have to do with Payments?

Payment applications are affected by these same trends in application architecture.  Imagine a simple code fragment like the one below where we authorize a payment transaction by posting an XML or JSON message to an HTTPS endpoint:


curl -v --tlsv1.2  \
   -H "Content-Type: text/xml; charset=UTF-8"  \
   -X POST  \
   -H "Expect:" \
   -d  \
     '<?xml version="1.0"?>
      <litleOnlineRequest version="9.9" xmlns="" merchantId="1268016">
        <authorization id="ididid" reportGroup="YDP" customerId="12345">
                <name>Jane Doe</name>
                <addressLine1>20 Main Street</addressLine1>
                <city>San Jose</city>
                ...
        </authorization>
      </litleOnlineRequest>'


My application needs to handle and pass multiple secrets to the payment provider’s web-service endpoint.  Things like my merchant ID, my username and my password are all sensitive to varying degrees. In Dockerized applications, where machine instances are created from a registry, a frequent challenge that developers and application architects face is where and how to store these secret credentials.


We can all agree that storing sensitive credentials in a file called “secretpassword.txt” in a Docker image is a bad idea. It turns out that storing an encrypted version of your credentials in a container is no safer, since your code will need to decrypt the credentials at run time, and anyone who gets hold of your container will be able to see how this is done. The same issue applies to storing credentials in a local or remote database: if someone can access my container, and my container can access the database, then any user of my container can retrieve the credentials from the database.


Some REST APIs require one-time authentication and respond with a token that is valid for a short period, an hour or so. This is a little more secure, but you still need to store the initial credentials that allow you to obtain the token somewhere, and anyone who has those initial credentials can generate another token.  It turns out that storing credentials securely is a hard problem. Fortunately, it is a well-recognized one, and most container management platforms incorporate facilities to help with secret management.


Container Management Platforms

Large enterprises increasingly use container management platforms rather than deploying open-source Docker directly.  Popular choices include Kubernetes, Mesos, Rancher, Red Hat OpenShift and Docker's native orchestration (previously Swarm).  Your favorite cloud providers (AWS, Azure or Google) also provide managed container services for a variety of applications.


It turns out that container management platforms have their own secrets to keep including database passwords, SSH keys, REST API credentials and the like.

Because this is such a common problem, most container management frameworks incorporate features specifically designed for secret management. As some examples:


  • Kubernetes secret management – Kubernetes has native support for a secret object to avoid storing sensitive credentials in Pods or containers. Secrets can be passed in environment variables or stored securely in volumes that can be attached to the container.
  • Red Hat OpenShift secret management – OpenShift (which builds on Kubernetes) follows the same approach, allowing a volume of type “secret” to be mounted by Kubernetes pods in OpenShift.
  • Rancher secret management – supports storage of encrypted secrets in a MySQL database accessible only to administrators of the platform or third party services like Vault Transit.
  • Amazon ECS secret management – leverages the parameter store feature in Amazon EC2 System Manager to hold secrets.


There are also solutions to this problem that are decoupled from container management platforms: dedicated secret stores, of which HashiCorp Vault (whose Transit engine was mentioned above) is a popular example, securely store and retrieve secrets on behalf of applications.




An eCommerce example using Kubernetes

Kubernetes (originally developed by Google) is one of the most popular container orchestration frameworks, so it is a good one to use as an example.


Kubernetes orchestration is used in Google Container Engine (GKE), Red Hat OpenShift, Rancher and other platforms.


To illustrate how secret management works, I deployed a three node Kubernetes cluster in the Google cloud.


If you want to create your own Kubernetes cluster in Google's cloud, the quickstart guide in Google's Container Engine documentation is a good place to start.


I’ll skip the setup details and show that I have a running environment by listing the Kubernetes nodes in the cluster.  Each node is actually a Linux VM supporting the Docker environment.  Kubernetes itself is composed of Docker services running across these cluster nodes.


As Kubernetes administrators will know, kubectl is a command that facilitates management of the cluster.




Storing eCommerce credentials as a Kubernetes secret


The first thing I need to do is create a secret object in Kubernetes to store my credentials.


I’ve created a short shell script (below) that does this.


This script lives outside of my Docker environment in a shell only accessible to cluster administrators. In this example we used the Kubernetes default namespace, but in a production environment, we could provide additional security by running applications in their own separate namespace.


We create a secret called ecomm-user-pass that stores both our Vantiv eCommerce username and password credentials using the command below.
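A minimal sketch of such a script follows; "ecomm_user" and "s3cret" are placeholder values, not real credentials:

```shell
#!/bin/sh
# Kubernetes stores secret data base64-encoded; encode the placeholder
# values first if you intend to build the secret from a YAML manifest.
USERNAME_B64=$(printf '%s' 'ecomm_user' | base64)
PASSWORD_B64=$(printf '%s' 's3cret' | base64)

# Or create the secret object directly (requires a configured kubectl):
# kubectl create secret generic ecomm-user-pass \
#   --from-literal=username=ecomm_user \
#   --from-literal=password=s3cret
```

The `--from-literal` form handles the base64 encoding for you; the manual encoding above is only needed when writing the manifest by hand.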




Verify the Kubernetes secret


We can use the kubectl get secrets command to validate that our new secret exists.


I can use kubectl describe secrets/ecomm-user-pass to describe the Vantiv eCommerce secret. I can see there are two elements to the secret, a username and a password.




Passing the secret to my eCommerce containerized application


The trick with secret management is to make the credentials available to the Dockerized application once deployed, without the secret being part of the container drawn from the registry or the secret appearing in any scripts or configuration files used to create the container.


For illustration, I’m going to assume my eCommerce application is written to use an Apache httpd server, and I’ll use the publicly available httpd image from Docker Hub.  In a real example, you will probably have your own container, complete with a web server and your own application code, stored in Docker Hub or another registry.  Also, a real application will almost certainly be more complex, composed of multiple containers such as a load balancer, a database, and a horizontally scalable web tier.


For simplicity, I'll assume that my eCommerce application is deployed in a single Docker container called ecomm-web-app based on the httpd Docker image.


Deploying the Application and secret to the cluster


The next step is to deploy the eCommerce application to the cluster.


In Kubernetes, the basic unit of management is a “Pod”, a collection of managed Docker containers. In our case, the Pod comprises a single container. Pods would typically be deployed in replica sets, with Kubernetes taking responsibility for scaling services and making sure that a set number of Pods is always available.


Pods are deployed in Kubernetes using the kubectl create command.


Our sample eCommerce application is defined in Kubernetes YAML format below in a file called ecomm_app.yaml.
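A sketch of what ecomm_app.yaml plausibly looks like, assuming the names used elsewhere in this walkthrough (the vantiv-ecomm Pod, an ecomm-web-app container based on the stock httpd image, and the ecomm-user-pass secret):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vantiv-ecomm
spec:
  containers:
    - name: ecomm-web-app
      image: httpd
      ports:
        - containerPort: 80
      env:
        # Populate environment variables from the secret object;
        # the credentials themselves never appear in this file.
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: ecomm-user-pass
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: ecomm-user-pass
              key: password
```

The Pod is then created with kubectl create -f ecomm_app.yaml.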





One convenient way to pass secrets to the container is to pass them in the environment of the executing container.


In the YAML definition above, we pass two environment variables, SECRET_USERNAME and SECRET_PASSWORD, and we provide instructions on how to extract the corresponding values from the secret object that we created earlier.


This way, even if we publish the YAML to GitHub (as is often done so that others can easily deploy the same application), our secrets are not exposed because they are unique to our own container management platform.


Next, we create the eCommerce application instance based on the YAML above using the kubectl create command.


Once we create the vantiv-ecomm Pod, we see that the one required instance in the Pod is up and running.




Checking that our Dockerized application knows the secret


The whole point of this exercise is to show that we can pull a Docker container from a registry without the secret needing to be stored in the registry, or being exposed to the person instantiating the container.


Inside the container is a different matter, though: we are trusting privileged users of the container with access to the secret.


We can use kubectl exec to get access to a shell in the running vantiv-ecomm container.


Note that when I connect to the container, I see the original secrets exposed in the Linux shell's environment.
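The mechanism at work here is ordinary environment inheritance, which can be sketched without a cluster (demo_user is a placeholder value):

```shell
# Kubernetes sets the variables in the container's environment, and any
# process started inside the container inherits them. Simulated locally:
OUT=$(SECRET_USERNAME=demo_user sh -c 'echo "username=$SECRET_USERNAME"')
echo "$OUT"

# Against the real Pod (requires a configured kubectl):
# kubectl exec vantiv-ecomm -- env | grep SECRET
```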


Now how cool is that!!




Processing eCommerce Transactions using our secret


Obviously if our container has access to our Vantiv eCommerce credentials, we’re in a position to securely process payment transactions, but having come this far we may as well drive this example to completion.


I add another simple shell script to the container to represent the application performing a payment transaction.


If you want to follow this example, be aware that Docker containers are often minimal and may not include capabilities you need.


To create a script I need vim, and to connect to Vantiv’s eCommerce endpoint I’m going to need to install cURL along with its supporting libraries and prerequisites.


In a Debian-based container, you’ll need to run these commands to retrieve these components before you can use them.


$ apt-get clean
$ apt-get update
$ apt-get install -y vim curl


Once our container has cURL and the vim text editor, we can create the file below to process the payment transaction.


Note that we no longer need to actually store our credentials in the file itself.


Instead we reference them from the container environment using shell variables.
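A sketch of such a script follows. The authentication element, the placeholder defaults, and the ENDPOINT_URL variable are illustrative assumptions; in the cluster, SECRET_USERNAME and SECRET_PASSWORD are already set by Kubernetes:

```shell
#!/bin/sh
# Hypothetical payment script: the credentials come from the container's
# environment, never from this file. The defaults below are placeholders
# so the script can be exercised outside the cluster.
: "${SECRET_USERNAME:=demo_user}"
: "${SECRET_PASSWORD:=demo_pass}"

# Build the request body; the authentication element carries the
# credentials pulled from the environment.
BODY=$(cat <<EOF
<?xml version="1.0"?>
<litleOnlineRequest version="9.9" xmlns="" merchantId="1268016">
  <authentication>
    <user>${SECRET_USERNAME}</user>
    <password>${SECRET_PASSWORD}</password>
  </authentication>
</litleOnlineRequest>
EOF
)
echo "$BODY"

# Post it to the endpoint (ENDPOINT_URL is a placeholder, set elsewhere):
# curl -v --tlsv1.2 -H "Content-Type: text/xml; charset=UTF-8" \
#      -X POST -H "Expect:" -d "$BODY" "$ENDPOINT_URL"
```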




What’s cool about this is that even if this file sits on a shared volume external to the container, our secret remains secure.


The Docker container itself is generally more secure than a traditional server sitting in a data center rack. This is because application components typically expose only a minimal set of services, making for a small "attack surface" for a would-be hacker.


For example, we might open only a single port (port 80) and disallow all login accounts and all network services.  We can use iptables on our Dockerized machine to allow connections only from a trusted upstream load-balancer service, so that even if malicious users have gained entry to containers in other Pods, they cannot compromise our Docker host and gain access to our credentials.








Registered or unregistered marks on this page belong to their respective owners who are unaffiliated with and do not endorse or sponsor Vantiv, and Vantiv likewise does not endorse or sponsor any of the companies or technologies described above.