Using Terraform with docker-compose and nginx-proxy

Has anyone tried using all these tools together?
I'm currently using nginx-proxy and docker-compose for a four-container solution.
I'm now trying to make deployment better/faster/cheaper, and I think Terraform might be the piece I'm looking for.
My question is: does Terraform work with docker-compose, or is there too much overlap between them?
Thanks for any advice!

You can use the Terraform Docker provider as already suggested, but if you want to stick to docker-compose for any reason, you can also write your docker-compose file and run the necessary commands through user-data. Take a look at template_file and template_cloudinit_config.
Example
nginx.tpl
#cloud-config
write_files:
  - content: |
      version: '2'
      services:
        nginx:
          image: nginx:latest
    path: /opt/docker-compose.yml
runcmd:
  - 'docker-compose -f /opt/docker-compose.yml up -d'
nginx.tf
data "template_file" "nginx" {
template = "${file("nginx.tpl")}"
}
resource "aws_instance" "nginx" {
instance_type = "t2.micro"
ami = "ami-xxxxxxxx"
user_data = "${data.template_file.nginx.rendered}"
}
I use AWS, but this should work with any provider that supports user-data, as long as the image has cloud-init. This approach is also suitable for autoscaling.
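For the autoscaling case, the same rendered cloud-init template can be fed to a launch configuration instead of a single instance. A minimal sketch of that idea; the resource names, availability zone, and group sizes below are illustrative placeholders, not taken from the answer above:

# Launch configuration reusing the rendered cloud-init user-data
resource "aws_launch_configuration" "nginx" {
  name_prefix   = "nginx-"
  image_id      = "ami-xxxxxxxx"
  instance_type = "t2.micro"
  user_data     = "${data.template_file.nginx.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}

# Autoscaling group that boots each instance from that launch configuration
resource "aws_autoscaling_group" "nginx" {
  launch_configuration = "${aws_launch_configuration.nginx.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 3
}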

You can run a single Docker container, or several, from Terraform using the Docker provider.
https://www.terraform.io/docs/providers/docker/index.html
Sample nginx Terraform config:
provider "docker" {
host = "tcp://ec2-xxxxxxx.compute.amazonaws.com:2375/"
}
resource "docker_image" "nginx" {
name = "nginx:1.11-alpine"
}
resource "docker_container" "nginx-server" {
name = "nginx-server"
image = "${docker_image.nginx.latest}"
ports {
internal = 80
external = 80
}
volumes {
container_path = "/usr/share/nginx/html"
host_path = "/home/scrapbook/tutorial/www"
read_only = true
}
}

Related

Jenkins - Kubernetes - Consolidate Agent Configuration

We are running Jenkins in Kubernetes via the official Helm chart.
Every pipeline has the same agent definition in place.
pipeline {
  agent {
    kubernetes {
      inheritFrom 'default'
      yamlFile 'automation/Jenkins/KubernetesPod.yaml'
    }
  }
The KubernetesPod.yaml looks like this.
metadata:
  labels:
    job-name: cicd_application
spec:
  containers:
    - name: operations
      image: xxxxx.dkr.ecr.us-west-1.amazonaws.com/operations:0.1.3
      command:
        - sleep
      args:
        - 99d
This works fine. Our job DSL looks like this and everything just works.
steps {
  container('operations') {
The problem comes in when that operations container bumps from 0.1.3 to 0.1.4: I now have to create a merge request against 40 pipelines.
Is there a way to:
1. Pull this file in from another repo, or
2. Define and refer to this in JCasC?
Ideally, when we bump the image (it's things like Terraform, Ansible, etc.) we can just do it all at once.
Thanks.

Is there a way to load a private image using a skaffold config without building it?

I have created a mock.Dockerfile which just contains one line.
FROM eu.gcr.io/some-org/mock-service:0.2.0
With that config and a reference to it in the build section, skaffold builds that Dockerfile using the private GCR registry. However, if I remove that Dockerfile, skaffold does not build it, and on startup it only loads the images that are referenced in the build section (public images, like postgres, work as well). So in a local Kubernetes setup like minikube, this results in an
ImagePullBackOff
Failed to pull image "eu.gcr.io/some-org/mock-service:0.2.0": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials
So basically, when I create a one-line Dockerfile and include it, skaffold builds that image and loads it into minikube. It is possible to change the minikube config so that the request to GCR succeeds, but the goal is that developers don't have to change their minikube config...
Is there any other way to get that image loaded into minikube, without changing the config and without that one-line Dockerfile?
skaffold.yaml:
apiVersion: skaffold/v2beta8
kind: Config
metadata:
  name: some-service
build:
  artifacts:
    - image: eu.gcr.io/some-org/some-service
      docker:
        dockerfile: Dockerfile
    - image: eu.gcr.io/some-org/mock-service
      docker:
        dockerfile: mock.Dockerfile
  local: { }
profiles:
  - name: mock
    activation:
      - kubeContext: (minikube|kind-.*|k3d-(.*))
    deploy:
      helm:
        releases:
          - name: postgres
            chartPath: test/postgres
          - name: mock-service
            chartPath: test/mock-service
          - name: skaffold-some-service
            chartPath: helm/some-service
            artifactOverrides:
              image: eu.gcr.io/some-org/some-service
            setValues:
              serviceAccount.create: true
Although GKE comes pre-configured to pull from registries within the same project, Kubernetes clusters generally require special configuration at the pod level to pull from private registries. It's a bit involved.
Fortunately minikube introduced a registry-creds add-on that will configure the minikube instance with appropriate credentials to pull images.
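If you go the add-on route, enabling it is just a couple of minikube commands; the configure step prompts interactively for registry credentials, including GCR. A quick sketch of that flow (not part of the original answer):

minikube addons enable registry-creds
minikube addons configure registry-creds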

Ambassador API Gateway doesn't pickup services

I'm a new Ambassador user here. I have walked through the tutorial in an effort to understand how to use the Ambassador gateway. I am attempting to run this locally via Docker Compose until it's ready for deployment to K8s in production.
My use case is that all HTTP traffic comes in on port 80 and is then directed to the appropriate service. Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory? I ask because this doesn't appear to actually pick up my files (the postgres startup doesn't show in the console). And when I run "docker ps" I only see:
CONTAINER ID IMAGE PORTS NAMES
8bc8393ac04c 05a916199684 k8s_statsd_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
1c00f2341caf d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
fe20c4819514 05a916199684 k8s_statsd_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
ba6415b028ba d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
9df07dc5083d 05a916199684 k8s_statsd_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
682e1f9902a0 d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
bb6d2f749491 quay.io/datawire/ambassador:0.40.2 0.0.0.0:80->80/tcp apigateway_ambassador_1
I have a docker-compose.yaml:
version: '3.1'

# Define the services/containers to be run
services:
  ambassador:
    image: quay.io/datawire/ambassador:0.40.2
    ports:
      - 80:80
    volumes:
      # mount a volume where we can inject configuration files
      - ./config:/ambassador/config
  postgres:
    image: my-postgresql
    ports:
      - '5432:5432'
and in /config/mapping-postgres.yaml:
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432
volumes:
  - ../my-postgres:/docker-entrypoint-initdb.d
environment:
  - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
  - POSTGRES_USER=<>
  - POSTGRES_PASSWORD=<>
volumes and environment are not valid keys in an Ambassador Mapping. Ambassador lets you proxy to postgres, but the authentication has to be handled by your application.
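In other words, the Mapping should carry only Ambassador's own fields, while the volumes and environment settings belong on the postgres service in docker-compose.yaml. A sketch of that split, reusing only the values already shown above:

# /config/mapping-postgres.yaml
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432

# docker-compose.yaml (the postgres service, under services:)
postgres:
  image: my-postgresql
  ports:
    - '5432:5432'
  volumes:
    - ../my-postgres:/docker-entrypoint-initdb.d
  environment:
    - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
    - POSTGRES_USER=<>
    - POSTGRES_PASSWORD=<>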
Having said that, it looks like your Postgres container is not starting. (Perhaps because it needs an initial config). You can check for errors with:
$ docker ps -a | grep postgres
$ docker logs <container-id-from-previous-step>
You can also check a postgres docker compose example here.
Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory?
It's pretty standard, but you can use any directory you like for this.

error: the server doesn't have resource type "svc"

Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
while I am trying to test my configuration in step 11 of "Configure kubectl for Amazon EKS".
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
      #   - name: AWS_PROFILE
      #     value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work through it:
1. Enabled verbose output to make sure the config files are read properly:
kubectl get svc --v=10
2. Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; however, this is not a direct solution but a workaround. Use AWS CLI commands to create the cluster rather than the console. As per the documentation, the user or role which creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS service role has IAM access (I gave it full access, however AssumeRole should do, I guess).
The EC2 machine role should have eks:CreateCluster and IAM access. Worked for me :)
I had this issue and found it was caused by the default key settings in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research. So our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the AWS CLI.
I guess you should uncomment the "env" item and point it at the profile you want from ~/.aws/credentials, because aws-iam-authenticator needs to know exactly which AWS credentials to use.
Refer this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute with the profile name to use.
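For example, to pin the authenticator to the sandbox profile from the credentials file shown earlier, the env block under exec would look something like this (the profile name is taken from that example and is only an assumption about which profile you want):

      env:
        - name: AWS_PROFILE
          value: "sandbox"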

ansible docker postgres volume

I am doing a Postgres deployment with Docker, Ansible, and Terraform in AWS.
Things are going relatively well: I start the instance with Terraform, provision the instance with Docker using Ansible, start my Postgres container with Ansible as well, and attach an EBS volume to my instance, which I intend to use as the main data storage.
But I am confused as to how to attach the volume to the Docker container (not to the instance, as I am able to do that using Terraform).
I imagine it is possible using Ansible or by modifying the Dockerfile, but the documentation on "volumes", which seems to be the answer, is not that clear to me.
So if I had an Ansible playbook like this:
- name: Start postgis
  docker_container:
    name: postgis
    image: "{{ ecr_url }}"
    network_mode: bridge
    exposed_ports:
      - 5432
    published_ports:
      - 5432:5432
    state: started
How would I specify the EBS volume to be used for the data storage of Postgres?
resource "aws_volume_attachment" "ebs-volume-postgis-attach" {
device_name = "/dev/xvdh"
volume_id = "${aws_ebs_volume.ebs-volume-postgis.id}"
instance_id = "${aws_instance.postgis.id}"
}
That was the code used to attach the EBS volume, in case someone is interested.
Please ask for any info that you need; all help is deeply appreciated.
Here is a checklist:
1. Attach the EBS volume (disk) to the EC2 instance (e.g. /dev/xvdh)
2. Make a partition (optional) (e.g. /dev/xvdh1)
3. Make a filesystem on the partition/disk
4. Mount the filesystem inside your EC2 instance (e.g. /opt/ebs_data); steps 3 and 4 are sketched in Ansible below
5. Start the Docker container with a volume (e.g. /opt/ebs_data:/var/lib/postgresql/data)
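A minimal sketch of steps 3 and 4 using Ansible's filesystem and mount modules, assuming the volume is attached as /dev/xvdh, no partition is created, and /opt/ebs_data is used as the mount point (these values come from the checklist above, not from the question itself):

- name: Create a filesystem on the EBS volume (no-op if one already exists)
  filesystem:
    fstype: ext4
    dev: /dev/xvdh

- name: Mount the volume where the Postgres container will store its data
  mount:
    path: /opt/ebs_data
    src: /dev/xvdh
    fstype: ext4
    state: mounted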
In Ansible's docker_container module, volumes is a list, so:
- docker_container:
    name: postgis
    image: "{{ ecr_url }}"
    network_mode: bridge
    exposed_ports:
      - 5432
    published_ports:
      - 5432:5432
    state: started
    volumes:
      - /opt/ebs_data:/var/lib/postgresql/data