Adding a Second Service with AWS Copilot - amazon-ecs

I'm very familiar with doing all of this (quite tedious) stuff manually with ECS.
I'm experimenting with Copilot - which is working really well - I have one service up easily, but my solution has multiple services/containers.
How do I now add a second service/container to my cluster?

Short answer: change to your second service's code directory and run copilot init again! If you need to specify a different dockerfile, you can use the --dockerfile flag. If you need to use an existing image, you can use --image with the name of an existing container registry.
Long answer:
Copilot stores metadata in SSM Parameter Store in the account which was used to run copilot app init or copilot init, so as long as you don't change the AWS credentials you're using when you run Copilot, everything should just work when you run copilot init in a new repository.
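If you want a quick sanity check that the new directory can still see the application (assuming the same credentials), something like this should work; the app name my-app is just a placeholder:
copilot app ls               # should list the application you created earlier
copilot app show -n my-app   # shows the app's services and environments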
Some other use cases:
If it's an existing image like redis or postgres and you don't need to customize anything about the actual image or expose it, you can run
copilot init -t Backend\ Service --image redis --port 6379 --name redis
If your service lives in a separate code repository and needs to access the internet, you can cd into that directory and run
copilot init --app $YOUR_APP_NAME --type Load\ Balanced\ Web\ Service --dockerfile ./Dockerfile --port 1234 --name $YOUR_SERVICE_NAME --deploy
So all you need to do is run copilot init --app $YOUR_APP_NAME with the same AWS credentials in a new directory, and you'll be able to set up and deploy your second service.
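As a minimal sketch of that whole flow (the app, service, and environment names here are placeholders):
cd ../second-service   # the directory containing the second service's Dockerfile
copilot init --app my-app --name second-service --type Backend\ Service --dockerfile ./Dockerfile
copilot svc deploy --name second-service --env test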
Copilot also allows you to set up persistent storage associated with a given service by using the copilot storage init command. This specifies a new DynamoDB table or S3 bucket, which will be created when you run copilot svc deploy. It will create one storage addon per environment you deploy the service to, so as not to mix test and production data.
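A rough sketch of that as well (names are placeholders, and flag names may vary slightly between Copilot versions - see copilot storage init --help):
copilot storage init --name my-bucket --storage-type S3 --workload my-service
copilot svc deploy --name my-service --env test   # the S3 bucket addon is created at deploy time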

Related

How to run docker-compose on google cloud run?

I'm new to GCP, and I'm trying to deploy my spring boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, mysql service and Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
- name: 'docker/compose:1.28.2'
args: ['up', '-d']
- name: 'gcr.io/cloud-builders/docker'
args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build on Google Cloud Build succeeds. But when I try to run the image on Google Cloud Run, it doesn't invoke docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep this in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a webserver).
Only HTTP requests are supported by Cloud Run. TCP connections (such as MySQL connections) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). You can easily exceed the total size of the instance memory (your app footprint + the files you store in memory).
Related to the previous point, when the instance is offloaded (you don't manage that, it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
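So only the Spring Boot image itself gets deployed to Cloud Run. As a rough sketch, assuming the image tagged in the cloudbuild.yaml above and a placeholder service name and region:
gcloud run deploy my-app \
  --image gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated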
If you just want to run the full stack yourself with docker-compose (outside Cloud Run), point it at your compose file:
docker-compose -f <your-dir>/docker-compose.yml up
To check the built images, run:
docker images
To check your containers (including stopped ones), run:
docker container ls -a
And to check whether a container is running, run:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we must clone our git repository onto our virtual machine instance.
Then, in the cloned repository (which of course contains the docker-compose.yml, the Dockerfile, and the WAR file), we executed this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And voilà, our solution is working in production with docker-compose.

AWS ECR authentication for compose service on drone 0.4

I'm using Drone 0.4 as my CI. While trying to migrate from a self-hosted private registry to AWS's ECS/ECR, I've come across an authentication issue when referencing these images in my .drone.yml as a composed service.
for example
build:
image: python:3.5
commands:
- some stuff
compose:
db:
image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest
when the drone build runs, it errors out, as it should, saying
Authentication required to pull from ECR. As I understand it, when you authenticate to AWS ECR you use something like the aws-cli's ecr get-login, which gives you a temporary password. I know I could inject that into my drone secrets file and use that value in auth_config, but that would mean updating my secrets file every twelve hours (or however long the token lasts). Is there a way for drone to perform the authentication process itself?
You can run the authentication command in the same shell before executing your build/compose command:
In our setup with docker, we have this shell script snippet in our Jenkins pipeline (it works with or without Jenkins; all you have to do is configure your AWS credentials):
# get-login prints a "docker login ..." command; the backticks execute it
`aws ecr get-login --region us-east-1`
${MAVEN_HOME}/bin/mvn clean package docker:build -DskipTests
docker tag -f ${DOCKER_REGISTRY}/c-server ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
docker push ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
So while running the maven command which creates the image or the subsequent commands to push it in ECR, it uses the authentication it gets from the first command.
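For reference, newer AWS CLI versions (v2) removed ecr get-login; a roughly equivalent sequence there would be to pipe a fresh password into docker login (account id and region below are placeholders):
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <account_id>.dkr.ecr.us-east-1.amazonaws.com
docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest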

Access docker within container on jenkins slave

My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
to run Jenkins fully dockerized, including dynamic slaves, and to be able to create docker containers within the slaves.
Except for the last part everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial if the Unix-docker-sock is properly exposed to the Jenkins master.
The problem
unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the dynamically spawned slaves, this approach does not work.
While building the image, I tried to forward access to docker like this:
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
Unfortunately, so far I get Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to the docker.sock? Or how could I burn in the --privileged flag so that the permission denied problem would go away?
With Docker 1.10, user namespaces were introduced, so sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find the group id of the docker group:
$ id
..... 999(docker)
Run the jenkins container with two volumes - one containing the docker client executable, the other sharing the docker unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.
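Sticking with the --group-add approach, here is a minimal sketch that looks up the docker group id on the host instead of hard-coding 999 (the image name and paths are the ones from the example above; the getent lookup assumes a standard /etc/group setup):
# discover the host's docker group id (999 in the example above)
DOCKER_GID=$(getent group docker | cut -d: -f3)
docker run --name jenkins -tid -p 8080:8080 --group-add="${DOCKER_GID}" \
  -v /path-to-my-docker-client:/home/jenkins/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins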

How to connect to your Cloud SQL instance with a kubernetes service?

I'm creating a container with a connection to a Cloud SQL database. When I run the image with Kubernetes, it does not have an external IP that I can use to allow the new image to connect to the database. And since this is part of the init configuration, I can't wait to find out the public IP and then add it to the database whitelist.
I know there are ways to connect to a database through services in the same cluster, but I can't figure out how to connect to the Cloud SQL instance provided by Google.
There are two ways to solve that:
The first option is to use the Cloud SQL proxy, following the instructions available at: https://cloud.google.com/sql/docs/sql-proxy
In your docker image you need to ensure that fuse is available in your installation; it wasn't in my case (using ubuntu:trusty-20160119 as the base image). If you need to enable that, use the following steps in your Dockerfile:
# install fusermount
RUN apt-get update && apt-get install -y build-essential wget
RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz
RUN tar -xzvf fuse-2.9.5.tar.gz
RUN cd fuse-2.9.5 && ./configure && make -j8 && make install
Then at the startup of your container you must run a script that opens the socket as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings.
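A rough sketch of such a startup invocation, assuming a TCP connection rather than a fuse-backed socket directory (the instance connection name is a placeholder; see the linked docs for the exact form your setup needs):
# start the Cloud SQL proxy, exposing the instance on a local port
./cloud_sql_proxy -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306 &
# the app can then connect to 127.0.0.1:3306 as if the database were local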
The second way is simply to add the IPs of the nodes that make up the kubernetes cluster to the Cloud SQL whitelist.
I prefer the first option, because it works on any machine I deploy the image to, and I don't need to worry about adding or removing IPs if I add more nodes to the kubernetes cluster.

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates, so that we can start editing and pushing changes locally?
I tried gcloud init my-project, however all it seems to do is initialize an empty repo. And indeed when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named after whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info.
This copied all the files from the remote instance to a local folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote (flow) and then did:
cd flow && npm install
This installed all the modules needed for MEAN.io. The important part here is that you have to kill your remote SSH connection before you start running the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then in my local app directory flow I ran gulp to start the local version of the app on port 3000. So it loads up and runs just fine. I needed to create a new user as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
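For completeness, a minimal sketch of that startup order (the data path here is an assumption; use whatever your mongod is configured with):
# start mongodb first
mongod --dbpath ~/data/db &
# then start the app from its directory
cd flow && gulp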
Now, the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.