I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a MySQL service, and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
- name: 'docker/compose:1.28.2'
  args: ['up', '-d']
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build succeeds on Google Cloud Build. But when I try to run the image on Google Cloud Run, it doesn't invoke docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep this in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a web server).
Only HTTP requests are supported by Cloud Run. Plain TCP connections into the service (such as a MySQL connection) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). You can't exceed the total memory of the instance (your app footprint + the files you store in memory).
Related to the previous point, when the instance is offloaded (you don't manage that, it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
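For example, here is a sketch of a cloudbuild.yaml that builds only the app image and deploys it alone to Cloud Run, with MySQL moved to something like Cloud SQL and Cassandra to a VM or a managed service. The service name and region below are placeholders, not values from the original setup:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
# deploy the single app container; the databases are not part of this deployment
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'my-app',
         '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA',
         '--region', 'us-central1', '--platform', 'managed']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']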
docker-compose -f dirfile/cloudbuild.yaml up
Then, to check the images, run this command:
docker images
To check your containers:
docker container ls -a
And to check whether a container is running or not, run this command:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we must clone our Git repository onto our virtual machine instance.
Then, in the cloned repository, which of course contains the docker-compose.yml, the Dockerfile, and the WAR file, we executed this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And voilà, our solution is running in production with docker-compose.
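If the stack should keep running after the SSH session is closed, the same wrapper can be started with compose's detached flag, for example (a minor variation on the command above):
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w="$PWD" \
  docker/compose:1.29.1 up -d
Running docker ps on the VM afterwards confirms the three containers are up.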
First, I would like to thank you for being here! I hope you're doing well!
So... I'm trying to create an Ubuntu:20.04 container on Google Cloud Run or Kubernetes.
Whenever I try to deploy this Dockerfile on Google Cloud Run
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
It fails, and shows an error:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable
Apparently, this happens due to the lack of a web server inside the container?
To fix this, I followed this guideline from Google itself.
So, basically, inside the Dockerfile, I just added a couple of lines:
They just install Python, Flask, and gunicorn, and set the default command to run app.py automatically when the container starts.
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN apt-get install -y python3 python3-pip && pip3 install Flask gunicorn
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
Also, I created a new file, "app.py", that imports Flask.
It's just a simple web server...
# Python runs this file, and when someone sends a request to this Ubuntu:20.04
# container's IP on port 8080, a simple text is shown: "Hello World".
import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    return 'Hello {}!\n'.format(target)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
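For reference, this is roughly how the image can be built and deployed from the source directory (replace $PROJECT_ID with your project; the service name and region below are placeholders):
gcloud builds submit --tag gcr.io/$PROJECT_ID/ubuntu-hello
gcloud run deploy ubuntu-hello \
  --image gcr.io/$PROJECT_ID/ubuntu-hello \
  --platform managed --region us-central1 --allow-unauthenticated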
And boom... It works!! We have Ubuntu:20.04 running on Google Cloud Run... the error was fixed!
So, Google Cloud Run works like this:
if there's a web server listening on that port:
then Cloud Run launches the container
if there's NO web server listening on that port:
Cloud Run DOESN'T launch the container...
IN SUMMARY:
I just want to run Python code in an Ubuntu container,
just like I run it on my local machine, where it works perfectly.
Also, this Python code doesn't use Flask or any web service; it runs independently, does some compute work, and communicates with an external database.
So, my question is: how do I deploy a container image that doesn't host a web service on Google Cloud Run or Kubernetes, just like the one I create on my local machine and access through the /bin/bash CLI...?
There might be a misunderstanding of the Google services here.
Google Cloud Run
Runs your web application (a web server) in a container. It is not a service for anything other than web applications (i.e. HTTP only).
Key features: Keeps your server up and running, and can scale out to multiple instances.
Google Kubernetes Engine
Runs services (processes that start and are meant to keep running) in containers, both stateless (as a Deployment) and stateful (as a StatefulSet). There is also support for Jobs: tasks that perform some work and then terminate.
Key features: Keeps your server up and running, and can scale out to multiple instances. Can re-run Jobs that failed.
Google Compute Engine
If none of the above fits your needs, you can always go low level and run and maintain virtual machines with e.g. Linux and containers on them.
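For the non-web case described in the question, a Kubernetes Job on GKE is a closer fit than Cloud Run. A minimal sketch, with a placeholder image and entrypoint:
apiVersion: batch/v1
kind: Job
metadata:
  name: compute-worker              # hypothetical job name
spec:
  backoffLimit: 3                   # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: gcr.io/my-project/ubuntu-worker:latest   # placeholder image
        command: ["python3", "/app/worker.py"]          # placeholder script doing the compute work
Applying it with kubectl apply -f job.yaml runs the task once; a CronJob can be used instead if it needs to run on a schedule.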
I'm very familiar with doing all of this (quite tedious) stuff manually with ECS.
I'm experimenting with Copilot, which is really working well: I got one service up really easily, but my solution has multiple services/containers.
How do I now add a second service/container to my cluster?
Short answer: change to your second service's code directory and run copilot init again! If you need to specify a different Dockerfile, you can use the --dockerfile flag. If you need to use an existing image, you can use --image with the name of an image in an existing container registry.
Long answer:
Copilot stores metadata in SSM Parameter Store in the account which was used to run copilot app init or copilot init, so as long as you don't change the AWS credentials you're using when you run Copilot, everything should just work when you run copilot init in a new repository.
Some other use cases:
If it's an existing image like redis or postgres and you don't need to customize anything about the actual image or expose it, you can run
copilot init -t Backend\ Service --image redis --port 6379 --name redis
If your service lives in a separate code repository and needs to access the internet, you can cd into that directory and run
copilot init --app $YOUR_APP_NAME --type Load\ Balanced\ Web\ Service --dockerfile ./Dockerfile --port 1234 --name $YOUR_SERVICE_NAME --deploy
So all you need to do is run copilot init --app $YOUR_APP_NAME with the same AWS credentials in a new directory, and you'll be able to set up and deploy your second service.
Copilot also allows you to set up persistent storage associated with a given service by using the copilot storage init command. This specifies a new DynamoDB table or S3 bucket, which will be created when you run copilot svc deploy. It will create one storage addon per environment you deploy the service to, so as not to mix test and production data.
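A hedged example of that workflow, assuming current Copilot flags (the names below are placeholders and the exact flags may vary by Copilot version):
copilot storage init -n my-artifacts -t S3 -w my-service
copilot svc deploy --name my-service --env test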
I have an asp.net core 2.0 application whose docker image runs fine locally, but when that same image is deployed to an AKS cluster, the pods have a status of CrashLoopBackOff and the pod log shows:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409.
And since you can't ssh to AKS clusters, it's pretty difficult to figure this out?
Dockerfile:
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapi.dll"]
Turned out that our build system wasn't putting the app code into the container as we thought. Since the container wasn't runnable, I didn't know how to inspect its contents until I found this command which is a lifesaver for these kinds of situations:
docker run --rm -it --entrypoint=/bin/bash [image_id]
... at which point, you can freely inspect/verify the contents of the container.
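For example, with the Dockerfile above, a couple of commands inside that shell quickly show whether the published output actually made it into the image:
ls -la /app             # should contain myapi.dll and its dependencies
dotnet /app/myapi.dll   # reproduces the startup error interactively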
I just ran into the same issue, and it's because I was missing a key piece of the puzzle.
docker-compose -f docker-compose.ci.build.yml run ci-build
VS2017 Docker Tools will create that docker-compose.ci.build.yml file. After that command is run, the publish folder is populated and docker build -t <tag> will build a populated image (without an empty /app folder).
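For context, the generated docker-compose.ci.build.yml is roughly shaped like the sketch below (the solution name and builder image tag are assumptions here, not the exact file VS emits):
version: '3'
services:
  ci-build:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - .:/src
    working_dir: /src
    # restores and publishes into obj/Docker/publish, which the runtime Dockerfile then copies
    command: /bin/bash -c "dotnet restore ./myapi.sln && dotnet publish ./myapi.sln -c Release -o ./obj/Docker/publish"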
I have been deploying my app from a bash terminal using an app.yaml script and the command:
gcloud app deploy app.yaml
This runs a main.app script to set the environment from a custom made docker image.
How can I deploy this locally only, so that I can make small changes and see their effects before actually deploying, which takes quite a while?
If you want to run your app locally, you should be able to do that outside of the docker container. We actually place very few restrictions on the environment - largely you just need to make sure you're listening on port 8080.
However if you really want to test locally with docker - you can...
# generate the Dockerfile for your applications
gcloud beta app gen-config --custom
# build the docker container
docker build -t myapp .
# run the container
docker run -it -p 8080:8080 myapp
From there, you should be able to hit http://localhost:8080 and see your app running.
Hope this helps!
my question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container
My goal
to run Jenkins fully dockerized including dynamic slaves and being able to create docker-containers within the slaves.
Except for the last part everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial if the Unix-docker-sock is properly exposed to the Jenkins master.
The problem
unlike the slaves which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the slaves which are spawned dynamically, this approach does not work.
I tried to forward access to Docker like this:
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
while building the image. Unfortunately, so far I get a Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to the docker.sock? Or how could I bake in the --privileged flag so that the permission denied problem would go away?
With Docker 1.10, user namespaces were introduced; when they are enabled, sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find group id for docker group:
$ id
..... 999(docker)
Run jenkins container with two volumes - one contains the docker client executable, the other shares the docker unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
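Rather than hard-coding 999, the docker group id can also be read from the socket itself on the host; a small sketch assuming a standard Linux host:
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)   # numeric gid that owns the socket
docker run --name jenkins -tid -p 8080:8080 \
  --group-add=$DOCKER_GID \
  -v /path-to-my-docker-client:/home/jenkins/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins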
Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.