IBM Cloud Functions: How to run a Docker function?

Dockerfile
FROM python:3.7
COPY ./src /data/python
WORKDIR /data/python
RUN pip install --no-cache-dir flask
EXPOSE 8080
CMD ["python", "main.py"]
main.py
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # os.environ.items() is not JSON-serializable; return a plain dict
    return {'body': dict(os.environ)}

def run():
    app.run(host='0.0.0.0', port=8080)

if __name__ == '__main__':
    run()
Result of clicking invoke:
[
"2021-01-29T09:53:30.727847Z stdout: * Serving Flask app \"main\" (lazy loading)",
"2021-01-29T09:53:30.727905Z stdout: * Environment: production",
"2021-01-29T09:53:30.727913Z stdout: WARNING: This is a development server. Do not use it in a production deployment.",
"2021-01-29T09:53:30.727918Z stdout: Use a production WSGI server instead.",
"2021-01-29T09:53:30.727923Z stdout: * Debug mode: off",
"2021-01-29T09:53:30.731130Z stderr: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)",
"2021-01-29T09:53:30.747035Z stderr: 172.30.139.167 - - [29/Jan/2021 09:53:30] \"\u001b[33mPOST /init HTTP/1.1\u001b[0m\" 404 -",
"2021-01-29T09:53:30.748Z stderr: The action did not initialize or run as expected. Log data might be missing."
]
I've added the Docker container to IBM Cloud Functions. What would be the best way to approach this?
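For reference, the failing request in the log can be reproduced locally: the platform POSTs to /init (and then /run), and the Flask app above defines neither route. A quick check (image name is hypothetical):

docker build -t my-action .
docker run -p 8080:8080 my-action
curl -i -X POST http://localhost:8080/init   # 404: the app has no /init route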

Docker images which are uploaded to IBM Cloud Functions must implement a specific REST interface: the platform POSTs to /init and then /run, which is exactly the request your Flask app answers with a 404. The easiest way to satisfy this interface is to base your container on the openwhisk/dockerskeleton image.
Please see How to run a docker image in IBM Cloud functions? and https://github.com/iainhouston/dockerPython for more details.
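For illustration, a minimal sketch of the skeleton approach (file names are hypothetical): the openwhisk/dockerskeleton image expects your code as an executable at /action/exec that reads a JSON argument and prints one line of JSON to stdout.

FROM openwhisk/dockerskeleton
COPY exec.py /action/exec
RUN chmod +x /action/exec

exec.py

#!/usr/bin/env python3
# /action/exec contract: argv[1] holds the action parameters as a JSON string;
# the last line printed to stdout must be the JSON result of the action.
import json
import os
import sys

args = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
print(json.dumps({"body": dict(os.environ), "params": args}))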

The docs for IBM Cloud Functions have some pointers on how to create Docker-based functions. IMHO Cloud Functions is more for short-running serverless workloads, so I would like to point you to another serverless technology, IBM Cloud Code Engine. Its model is based on Docker containers, and one of its use cases is HTTP-based web applications, such as your Flask app.
You can define the Dockerfile however you want, don't need a special skeleton, and can just follow the guide on Dockerfile best practices for Code Engine.
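If you try Code Engine, deploying the unmodified Flask container is roughly a one-liner (application and image names are placeholders):

ibmcloud ce application create --name my-flask-app --image us.icr.io/<namespace>/my-flask-app --port 8080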

Related

I can't run a simple Ubuntu container on Google Cloud Kubernetes!! Does it only allow containers that host a webserver?

First, I would like to thank you for being here! I hope you're doing well!
So... I'm trying to create an Ubuntu:20.04 container on Google Cloud Run or Kubernetes.
Whenever I try to deploy this Dockerfile on Google Cloud Run
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
It fails, and shows an error:
The user-provided container failed to start and listen on the port defined by the PORT=8080 environment variable
Apparently, this happens due to the lack of a webserver inside the container?
To fix this, I followed this guideline from Google itself.
So, basically, I just added a couple of lines to the Dockerfile:
They install Python, Flask and gunicorn and set the default command so that app.py runs automatically when the container starts.
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# the Ubuntu package providing pip is python3-pip
RUN apt-get install -y python3 python3-pip && pip3 install Flask gunicorn
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
Also, I created a new file "app.py" that imports Flask.
It's just a simple webserver...
# Python runs this file; when someone sends a request to this Ubuntu:20.04
# container's IP on port 8080, the text "Hello World!" is returned.
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    return 'Hello {}!\n'.format(target)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
And boom... It works!! We have Ubuntu:20.04 running on Google Cloud Run... the error was fixed!
So, Google Cloud Run works like:
if there's a webserver running on that port:
    GCR launches the container
if there's NO webserver running on that port:
    GCR DOESN'T launch the container...
IN SUMMARY:
I just want to run Python code in an Ubuntu container, just like I run it on my local machine, where it works perfectly.
This Python code doesn't use Flask or any web service; it runs independently, does some compute work and communicates through an external database.
So, my question is: how do I deploy a container image that doesn't host a web service on Google Cloud Run or Kubernetes, just like the one I create on my local machine and access through the /bin/bash CLI...?
There might be a misunderstanding of the Google services here.
Google Cloud Run
Runs your web application (a web server) in a container. It is not a service for anything other than web applications (HTTP only).
Key features: Keeps your server up and running, and can scale out to multiple instances.
Google Kubernetes Engine
Runs services (processes that start and then are meant to stay running) in containers, both stateless (as a Deployment) and stateful (as a StatefulSet). There is also support for Jobs: tasks that perform some work and then terminate.
Key features: Keeps your server up and running, and can scale out to multiple instances. Can re-run Jobs that failed.
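For a workload like the asker's (no web server, just compute work against an external database), a Kubernetes Job is the closest fit. A minimal sketch (image name and command are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: compute-task
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: gcr.io/<project>/my-worker   # image with your Python code, no webserver needed
        command: ["python3", "compute.py"]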
Google Compute Engine
If none of the above fits your needs, you can always go low-level and run and maintain virtual machines yourself, e.g. Linux with containers on it.

How to run docker-compose on google cloud run?

I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a MySQL service and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
- name: 'docker/compose:1.28.2'
  args: ['up', '-d']
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build succeeds on Google Cloud Build. But when I try to run the image on Google Cloud Run, it doesn't call docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep the following in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a webserver).
Only HTTP requests are supported by Cloud Run. TCP connections (such as a MySQL connection) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). You can't exceed the total instance memory (your app's footprint plus the files stored in memory).
Related to the previous point: when the instance is offloaded (you don't manage that; it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
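In practice that means deploying only the app image to Cloud Run and pointing it at the externally hosted databases, for example via environment variables (a sketch; the host values are placeholders):

gcloud run deploy myapp \
  --image gcr.io/$PROJECT_ID/myapp \
  --set-env-vars MYSQL_HOST=<external-mysql-host>,CASSANDRA_HOST=<external-cassandra-host>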
docker-compose -f <dir>/docker-compose.yml up
To check it, run this command:
docker images
To check your containers:
docker container ls -a
To check whether a container is running:
docker ps
Finally, I deployed my solution with docker-compose on a Google virtual machine instance.
First, we cloned our Git repository onto the virtual machine instance.
Then, in the cloned repository, which of course contains the docker-compose.yml, the Dockerfile and the war file, we executed this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And, voilà, our solution is working in production with docker-compose.

How can I deploy a Flask website with MongoDB to Google Cloud Platform?

I'm using Flask as my backend and MongoDB for storing the data. I've connected my website to MongoDB Atlas and tested everything locally; it works as expected. But when I try deploying the website to Google Cloud Platform (GCP), MongoDB doesn't work at all. The only configuration file I have is app.yaml, and it looks like this:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: auto
I have connected my project to an instance and enabled the App Engine API. For deployment, I simply ran the following command in Cloud Shell:
gcloud app deploy
Do I need any other configuration files to run MongoDB? Or is the whole process wrong and do I need to do it another way?
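Nothing in app.yaml itself configures MongoDB; the app just needs the Atlas connection string at runtime. A minimal sketch with pymongo (the environment variable and database names are assumptions; pymongo must also be listed in requirements.txt, and Atlas must allow connections from GCP's IP range):

import os
from flask import Flask
from pymongo import MongoClient

app = Flask(__name__)
# MONGO_URI is a hypothetical env var holding the Atlas connection string
client = MongoClient(os.environ['MONGO_URI'])
db = client['mydatabase']  # hypothetical database name

@app.route('/')
def index():
    # count documents as a simple connectivity check
    return {'documents': db['mycollection'].count_documents({})}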

docker-compose 'ports' mapping in development vs travisCI

I have a development environment using docker-compose, it has 5 services:
db (postgresql)
redis
celery
celery-beat
web (a django web app - development is occurring here)
In development, I run the top four in containers with
docker-compose up db redis celery celery-beat
These four containers can connect to each other no problem.
However, while I work on the web app, I need to run it locally so I can get live updates and debug. Running locally, though, the web app can't connect to the containers, so I need to map the ports on the containers, e.g.:
db:
  ports:
    - 5432:5432
so that my locally running web app can connect with them.
However, if I then push my code to github, TravisCI fails it with this error:
ERROR: for db Cannot start service db: b'driver failed programming external connectivity on endpoint hackerspace_db_1 (493e7fb9e53f551b3b1eea35f9e2baf5725e9077fc642d8121891cab31b34373): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use'
ERROR: Encountered errors while bringing up the project.
The command "docker-compose run web sh -c "python src/manage.py test src/"" exited with 1.
TravisCI passes without the port mapping, but I have to develop with port mapping.
How can I get around this so that both setups work with the same settings? I'm willing to try different workflows, as I'm new to Docker and containers and still finding my way around.
I've tried:
Developing in a container with Visual Studio Code's Remote - Containers extension, but there doesn't seem to be a way to view the debug log/output
Finding a parameter to add to the docker-compose up ... that would map the ports when I run them, but there doesn't seem to be an option for this.
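One common workaround (not from the original thread) is to keep the port mappings in a docker-compose.override.yml. docker-compose up merges that file automatically on a dev machine, while CI can point explicitly at the base file and never bind the host ports:

docker-compose.override.yml

version: "3"
services:
  db:
    ports:
      - "5432:5432"

On Travis, docker-compose -f docker-compose.yml run web ... then uses only the base file, so port 5432 is never bound on the host.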

Using supervisor to run a flask app

I am deploying my Flask application on WebFaction. I am using flask-socketio, which has led me to deploying it as a Custom Websocket App (listening on a port). Flask-socketio's documentation instructs me to deploy my app by starting the server with the call socketio.run(app, port= < port_listening_on >) in my main Python script. I have installed eventlet on the server, so socketio.run should run the app on the eventlet web server.
I can call python < app >.py and all works great – the server runs, I can view it at the domain, sockets are working, etc. My problems start when I attempt to turn this into a long-running process. I've been advised to use supervisor, which I have installed and configured on my webapp following these instructions: https://community.webfaction.com/questions/18483/how-do-i-install-and-use-supervisord-to-control-long-running-processes
The problem is that once I actually add the command for supervisor to run my app, it errors with:
Exited too quickly
My log states the above error as well as:
(exit status 1; not expected)
In my supervisor config file I currently have the following program config:
[program:<prog_name>]
command=/usr/bin/python2.7 /home/<user>/webapps/<app_name>/<app>.py
autostart=true
autorestart=true
I have tried removing and adding settings, but it all leads to the same FATAL error.
So this is what part of my supervisor config looks like; I'm using gunicorn to run my Flask app.
Also, I'm logging errors to a file from the supervisor config, so if you do that, it might help you see why it's not starting correctly.
[program:gunicorn]
command=/juzten/venv/bin/gunicorn run:app --preload -p rocket.pid -b 0.0.0.0:5000 --access-logfile "-"
directory=/juzten/app-folder-name
user=juzten
autostart=true
autorestart=unexpected
stdout_logfile=/juzten/gunicorn.log
stderr_logfile=/juzten/gunicorn.log
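After editing the config, the standard supervisorctl cycle reloads it and shows the program's state; the log file configured above is usually where the "exited too quickly" reason shows up:

supervisorctl reread
supervisorctl update
supervisorctl status gunicorn
tail -n 50 /juzten/gunicorn.log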