Flask Application not starting up, server says HTTP 500 - deployment

My Flask app is in the $OPENSHIFT_REPO_DIR/repo directory with the following files:
..repo$ ls
runserver.py app.py
and my app.py looks like this:
import os

def run_simple_httpd_server(app, ip, port=8080):
    from wsgiref.simple_server import make_server
    make_server(ip, port, app).serve_forever()

if __name__ == '__main__':
    ip = os.environ['OPENSHIFT_INTERNAL_IP']
    port = 8080
    from runserver import run
    run_simple_httpd_server(run, ip, port)
while runserver.py looks like this:
from configuration import app
from core.expense import expense
from core.budget import budget

def run():
    app.register_blueprint(budget)
    app.register_blueprint(expense)
    app.run()
When I restart my app, I do not see anything happening:
> ctl_app restart
When I hit the URL in the browser, it says:
A server error occurred. Please contact the administrator.
I cannot even find the logs anywhere. What am I doing wrong here? I am doing the deployment for the very first time.

How are you deploying your Flask application? Are you using the Flask example on GitHub: https://github.com/openshift/flask-example ?
Overall, you shouldn't need to start your app from SSH on the gear, as our start/stop hooks should handle that. Give the flask-example a try. Otherwise, you can review your logs to troubleshoot your 500 error:
https://www.openshift.com/faq/how-to-troubleshoot-application-issues-using-logs

I had to call my application from app.py. I did the following and got my app running:
cd ~/app_root/repo
vi app.py
# change the last part of the file to the following
# (you need to do this every time code is pushed via git push)
if __name__ == '__main__':
    ip = os.environ['OPENSHIFT_INTERNAL_IP']
    port = 8080
    from runserver import run
    run(ip, port)
# restart the app
ctl_app restart
and my runserver.py looks like this:
def run(host, port):
    from configuration import app
    app.run(host=host, port=port)
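For reference, the wsgiref approach from the original app.py could also have worked; the underlying problem there was that the run function itself was passed to make_server instead of the Flask application object. A minimal sketch, reusing the question's file names and a hypothetical create_app helper that returns the configured app:

# runserver.py (sketch)
from configuration import app
from core.expense import expense
from core.budget import budget

def create_app():  # hypothetical helper: registers blueprints and returns the app
    app.register_blueprint(budget)
    app.register_blueprint(expense)
    return app

# app.py (sketch)
import os
from wsgiref.simple_server import make_server
from runserver import create_app

if __name__ == '__main__':
    ip = os.environ['OPENSHIFT_INTERNAL_IP']
    # make_server() must receive the WSGI callable (the Flask app), not a plain function
    make_server(ip, 8080, create_app()).serve_forever()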

Related

connecting to a mongo client using .OVPN file in python script

I am stuck in a situation where I want to use a Python script to connect to a VPN and then connect to a Mongo client. Our company has given me an .ovpn file, which I connect to using the OpenVPN Connect client. After the VPN connection is made, I connect to the Mongo server with the pymongo module in Python; everything works fine and I can run all the relevant scripts on the DB.
I want to ask if it is possible to NOT use the OpenVPN Connect client (because it routes all of my internet traffic through the VPN, which is what I don't want) and instead use the .ovpn file from the Python script itself to connect to the Mongo server. I heard that we can use the subprocess module to connect using an .ovpn file, but I don't know how.
I am on a Windows computer, if that helps. I hope I got the message across. Any help will be appreciated.
Here is the script I wrote, but it gave me a server timeout error:
import subprocess
import pymongo

subprocess.run(["openvpn", "--config", "test-vpn.ovpn"], shell=True)
client = pymongo.MongoClient('CONNECTIONSTRING')
client.list_database_names()
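One likely issue in the snippet above is that subprocess.run blocks until the openvpn process exits, so the Mongo connection is never attempted while the tunnel is actually up. A rough sketch of a non-blocking variant, assuming openvpn is on PATH; the wait time and connection string are placeholders:

import time
import subprocess
import pymongo

# Start OpenVPN in the background instead of waiting for it to exit.
# On Windows this usually needs the full path to openvpn.exe and elevated rights.
vpn = subprocess.Popen(["openvpn", "--config", "test-vpn.ovpn"])

try:
    # Crude wait for the tunnel to come up; watching OpenVPN's log output is more robust.
    time.sleep(15)
    client = pymongo.MongoClient('CONNECTIONSTRING')
    print(client.list_database_names())
finally:
    vpn.terminate()  # drop the tunnel when done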

I can't run a simple Ubuntu container on Google Cloud Kubernetes!! Does it only allow containers that host a webserver?

First, I would like to thank you for being here! I hope you're doing well!
So... I'm trying to create an Ubuntu:20.04 container on Google Cloud Run or Kubernetes.
Whenever I try to deploy this Dockerfile on Google Cloud Run:
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
It fails, and shows an error:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable
Apparently, this happens due to the lack of a webserver inside the container?
To fix this, I followed this guideline from Google itself.
So, basically, inside the Dockerfile, I just added a couple of lines:
They install Python, Flask and gunicorn and set the container to run app.py automatically when it is created.
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN apt-get install -y python3 && apt-get install -y pip && pip install Flask gunicorn
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
Also, I created a new file "app.py" that imports Flask.
It's just a simple webserver...
# Python runs this file, and when someone sends a request to this Ubuntu:20.04
# container's IP on port 8080, the simple text "Hello World!" is shown.
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    return 'Hello {}!\n'.format(target)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
And boom... it works!! We have Ubuntu:20.04 running on Google Cloud Run... the error was fixed!
So, Google Cloud Run seems to work like this:
if there's a webserver running on that port:
    GCR launches the container
if there's NO webserver running on that port:
    GCR DOESN'T launch the container...
In summary:
I just want to run Python code in an Ubuntu container, just like I do on my local machine, where it works perfectly.
Also, this Python code doesn't use Flask or any web service; it runs independently, does some compute work, and communicates with an external database.
So, my question is: how do I deploy a container image that doesn't host a web service on Google Cloud Run or Kubernetes, just like the one I create on my local machine and access through the /bin/bash CLI?
There might be a misunderstanding of the Google services here.
Google Cloud Run
Runs your web application (a web server) in a container. It is not a service for anything other than web applications (i.e. HTTP only).
Key features: Keeps your server up and running, and can scale out to multiple instances.
Google Kubernetes Engine
Runs services (processes that start and are meant to stay running) in containers, both stateless, as a Deployment, and stateful, as a StatefulSet. There is also support for Jobs, i.e. tasks that perform some work and then terminate.
Key features: Keeps your server up and running, and can scale out to multiple instances. Can re-run Jobs that failed.
Google Compute Engine
If none of the above fits your needs, you can always go low level and run and maintain virtual machines with e.g. Linux and containers on them.
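To make the distinction concrete, the kind of non-web workload described in the question would fit GKE as a plain long-running process (a Deployment, or a Job if it should finish and exit) rather than an HTTP server. A purely illustrative sketch; the environment variable and the compute/database step are placeholders, not from the question:

# worker.py -- illustrative non-web workload: no port to listen on, just work in a loop
import os
import time

def do_compute_step():
    # placeholder for the real compute work and external-database communication
    print("doing some work...")

if __name__ == "__main__":
    interval = int(os.environ.get("WORK_INTERVAL_SECONDS", "60"))  # hypothetical setting
    while True:
        do_compute_step()
        time.sleep(interval)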

IBM Cloud Functions: How to run a Docker function?

Dockerfile
FROM python:3.7
COPY ./src /data/python
WORKDIR /data/python
RUN pip install --no-cache-dir flask
EXPOSE 8080
CMD ["python", "main.py"]
main.py
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return {'body': os.environ.items()}

def run():
    app.run(host='0.0.0.0', port=8080)

if __name__ == '__main__':
    run()
The result of clicking "Invoke" is:
[
"2021-01-29T09:53:30.727847Z stdout: * Serving Flask app \"main\" (lazy loading)",
"2021-01-29T09:53:30.727905Z stdout: * Environment: production",
"2021-01-29T09:53:30.727913Z stdout: WARNING: This is a development server. Do not use it in a production deployment.",
"2021-01-29T09:53:30.727918Z stdout: Use a production WSGI server instead.",
"2021-01-29T09:53:30.727923Z stdout: * Debug mode: off",
"2021-01-29T09:53:30.731130Z stderr: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)",
"2021-01-29T09:53:30.747035Z stderr: 172.30.139.167 - - [29/Jan/2021 09:53:30] \"\u001b[33mPOST /init HTTP/1.1\u001b[0m\" 404 -",
"2021-01-29T09:53:30.748Z stderr: The action did not initialize or run as expected. Log data might be missing."
]
I've added the Docker container to IBM Cloud Functions
What would be the best way to approach this?
Docker images which are uploaded to IBM Cloud Functions must implement certain REST interfaces. The easiest way to achieve this is to base your container on the openwhisk/dockerskeleton image.
Please see
How to run a docker image in IBM Cloud functions?
and https://github.com/iainhouston/dockerPython
for more details
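To illustrate what those REST interfaces mean: the Functions runtime does not call arbitrary routes such as /; it POSTs to /init once and then to /run for each invocation, and expects a JSON object in response (which is why the log above shows a 404 for POST /init). A simplified sketch of that contract in Flask; in practice the openwhisk/dockerskeleton image already implements it for you:

# main.py -- simplified sketch of the /init and /run contract for Docker actions
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/init', methods=['POST'])
def init():
    # Called once when the action container is set up; nothing to prepare here.
    return jsonify({'ok': True})

@app.route('/run', methods=['POST'])
def run():
    # Invocation parameters arrive under the "value" key of the POST body.
    params = (request.get_json(silent=True) or {}).get('value', {})
    return jsonify({'body': 'Hello {}!'.format(params.get('name', 'World'))})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)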
The docs for IBM Cloud Functions have some pointers on how to create Docker-based functions. IMHO Cloud Functions are more for short-running serverless workloads, and I would like to point you to another serverless technology in the form of IBM Cloud Code Engine. Its model is based on Docker containers, and one of its use cases is HTTP-based web applications, e.g., like your Flask app.
You can define the Dockerfile as you want, don't need a special skeleton, and can just follow this guide on Dockerfile best practices for Code Engine.

Using supervisor to run a flask app

I am deploying my Flask application on WebFaction. I am using flask-socketio, which has led me to deploying it as a Custom Websocket App (listening on a port). Flask-socketio's documentation instructs me to deploy my app by starting the server with the call socketio.run(app, port= < port_listening_on >) in my main Python script. I have installed eventlet on the server, so socketio.run should run the app on the eventlet web server.
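For context, the entry script in that kind of setup looks roughly like this (a sketch only; the module name, route, and port are placeholders):

# myapp.py -- illustrative flask-socketio entry point
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)  # with eventlet installed, socketio.run() serves via eventlet

@app.route('/')
def index():
    return 'hello'

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)  # port is whatever WebFaction assigned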
I can call python < app >.py and all works great – the server runs, I can view it at the domain, sockets are working, etc. My problems start when I attempt to turn this into a long-running process. I've been advised to use supervisor, which I have installed and configured on my webapp following these instructions: https://community.webfaction.com/questions/18483/how-do-i-install-and-use-supervisord-to-control-long-running-processes
The problem is, once I actually add the command for supervisor to run my app, it errors with:
Exited too quickly
My log states the above error as well as:
(exit status 1; not expected)
In my supervisor config file I currently have the following program config:
[program:<prog_name>]
command=/usr/bin/python2.7 /home/<user>/webapps/<app_name>/<app>.py
autostart=true
autorestart=true
I have tried removing and adding settings, but it all leads to the same FATAL error.
This is what part of my supervisor config looks like; I'm using gunicorn to run my Flask app.
I'm also logging errors to a file from the supervisor config, so if you do that, it might help you see why it's not starting correctly.
[program:gunicorn]
command=/juzten/venv/bin/gunicorn run:app --preload -p rocket.pid -b 0.0.0.0:5000 --access-logfile "-"
directory=/juzten/app-folder-name
user=juzten
autostart=true
autorestart=unexpected
stdout_logfile=/juzten/gunicorn.log
stderr_logfile=/juzten/gunicorn.log
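The run:app part of that command assumes a module named run.py that exposes a Flask application object called app; a minimal sketch of such a module (illustrative, not taken from the answer):

# run.py -- hypothetical module matching the "run:app" reference above
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok'

For the flask-socketio case from the question, gunicorn would also typically need an async worker, e.g. --worker-class eventlet -w 1, per the flask-socketio deployment documentation.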

Working example of celery with mongo DB

I'm new to Celery and am working on running asynchronous tasks with it.
I want to save the results of my tasks to MongoDB.
I want to use the AMQP broker.
Celery project examples didn't help me much. Can anyone point me to some working examples?
To use MongoDB as your backend store you have to explicitly configure Celery to use MongoDB as the backend.
http://docs.celeryproject.org/en/latest/getting-started/brokers/mongodb.html#broker-mongodb
As you said, the documentation does not show a complete working example. I just started playing with Celery but have been using MongoDB. I created a short working tutorial using MongoDB and Celery: http://skillachie.com/?p=953
However, these snippets should contain all you need to get a hello world going with Celery and MongoDB.
celeryconfig.py
from celery.schedules import crontab

CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "127.0.0.1",
    "port": 27017,
    "database": "jobs",
    "taskmeta_collection": "stock_taskmeta_collection",
}

# Used to schedule tasks periodically, passing optional arguments.
# Can be very useful. Celery does not seem to support one-off scheduled tasks, only periodic ones.
CELERYBEAT_SCHEDULE = {
    'every-minute': {
        'task': 'tasks.add',
        'schedule': crontab(minute='*/1'),
        'args': (1, 2),
    },
}
tasks.py
from celery import Celery
import time

# Specify the MongoDB host and database to connect to
BROKER_URL = 'mongodb://localhost:27017/jobs'
celery = Celery('EOD_TASKS', broker=BROKER_URL)

# Load the backend settings used to store job results
celery.config_from_object('celeryconfig')

@celery.task
def add(x, y):
    time.sleep(30)
    return x + y
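To try these snippets out, start a worker against the tasks module and then queue the task from another shell; a short sketch of the calling side (assuming tasks.py and celeryconfig.py are in the current directory):

# First, in one terminal:  celery -A tasks worker --loglevel=info
from tasks import add

result = add.delay(4, 4)       # queue the task via the broker
print(result.ready())          # False until the 30-second sleep finishes
print(result.get(timeout=60))  # 8, fetched from the MongoDB result backend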
I have been testing RabbitMQ as a broker and MongoDB as backend, and MongoDB as both broker and backend. These are my findings. I hope they help someone out there.
Assumption: you have MongoDB running on the default settings (localhost:27017).
Setting up the environment using conda (you can use whatever package manager you prefer):
conda update -n base conda -c anaconda
conda create -n apps python=3.6 pymongo
conda install -n apps -c conda-forge celery
conda activate apps
This updates conda, creates an environment called apps, and installs pymongo and celery.
RabbitMQ as a broker and MongoDB as backend
sudo apt install rabbitmq-server
sudo service rabbitmq-server restart
sudo rabbitmqctl status
If there are no errors, then rabbitmq ought to be running. Let's create tasks in executor.py and call them in runner.py.
# executor.py
import time
from celery import Celery
BROKER_URL = 'amqp://localhost//'
BACKEND_URL = 'mongodb://localhost:27017/from_celery'
app = Celery('executor', broker=BROKER_URL, backend=BACKEND_URL)
@app.task
def pizza_bot(string: str, snooze=10):
    '''Return a dictionary with "bot" mapped to the
    lower-cased string input.
    '''
    print(f'Pretending to be working {snooze} seconds')
    time.sleep(snooze)
    return {'bot': string.lower()}
and we call them in runner.py
# runner.py
import time
from datetime import datetime
from executor import pizza_bot
def run_pizza(msg: str, use_celery: bool = True):
    start_time = datetime.now()
    if use_celery:  # Using celery
        response = pizza_bot.delay(msg)
    else:  # Not using celery
        response = pizza_bot(msg)
    print(f'It took {datetime.now()-start_time}!'
          ' to run')
    print(f'response: {response}')
    return response

if __name__ == '__main__':
    # Call using celery
    response = run_pizza('This finishes extra fast')
    while not response.ready():
        print(f'[Waiting] It is {response.ready()} that we have results')
        time.sleep(2)  # sleep two seconds between polls
    print('\n We got results:')
    print(response.result)
Run celery on terminal A:
cd path_to_our_python_files
celery -A executor.app worker --loglevel=info
This is done in development only; I wanted to see what was happening in the background. In production, run it daemonized.
Run runner.py on terminal B:
cd path_to_our_python_files
conda activate apps
python runner.py
In terminal A, you will see that the task is received and, after snooze seconds, completed. In your MongoDB, you will see a new database called from_celery with the message and results.
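If you want to peek at what the backend stored, a quick pymongo check works; this assumes Celery's default result collection name (celery_taskmeta), since no taskmeta_collection was configured for this variant:

import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017/')
db = client['from_celery']
print(db.list_collection_names())        # expect something like ['celery_taskmeta']
for doc in db['celery_taskmeta'].find():
    print(doc['status'], doc.get('result'))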
MongoDB as both broker and backend
A simple modification was needed to set this up. As mentioned, I had to create a config file to hold the MongoDB backend settings.
# mongo_config.py
# Backend settings
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "localhost",
    "port": 27017,
    "database": "celery",
    "taskmeta_collection": "pizza_collection",
}
Let's create executor_updated.py, which is pretty much the same as executor.py, except that the broker is now MongoDB and the backend is added via config_from_object:
# executor_updated.py
import time
from celery import Celery
BROKER_URL = 'mongodb://localhost:27017/celery'
app = Celery('executor_updated', broker=BROKER_URL)

# Load the backend settings
app.config_from_object('mongo_config')

@app.task
def pizza_bot(string: str, snooze=10):
    '''Return a dictionary with "bot" mapped to the
    lower-cased string input.
    '''
    print(f'Pretending to be working {snooze} seconds')
    time.sleep(snooze)
    return {'bot': string.lower()}
Run celery on terminal C:
cd path_to_our_python_files
celery -A executor_updated.app worker --loglevel=info
Run runner.py on terminal D:
cd path_to_our_python_files
conda activate apps
python runner.py
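Note that runner.py as shown earlier imports pizza_bot from executor, which still points at the RabbitMQ broker; for this MongoDB-only variant the import would presumably need to target executor_updated instead, e.g.:

# runner.py (changed import for the MongoDB-as-broker variant)
from executor_updated import pizza_bot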
Now we have MongoDB as both broker and backend. In MongoDB, you will see a database called celery and a collection called pizza_collection.
Hope this helps in getting you started with these awesome tools.