I want to write a Fabric task that checks whether mongod is running and starts it if necessary. Is this possible?
Here is how I would do it on an Ubuntu server using fabric and fabtools:
from fabric.api import task
from fabtools import require
@task
def setup_mongodb():
    # Install the latest official MongoDB package
    require.deb.key('7F0CEB10', keyserver='keyserver.ubuntu.com')
    require.deb.source('mongodb', 'http://downloads-distro.mongodb.org/repo/ubuntu-upstart', 'dist', '10gen')
    require.deb.package('mongodb-10gen')
    # Make sure the server is started
    require.service.started('mongodb')
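If you specifically want the "check whether mongod is running, and start it if not" behaviour from the question without fabtools, here is a minimal sketch, assuming Fabric 1.x and an Ubuntu service named mongodb (the task name is just an example):

from fabric.api import task, sudo, settings

@task
def ensure_mongodb_running():
    # Query the service status without aborting the task on a non-zero exit code
    with settings(warn_only=True):
        status = sudo('service mongodb status')
    # Start the service only if the status check failed or reports it is not running
    if status.failed or 'running' not in status:
        sudo('service mongodb start')

fabtools' require.service.started does essentially the same check-and-start for you.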
First, I would like to thank you for being here! I hope you're doing well!
So... I'm trying to create an Ubuntu:20.04 container on Google Cloud Run or Kubernetes.
Whenever I try to deploy this Dockerfile on Google Cloud Run:
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
It fails, and shows an error:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable
Apparently, this happens due to the lack of a web server inside the container?
To fix this, I followed this guideline by Google itself.
So, basically, inside the Dockerfile, I just added a couple of lines of code:
They just install Python, Flask and gunicorn, and set the default command so that app.py runs automatically when the container is created.
FROM ubuntu:20.04
RUN apt-get update
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN apt-get install -y python3 python3-pip && pip3 install Flask gunicorn
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
Also, I created a new file "app.py" that imports Flask.
It's just a simple web server...
# Python runs this file; when someone sends a request to this Ubuntu:20.04
# container's IP on port 8080, a simple "Hello World" text is shown.
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    return 'Hello {}!\n'.format(target)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
And boom... It works!! We have Ubuntu:20.04 running on Google Cloud Run... the error was fixed!
So, Google Cloud Run works like this:
if there's a webserver running on that port:
    then GCR LAUNCHES the container
if there's NO webserver running on that port:
    GCR DOESN'T launch the container...
IN SUMMARY:
I just want to run some Python code in an Ubuntu container,
just like I run it on my local machine, where it works perfectly.
Also, this Python code doesn't use Flask or any web service; it runs independently, does some compute work, and communicates with an external database.
So, my question is: how do I deploy a container image that doesn't host a web service on Google Cloud Run or Kubernetes, just like the one I create on my local machine and access through the /bin/bash CLI...?
There might be a misunderstanding of the Google services here.
Google Cloud Run
Runs your web application (a web server) in a container. It is not a service for anything other than web applications (i.e. HTTP only).
Key features: Keeps your server up and running, and can scale out to multiple instances.
Google Kubernetes Engine
Runs services (processes that start and then are meant to stay running) in containers, both stateless (as a Deployment) and stateful (as a StatefulSet). It also supports Jobs, i.e. tasks that perform some work and then terminate (a minimal Job sketch follows at the end of this answer).
Key features: Keeps your server up and running, and can scale out to multiple instances. Can re-run Jobs that failed.
Google Compute Engine
If none of the above fits your needs, you can always go low-level and run and maintain virtual machines, e.g. Linux with containers on them.
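For the workload described in the question (no web server, just compute that talks to an external database), a Kubernetes Job on GKE is the closest fit. Below is a minimal sketch using the official kubernetes Python client; the job name, namespace and image are placeholders, not values from the question:

# Hypothetical example: submit a one-off Job to a GKE cluster
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already configured for the cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="compute-job"),  # placeholder name
    spec=client.V1JobSpec(
        backoff_limit=2,  # retry a failed pod up to two times
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="worker",
                    image="gcr.io/my-project/my-batch-image",  # placeholder image
                )],
            ),
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

The Job runs the container to completion and does not require it to listen on any port.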
According to Celery Documentation:
librabbitmq
If you’re using RabbitMQ (AMQP) as the broker then you can install the librabbitmq module to use an optimized client written in C:
$ pip install librabbitmq
The ‘amqp’ transport will automatically use the librabbitmq module if it’s installed, or you can also specify the transport you want directly by using the pyamqp:// or librabbitmq:// prefixes.
I installed librabbitmq and changed the BROKER_URL setting so that it starts with librabbitmq://.
How do I verify that Celery is now using librabbitmq (i.e., that I did everything correctly)?
1. Uninstall librabbitmq.
2. Ensure that BROKER_URL starts with librabbitmq://.
3. Try to do something with celery (e.g., python manage.py celery worker if using djcelery).
4. The command will fail with ImportError: No module named librabbitmq.
5. Reinstall librabbitmq.
6. Repeat step 3.
7. The command should now work without any problems.
It's not 100% conclusive, but does yield a reasonably good indication that celery is using librabbitmq.
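Alternatively, a rough programmatic check is possible, since kombu resolves the transport class from the broker URL. This is a sketch, assuming your Celery app instance is importable (the import path below is hypothetical):

from myproject.celery import app  # hypothetical import path to your Celery app

conn = app.connection()
# With a librabbitmq:// broker URL, the resolved transport should come from
# kombu's librabbitmq transport rather than the pure-Python amqp one.
print(conn.transport_cls)
print(type(conn.transport).__module__)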
I'm new to the Linux platform. I need to set up mongodb as a start-up service. In Fedora, I was able to run the following commands and successfully complete the task:
chkconfig --add mongodb
chkconfig mongodb on
But in Ubuntu 13.10, the chkconfig command is not available. I found that the update-rc.d command is an alternative, but I'm still unable to execute those commands. How can I achieve this in Ubuntu?
Contrary to Fedora, services installed on an Ubuntu system are enabled by default, so you don't need to add or enable them in the init system.
You can check the service status with:
$ service mongodb status
On 12.04 LTS, the 10gen mongodb package integrates with Ubuntu's upstart init system; you can find the job file at /etc/init/mongodb.conf.
I'm using Fabric 1.6.0 on OS X 10.8.2, running commands on a remote host on Ubuntu Lucid 10.04.
On the server, I can run sudo /etc/init.d/celeryd restart to restart the Celery service.
I pass the same command through fabric using:
@task
def restart():
    run('sudo /etc/init.d/celeryd restart')
Or
@task
def restart2():
    sudo('/etc/init.d/celeryd restart')
Or I use the command-line form: fab <task_that_sets_env.host> -- sudo /etc/init.d/celeryd restart
The command always fails silently, meaning that Fabric returns no errors, but celeryd reports that it's not running.
I'm tearing my hair out here! There's nothing relevant in the Celery log file, and AFAIK Fabric should just pass the commands straight through.
Maybe I'm pretty late to the party, and you can downvote me if this doesn't work, but I've had similar problems running other programs in /etc/init.d with Fabric. My solution (it works with Tomcat and MySQL) is to add pty=False:
@task
def restart():
    sudo('/etc/init.d/celeryd restart', pty=False)
There's documentation on the option here:
http://docs.fabfile.org/en/1.7/api/core/operations.html#fabric.operations.run
I'm new to Celery, and am working on running asynchronous tasks using it.
I want to save the results of my tasks to MongoDB.
I want to use the AMQP broker.
Celery project examples didn't help me much. Can anyone point me to some working examples?
To use MongoDB as your backend store you have to explicitly configure Celery to use MongoDB as the backend.
http://docs.celeryproject.org/en/latest/getting-started/brokers/mongodb.html#broker-mongodb
As you said, the documentation does not show a complete working example. I just started playing with Celery but have been using MongoDB. I created a short working tutorial using MongoDB and Celery: http://skillachie.com/?p=953
However, these snippets should contain all you need to get a hello world going with Celery and MongoDB.
celeryconfig.py
from celery.schedules import crontab

CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "127.0.0.1",
    "port": 27017,
    "database": "jobs",
    "taskmeta_collection": "stock_taskmeta_collection",
}

# Used to schedule tasks periodically, passing optional arguments.
# Can be very useful. Celery does not seem to support one-off scheduled tasks, only periodic ones.
CELERYBEAT_SCHEDULE = {
    'every-minute': {
        'task': 'tasks.add',
        'schedule': crontab(minute='*/1'),
        'args': (1, 2),
    },
}
tasks.py
from celery import Celery
import time

# Specify the MongoDB host and database to connect to
BROKER_URL = 'mongodb://localhost:27017/jobs'

celery = Celery('EOD_TASKS', broker=BROKER_URL)

# Load settings for the backend that stores job results
celery.config_from_object('celeryconfig')

@celery.task
def add(x, y):
    time.sleep(30)
    return x + y
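To see this work end to end, you could start a worker (for example celery -A tasks worker --loglevel=info) and then call the task from a Python shell; a minimal sketch, assuming the two files above are importable:

from tasks import add

result = add.delay(4, 4)       # send the task to the MongoDB broker
print(result.get(timeout=60))  # prints 8 once the worker stores the result in the backend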
I have been testing RabbitMQ as a broker and MongoDB as backend, and MongoDB as both broker and backend. These are my findings. I hope they help someone out there.
Assumption: you have MongoDB running with default settings (localhost:27017).
Setting up the environment using conda (you can use whatever package manager you like):
conda update -n base conda -c anaconda
conda create -n apps python=3.6 pymongo
conda install -n apps -c conda-forge celery
conda activate apps
This updates conda, creates an environment called apps, and installs pymongo and celery.
RabbitMQ as a broker and MongoDB as backend
sudo apt install rabbitmq-server
sudo service rabbitmq-server restart
sudo rabbitmqctl status
If there are no errors, then rabbitmq ought to be running. Let's create tasks in executor.py and call them in runner.py.
# executor.py
import time
from celery import Celery

BROKER_URL = 'amqp://localhost//'
BACKEND_URL = 'mongodb://localhost:27017/from_celery'
app = Celery('executor', broker=BROKER_URL, backend=BACKEND_URL)

@app.task
def pizza_bot(string: str, snooze=10):
    '''Return a dictionary with "bot" and the
    lower-cased string input.
    '''
    print(f'Pretending to be working {snooze} seconds')
    time.sleep(snooze)
    return {'bot': string.lower()}
and we call them in runner.py
# runner.py
import time
from datetime import datetime
from executor import pizza_bot

def run_pizza(msg: str, use_celery: bool = True):
    start_time = datetime.now()
    if use_celery:  # Using celery
        response = pizza_bot.delay(msg)
    else:  # Not using celery
        response = pizza_bot(msg)
    print(f'It took {datetime.now() - start_time}!'
          ' to run')
    print(f'response: {response}')
    return response

if __name__ == '__main__':
    # Call using celery
    response = run_pizza('This finishes extra fast')
    while not response.ready():
        print(f'[Waiting] It is {response.ready()} that we have results')
        time.sleep(2)  # sleep for two seconds
    print('\n We got results:')
    print(response.result)
Run celery on terminal A:
cd path_to_our_python_files
celery -A executor.app worker --loglevel=info
This is for development only; I wanted to see what was happening in the background. In production, run it as a daemon.
Run runner.py on terminal B:
cd path_to_our_python_files
conda activate apps
python runner.py
In terminal A, you will see that the task is received and completed after snooze seconds. In MongoDB, you will see a new database called from_celery with the message and results.
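If you want to double-check the stored results yourself, a small pymongo query works too. This sketch assumes the from_celery backend database used above and the MongoDB backend's default celery_taskmeta collection name:

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')
db = client['from_celery']
# Each finished task is stored as a document with its status and result
for doc in db['celery_taskmeta'].find().sort('date_done', -1).limit(5):
    print(doc['status'], doc['result'])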
MongoDB as both broker and backend
Only a simple modification was needed to set this up. As mentioned, I had to create a config file with the MongoDB backend settings.
# mongo_config.py
# Backend settings
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "localhost",
    "port": 27017,
    "database": "celery",
    "taskmeta_collection": "pizza_collection",
}
Let's create executor_updated.py, which is pretty much the same as executor.py, but the broker is now MongoDB and the backend is added via config_from_object:
# executor_updated.py
import time
from celery import Celery

BROKER_URL = 'mongodb://localhost:27017/celery'
app = Celery('executor_updated', broker=BROKER_URL)

# Load backend settings
app.config_from_object('mongo_config')

@app.task
def pizza_bot(string: str, snooze=10):
    '''Return a dictionary with "bot" and the
    lower-cased string input.
    '''
    print(f'Pretending to be working {snooze} seconds')
    time.sleep(snooze)
    return {'bot': string.lower()}
Run celery on terminal C:
cd path_to_our_python_files
celery -A executor_updated.app worker --loglevel=info
Run runner.py on terminal D:
cd path_to_our_python_files
conda activate apps
python runner.py
Now we have MongoDB as both broker and backend. In MongoDB, you will see a database called celery and a collection called pizza_collection.
Hope this helps in getting you started with these awesome tools.