Celery setup for multiple Django projects running in Docker Swarm

We have multiple Django Projects (in dedicated GitHub repos) running in Docker Swarm. We want to set up Celery in such a way that it can be used across all the projects.
Is there a way to achieve this? I am looking for more ideas and considerations while architecting this.
I have tried setting it up on one of the projects and invoking its tasks from the other projects using the send_task method. It kind of works, but only in one direction.
How can I make a worker execute tasks whose definitions are not present in the project it is running in? In other words, how can I execute unregistered tasks from other projects?
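For reference, here is a minimal sketch of what I mean by the send_task approach, assuming all projects share the same broker and result backend; the broker URL, the queue name "project_b", and the task name "project_b.tasks.process_report" are hypothetical placeholders:

    # Project A: calling a task that is defined only in Project B.
    # send_task needs just the task's registered name, not its code,
    # so Project A never imports Project B's task module.
    from celery import Celery

    app = Celery(
        "project_a",
        broker="redis://broker:6379/0",   # same broker as Project B (assumed)
        backend="redis://broker:6379/1",  # needed to fetch results back
    )

    result = app.send_task(
        "project_b.tasks.process_report",  # name registered by Project B's worker
        args=[42],
        queue="project_b",                 # route to a worker that has the task code
    )
    print(result.get(timeout=30))

The point is that only the worker consuming the "project_b" queue needs the task implementation; the caller just needs the broker, the task name, and the queue.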

Related

Is it possible to utilize the same service worker for two projects?

I have an issue with a service worker. I have two different projects that are on the same server but in different folders, and I want to precache the files of project number 2 using my service worker (my service worker is already working on project number 1). My question is: is it possible to do this? Is there any other way I can attack this? Any help is very much appreciated.
In general, yes, as long as the service worker is hosted at a URL that is at the same level as (or "higher" than) the root of each of those projects. That would ensure that each project is within the scope of the service worker.
I'm assuming that one of the challenges you're asking about relates to creating a precache manifest within that service worker that contains build artifacts from both projects. There are a few different ways to tackle that, but I think the most straightforward would be to ensure that you always run the build process for each project at the same time, and then when you use Workbox's build tooling to create the precache manifest, you ensure that you grab all the assets that were output by each of the projects.
The specifics of configuring that build process depend on what you're currently using. You mention that there's a service worker (presumably using Workbox's precaching) already in place for the first project, so I think just using the same build setup, with tweaks to pick up the additional assets, would be easiest.

Docker deployment options

I'm wondering which options there are for Docker container deployment in production. Given that I have separate app and DB server containers, plus data-only containers, one holding deployables and another holding database files.
I just have one server for now, which I would like to "docker enable", but what is the best way to deploy to it (remotely would be the best option)?
I just want to hit a button and have some tool take care of stopping, starting, and exchanging all the needed Docker containers.
There is a myriad of tools (Fleet, Flocker, Docker Compose, etc.), and I'm overwhelmed by the choices.
The only thing I'm clear on is that I don't want to build images from code in the git repo. I would like to have Docker images act as wrappers for my releases. Have I grasped the Docker ideas from the wrong end?
My team recently built a Docker continuous deployment system and I thought I'd share it here since you seem to have the same questions we had. It pretty much does what you asked:
"hit a button and some tool will take care of stopping, starting, exchanging all needed docker containers"
We had the challenge that our Docker deployment scripts were getting too complex. Our containers depend on each other in various ways to make up the full system, so when we deployed, we'd often have dependency issues crop up.
We built a system called "Skopos" to resolve these issues. Skopos detects the current state of your running system and any changes being made, then automatically plans and deploys the update into production. It creates deployment plans dynamically for each deployment, based on a comparison of the current state and the desired state.
It can help you continuously deploy your application or service to production, using tags in your repository to automatically roll out the right version to the right platform, while removing the need for manual procedures or scripts.
It's free, check it out: http://datagridsys.com/getstarted/
You can import your system in 3 ways:
1. If you have a Docker Compose file, we can suck that in and start working with it.
2. If your app is running, we can scan it and then start working with it.
3. If you have neither, you can create a quick descriptor file in YAML and then we can understand your current state.
I think most people start their container journey using tools from Docker Toolbox. Those tools provide a good start and work as promised, but you'll end up wanting more. With these tools you are missing, for example, integrated overlay networking, DNS, load balancing, aggregated logging, VPN access, and a private image repository, which are crucial for most container workloads.
To solve these problems we started to develop Kontena, a Docker container orchestration platform. While Kontena works great for all types of businesses and may be used to run containerized workloads at any scale, it's best suited for start-ups and small to medium-sized businesses that require a worry-free and simple-to-use platform for running containerized workloads.
Kontena is an open source project and you can view it on GitHub.

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on this. When we first considered it, I intended to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can define machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect that better integration between RM and MTM/LM will be a future feature. In the interim, you could investigate using Desired State Configuration to handle having a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for applying DSC to both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2, it just requires a bit more work.

Using a single Celery with multiple Python daemons

I have "project" written in Python with multiples components: there are several distinct Pyramid and Twisted apps running.
We're looking at using Celery to offload some of the work from Pyramid and Twisted. Just to be clear, we're looking at one Celery instance / config, that handles the work for multiple Pyramid and Twisted apps.
All the info I found online covers multiple Celery for one or more apps; not one Celery for multiple apps. Celery will be doing 4-5 functions that are common to all these apps.
Are there any recommended strategies / common pitfalls for this sort of setup, or should we be generally fine with having a standalone celery_tasks package that all the different projects import ?
Celery is a distributed system. By definition, it doesn't matter where you call the tasks from, as long as they get executed by a worker and the caller is able to fetch the results.
You should be fine with both projects configured properly to send tasks and receive results. One shared module with common tasks is going to be just fine.
Shared workers should import only that module.
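A minimal sketch of that layout (the package name celery_tasks comes from the question; the broker URL and task bodies are placeholder assumptions):

    # celery_tasks/app.py -- one shared package that every project installs
    from celery import Celery

    app = Celery(
        "celery_tasks",
        broker="redis://broker:6379/0",   # assumed shared broker
        backend="redis://broker:6379/1",  # so callers can fetch results
    )

    @app.task
    def resize_image(path):
        ...  # one of the 4-5 functions common to all apps

    @app.task
    def send_notification(user_id, message):
        ...

Each Pyramid or Twisted app imports celery_tasks.app and calls, for example, resize_image.delay("/tmp/a.png"), while the workers are started separately with "celery -A celery_tasks.app worker", so they are the only processes that need the task implementations.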

Deploy a project from a local machine to a server

I'm using Fedora and I deploy Symfony projects on my local machine using virtual hosts. How can I deploy my projects to a public server so that others can view them from their machines?
You have several ways to deploy your symfony project. I would avoid FTP, svn up on prod, etc. So, here are two good ways.
The built-in deploy task
Symfony comes with a built-in deploy task that was commonly used when symfony 1.4 was released. I think it's less and less used now (because there are better tools).
The simplest way to deploy your website is to use the built-in project:deploy task. It uses SSH and rsync to connect and transfer the files from one computer to another one.
Using capifony, which uses Capistrano
Capistrano is an open source tool for running scripts on multiple servers. Its primary use is for easily deploying applications.
capifony is a deployment recipes collection that works with both symfony and Symfony2 applications.
This way is far better than the previous one because you can automate many scripts when deploying (like testing your code, building a fresh set of libs, upgrading the database, and sharing config files). But the most important benefit (from my POV) is that you can easily roll back a bad deployment. It's damn easy.