Not able to connect to RabbitMQ from Kubernetes cron jobs

I am using RabbitMQ on a remote host (cloudamqp.com) and I created a CronJob on Kubernetes. On my local machine the job works fine, and the Kubernetes CronJob is scheduled perfectly well, but inside the Job the RabbitMQ connection URL falls back to 127.0.0.1:5672 and I get an error.
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I checked the logs of the cron job and my connection URL looks perfectly fine, but when pika tries to connect to the host it falls back to 127.0.0.1:5672, and since the cron pod is not running a RabbitMQ server the connection is refused.
CronJob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
            - name: scrape-news
              image: SCRAPER_IMAGE
              imagePullPolicy: Always
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
RabbitMQ Connection
print(env.RABBIT_URL)
self.params = pika.URLParameters(env.RABBIT_URL)
self.connection = pika.BlockingConnection(parameters=self.params)
self.channel = self.connection.channel() # start a channel
The connection URL is exactly the same and works on my local setup.

Based on your CronJob spec, you are not passing the RABBIT_URL environment variable.
Your code expects this variable to be set; since it is not, the client is likely falling back to localhost.
self.params = pika.URLParameters(env.RABBIT_URL)
You probably want something like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
            - name: scrape-news
              image: SCRAPER_IMAGE
              imagePullPolicy: Always
              env:
                - name: RABBIT_URL
                  value: cloudamqp.com
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
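Note that pika.URLParameters expects a full AMQP URL (scheme, credentials, host, and vhost), not just a hostname, and a URL containing credentials is usually better kept in a Secret than inline in the manifest. A minimal sketch of the env entry, assuming a Secret named rabbit-credentials with a key rabbit-url (both names are hypothetical, not from the question):
          containers:
            - name: scrape-news
              image: SCRAPER_IMAGE
              env:
                - name: RABBIT_URL
                  valueFrom:
                    secretKeyRef:
                      name: rabbit-credentials   # hypothetical Secret holding the full amqps:// URL
                      key: rabbit-url            # hypothetical key inside that Secret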

Related

Kubernetes cronjob doesn't use env variables

I have a CronJob in Kubernetes that does not use the URLs from its environment variables to reach another API and fetch the information it needs; it behaves as if it were still using the URLs from the appsettings/launchsettings of the console application project and returns errors.
When I executed the CronJob, it returned an error such as: "Connection refused (20.210.70.20:80)".
My CronJob:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: productintegration-cronjob
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: productintegration-cronjob
              image: reconhece.azurecr.io/ms-product-integration:9399
              command:
                - /bin/sh
                - -c
                - echo dotnet - $(which dotnet);
                  echo Running Product Integration;
                  /usr/bin/dotnet /app/MsProductIntegration.dll
              env:
                - name: DatabaseProducts
                  value: "http://catalog-api:8097/api/Product/hash/{0}/{1}"
                - name: DatabaseCategory
                  value: "http://catalog-api:8097/api/Category"
My catalog-api deployment, which my CronJob needs to reach:
`
apiVersion: apps/v1
kind: Deployment
metadata:
name: catalog-api-deployment
labels:
app: catalog-api
spec:
replicas: 1
selector:
matchLabels:
app: catalog-api
template:
metadata:
labels:
app: catalog-api
spec:
containers:
- name: catalog-api
image: test.azurecr.io/ms-catalog-api:6973
ports:
- containerPort: 80
env:
- name: DatabaseSettings__ConnectionString
value: "String Connection" - I removed
- name: DatabaseSettings__DatabaseName
value: "DbCatalog"
``
The minikube cluster itself works fine.
How do I fix this error?
I already changed the port on my catalog-api, without success.
I also tried changing the name of the env variable, but without success.

Kubernetes Cronjobs are not removed

I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - somefailure
          restartPolicy: OnFailure
I've added "somefailure" to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes keeps up to 10 jobs around. When I add ttlSecondsAfterFinished: 1, the finished job is removed after 1 second, but the other parameters are completely ignored.
So I wonder whether I need to enable something in minikube, whether these parameters are deprecated, or what else could be the reason this doesn't work. Any idea?
It seems to be a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
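As a stop-gap, the ttlSecondsAfterFinished field that the question already mentions can be set on the Job template so each finished Job is garbage-collected after a delay, independent of the history limits. A minimal sketch, with the 300-second value as an arbitrary example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300   # example value: remove each finished Job after 5 minutes
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args: ["/bin/sh", "-c", "echo hello"]
          restartPolicy: OnFailure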

What APIs are required along with OCP CronJob

I have ConfigMap, ImageStream, BuildConfig, and DeploymentConfig resources that successfully deploy my app and launch the requested number of pods. But now I want to use a CronJob.
Do I replace the DeploymentConfig completely? Because the idea is to launch a new pod according to a cron expression that is passed into the CronJob API.
Yes, why not 🤷, you can reuse the template 📓 section of your DeploymentConfig. For example:
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "frontend"
spec:
template:
metadata:
labels:
name: "frontend"
spec:
containers:
- name: "helloworld"
image: "openshift/origin-ruby-sample"
ports:
- containerPort: 8080
protocol: "TCP"
replicas: 5
triggers:
- type: "ConfigChange"
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- "helloworld"
from:
kind: "ImageStreamTag"
name: "origin-ruby-sample:latest"
strategy:
type: "Rolling"
paused: false
revisionHistoryLimit: 2
minReadySeconds: 0
would just become something like this 📃🏃:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: frontend
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            name: "frontend"
        spec:
          containers:
            - name: "helloworld"
              image: "openshift/origin-ruby-sample"
              ports:
                - containerPort: 8080
                  protocol: "TCP"
          restartPolicy: OnFailure
✌️
Do I replace the DeploymentConfig completely? Because the idea is to launch a new pod according to a cron expression that is passed into the CronJob API.
I don't think so. Basically, "DeploymentConfig" is for running a "Pod", while "CronJob" is for running a one-off "Pod" based on a "Job", so their use cases differ.
For example, "DeploymentConfig" has a feature that triggers a deployment on image changes through an "ImageStream"; this requires the target pod to keep running, not be a one-off, and it's not available to "CronJob".
But if you just want to use a "CronJob" for the pod deployment instead of a "DeploymentConfig", without the image-triggering feature, you should also consider how to reference the "ImageStream" from the "CronJob". Because "CronJob" is a native Kubernetes resource, it cannot use an "ImageStream" directly.
For that, add the "alpha.image.policy.openshift.io/resolve-names: '*'" annotation to the "CronJob" as follows. Refer to Using Image Streams with Kubernetes Resources for more details.
e.g.:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            alpha.image.policy.openshift.io/resolve-names: '*'   # <-- you need this to use the ImageStream
          labels:
            parent: "cronjobpi"
        spec:
          containers:
            - name: pi
              image: "<ImageStream name>"
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: OnFailure
But if you don't care about using an "ImageStream", you can use the same container template for the pod in both the "DeploymentConfig" and the "CronJob", as Rico mentioned. I hope this helps you. :)

kubernetes cronjob doesn't work correctly when schedule values are customized

I am using Rancher 2.3.3.
When I configure the cronjob with schedule values like @hourly and @daily, it works fine,
but when I configure it with values like "6 1 * * *", it doesn't work.
OS times are in sync across all cluster nodes.
My config file
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: samplename
              image: myimageaddress:1.0
          restartPolicy: OnFailure
I found the root cause:
the scheduler container has a different timezone, so the job runs with a few hours of delay.
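On newer clusters there is also a first-class fix: the CronJob spec has a timeZone field (stable in batch/v1 as of roughly Kubernetes 1.27), so the schedule no longer depends on the controller's local time. A minimal sketch, assuming a cluster recent enough to support the field; the zone name is only an example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  timeZone: "Europe/Amsterdam"   # example IANA zone; use the zone the schedule is meant for
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: samplename
              image: myimageaddress:1.0
          restartPolicy: OnFailure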

Kubernetes Cron Jobs - Run multiple pods for a cron job

Our requirement is that we need to do batch processing every 3 hours, but a single process cannot handle the workload, so we have to run multiple pods for the same cron job. Is there any way to do that?
Thank you.
You can set parallelism: <num_of_pods> on cronjob.spec.jobTemplate.spec and it will run that many pods at the same time.
The following is an example of a cronjob which runs 3 nginx pods every minute.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    run: cron1
  name: cron1
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      parallelism: 3
      template:
        metadata:
          labels:
            run: cron1
        spec:
          containers:
            - image: nginx
              name: cron1
              resources: {}
          restartPolicy: OnFailure
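If each run also needs to complete a fixed amount of work rather than simply start N pods, parallelism can be combined with completions on the same jobTemplate.spec. A minimal sketch under the question's every-3-hours requirement; the name, image, and the value 6 are placeholders, not from the question:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-cron            # hypothetical name
spec:
  schedule: "0 */3 * * *"     # every 3 hours, as in the question
  jobTemplate:
    spec:
      parallelism: 3          # up to 3 pods run at the same time
      completions: 6          # hypothetical: the Job is done once 6 pods finish successfully
      template:
        spec:
          containers:
            - name: batch-worker   # hypothetical container name
              image: nginx         # placeholder image, matching the answer's example
          restartPolicy: OnFailure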