How to run Airflow CLI commands with Airflow on Kubernetes installed from the Helm stable/airflow chart?

I'm having difficulty running Airflow CLI commands on an Airflow instance I installed on Kubernetes from the Helm stable/airflow chart. For instance, when I exec into the scheduler pod and run airflow list, I get the following error:
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the KubernetesExecutor
OK, so I switch to the Celery executor. Same thing:
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the CeleryExecutor
So what is the correct way to run airflow CLI commands when running on K8s?

Make sure you are using bash. /home/airflow/.bashrc imports the environment variables from /home/airflow/airflow_env.sh to set up the connection. The following are some examples:
kubectl exec -ti airflow-scheduler-nnn-nnn -- /bin/bash
$ airflow list_dags
Or, with plain sh, you can source the env vars yourself:
kubectl exec -ti airflow-scheduler-nnn-nnn -- sh -c ". /home/airflow/airflow_env.sh && airflow list_dags"
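If you want to double-check that the connection settings were actually picked up before running anything else, something like this should work (the exact variable names depend on what the chart writes into airflow_env.sh, but the database connection usually ends up in an AIRFLOW__CORE__* variable):
kubectl exec -ti airflow-scheduler-nnn-nnn -- sh -c ". /home/airflow/airflow_env.sh && env | grep AIRFLOW__"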

Related

How to access the Airflow CLI when deployed with docker-compose

When I deploy Apache Airflow with docker-compose and run it via docker-compose run -d, the CLI container stops automatically, and I have no chance to exec into it. I am using the standard docker-compose file; I only deleted the profiles: debug option where the CLI service is defined.
I first run: docker-compose run
Then: docker exec -it airflow-airflow-cli-1 bash
It then says: container is not running.
Why has it stopped and how can I stop this behaviour?
Are you following the steps in the official guide?
Using the following steps from the guide:
Do curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.0/docker-compose.yaml'
In that folder, run docker compose up airflow-init
docker-compose up
docker exec -it <FOLDER_NAME>-airflow-scheduler-1 /bin/bash
You end up in the scheduler container, where you can use the Airflow CLI. (I ran airflow tasks test example_bash_operator runme_0 '2023-01-01' to verify.)
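Alternatively, you can skip the interactive shell and run single CLI commands through docker compose exec; a sketch, assuming the scheduler service keeps its default name airflow-scheduler from the official compose file:
docker compose exec airflow-scheduler airflow dags list
docker compose exec airflow-scheduler airflow tasks test example_bash_operator runme_0 '2023-01-01'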

How to put Nextcloud in Kubernetes in maintenance mode

I'm trying to migrate my Nextcloud instance to a Kubernetes cluster. I've successfully deployed a Nextcloud instance using openEBS-cStor storage. Before I can "kubectl cp" my old files to the cluster, I need to put Nextcloud in maintenance mode.
This is what I've tried so far:
Shell access to pod
Navigate to folder
Run the OCC command to put Nextcloud in maintenance mode
These are the commands I used for the OCC way:
kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash
su -c 'php occ maintenance:mode --on' www-data
# This account is currently not available.
Any tips on how to put Nextcloud in maintenance mode would be appreciated!
The su command fails because there is no shell associated with the www-data user.
What worked for me is explicitly specifying the shell in the su command:
su -s /bin/bash www-data -c "php occ maintenance:mode --on"
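If you prefer not to open an interactive shell at all, the same thing can be collapsed into a single kubectl exec call; a sketch assuming the pod name and namespace from above and the default web root /var/www/html of the official Nextcloud image:
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- su -s /bin/bash www-data -c "cd /var/www/html && php occ maintenance:mode --on"
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- su -s /bin/bash www-data -c "cd /var/www/html && php occ maintenance:mode --off"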

Start an interactive shell in a SQL Server 2019 container running in an AKS pod

I am using the mssql Docker image (Linux) for SQL Server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do this in a container deployed in an AKS cluster
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same: mcr.microsoft.com/mssql/server:2019-latest, untouched.
As David Maze mentioned in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently, you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker and logged in as root. However, in Kubernetes it is a completely different container, and kubectl exec has no flag for switching the container user the way docker exec --user does; the --user flag you passed refers to a kubeconfig user, hence the "auth info" error. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to create a new image with all the components and the configuration you need. For more information, look at the Pod lifecycle documentation.
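If the change you need root for has to survive pod restarts, a derived image is the usual route; a minimal sketch (the apt-get line is only a placeholder for whatever you actually need to do as root):
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
# placeholder: whatever you needed root for, e.g. installing a package
RUN apt-get update && apt-get install -y <package> && rm -rf /var/lib/apt/lists/*
# switch back to the unprivileged user the image normally runs as
USER mssql
Push that image to your registry and point the AKS deployment at it instead of the stock image.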

Running a Linux command against a pid inside a k8s pod

Is it possible to run a Linux command against a process that is running inside a Kubernetes pod? Example: I want to grab heap dumps of a Java process running inside a k8s pod. The pod comes with a minimal installation and does not have much disk space either, so I want to run the jmap command from my local machine (pointing to the k8s cluster). Thanks.
As I have already mentioned in the comments, what you can use is the kubectl exec command:
Execute a command in a container.
Usage:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
The kubectl exec command is a tool that allows you to inspect and debug your applications, by executing commands inside your containers.
If you need more details and examples regarding how to use it, I recommend these two guides:
Get a Shell to a Running Container: This page shows how to use kubectl exec to get a shell to a running container.
How does kubectl exec work?
kubectl exec did it. It allows you to run any command inside the container. For example:
kubectl exec <POD_NAME> -- jmap -dump:live,format=b,file=heapdump.bin <pid>
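The dump file still lands inside the container, so you will usually want to copy it out afterwards; kubectl cp can do that, for example (adjust the in-container path to wherever the process's working directory is):
kubectl cp <POD_NAME>:heapdump.bin ./heapdump.bin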

How to run a container in Kubernetes without creating Deployment or Job?

I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing.
The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address.
Currently, we are using this command:
kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
which works the first time you run it; however, after exiting the session, if you try to run the above again to connect to the database, we get:
Error from server: jobs.extensions "proxy-pgclient" already exists
Forcing the developer to delete the job with:
kubectl delete job proxy-pgclient
before they can run the command and connect again.
Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed?
Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes:
kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
There isn't a short kubectl command that will do exactly what you want. Instead, you can create a yaml/json file with your pod description and run kubectl create -f pod.yaml. Your pod can be set to never restart, so it will terminate once it exits.
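A minimal sketch of such a pod description, reusing the image and connection details from the question; with restartPolicy: Never the pod terminates as soon as psql exits, and you can delete it afterwards (or just keep using kubectl run --rm as in the accepted answer):
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pgclient
spec:
  # do not restart the container when psql exits
  restartPolicy: Never
  containers:
  - name: pgclient
    image: private-registry.com/pgclient
    # keep stdin open and allocate a TTY so the session is interactive
    stdin: true
    tty: true
    env:
    - name: PGPASSWORD
      value: foobar
    command: ["psql", "-h", "dbhost.local", "-p", "5432", "-U", "pg_admin", "-W", "postgres"]
Create it with kubectl create -f pod.yaml, attach with kubectl attach -it proxy-pgclient, and remove it with kubectl delete pod proxy-pgclient when you are done.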