Kubernetes Python client utils.create_from_yaml raises FailToCreateError when deploying Argo workflow - kubernetes

I need to deploy an Argo workflow using the Kubernetes Python client SDK.
My code is as follows, and 'quick-start-postgres.yaml' is the official deployment YAML file:
argo_yaml = 'quick-start-postgres.yaml'
res = utils.create_from_yaml(kube.api_client, argo_yaml, verbose=True, namespace="argo")
I tried to create the argo-server pod, the postgres pod, etc. All of the services and pods were created successfully except for argo-server, and the following error was shown:
error information here
I am not sure what happened here, so could anybody help me out? Thanks!
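For reference, create_from_yaml raises kubernetes.utils.FailToCreateError, and each entry in its api_exceptions list carries the API server's JSON status body, which usually names the exact object that failed. Below is a minimal sketch of decoding such a body with only the standard library; the helper name and the sample body are illustrative, not taken from the original post:

```python
import json

def summarize_create_error(body: str) -> str:
    # The API server returns a JSON "Status" object; pull out the
    # machine-readable reason and the human-readable message.
    info = json.loads(body)
    return f"{info.get('reason', 'Unknown')}: {info.get('message', '')}"

# Illustrative body, shaped like a typical create-conflict response:
example_body = json.dumps({
    "kind": "Status",
    "status": "Failure",
    "reason": "AlreadyExists",
    "message": 'services "argo-server" already exists',
})
print(summarize_create_error(example_body))
```

In real code you would wrap the create_from_yaml call in an except FailToCreateError as err: block and apply a helper like this to each exception's .body in err.api_exceptions.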

Related

Issues running kubectl from Jenkins

I have deployed Jenkins on a Kubernetes cluster using a Helm chart by following this guide:
https://octopus.com/blog/jenkins-helm-install-guide
I have the pods and services running in the cluster. I was trying to create a pipeline that runs some kubectl commands, but it fails with the error below:
java.io.IOException: error=2, No such file or directory
Caused: java.io.IOException: Cannot run program "kubectl": error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)
I thought it had something to do with the Kubernetes CLI plugin for Jenkins and raised an issue here:
https://github.com/jenkinsci/kubernetes-cli-plugin/issues/108
I was advised to install kubectl inside the Jenkins pod.
I already have the Jenkins pod running (deployed using the Helm chart). I have seen options for including the kubectl binary as part of the Dockerfile, but since I used the Helm chart, I am not sure I have the luxury of editing the image and redeploying the pod to add kubectl.
Can you please help with your input on resolving this? Are there any steps or documentation that explain how to install kubectl in a running pod? I really appreciate your input, as this issue has stopped one of my critical projects. Thanks in advance.
I tried setting the role binding for the Jenkins service account as mentioned here:
Kubernetes commands are not running inside the Jenkins container
I haven't installed kubectl inside the pod yet. Please help.
Jenkins pipeline:
kubeconfig(credentialsId: 'kube-config', serverUrl: '')
sh 'kubectl get all --all-namespaces'
(attached the pod/service details for Jenkins)
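The java.io.IOException: error=2 above is the operating system reporting that no kubectl executable exists on the container's PATH, which is the same lookup Python's shutil.which performs. A small stdlib sketch (the helper name is illustrative) you could run inside the Jenkins pod to confirm the diagnosis before installing kubectl:

```python
import shutil

def has_executable(name: str) -> bool:
    # shutil.which searches PATH the same way the shell (and Java's
    # ProcessBuilder) would when resolving a bare command name.
    return shutil.which(name) is not None

print(has_executable("sh"))       # a shell is present in almost any Linux image
print(has_executable("kubectl"))  # absent from the stock Jenkins image per the error above
```

If this returns False for kubectl, the fix is to get the binary onto the PATH of the container the pipeline step runs in, e.g. by adding a container that ships kubectl to the agent pod template.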

ReplicaSets are not being created by the MongoDB community operator

I am creating a MongoDB cluster using the following documentation:
https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md
I am trying to create the custom resource as follows:
kubectl apply -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml
It says the resource was created successfully, but I cannot see any ReplicaSet created for it, and no pods are being created either. I am using minikube to run the resources.
I found the issue once I learned how to check the status of a custom "kind" in Kubernetes.
I checked the mongodbcommunity kind as follows, and it reported a failed phase:
kubectl get mongodbcommunity
Then I ran describe on the mongodbcommunity resource:
kubectl describe mongodbcommunity
It showed an error that the secrets were not found. I had changed the credentials in the custom resource but had not updated the secrets, so it was failing.
In the dashboard nothing was happening, and even while applying the YAML no errors showed up; it just reported "created".
At first I did not know how to debug this, but I figured out the way above, and after correcting the username the MongoDB cluster is working fine now.
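The lesson above generalizes: a custom resource's status block usually states exactly why reconciliation stalled. Here is a hedged, stdlib-only sketch of pulling the failure message out of a CR status dict; the helper name and sample status are illustrative, and with the Python client you would obtain the dict from CustomObjectsApi.get_namespaced_custom_object:

```python
def cr_failure_message(cr: dict) -> str:
    # Operators commonly publish a phase plus a human-readable message
    # under .status; fall back to a generic hint when either is absent.
    status = cr.get("status", {})
    phase = status.get("phase", "Unknown")
    message = status.get("message", "no message reported")
    return f"phase={phase}: {message}"

# Illustrative CR snippet resembling a failed MongoDBCommunity resource:
example_cr = {
    "kind": "MongoDBCommunity",
    "status": {"phase": "Failed", "message": "secret not found"},
}
print(cr_failure_message(example_cr))
```

This is essentially what kubectl describe surfaces in its Status section, just in a form you can branch on in code.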

How to pull image from private Docker registry in KubernetesPodOperator of Google Cloud Composer?

I'm trying to run a task in an environment built from an image in a private Google Container Registry through the KubernetesPodOperator of the Google Cloud Composer.
The Container Registry and Cloud Composer instances are under the same project.
My code is below.
import datetime
import airflow
from airflow.contrib.operators import kubernetes_pod_operator

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

# Create the Airflow DAG for the pipeline
with airflow.DAG(
        'my_dag',
        schedule_interval=datetime.timedelta(days=1),
        start_date=YESTERDAY) as dag:
    my_task = kubernetes_pod_operator.KubernetesPodOperator(
        task_id='my_task',
        name='my_task',
        cmds=['echo 0'],
        namespace='default',
        image=f'gcr.io/<my_private_repository>/<my_image>:latest')
The task fails and I get the following error message in the logs in the Airflow UI and in the logs folder in the storage bucket.
[2020-09-21 08:39:12,675] {taskinstance.py:1147} ERROR - Pod Launching failed: Pod returned a failure: failed
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/contrib/operators/kubernetes_pod_operator.py", line 260, in execute
'Pod returned a failure: {state}'.format(state=final_state)
airflow.exceptions.AirflowException: Pod returned a failure: failed
This is not very informative...
Any idea what I could be doing wrong?
Or anywhere I can find more informative log messages?
Thank you very much!
In general, the way to start troubleshooting GCP Composer after a DAG run fails is explained well in the dedicated chapter of the GCP documentation.
Moving to KubernetesPodOperator-specific issues, the investigation might consist of:
Verifying the status of the particular task for the corresponding DAG file;
Inspecting the task's logs and events (logs can also be found in the Composer environment's storage bucket);
For any Kubernetes resource/object errors, checking the relevant GKE cluster's log and event journals.
Further analyzing the error context and the kubernetes_pod_operator.py source code, I assume this issue occurs due to a Pod launch problem on an Airflow worker GKE node, ending with the Pod returned a failure: {state} message whenever the Pod execution is not successful.
Personally, I prefer to verify that the image runs correctly before executing the Airflow task in a Kubernetes Pod. Based on the task command provided, you can verify the Pod launch process by connecting to the GKE cluster and re-creating the kubernetes_pod_operator.KubernetesPodOperator definition as a kubectl command:
kubectl run test-app --image=eu.gcr.io/<Project_ID>/image --command -- "/bin/sh" "-c" "echo 0"
This simplifies image validation, and you will also be able to take a closer look at the Pod's logs or event records:
kubectl describe po test-app
Or
kubectl logs test-app
If you want to pull an image from a private registry in KubernetesPodOperator, you should create a Secret in Kubernetes that contains a service account (SA) with permission to pull images (read-only access is enough for pulling).
Then reference that Secret in the KubernetesPodOperator via the image_pull_secrets argument:
my_task = kubernetes_pod_operator.KubernetesPodOperator(
    task_id='my_task',
    name='my_task',
    cmds=['echo 0'],
    namespace='default',
    image=f'gcr.io/<my_private_repository>/<my_image>:latest',
    image_pull_secrets='your_secret_name')
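For completeness, the Secret referenced by image_pull_secrets is a kubernetes.io/dockerconfigjson Secret whose payload is a base64-encoded Docker config. A stdlib-only sketch of how that payload is assembled (the registry, username, and key contents here are placeholders; "_json_key" is the literal username GCR expects for service-account JSON keys):

```python
import base64
import json

def docker_config_json(registry: str, username: str, password: str) -> str:
    # kubectl create secret docker-registry builds exactly this structure:
    # an "auths" map keyed by registry, with a base64 "auth" of "user:pass".
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {"auths": {registry: {"username": username,
                                   "password": password,
                                   "auth": auth}}}
    return json.dumps(config)

payload = docker_config_json("gcr.io", "_json_key", "<service-account-key>")
# The Secret's .data[".dockerconfigjson"] field holds this payload, base64-encoded:
encoded = base64.b64encode(payload.encode()).decode()
```

In practice you would create the Secret with kubectl create secret docker-registry and only refer to its name from the operator, but seeing the payload clarifies what that command generates.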

No YAML Files in K8s Deployment

TL;DR: My understanding from learning about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with zero YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not provide any YAML settings?
Longer:
I have a dockerized spring app that I deployed to a google cloud cluster I created via the UI.
It had zero YAML files, so my expectation was that kubectl deploy would fail; however, it succeeded, and my stateless app is up there chugging away.
How does that work?
Well, GCP created them for you in the background. I assume you pushed your Docker image (or had CI push it) to the cluster and from there just did a few clicks, right? You can do the same in an OpenShift environment; in the background a YAML file gets generated. If you edit the pod in the UI, you will see that YAML file.
As #Volodymyr Bilyachat said above, you can create a deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the YAML for the deployment you created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml #This will output your yaml file into name.yaml file
You can run your containers/pods using plain commands:
kubectl run podname --image=name
As you said, zero YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, so someone else can create the infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml

Callback-method-like options in the Kubernetes API

I have been working with Kubernetes REST API calls to create deployments and services using the Python client. The scenario is that I have to create a deployment, and when the pods become ready I have to tell users that their deployment is ready, using some callback method. I can achieve this with the CLI, like
watch kubectl describe pod <pod-name>
and looking at the pod status.
But how can I implement a callback function that is invoked when the pod status changes, e.g. from ContainerCreating -> Ready?
Any help would be appreciated.
There is a Python Kubernetes Client Library
pip install kubernetes
import os
from kubernetes import client, config

def get_pods(name, exact=False, namespace='default'):
    # TODO check if this could be created once in an object.
    config.load_kube_config(os.path.join(os.environ["HOME"], '.kube/config'))
    v1 = client.CoreV1Api()
    pod_list = v1.list_namespaced_pod(namespace)
    if exact:
        relevant_pods = [pod for pod in pod_list.items if name == pod.metadata.name]
    else:
        relevant_pods = [pod for pod in pod_list.items if name in pod.metadata.name]
    return relevant_pods
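Building on the client snippet above: to get an actual callback on status transitions, you can keep a small phase tracker and feed it pod events. The tracker below is plain stdlib and the names are illustrative; in a live cluster you would drive it from kubernetes.watch.Watch().stream(v1.list_namespaced_pod, namespace), passing each event's pod name and pod.status.phase:

```python
class PhaseTracker:
    """Invoke a callback whenever a pod's reported phase changes."""

    def __init__(self, callback):
        self._callback = callback
        self._phases = {}  # pod name -> last seen phase

    def observe(self, pod_name: str, phase: str) -> None:
        previous = self._phases.get(pod_name)
        if phase != previous:
            # Record the new phase and notify with (name, old, new).
            self._phases[pod_name] = phase
            self._callback(pod_name, previous, phase)

transitions = []
tracker = PhaseTracker(lambda name, old, new: transitions.append((name, old, new)))
tracker.observe("web-0", "Pending")
tracker.observe("web-0", "Pending")   # duplicate event: no callback fired
tracker.observe("web-0", "Running")
print(transitions)
```

Filtering out duplicates matters because a watch stream re-delivers MODIFIED events for changes unrelated to phase, and you typically only want to notify users on real transitions.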
You can use Airflow for most use cases
https://kubernetes.io/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/
I wrote a library that reacts to Kubernetes events and more.
It is reactive, meaning it is directly opposed to the declarative style of Helm, and it is a debuggable Python deployment tool intended to replace Helm.
https://github.com/hamshif/Wielder
pip install wielder
To use the open-source examples:
https://github.com/hamshif/wield-services
and in tandem with Apache Airflow
https://github.com/hamshif/dags/tree/6daf6313d35824b58efa7f61f90e30a169946532
I guess you could watch the events in the namespace where you are deploying the application and react to them.
Events such as the ones you see at the end of kubectl describe pod are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use kubectl get events.
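To react to those events programmatically rather than via kubectl, the same filtering can be done on the event objects the Python client returns. A stdlib-only sketch of selecting events for one object; the field names mirror the Event API's involvedObject/reason/message, while the sample data is purely illustrative:

```python
def events_for(events: list, name: str) -> list:
    # Keep only events whose involvedObject matches the given name,
    # rendered the way `kubectl describe` shows them: "Reason: message".
    return [f"{e['reason']}: {e['message']}"
            for e in events
            if e.get("involvedObject", {}).get("name") == name]

# Illustrative events resembling items from v1.list_namespaced_event(...):
sample = [
    {"involvedObject": {"name": "web-0"}, "reason": "Scheduled",
     "message": "Successfully assigned default/web-0 to node-1"},
    {"involvedObject": {"name": "db-0"}, "reason": "Pulled",
     "message": "Container image already present on machine"},
]
print(events_for(sample, "web-0"))
```

With a real cluster you would pass pod dictionaries obtained from the CoreV1Api event listing (converted via the client's to_dict()), which makes this the programmatic equivalent of kubectl get events filtered to one pod.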