How to deploy statefulset in Kubernetes Federation? - kubernetes

It seems like the federation API currently doesn't support StatefulSet deployment, but I would like to know whether it is possible to deploy StatefulSets in Kubernetes Federation in the situation described below.
For example, with pod-0, pod-1, and pod-2 running on cluster A, and pod-3, pod-4, and pod-5 running on cluster B?
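For context, Federation v1 did let you control how replicas of a *stateless* workload were split across clusters via the replica-set-preferences annotation. Below is a hedged sketch of that placement, assuming clusters are registered as cluster-a and cluster-b; note this does not make StatefulSets federated, it only shows the replica-splitting mechanism the question is after:

```yaml
# Sketch only: Federation v1 replica placement for a federated Deployment.
# The cluster names, app name, and image are assumptions, not from the question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "cluster-a": {"minReplicas": 3, "weight": 1},
          "cluster-b": {"minReplicas": 3, "weight": 1}
        }
      }
spec:
  replicas: 6             # 3 scheduled onto cluster A, 3 onto cluster B
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx    # placeholder image
```

Since StatefulSets need stable identities and per-pod storage, this stateless mechanism does not carry over to them, which is why the federation API did not support them.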

Related

mysql router in kubernetes

There are a few deployments in different namespaces in our application. MySQL is outside of the cluster, and we are planning to use MySQL Router. We deployed MySQL Router as a service, and the deployments are able to connect using the Kubernetes service URL. The question is: should MySQL Router be deployed as a sidecar (is that even possible?) in each of the deployments, or run as another deployment within the cluster? If it runs as a separate deployment, how will the router deployment handle the increase in requests when the app deployments are scaled up?
You can deploy MySQL Router as a Deployment with a Service in front of it.
Then, when your application scales up, you can also scale the router Deployment with an HPA (Horizontal Pod Autoscaler) based on the resource you care about: CPU, memory, requests, etc.
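The scaling idea above can be sketched with a HorizontalPodAutoscaler; the Deployment name mysql-router and the thresholds here are assumptions, not values from the question:

```yaml
# Sketch: autoscale a hypothetical "mysql-router" Deployment on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mysql-router
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mysql-router       # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Because the app deployments only talk to the router's Service URL, the HPA can add or remove router pods behind that Service without any change on the application side.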

Using Nginx Ingress to implement Blue-Green Deployment between two AKS clusters

Please, I wanted to find out if it is possible to implement a blue-green deployment for my application on separate AKS clusters using the NGINX ingress controller.
I have the current application (blue) running on one AKS cluster, and I have a new AKS cluster (green, with a new Kubernetes version) that I want to migrate my workloads to.
Is it possible to implement the blue-green deployment strategy between these two AKS clusters using the NGINX ingress controller? If so, can someone give me a suggestion on how this can be implemented? Thank you.

Prometheus: Better Option to monitor external K8s Cluster

I have two Kubernetes clusters that do not talk to one another in any way. The idea is to maintain one Prometheus instance (in a third cluster) that can scrape endpoints from both clusters.
I created a service account in each cluster, gave it a ClusterRole and ClusterRoleBinding, and exported its secret as a YAML file. I then imported the same secret into the third cluster, where I have Prometheus running. Using these mounted secrets, I was able to pull data from all pods in clusters 1 and 2.
Are there any better options to achieve this use case?
I am, in a way, transferring secrets from one cluster to another to get the same ca.crt and token.
I think it is not safe to share secrets between clusters.
What about Prometheus federation? One Prometheus instance can export some of its data, which can then be consumed by an external Prometheus instance.
For example, a cluster scheduler running multiple services might expose resource usage information (like memory and CPU usage) about service instances running on the cluster. On the other hand, a service running on that cluster will only expose application-specific service metrics. Often, these two sets of metrics are scraped by separate Prometheus servers.
Or deploy some exporter that can be scraped by the external Prometheus, e.g. https://github.com/kubernetes/kube-state-metrics (though it does not provide CPU/memory usage of pods).
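The federation approach could look like the following scrape config on the central Prometheus; the target hostnames are placeholders for however the per-cluster Prometheus instances are exposed:

```yaml
# Sketch: a central Prometheus pulling pre-scraped series from two cluster-local
# Prometheus servers via their /federate endpoint. Hostnames are assumptions.
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true        # keep the labels set by the source Prometheus
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'       # selector for which series to federate
    static_configs:
      - targets:
          - 'prometheus.cluster-a.example.com:9090'
          - 'prometheus.cluster-b.example.com:9090'
```

With this setup, only the two in-cluster Prometheus endpoints need to be reachable from the third cluster, so no service-account secrets have to be copied between clusters.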

How to talk to Kubernetes CRD service within a pod in the same k8s cluster?

I installed the Spark on K8s operator in my K8s cluster, and I have an app running within the same cluster. I'd like to enable this app to talk to the SparkApplication CRD service. Could you tell me what endpoint I should use? (Or, what is the service endpoint within a K8s cluster?)
It's clearly documented here. Basically, the operator creates a NodePort-type Service, and the documentation also notes that it can create an Ingress to access the UI. For example:
...
status:
  sparkApplicationId: spark-5f4ba921c85ff3f1cb04bef324f9154c9
  applicationState:
    state: COMPLETED
  completionTime: 2018-02-20T23:33:55Z
  driverInfo:
    podName: spark-pi-83ba921c85ff3f1cb04bef324f9154c9-driver
    webUIAddress: 35.192.234.248:31064
    webUIPort: 31064
    webUIServiceName: spark-pi-2402118027-ui-svc
    webUIIngressName: spark-pi-ui-ingress
    webUIIngressAddress: spark-pi.ingress.cluster.com
In this case, you could use 35.192.234.248:31064 to access your UI. Internally within the K8s cluster, you could use spark-pi-2402118027-ui-svc.<namespace>.svc.cluster.local or simply spark-pi-2402118027-ui-svc if you are within the same namespace.

K8s cluster working with Openshift?

I know that OpenShift uses some Kubernetes components to orchestrate pods. Is there any way Kubernetes and OpenShift can be integrated, meaning I could see the pods deployed with Kubernetes in the OpenShift UI, and vice versa?
I followed the "OpenShift as a pod in Kubernetes" documentation, but I got stuck at Step 4: I was unable to find the Kubernetes account key in the GCE cluster (/srv/kubernetes/server.key).
Or is there any way K8s nodes can join an OpenShift cluster?