Azure AKS node pool is not scaling - kubernetes

I have created an Azure AKS cluster with the autoscale feature enabled, by following the link.
I deployed a Django, Celery and RabbitMQ based application and set up KEDA to scale the pods based on RabbitMQ queue length. KEDA is able to scale the pods, but the nodes in the node pool are not scaling.
Can someone help me with this?
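For context, the KEDA side of this looks roughly like the sketch below (KEDA v2 syntax; the names celery-worker, celery and RABBITMQ_URL are illustrative, not the exact manifest):

```bash
# Sketch only: a KEDA v2 ScaledObject for a RabbitMQ-backed Celery worker.
# The names (celery-worker, celery, RABBITMQ_URL) are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: celery-worker-scaler
spec:
  scaleTargetRef:
    name: celery-worker            # Deployment running the Celery workers
  minReplicaCount: 1
  maxReplicaCount: 30              # more replicas than the nodes can hold -> pods go Pending
  triggers:
    - type: rabbitmq
      metadata:
        queueName: celery
        mode: QueueLength
        value: "20"                # target messages per replica
        hostFromEnv: RABBITMQ_URL  # amqp:// connection string exposed on the workload
EOF
```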

The following is the answer I got from the Azure support team on this:
"Unfortunately, the autoscaling feature on virtual machine availability sets is not natively supported by Azure Kubernetes for now. We have the VMSS autoscaler feature, and that too is in the preview phase."
They suggested focusing on manual scaling for now.
They also pointed me to a GitHub repo for reference, but Azure won't provide any support for it.
It was mentioned as follows:
"I have done some quick research; please find the GitHub link where we have the procedure for autoscaling of VM availability sets. Kindly go through the standard deployment section in the link. This is not directly supported by us, and if you have any issues or concerns you can approach GitHub for the same."
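For reference, once a node pool is backed by a virtual machine scale set, the cluster autoscaler can be enabled per node pool with the Azure CLI; a rough sketch (the resource group, cluster and pool names are placeholders, and the feature was still in preview when the support reply above was written):

```bash
# Sketch, assuming a VMSS-backed node pool and a recent az CLI.
# myResourceGroup, myAKSCluster and nodepool1 are placeholder names.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```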

Related

Issues with setting up Autodiscovery Auto-Configuration for Kubernetes integration in Datadog

I have recently updated Datadog to use a Cluster Agent. I am currently trying to set up the Kubernetes integration, which should be auto-discovered through auto_conf.yaml. But for some reason, after updating to the Cluster Agent we lost metrics from the Kubernetes integration. My guess was to set it up as a cluster check by adding cluster_check: true in the auto_conf.yaml file, but that did not work. I currently have it set up only on the node agents and configured just as described in this documentation. Is there something else that needs to be done to set up the Kubernetes integration with a Cluster Agent?
Solved the issue by adding kubernetes_state_core via the following manifests. This uses kube-state-metrics v2.0.
https://github.com/DataDog/datadog-agent/tree/main/Dockerfiles/manifests/kubernetes_state_core
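Applying those manifests boils down to something like the following sketch (the directory path is the one linked above; the Helm value in the comment is, as far as I know, the chart's equivalent toggle):

```bash
# Sketch: apply the kubernetes_state_core manifests from the datadog-agent repo.
git clone https://github.com/DataDog/datadog-agent.git
kubectl apply -f datadog-agent/Dockerfiles/manifests/kubernetes_state_core/
# If you deploy with Helm instead, the equivalent is (as far as I know) setting
# datadog.kubeStateMetricsCore.enabled=true in the chart values.
```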

How to create a multi-master cluster in Azure

I need to create an Azure Kubernetes Service cluster with 3 master nodes. So far I have worked with single-master clusters; now I need to create a multi-master cluster for production environments.
Is there a way to create an AKS cluster with multiple control planes? Thanks in advance.
As Soundarya mentioned in the comment, the solution can be found here:
As your question is about AKS (a managed service from Azure), with HA-enabled clusters you already have more than one master running. As AKS is a managed offering, you will not have visibility into or control over this.
Can I get a way to create an AKS with multiple control planes?
For this you can check the AKS Uptime SLA, which guarantees 99.95% availability of the Kubernetes API server endpoint for clusters.
Please check this document for more details.
If you are using AKS Engine (unmanaged service), then you can specify the number of masters. Please refer to this document for more details.
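A rough sketch of both options (the CLI flags and apimodel fields here are from memory, so double-check against the linked docs; resource names and dnsPrefix are placeholders):

```bash
# Managed AKS: you never get three visible masters, but you can add the Uptime SLA.
az aks create --resource-group myResourceGroup --name myAKSCluster --uptime-sla

# AKS Engine (unmanaged): the master count lives in the API model, e.g.
#   "masterProfile": { "count": 3, "dnsPrefix": "mycluster", "vmSize": "Standard_D2_v3" }
aks-engine generate kubernetes.json                      # kubernetes.json = your apimodel
aks-engine deploy --api-model kubernetes.json --location eastus
```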

Kubernetes dashboard via GCP

Sorry to bother you, but I am having a serious issue with my online DevOps learning.
I am taking a DevOps course and we are using Google Cloud Platform as the cloud. When I create my cluster with gcloud container clusters create xxx and then run the describe command, gcloud container clusters describe xxx, it works, but I get no information regarding the login and password for Kubernetes.
That is one of the problems.
After creating the cluster, I got no Kubernetes dashboard link from the command kubectl cluster-info. Normally I should have a Kubernetes dashboard to manage my app. Instead of the Kubernetes dashboard, there is something called Kubernetes system metrics.
Can somebody help me fix this problem, ideally someone who is used to working on GCP?
Best regards
Can you please go through the Google Cloud Kubernetes dashboards docs [1]?
I'm able to see the Kubernetes dashboard in my console, so I don't know why you are not able to see it. I also checked the Google Cloud Status Dashboard [2], and there is no service outage on Kubernetes; it's working fine. So kindly go through those Kubernetes docs; they will give you a better understanding of working with Kubernetes on GCP.
If you're still facing any issue or abnormal behavior, please go to the public issue tracker [3] or to support in the GCP console and raise a ticket.
[1]. https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards
[2]. https://status.cloud.google.com/
[3]. https://cloud.google.com/support/docs/issue-trackers#trackers-list
When you visit the GCP dashboard docs, you should see a red warning at the top of the page, saying:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. As an alternative, use the Cloud Console dashboards described in this guide.
Below that you read:
Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API. You will still be able to install Kubernetes Dashboard manually by following the instructions in the project's repository. For clusters in which you have already deployed the add-on, it will continue to function but you will need to manually apply any updates and security patches that are released.
To deploy it manually, follow the instructions in the k8s dashboard GitHub repo.
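For completeness, the manual install from the dashboard repo boils down to something like this (the version is pinned here only as an example; check the repo's README for the current one):

```bash
# Sketch: install the open source dashboard manually, then reach it through kubectl proxy.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy
# Then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```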

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
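A rough outline of the Kubespray workflow (the repo now lives under kubernetes-sigs, and inventory file names vary by release, so treat this as a sketch):

```bash
# Sketch: bring up a cluster with Kubespray from a machine that can SSH to the nodes.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt                  # Ansible and friends
cp -rfp inventory/sample inventory/mycluster     # copy the sample inventory
# edit inventory/mycluster/hosts.yaml (or inventory.ini) with your Ubuntu nodes
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```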
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload on Kubernetes. If you are interested in a truly self-hosted approach (in a custom environment), refer to this article.
Hope this helps.
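If you go the kubeadm route mentioned above, the HA setup boils down to something like this sketch of a stacked-etcd topology (LOAD_BALANCER_DNS and the token values are placeholders that kubeadm itself prints):

```bash
# Sketch: stacked-etcd HA control plane with kubeadm (flags exist since ~1.15).
# First control-plane node, behind a load balancer fronting port 6443:
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

# Additional control-plane nodes use the join command kubeadm prints, e.g.:
sudo kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# Worker nodes join with the same command minus --control-plane/--certificate-key.
```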
You can use Typhoon, which can provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon: you can choose the cloud provider on which to provision your infrastructure, which is done using Terraform, and the fact that it gives you upstream Kubernetes is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane consisting of:
api-server
controller-manager
scheduler
Once the temporary control plane is up, the permanent control-plane objects are injected into the API server to form the final Kubernetes cluster.
Have a look at this KubeCon talk by CoreOS, which explains how this works.

Google Container Engine Subnets

I'm trying to isolate services from one another.
Suppose ops-human has a bunch of mysql stores running on Google Container Engine, and dev-human has a bunch of node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's mysql instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network-SIG team has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support this. The basic idea is to provide network policy on a per-namespace basis. A third-party network controller can then react to changes to the resources and enforce the policy. See the latest blog post about this topic for the details.
EDIT: This answer is more about open-source Kubernetes, not GKE specifically.
The NetworkPolicy resource is now available in GKE alpha clusters (see the latest blog post).
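The API has since graduated to networking.k8s.io/v1, so on a cluster whose network plugin enforces policies, the isolation described above looks roughly like this (the namespace and labels are hypothetical):

```bash
# Sketch: only pods in namespaces labelled team=ops may reach the mysql pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-ops-only
  namespace: ops                   # hypothetical namespace holding the mysql stores
spec:
  podSelector:
    matchLabels:
      app: mysql                   # hypothetical label on the mysql pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: ops            # dev namespaces lack this label, so they are blocked
EOF
```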