How to run multiple Hazelcast clusters in one deployment? - kubernetes

I'm deploying Hazelcast on K8s using the Helm chart on GitHub, currently at revision 5.3.2.
How would one go about running two clusters, say dev_cache and qa_cache, in one Helm deployment, each with different members? Is that possible?
I see the fields
hazelcast:
  javaOptions:
  existingConfigMap: xxx
and
configurationFiles: #any additional Hazelcast configuration files
in the values.yaml but am unable to find any documentation on how to use them.

In one Helm deployment, you always run one Hazelcast cluster. You need to run the Helm command twice to create two separate Hazelcast clusters.
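For example, a minimal sketch with two releases of the same chart (the release names are illustrative, and the exact values key for the cluster name depends on your chart revision, so treat the --set path below as an assumption to verify against your values.yaml):

# One release per cluster; each release gets its own set of members.
# The cluster-name values path is an assumption -- check your chart's values.yaml.
helm install dev-cache hazelcast/hazelcast \
  --set hazelcast.yaml.hazelcast.cluster-name=dev_cache
helm install qa-cache hazelcast/hazelcast \
  --set hazelcast.yaml.hazelcast.cluster-name=qa_cache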

Related

Is it possible to configure Kubernetes to deploy services in a specific order?

I have 3 services that are based on the same image; they're basically running the same app in 3 different configurations. One service is responsible for running migrations and data updates which the other two services will need. So I need this one service to be deployed first, before the other two are deployed. Is there any way to do this?
I assume that the two services which depend on service 1 are each a Pod, Deployment, or ReplicaSet. So, make use of Init Containers.
Here is an example that will make a pod wait until the dependent service is up and running. busybox is the most common image used for init containers; it supports a variety of Linux commands that help you check the status of dependent services.
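For instance, a minimal sketch of that pattern (the service name migration-svc and the app image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  initContainers:
    # Blocks startup until the migration service's DNS name resolves,
    # i.e. until that service exists in the cluster.
    - name: wait-for-migrations
      image: busybox:1.36
      command: ['sh', '-c', 'until nslookup migration-svc; do echo waiting; sleep 2; done']
  containers:
    - name: app
      image: your-app-image  # placeholder for the real application image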
I would look for a solution outside of Kubernetes.
Assuming that you have a release pipeline to deploy the changes, you could have a step to migrate and update the database. If the migration step succeeds, then deploy all the new services.
If you really have to use K8S, see the InitContainers doc.
In Flux there is an option to define the sequence in which applications are deployed.
With spec.dependsOn you can specify that the execution of one Kustomization follows another. When you add dependsOn entries to a Kustomization, that Kustomization is applied only after all of its dependencies are ready.
You can add it under the spec of the Kustomization:
spec:
  dependsOn:
    - name: dependency-kustomization
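A fuller sketch of a Flux v2 Kustomization using dependsOn (the names and path are illustrative):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    # Applied only after the migrations Kustomization is ready.
    - name: migrations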

How to share a software product having multiple microservices and specific Kubernetes configurations

I am new to Kubernetes and have a question.
Say I have a software product which has 5 microservices. It is deployed on my Kubernetes cluster (an OCP cluster) and it is working fine. Each microservice has Kubernetes configuration files on the cluster: deployment, service, configmap YAML files, etc.
Suppose I want to sell the product to customers and they will set it up in their own production Kubernetes cluster (maybe OCP, Amazon EKS, etc.).
Which is the appropriate way to give the product to the customers?
Way 1 - Send all the YAML files (deployment.yaml, service.yaml, configmap.yaml) for each microservice. With 5 microservices in total, that is 5 deployment.yaml, 5 service.yaml, 5 configmap.yaml files, etc. Upon receiving those files the customers manually apply them to their Kubernetes cluster, and then all the pods come up.
Way 2 - For each microservice we provide a Helm chart, so 5 Helm charts in total for the 5 microservices. Upon receiving the charts the customers install all the YAML files with helm install.
Way3 - Or any other better way?
Note- Private Docker Image Repository access will be given to the customers for pulling the microservice images.
Way 1: No. There is tooling to do this. You shouldn't need to send yaml files around.
Way 2: This is the best way, but with some edits. Create 5 Helm charts that manage the separate components of your product. Then create a 6th chart that depends on the other 5. This way you only have to give one "product" chart to the consumer, and that chart then pulls in everything it needs to run. This is usually how it's done. See projects like loki-stack from Grafana: they have a Helm chart called 'loki-stack', and that chart has dependencies on Grafana, Loki, Prometheus, Promtail, etc. Your consumers will then just helm install my-product, and Helm will take care of getting the 5 service charts.
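A sketch of such an umbrella chart's Chart.yaml (chart names, versions, and the repository URL are illustrative):

apiVersion: v2
name: my-product
version: 1.0.0
dependencies:
  # Each entry pulls in one of the five service charts.
  - name: payment
    version: 1.0.0
    repository: https://charts.example.com
  - name: order
    version: 1.0.0
    repository: https://charts.example.com
  # ...and so on for the remaining three services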
Way 3: There are many other ways to do this, but they are all somewhat implementation-specific. For example, you can use OAM + KubeVela. However, if a consumer doesn't use OAM + KubeVela, that's a problem. IMO Helm is the standard approach for now, until more people start using tools like KubeVela.

How to configure Helm to deploy multiple microservices (payment, order) into different environments (dev, QA) using AKS?

I'm quite new to Helm. I'm learning to automate microservice deployment using Helm and Azure Kubernetes Service. I need to deploy multiple microservices (payment, order) into different environments (dev, QA).
As per my analysis, I hope to achieve this with the following steps:
Create separate clusters for the different environments.
Create multiple values files based on the environments.
We pass only the cluster name and values file for a given deployment, so it deploys according to our inputs.
I'm trying to implement this, but I'm not sure how to configure the above scenarios in the Helm part in real time.
Can we achieve this completely using Helm alone, or should we use Ansible for orchestration along with Helm?
Could anyone please advise me on this and suggest any other best practices?
Reference:
https://codefresh.io/helm-tutorial/helm-deployment-environments/
Thanks in advance :)
Helm cannot control which cluster it deploys to; that is decided by the kubeconfig file on the machine used to invoke the helm command.
If your kubeconfig file is configured to access multiple clusters, you can just set the right context before each helm install command, and it will target the cluster of your choice.
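A sketch of what that looks like in practice (the context names, chart paths, and values files are illustrative):

# Deploy to the dev cluster with dev values
kubectl config use-context dev-cluster
helm install payment ./charts/payment -f values-dev.yaml

# Deploy to the QA cluster with QA values
kubectl config use-context qa-cluster
helm install payment ./charts/payment -f values-qa.yaml

# Alternatively, helm's global --kube-context flag selects the context per command
helm install order ./charts/order -f values-qa.yaml --kube-context qa-cluster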

Kubernetes single deployment yaml file for spinning up the application

I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis, and MongoDB.
After the entire configuration of pods and deployments, is there any way to create a single master deployment YAML file which will create the entire set of services, replicas, etc. for the whole application?
Note: I will be using multiple deployment YAML files, statefulsets, etc. for all of the above-mentioned services.
You can use this script:
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"
for resource in ${RESOURCES};do
rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource}|jq '.items[].metadata.name'|sed "s/\"//g")
for r in ${rsrcs};do
dir="${NAMESPACE}/${resource}"
mkdir -p "${dir}"
kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} > "${dir}/${r}.yaml"
done
done
Remember to specify what resources you want exported in the script.
More info here
Is there any way to create a single master deployment YAML file which will create the entire set of services, replicas, etc. for the entire application?
Since you already mentioned kubernetes-helm, why not actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes; some say it is similar to yum or apt. It deploys charts, which you can think of as packaged applications: a bundle of all your pre-configured applications which can be deployed as one unit. It's not entirely one file, but rather a collection of files that make up a so-called Helm chart.
What are Helm charts?
Well, they are basically K8s YAML manifests combined into a single package that can be installed to your cluster. Installing the package is as simple as running a single command such as helm install. Once done, the charts are highly reusable, which reduces the time for creating dev, test, and prod environments.
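The typical layout of such a chart is roughly (file names under templates/ are illustrative):

my-app/
  Chart.yaml          # chart name, version, dependencies
  values.yaml         # default, overridable configuration values
  templates/          # parameterized K8s manifests
    deployment.yaml
    service.yaml
    configmap.yaml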
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Deployed without any custom config, this chart will create 2 replicas for each component of StackStorm, as well as backends like RabbitMQ, MongoDB, and Redis.

What is the difference between the CoreOS projects kube-prometheus and prometheus-operator?

The GitHub repo of the Prometheus Operator project, https://github.com/coreos/prometheus-operator/, says that:
The Prometheus Operator makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.
kube-prometheus combines the Prometheus Operator with a collection of manifests to help getting started with monitoring Kubernetes itself and applications running on top of it.
Can someone elaborate this?
I've always had this exact same question and repeatedly bumped into both, but to be honest, reading the above answer didn't clarify it for me; I needed a short explanation. I found this GitHub issue that made it crystal clear to me:
https://github.com/coreos/prometheus-operator/issues/2619
Quoting nicgirault on GitHub:
At last I realized that prometheus-operator chart was packaging kube-prometheus stack but it took me around 10 hours playing around to realize this.
Here's my summarized explanation:
"kube-prometheus" and "Prometheus Operator Helm Chart" both do the same thing: basically the Ingress / Ingress Controller concept, applied to Metrics / Prometheus Operator. Both are a means of easily configuring, installing, and managing a huge distributed application (the Kubernetes Prometheus Stack) on Kubernetes.
What is the entire Kube Prometheus Stack, you ask? Prometheus, Grafana, Alertmanager, CRDs (Custom Resource Definitions), the Prometheus Operator (a software bot app), IaC alert rules, IaC Grafana dashboards, and IaC ServiceMonitor CRDs (which auto-generate Prometheus metric collection configuration and auto hot-import it into the Prometheus server).
(Also, when I say easily configuring, I mean 1,000-10,000++ lines of easy-for-humans-to-understand config that generates and auto-manages 10,000-100,000 lines of machine config, plus sensible defaults, monitoring configuration self-service, and distributed configuration sharding with an operator/controller to combine config and generate verbose boilerplate machine-readable config from nice human-readable config.)
If they achieve the same end goal, you might ask what's the difference between them?
https://github.com/coreos/kube-prometheus
https://github.com/helm/charts/tree/master/stable/prometheus-operator
Basically, CoreOS's kube-prometheus deploys the Prometheus Stack using Ksonnet.
Prometheus Operator Helm Chart wraps kube-prometheus / achieves the same end result but with Helm.
So which one to use?
Doesn't matter + they achieve the same end result + shouldn't be crazy difficult to start with 1 and switch to the other.
Helm tends to be faster to learn/develop basic mastery of.
Ksonnet is harder to learn/develop basic mastery of, but:
it's more idempotent (better for CICD automation) (but it's only a difference of 99% idempotent vs 99.99% idempotent.)
has built-in templating which means that if you have multiple clusters you need to manage / that you want to always keep consistent with each other. Then you can leverage ksonnet's templating to manage multiple instances of the Kube Prometheus Stack (for multiple envs) using a DRY code base with lots of code reuse. (If you only have a few envs and Prometheus doesn't need to change often it's not completely unreasonable to keep 4 helm values files in sync by hand. I've also seen Jinja2 templating used to template out helm values files, but if you're going to bother with that you may as well just consider ksonnet.)
Kubernetes operators are Kubernetes-specific applications (pods) that configure, manage, and optimize other Kubernetes deployments automatically. They are implemented as custom controllers.
According to the official CoreOS website:
Operators were introduced by CoreOS as a class of software that operates other software, putting operational knowledge collected by humans into software.
The Prometheus Operator provides an easy way to deploy, configure, and monitor your Prometheus instances on a Kubernetes cluster. To do so, the Prometheus Operator introduces three types of custom resource definitions (CRDs) in Kubernetes:
Prometheus
Alertmanager
ServiceMonitor
Now, with the help of the above CRDs, you can directly create a Prometheus instance by providing kind: Prometheus, and the instance is ready to serve; likewise for Alertmanager. Without this, you would have to set up the deployment for Prometheus with its image, configuration, and many more things.
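A minimal sketch of such a custom resource (the name and replica count are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  replicas: 2
  # Selects which ServiceMonitor objects this instance picks up;
  # an empty selector matches all of them.
  serviceMonitorSelector: {}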
The Prometheus Operator serves to make running Prometheus on top of Kubernetes as easy as possible, while preserving Kubernetes-native configuration options.
Now, kube-prometheus implements the Prometheus Operator and provides you with minimal YAML files to create a basic setup of Prometheus, Alertmanager, and Grafana by running a single command:
git clone https://github.com/coreos/prometheus-operator.git
kubectl apply -f prometheus-operator/contrib/kube-prometheus/manifests/
By running the above command in the kube-prometheus directory, you will get a monitoring namespace containing an instance of Alertmanager, an instance of Prometheus, and Grafana for the UI. This is enough for most basic setups; if you need anything more specific for your application, you can add more YAMLs for the exporters you need.
Kube-prometheus is more of a contribution to the prometheus-operator project; it implements the Prometheus Operator functionality very well and provides you with a complete monitoring setup for your Kubernetes cluster. You can start with kube-prometheus and extend the functionality of your monitoring setup according to your application from there.
You can learn more about prometheus-operator here
As of today, 28-09-2020, this is the way to install Prometheus in a Kubernetes cluster
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
According to the official documentation, the kube-prometheus-stack chart is a rename of the prometheus-operator chart.
As I understand it, kube-prometheus-stack also has preinstalled Grafana dashboards and Prometheus rules.
Note: This chart was formerly named prometheus-operator chart, now renamed to more clearly reflect that it installs the kube-prometheus project stack, within which Prometheus Operator is only one component.
Taken from https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
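For reference, installing it looks something like this (the release name and namespace are illustrative):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Installs the whole stack: operator, Prometheus, Alertmanager, Grafana, rules, dashboards
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace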

Architecturally, the containers run on Docker. Container logs are managed by Docker by default, and the default log driver is json-file:
"log-driver": "json-file"
https://docs.docker.com/config/containers/logging/configure/
If the default json-file driver is used to manage container logs, log rotation is not performed by default. For containers that generate a large amount of output, the files stored by the log driver can therefore consume a large amount of disk space and eventually cause the disk to run out of space.
In this case, save the logs to ES, store them separately, and periodically delete the indices using Curator.
Run a scheduled task in K8s to delete the indices periodically.
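A rough sketch of such a scheduled cleanup (the Elasticsearch host, the logstash-YYYY.MM.DD index naming scheme, and the 7-day retention are all illustrative assumptions; a real setup would more likely run Curator with its own config):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: es-index-cleanup
spec:
  schedule: "0 3 * * *"  # run daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: curlimages/curl:8.5.0
              # Deletes the daily index from 7 days ago via the ES REST API.
              # The index pattern and ES service host are assumptions.
              command:
                - sh
                - -c
                - 'curl -XDELETE "http://elasticsearch:9200/logstash-$(date -d @$(( $(date +%s) - 7*86400 )) +%Y.%m.%d)"'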
Another solution for disk space is to periodically rotate the old json-file logs.
Typically we cap the size and number of log files in Docker's daemon.json:
"log-driver": "json-file", "log-opts": { "max-size": "20m", "max-file": "10" }
This keeps at most 10 log files, each with a maximum size of 20 MB, so a container has at most 200 MB of logs.
Note: In general, Docker places container logs under
/var/lib/docker/containers/
but Kubernetes also saves logs and creates a directory structure to help you find logs per Pod, so you can find the container logs for each Pod running on a node under
/var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
When a Pod is removed, both the Docker logs under /var/lib/docker/containers/ and the Kubernetes-created logs under /var/log/pods/ are deleted.
For example, if a Pod is restarted in production, its logs are deleted, whether it stays on the original node or is rescheduled to another node.
Therefore, these logs need to be saved in ES for centralized management; in most cases, development teams need to check them for troubleshooting.