I would like to run an OpenDJ cluster in my Kubernetes cluster. In order to do so, I used this procedure.
I've created a StatefulSet, a Service and a StorageClass.
In order to initialize replication, I need to run several commands, dsreplication enable and dsreplication initialize-all, inside one of the StatefulSet replicas (the pods whose initial command launches the OpenDJ server). When I try running those commands through a Kubernetes Job as a separate pod, it fails with the following error: The local instance is not configured or you do not have permissions to access it, because that pod has not been initialized as one of the OpenDJ cluster's replicas.
How would you run those commands from the StatefulSet's pods? (And thinking about scaling: how will new pods join the cluster when they are launched by the HPA?)
Or maybe the better question is how can I run commands on my cluster from a remote pod?
Thanks.
I have an application running in a Kubernetes cluster (Azure AKS) which is made up of a website running in one deployment, background worker processes running as scheduled tasks in Kubernetes, RabbitMQ running as another deployment, and an Azure SQL DB which is not part of the Kubernetes cluster.
I would like to achieve load balancing and failover by deploying another Kubernetes cluster in another region and placing a Traffic Manager DNS load balancer in front of the website.
The problem that I see is that if the two RabbitMQ instances are in separate Kubernetes clusters, then items queued in one will not be available in the other.
Is there a way to cluster the rabbitmq instances running in each kubernetes cluster or something besides clustering?
Or is there a common design pattern that might avoid problems from having separate queues?
I should also note that currently there is only one node running RabbitMQ in the current Kubernetes cluster, but as part of this upgrade it seems like a good idea to run multiple nodes in each cluster, which I think the current Helm charts support.
You shouldn't cluster RabbitMQ nodes across regions: the cluster would suffer split-brain because of network delays. To synchronise RabbitMQ queues and exchanges between clusters you can use the Federation or Shovel plugin, depending on your use case.
The Federation plugin can be enabled on a cluster by running these commands:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmqctl start_app
More details on Federation.
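If it helps, here is a minimal sketch of declaring a federation upstream and a matching policy with rabbitmqctl; the upstream name, URI and exchange pattern are placeholders you would replace with your own values:
# Point this cluster at the broker in the other region (URI is a placeholder).
rabbitmqctl set_parameter federation-upstream region2 '{"uri":"amqp://user:password@rabbit.region2.example.com","expires":3600000}'
# Federate every exchange whose name starts with "app." from the defined upstreams.
rabbitmqctl set_policy --apply-to exchanges federate-app "^app\." '{"federation-upstream-set":"all"}'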
For shovel:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_shovel
rabbitmq-plugins enable rabbitmq_shovel_management
rabbitmqctl start_app
More details on Shovel.
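Similarly, a dynamic shovel can be declared as a runtime parameter; the shovel name, URIs and queue names below are only illustrative:
# Move messages from a queue in this region to the same queue in the other region.
rabbitmqctl set_parameter shovel my-shovel '{"src-uri":"amqp://user:password@rabbit.region1.example.com","src-queue":"orders","dest-uri":"amqp://user:password@rabbit.region2.example.com","dest-queue":"orders"}'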
Full example on how to setup Federation on RabbitMQ cluster can be found here.
I want to set up my Hyperledger blockchain application in a Kubernetes cluster.
I don't want to encourage questions like this, but here are some steps that could possibly help you:
Ensure your application runs correctly locally on Docker.
Construct your Kubernetes configuration files. What you will need:
A deployment or a statefulset for each of your peers.
A statefulset for the couchdb for each of your peers.
A deployment or a statefulset for each of your orderers.
One service per peer, orderer and couchdb (to allow them to communicate).
A job that creates and joins the channels.
A job that installs and instantiates the chaincode.
Generated crypto-material and network-artifacts.
Kubernetes Secrets or persistent volumes that hold your crypto-material and network-artifacts.
An image of your dockerized application (I assume you have some sort of server using an SDK to communicate with the peers) uploaded on a container registry.
A deployment that uses that image and a service for your application.
Create a Kubernetes cluster either locally or on a cloud provider and install the kubectl CLI on your computer.
Apply the configuration files (e.g. kubectl apply -f peerDeployment.yaml) on your cluster in this order (a sketch of the full sequence follows the list):
Secrets
Peers, CouchDBs, orderers (deployments, statefulsets and services)
Create channel jobs
Join channel jobs
Install and instantiate chaincode job
Your application's deployment and service
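As a rough sketch of that sequence (the directory and job names are hypothetical and depend on how you organised your manifests):
kubectl apply -f secrets/                               # crypto-material and network-artifacts
kubectl apply -f peers/ -f couchdbs/ -f orderers/       # deployments, statefulsets and services
kubectl apply -f jobs/create-channel.yaml
kubectl wait --for=condition=complete job/create-channel --timeout=120s
kubectl apply -f jobs/join-channel.yaml
kubectl wait --for=condition=complete job/join-channel --timeout=120s
kubectl apply -f jobs/install-chaincode.yaml
kubectl wait --for=condition=complete job/install-chaincode --timeout=300s
kubectl apply -f app-deployment.yaml -f app-service.yaml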
If everything was configured correctly, you should have a running HLF platform in your Kubernetes cluster. It goes without saying that you have to research each step to understand what you need to do. And to experiment, a lot.
I have a cluster with several workloads and different configurations on GCP's Kubernetes Engine.
I want to create a clone of this existing cluster along with cloning all the workloads in it. It turns out, you can clone a cluster but not the workloads.
So, at this point, I'm copying the deployment YAMLs of the workloads from the cluster that is working fine, and using them for the newly created workloads in the newly created cluster.
When I'm deploying the pods of this newly created workload, the pods are stuck in the pending state.
In the logs of the container, I can see that the error has something to do with Redis.
The error it shows is, Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete].
Also, when I'm connected to the first cluster and run the command
kubectl get secrets -n=development, it shows me a bunch of secrets which are supposed to be used by my workloads.
However, when I'm connected to the second cluster and run the above kubectl command, I just see one service-related secret.
My question is: how do I make the workloads in the newly created cluster use the configuration of the already existing cluster?
I think there are a few things that can be done here:
Try using the kubectl config command to set up a context for each of your clusters, so you can work against both from the same machine.
You can find more info here and here
You may also try to use Kubernetes Cluster Federation. But bear in mind that it is still in alpha.
Remember that keeping your config in a version control system is generally a very good idea. You want to store the manifests as you wrote them, before the cluster fills in its defaults, rather than exporting them from the cluster afterwards.
Please let me know if that helped.
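For example, a quick (hypothetical) way to copy the missing secrets from the first cluster into the second, assuming your kubectl contexts are named cluster-1 and cluster-2 and the secret is called redis-credentials:
kubectl config get-contexts                                   # find the names of your two cluster contexts
kubectl --context=cluster-1 -n development get secret redis-credentials -o yaml > redis-credentials.yaml
# Strip cluster-specific metadata (uid, resourceVersion, creationTimestamp) from the file, then:
kubectl --context=cluster-2 -n development apply -f redis-credentials.yaml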
I am implementing continuous integration and continuous deployment using Ansible, Docker, Jenkins and Kubernetes. I have already created a Kubernetes cluster with 1 master and 2 worker nodes using Ansible and Kubespray. I also have 30-40 microservice applications, so I need to create that many Services and Deployments.
My Confusion
When I am using the Kubernetes package manager Helm and its charts, do I need to initiate my charts on the master node, or on the base machine from which I deployed my Kubernetes cluster?
If I initiate them on the master, can I use kubectl over SSH to deploy to the remote worker nodes?
If I initiate them outside the Kubernetes cluster nodes, can I use the kubectl command to deploy to the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of Helm components. This explanation provides a good graphic to represent the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
In either configuration, you would always be making the Helm deployment from your local machine with a kubectl access to your cluster.
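For illustration, a minimal Helm/Tiller workflow run entirely from your local machine could look like this (the Tiller service account setup and the chart/release names are assumptions; adjust them to your own setup):
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller                    # installs Tiller into kube-system
helm install --name my-microservice ./charts/my-microservice
kubectl get pods                                      # the scheduler has placed the pods on your worker nodes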
Hope this helps!
If you are looking for a way to run the Helm client inside your Kubernetes cluster, check out the concept of the Helm Operator.
I would also recommend looking into the term "GitOps": a set of practices which combines Git with Kubernetes and makes Git the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices:
flux (uses Helm-Operator)
Jenkins X (uses Helm as part of its release pipeline; check out this session on YT to see it in action)
Has anyone had any luck running the Ignite visor in a Kubernetes environment? Should it be run from its own pod? Would I need to open extra ports or configure the ignite service differently? So far I have had no luck, but my experience with Ignite is fairly shallow.
To run Ignite Visor in Kubernetes you need to configure it exactly the same way as regular Ignite nodes, which means you need to configure the DiscoverySpi and CommunicationSpi.
Here is a link to the documentation on configuring Ignite in a Kubernetes environment: https://apacheignite.readme.io/docs/kubernetes-deployment
In Kubernetes you have to use a single, fixed network port for all Apache Ignite instances, including Visor, instead of a port range, both for discovery and for communication between instances [1]. This is because you cannot expose a port range for a pod in k8s. Moreover, you have to make sure that the instances in the cluster can see each other, so you have to use the special discovery SPI. By default, if you start Visor in a pod where one instance is already running, Visor cannot obtain the same port and takes another one from the range, and as a result it either doesn't see the other nodes in the cluster or sees only the node in the pod where it was started.
If this is the case, then I'd recommend starting a separate pod with the same config but with a different CMD that doesn't start a server node and runs a sleep loop instead, so that k8s won't kill the pod. Then you can kubectl exec -ti pod-id -- bash and start Visor/Sqlline/Control with the same config that you've provided for the other instances.
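As a minimal illustration (in practice you would copy your server pod's spec, with its mounted XML config for the Kubernetes discovery SPI, and only change the command; the image tag and the IGNITE_HOME path below are assumptions):
# Start a pod that only sleeps, so k8s keeps it alive without starting a server node.
kubectl run ignite-visor --image=apacheignite/ignite:2.7.6 --restart=Never --command -- sleep infinity
# Exec in and launch Visor with the same discovery configuration as the server nodes.
kubectl exec -ti ignite-visor -- bash
# then, inside the pod's shell:
$IGNITE_HOME/bin/ignitevisorcmd.sh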
[1] https://apacheignite.readme.io/docs/network-config
Hope it will help.