I am trying to use the RabbitMQ Topology Operator to manage a RabbitMQ cluster running on Kubernetes.
As a setup, I have deployed the rabbitmq-cluster-operator to create the cluster and enabled the necessary plugins, such as the management plugin.
Next, I deployed the rabbitmq-topology-operator in the same namespace.
After defining some infrastructure for the topology operator, e.g. an Exchange, the topology operator just logs ERRORs when trying to create the exchange:
"Error: API responded with a 401 Unauthorized"
It seems like the topology operator cannot authenticate against the management API.
I followed the instructions to install the operator here:
https://www.rabbitmq.com/kubernetes/operator/using-topology-operator.html
I am wondering: do I have to configure a user for the topology operator to authenticate against the management API?
The Topology Operator uses the "{RabbitClusterName}-default-user" secret, and the RabbitMQ Cluster Operator generates a random default username/password pair when it creates the cluster.
I had the same problem because I had overwritten the default user and password in additionalConfig, so the credentials created by the operator no longer worked.
Make sure that the user from the {RabbitClusterName}-default-user secret works with the management API. The secret should be in the same namespace as the cluster.
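One quick way to check is to read the credentials from that secret and call the management API yourself. Below is a rough sketch using the fabric8 Java client and java.net.http; the namespace, cluster name and the port-forwarded URL are placeholders you would need to adapt:

import io.fabric8.kubernetes.api.model.Secret;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CheckDefaultUser {
    public static void main(String[] args) throws Exception {
        String namespace = "rabbitmq-system"; // placeholder: namespace of the RabbitmqCluster
        String cluster = "my-rabbit";         // placeholder: name of the RabbitmqCluster

        String user;
        String pass;
        try (KubernetesClient k8s = new DefaultKubernetesClient()) {
            // Read the secret the Topology Operator itself uses.
            Secret secret = k8s.secrets()
                    .inNamespace(namespace)
                    .withName(cluster + "-default-user")
                    .get();
            user = decode(secret.getData().get("username"));
            pass = decode(secret.getData().get("password"));
        }

        // Assumes the management API is reachable locally, e.g. via:
        //   kubectl port-forward svc/my-rabbit 15672:15672 -n rabbitmq-system
        String basic = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/whoami"))
                .header("Authorization", "Basic " + basic)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 means the credentials work; 401 reproduces the Topology Operator's error.
        System.out.println(response.statusCode() + " " + response.body());
    }

    private static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }
}

If this also prints 401, the credentials in the secret no longer match what the broker has (for example because they were overridden via additionalConfig), which is exactly the situation the Topology Operator runs into.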
I've got a container inside a GKE cluster and I want it to be able to talk to the Kubernetes API of another GKE cluster to list some resources there.
This works well if I run the following command in a separate container to proxy the connection for me:
gcloud container clusters get-credentials MY_CLUSTER --region MY_REGION --project MY_PROJECT; kubectl --context MY_CONTEXT proxy --port=8001 --v=10
But this requires me to run a separate container that, due to the size of the gcloud CLI, is more than 1 GB in size.
Ideally I would like to talk directly from my primary container to the other GKE cluster, but I can't figure out how to determine the IP address and set up the authentication required for the connection.
I've seen a few questions:
How to Authenticate GKE Cluster on Kubernetes API Server using its Java client library
Is there a golang sdk equivalent of "gcloud container clusters get-credentials"
But it's still not really clear to me whether and how this would work with the Java libraries, if it is possible at all.
Ideally I would write something like this.
var info = gkeClient.getClusterInformation(...);
var auth = gkeClient.getAuthentication(info);
...
// using the io.fabric8.kubernetes.client.ConfigBuilder / DefaultKubernetesClient
var config = new ConfigBuilder()
        .withMasterUrl(info.url())
        .withNamespace(null)
        // certificate or other authentication mechanism
        .build();
return new DefaultKubernetesClient(config);
Does that make sense? Is something like that possible?
There are multiple ways to connect to your cluster without using the gcloud CLI. Since you are trying to access the cluster from another cluster within the cloud, you can use the Workload Identity authentication mechanism. Workload Identity is the recommended way for workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. For more information, refer to this official document; it details a step-by-step procedure for configuring Workload Identity and provides reference links for the code libraries.
This is drafted based on information provided in Google's official documentation.
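As a rough illustration of how the pieces could fit together with the fabric8 client from the question: the sketch below assumes Workload Identity (or any application default credentials) is available in the pod, and uses the google-cloud-container and google-auth-library dependencies; the project/location/cluster values are placeholders.

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.container.v1.ClusterManagerClient;
import com.google.container.v1.Cluster;
import com.google.container.v1.GetClusterRequest;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class RemoteGkeClient {

    public static KubernetesClient connect(String project, String location, String cluster) throws Exception {
        // 1. Look up the target cluster's endpoint and CA certificate through the GKE API.
        //    With Workload Identity, ClusterManagerClient picks up credentials automatically.
        Cluster gkeCluster;
        try (ClusterManagerClient gke = ClusterManagerClient.create()) {
            GetClusterRequest request = GetClusterRequest.newBuilder()
                    .setName(String.format("projects/%s/locations/%s/clusters/%s",
                            project, location, cluster))
                    .build();
            gkeCluster = gke.getCluster(request);
        }

        // 2. Get an OAuth2 access token from the application default credentials
        //    (served by the GKE metadata server when Workload Identity is enabled).
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
                .createScoped("https://www.googleapis.com/auth/cloud-platform");
        credentials.refreshIfExpired();
        String token = credentials.getAccessToken().getTokenValue();

        // 3. Build the fabric8 config from the endpoint, CA certificate and token.
        Config config = new ConfigBuilder()
                .withMasterUrl("https://" + gkeCluster.getEndpoint())
                .withCaCertData(gkeCluster.getMasterAuth().getClusterCaCertificate())
                .withOauthToken(token)
                .build();
        return new DefaultKubernetesClient(config);
    }
}

The access token expires after roughly an hour, so a long-lived client would need to refresh it. This also assumes the target cluster's control plane is reachable from the pod and that the Google service account bound via Workload Identity has the container.clusters.get permission plus appropriate RBAC in the target cluster.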
I have some limitations with the rights required by the Flink native Kubernetes deployment.
The prerequisites say:
KubeConfig, which has access to list, create, delete pods and **services**, configurable
Specifically, my issue is that I cannot have a service account with the rights to create/remove services. Creating/removing pods is not an issue, but by policy, services can only be created through an internal tool.
Could there be any workaround for this?
Flink creates two services in the native Kubernetes integration.
The internal service, which is used for internal communication between the JobManager and the TaskManagers. It is only created when HA is not enabled, since the HA service is used for leader retrieval when HA is enabled.
The rest service, which is used for accessing the web UI or the REST endpoint. If you have another way to expose the REST endpoint, or you are using application mode, it is optional as well. However, it is currently always created, so I think you would need to change some code to work around this.
I have Apache Airflow on k8s.
Earlier, when Airflow was running on my local server (not k8s), I didn't have trouble with OAuth2 credentials verification: when Google operators (based on GoogleCloudHook) started, my browser opened and redirected me to the Google auth page. It was a one-time procedure.
With Airflow on k8s, my tasks run in separate pods and there is trouble with this OAuth2 credentials verification: I can't "open a browser" inside a pod, and I don't want to do it every time my task runs.
Can I somehow disable this procedure or automate it?
Is there any solution?
In order to authenticate, you should first be using the correct operator and executor in Airflow. In your case this would be the Kubernetes Executor. When using this executor, you need to set up secret(s) for use with k8s.
Refer to the documentation here: Kubernetes Executor
Overview
I recently started to explore k8s extensions and got introduced to two concepts:
CRD.
Service catalogs.
They look pretty similar to me. The only difference, to my understanding, is that CRDs are deployed inside the same cluster to be consumed, whereas catalogs are deployed to be exposed outside the cluster, for example as a database service (a client can order a MySQL cluster which will be accessible from their cluster).
My query here is:
Is my understanding correct? If yes, can there be any other scenario where I would want to create a catalog and not a CRD?
Yes, your understanding is correct. Taken from official documentation:
Example use case
An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.
A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.
With a CRD, you are responsible for provisioning the resources, running the backend logic, and so on yourself.
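To make that concrete, here is a minimal sketch of registering a CRD with the fabric8 Java client (one of many possible clients); the widgets.example.com type is made up for illustration. Registering the definition only teaches the API server about a new object type; any actual provisioning has to be done by a controller you write and run yourself:

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class RegisterCrd {
    // Hypothetical CRD: the cluster only learns about a new API type here.
    private static final String CRD_YAML = """
            apiVersion: apiextensions.k8s.io/v1
            kind: CustomResourceDefinition
            metadata:
              name: widgets.example.com
            spec:
              group: example.com
              scope: Namespaced
              names:
                plural: widgets
                singular: widget
                kind: Widget
              versions:
                - name: v1
                  served: true
                  storage: true
                  schema:
                    openAPIV3Schema:
                      type: object
                      x-kubernetes-preserve-unknown-fields: true
            """;

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Registers the new resource type; "kubectl get widgets" now works,
            // but creating a Widget does nothing until your own controller
            // watches Widgets and provisions whatever they represent.
            client.load(new ByteArrayInputStream(CRD_YAML.getBytes(StandardCharsets.UTF_8)))
                    .createOrReplace();
        }
    }
}

With Service Catalog, by contrast, that provisioning work is delegated to the external service broker.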
More info can be found in this KubeCon 2018 presentation.
I am trying to run ejabberd on Google Kubernetes Engine. As I am using a DaemonSet as the Kubernetes resource to deploy and manage the ejabberd pods, I need to set up a custom health check (which must receive status code 200 to be successful) for the ejabberd container. (:5280/admin doesn't work because there is basic auth there; :5222 and :5269 send responses that libcurl cannot parse, so neither works.)
I tried to configure the API and set a custom health check on an API URL, but it's not really secure and requires more configuration.
Has anyone run into this problem, and what solution can be done for it?