I wanted to create a Kafka Connect connector in an OpenShift project through Postman. But when I send the POST request through Postman I get the error below. In OpenShift, is there a specific command we need to run to expose a pod as a service (so it can be reached from Postman)? Please advise.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The error you are getting is because you haven't created any route to interact with the required service.
In OpenShift, a service is exposed to applications outside the cluster through routes.
Use the following command to expose the service outside the cluster:
oc expose service <service-name>
You can include many options with the above command. To see them, use the following command:
oc expose service --help
I recently successfully deployed my Vue.js webapp to Cloud Run. Beforehand the webapp was deployed by a Kubernetes Deployment and Service. I also had an Ingress running that redirected my HTTP requests to that service. Now Cloud Run takes over the work.
Unfortunately the new Cloud Run-driven Knative "Service" does not seem to work anymore.
My Ingress is showing me the following error message:
(Where importer-controlroom is my application's name)
The error message is not comprehensible to me. I'll try to provide some more information that may help you help me with this issue.
This is the current list of resources that have been created. I was especially looking at the importer-controlroom-frontend ExternalName. I somewhat think this is the Service that replaced the old one?
Because I used its name in my Ingress rules to map it to a domain, as you can see here:
The error message in the Ingress says:
could not find port "80" in service "dev/importer-controlroom-frontend"
However the Cloud Run revision shows that port 80 is being provided:
A friend of mine redirected me to this article: https://cloud.google.com/solutions/integrating-https-load-balancing-with-istio-and-cloud-run-for-anthos-deployed-on-gke?hl=de#handling_health_check_requests
Unfortunately I have no idea what it is talking about. The truth is that we are using Istio, but I did not configure it and have a very hard time getting my head around it for this particular case.
INFO_1
Dockerfile contains:
EXPOSE 80
CMD [ "http-server", "dist", "-p 80"]
Cloud Run for Anthos apps do not work with a GKE Ingress.
Knative services are exposed through a public gateway service called istio-ingress in the gke-system namespace:
$ kubectl get svc -n gke-system
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingress LoadBalancer 10.4.10.33 35.239.55.104
Domain names etc work very differently on Cloud Run for Anthos so make sure to read the docs on that.
"How to get Kubernetes cluster name from K8s API" mentions that
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
(from within the cluster), or
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
(from outside the cluster), can be used to retrieve the cluster name. That works.
Is there a way to do the same programmatically using the k8s client-go library? Maybe using the RESTClient()? I've tried, but kept getting "the server could not find the requested resource".
UPDATE
What I'm trying to do is get the cluster name from an app that runs either on a local computer or within a k8s cluster. The k8s client-go allows initialising the clientset via in-cluster or out-of-cluster authentication.
With the two commands mentioned at the top that is achievable. I was wondering if there was a way to achieve the same from the client-go library, instead of having to run kubectl or curl depending on where the service is run from.
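For context, this is roughly how I initialise the clientset today (a minimal sketch; the in-cluster-first fallback order and the newClientset helper name are my own, not anything prescribed by client-go):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset tries in-cluster authentication first and falls back to the
// local kubeconfig when the app runs outside a cluster.
func newClientset() (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		// Not running inside a pod: load the default kubeconfig (~/.kube/config) instead.
		cfg, err = clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			return nil, fmt.Errorf("no usable configuration: %w", err)
		}
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	clientset, err := newClientset()
	if err != nil {
		panic(err)
	}
	fmt.Printf("initialised clientset: %T\n", clientset)
}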
The data that you're looking for (the name of the cluster) is available at the GCP level. The name itself is a resource within GKE, not Kubernetes. This means that this specific information is not available using client-go.
So in order to get this data, you can use the Google Cloud Client Libraries for Go, designed to interact with GCP.
As a starting point, you can consult this document.
First you have to download the container package:
➜ go get google.golang.org/api/container/v1
Before you launch your code you will have to authenticate to fetch the data:
Google has a very good document on how to achieve that.
Basically you have to generate a ServiceAccount key and pass it in the GOOGLE_APPLICATION_CREDENTIALS environment variable:
➜ export GOOGLE_APPLICATION_CREDENTIALS=sakey.json
Regarding the information that you want, you can fetch the cluster information (including name) following this example.
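Here is a minimal sketch of that lookup with the container/v1 package (the hard-coded project and zone are placeholders, and I've trimmed the flag handling that the full example uses):

package main

import (
	"context"
	"fmt"
	"log"

	container "google.golang.org/api/container/v1"
)

func main() {
	ctx := context.Background()

	// Authenticates via GOOGLE_APPLICATION_CREDENTIALS / application default credentials.
	svc, err := container.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// List the clusters in the given project and zone and print their names.
	resp, err := svc.Projects.Zones.Clusters.List("my-project", "us-central1-a").Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Clusters {
		fmt.Printf("Cluster %q (%s) master_version: v%s\n", c.Name, c.Status, c.CurrentMasterVersion)
	}
}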
Once you do this you can launch your application like this:
➜ go run main.go -project <google_project_name> -zone us-central1-a
And the result would be information about your cluster:
Cluster "tom" (RUNNING) master_version: v1.14.10-gke.17 -> Pool "default-pool" (RUNNING) machineType=n1-standard-2 node_version=v1.14.10-gke.17 autoscaling=false%
Also it is worth mentioning that if you run this command:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
You are also interacting with the GCP APIs, and the call can go unauthenticated as long as it is run from within a GCE machine/GKE cluster. This provides automatic authentication.
You can read more about it in Google's Storing and retrieving instance metadata document.
Finally, one great advantage of doing this with the Cloud Client Libraries is that it can be launched externally (as long as it's authenticated) or internally within pods in a deployment.
Let me know if it helps.
If you're running inside GKE, you can get the cluster name through the instance attributes: https://pkg.go.dev/cloud.google.com/go/compute/metadata#InstanceAttributeValue
More specifically, the following should give you the cluster name:
metadata.InstanceAttributeValue("cluster-name")
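For completeness, a minimal, self-contained sketch built around that call (the metadata.OnGCE guard and the log output are my own additions):

package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// The metadata server is only reachable from inside GCE/GKE.
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE/GKE, metadata server unavailable")
	}

	// "cluster-name" is a GKE-populated instance attribute on every node.
	name, err := metadata.InstanceAttributeValue("cluster-name")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cluster name:", name)
}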
The example shared by Thomas lists all the clusters in your project, which may not be very helpful if you just want to query the name of the GKE cluster hosting your pod.
My company runs an OpenShift v3.10 cluster consisting of 3 masters and 4 nodes. We would like to change the URL of the OpenShift API and also the URL of the OpenShift web console. Which steps do we need to take to do so successfully?
We have already tried to update the openshift_master_cluster_hostname and openshift_master_cluster_public_hostname variables to new DNS names, which resolve to our F5 virtual hosts that load balance the traffic across our masters, and then started the upgrade Ansible playbook, but the upgrade fails. We have also tried to run the Ansible playbook which redeploys the cluster certificates, but after that step the OpenShift nodes' status changes to NotReady.
We have solved this issue. What we had to do was change the URLs defined in the variables in the inventory file and then execute the Ansible playbook to update the master configuration. The process of running that playbook is described in the official documentation.
After that we also had to update the OpenShift web console configuration map with the new URLs and then scale the web-console deployment down and back up. The process of updating the web console configuration is described here.
I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
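If you want to see programmatically what kubectl reads from that file, here is a small sketch using client-go's clientcmd package; printing only the apiserver host is my own choice for illustration:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the same loading rules kubectl uses (KUBECONFIG env var, then ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	// Host is the HTTPS endpoint of the apiserver that kubectl talks to.
	fmt.Println("apiserver:", cfg.Host)
}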
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set, see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.
I'm trying to fetch the nodes list via an Ansible playbook using a context name, but it's not working.
My playbook:
getnodes.yaml
- name: "get nodes"
hosts: kubernetes
tasks:
- name: "nodes"
command: "kubectl get nodes --context='contextname'"
I do have multiple clusters in the config file. I need to either specify the cluster name or the context name to get the nodes list, or to perform any activity on a particular cluster.
As far as I understand, when you run the command kubectl get nodes --context='contextname' directly on your master node, everything works fine, right? And it fails only when you run it as part of your Ansible playbook against the master node? What errors do you get?
Yes, that's correct. I'm able to execute it from the command line.
"The connection to the server localhost:8080 was refused - did you
specify the right host or port?"
Are you sure it is available on the same host you run your Ansible playbook from? I mean your Kubernetes master node, on which you have the kubectl binary installed? My guess is that it is not, and even if it is on the same host you will not be able to connect to it using localhost:8080.
Look: you're not using any Ansible module specific to managing a Kubernetes cluster (like this one), which you would run directly against the API server and for which you would need to provide its valid URL. Instead, you are just using the simple command module, which doesn't care what command you want to run as long as you provide a valid hostname with SSH access and Python installed.
In this case Ansible simply tries to SSH to your Kubernetes master node and execute the shell command you passed to it:
kubectl get nodes --context='contextname'
I really doubt that your ssh server listens on port 8080.
If you run your Ansible playbook on the same host where you can run your kubectl commands, there are much easier solutions in Ansible for such cases, like:
local_action or delegate_to: localhost statements in your task, or, more globally, connection: local
You can find more details on the usage of all the above-mentioned statements in your Ansible plays in the Ansible docs and in this article.
I hope it will help you.