Eclipse Hono 1.1.1 installed on Kubernetes using Helm - Problem creating a tenant

I have installed Eclipse Hono 1.1.1 on a Kubernetes cluster using Helm, as per the instructions below:
https://hub.helm.sh/charts/eclipse-iot/hono
Initially I tried to create a tenant using the command below:
curl -X POST "http://servername:28080/v1/tenants/DEFAULT_TENANT123" -H "accept: application/json" -H "Content-Type: application/json"
But then I got a "Resource not found" error.
Then, following the instructions in "How do I run a curl command from within a Kubernetes pod", I tried:
curl -X POST "http://ServiceName:Serviceport/v1/tenants/DEFAULT_TENANT123" -H "accept: application/json" -H "Content-Type: application/json"
Again it didn't work.
I tried the following command to enter the device registry pod:
kubectl exec -it honohelmdeploy-service-device-registry-0 -- sh
Inside the device registry pod, I ran the same command, and it still didn't work.
I am not sure what the HOST and PORT should be in the command below:
curl -X POST "http://HOST:PORT/v1/tenants/DEFAULT_TENANT123" -H "accept: application/json" -H "Content-Type: application/json"
I tried using the device registry service name and pod name as the host.
I tried using the device registry ports.
I tried using my server name and localhost as the host.
I tried using 28080 as the port.
But I was not able to create a Tenant. Please assist.
Edited to add a screenshot of the kubectl get svc output.

Ok, it seems you have installed Hono on minikube without a loadbalancer running. You can see this from the EXTERNAL-IP column, which shows <pending> for all of Hono's (externally visible) service endpoints.
You need to start minikube tunnel in order for these endpoints to be exposed via a loadbalancer as described in the chart's README. You should be able to run the minikube tunnel command either before or after having installed Hono to the cluster. Once the loadbalancer is running, the EXTERNAL-IP addresses should be bound and you should be able to access the service endpoints.
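A minimal sequence of the steps above, assuming the chart's default service naming; the service name is an assumption, so adjust it to whatever kubectl get svc shows in your installation:

```shell
# In a separate terminal, start the loadbalancer emulation (keeps running):
minikube tunnel

# Once EXTERNAL-IP is no longer <pending>, capture the registry's address.
# The service name below is an assumption based on the chart's naming scheme.
IP=$(kubectl get svc eclipse-hono-service-device-registry-ext \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Create the tenant against the externally exposed HTTP endpoint:
curl -i -X POST "http://${IP}:28080/v1/tenants/DEFAULT_TENANT123" \
  -H "Content-Type: application/json"
```

A 201 Created response indicates the tenant was registered.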

Related

Connecting Web server and Kong API Gateway

I have installed a Magento 2 server on one system, using the system's IP address as the domain name, and Kong API Gateway on another system with the default port 8001. I used curl -i -X POST --url http://localhost:8001/services --data 'name=testMagento' --data 'url=http://<IP ADDRESS OF SYSTEM>' to register the Magento service. To register endpoints I used curl -i -X POST --url http://localhost:8001/services/testMagento/routes --data 'hosts[]=localhost' --data 'paths[]=/customer/account/login/referer/<key for login>' --data 'strip_path=false' --data 'methods[]=GET'. I need help checking whether the requests are being forwarded through Kong or not.

Get "Upgrade request required" from kubectl exec from Windows10/cygwin

For quite a while, I've been running kubectl v1.17 on Windows10/Cygwin to connect to the clusters for our application. Every once in a while, I use "kubectl exec" to perform an operation within a container. I've never had a problem doing this, until the last couple of days.
A couple of days ago, this attempt failed with "Upgrade request required". I talked to a colleague with a similar setup, and he hadn't been seeing this error. He was using v1.18, so I upgraded, and that seemed to fix the problem. I then used that for a few hours yesterday with no problem.
This morning, I'm getting "Upgrade request required" again, so the "upgrade" didn't actually fix it.
From the occurrences of this on the web, I see it has something to do with the connection handshake, but that's about all I know.
Our clusters are running v1.13.5 of k8s.
I tried running the command with "-v=10" to get more info.
The actual internal curl command that gets this appears to be this (with some elisions):
I0612 10:40:31.032729 10408 round_trippers.go:423] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl.exe/v1.18.0 (windows/amd64) kubernetes/9e99141" -H "Authorization: Bearer ..." 'https://...'
Is there anything in this that might indicate what might be going wrong here?
Update:
I discovered this morning that the issue is definitely related to Cygwin. If I take the resulting "kubectl exec" command and execute in a Windows cmd shell, it works perfectly fine. No error.
A relevant portion of my "uname -a" string might be "3.1.5(0.340/5/3) 2020-06-01 08:59 x86_64 Cygwin".
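One thing that may be worth checking, given that the identical command works from cmd but not from Cygwin: kubectl exec needs an upgradable (SPDY/WebSocket) connection, and proxy environment variables set only in the Cygwin shell profile can cause the upgrade to be rejected. A hedged diagnostic sketch (the hostname is a placeholder):

```shell
# Check for proxy settings present in the Cygwin shell but absent in cmd;
# many proxies reject the SPDY/WebSocket upgrade that "kubectl exec" needs.
env | grep -i proxy

# If a proxy is set, exempt the cluster's API server (adjust the host):
export NO_PROXY="$NO_PROXY,api.mycluster.example.com"
```

If no proxy variables are set, this is not the cause and the difference lies elsewhere in the Cygwin environment.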

Troubleshooting Kubernetes fine parallel processing tutorial

I am attempting to work through the following tutorial: https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/. My problem happens at the very first step, trying to start up Redis. When I run
kubectl run -i --tty temp --image redis --command "/bin/sh"
I create a new pod. However, running
redis-cli -h redis
returns an error: Could not connect to Redis at redis:6379: Name or service not known
It looks like you don't have Kube DNS set up correctly; what you've got is just a simple problem with name resolution.
If you look again at the tutorial, they even mention you can encounter such problem:
Note: if you do not have Kube DNS setup correctly, you may need to change the first step of the above block to redis-cli -h $REDIS_SERVICE_HOST.
So instead of using redis-cli -h redis, use redis-cli -h $REDIS_SERVICE_HOST and everything should work.
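A sketch of that workaround, run from inside the temporary pod; note that Kubernetes only injects these variables for services that already existed when the pod started:

```shell
# Inside the pod started with:
#   kubectl run -i --tty temp --image redis --command "/bin/sh"

# Kubernetes sets <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT environment
# variables for every service that existed when the pod was created, so the
# redis service can be reached by IP even without working DNS:
echo "$REDIS_SERVICE_HOST"

# Connect by ClusterIP, bypassing name resolution entirely:
redis-cli -h "$REDIS_SERVICE_HOST"
```

If the variable is empty, delete and recreate the pod after the redis service exists.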

Could not resolve host: rest-proxy when I add a new avro consumer

My Kafka cluster works fine and I'm using docker-compose. Inside the Schema Registry container, I ran:
curl -X POST -H "Content-Type: application/vnd.kafka.v1+json" \
--data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "smallest"}' \
http://rest-proxy:8082/consumers/my_avro_consumer
curl: (6) Could not resolve host: rest-proxy
rest-proxy:8082
This must resolve to an actual server or IP address. To make a short name like this work, set up your DNS so that a host named rest-proxy is known on your network.
I'm using docker-compose
Make sure that your curl is happening within the Docker network.
For example,
docker-compose exec kafka-rest sh, and run curl from there
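A sketch of that approach, assuming the compose service running the REST Proxy is named kafka-rest and the REST Proxy's service name in docker-compose.yml is rest-proxy (check your own file, since only compose service names are resolvable on the compose network):

```shell
# Open a shell inside a container on the compose network, where the embedded
# Docker DNS can resolve other services by their compose service names:
docker-compose exec kafka-rest sh

# Then, from inside that container, the original request should resolve:
curl -X POST -H "Content-Type: application/vnd.kafka.v1+json" \
  --data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "smallest"}' \
  http://rest-proxy:8082/consumers/my_avro_consumer
```

From the host, you would instead use localhost and whatever host port the rest-proxy service publishes.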

What would be Openshift REST API equivalent of a process template command

I am automating some continuous delivery processes that use OpenShift 3.5. They work fine from the command line, but I can hardly find any documentation of how the oc commands map to the OCP REST API. I've figured out how to talk to the API and use what it directly offers. For example, I have this line:
oc process build-template -p APPLICATION_NAME=worldcontrol -n openshift | oc create -f - -n conspiracyspace
That takes a template named "build-template" from the "openshift" namespace, processes it, and pipes the resulting definition into another namespace, where it builds a few objects such as the application image. I would appreciate an example of how this could be expressed in HTTP request terms.
edit
Following @Graham's hint, here is what I got. The first request gets the contents of the template:
curl -k -v -XGET -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Authorization: Bearer ...." -H "Accept: application/json, */*" https://example.com/oapi/v1/namespaces/openshift/templates/build-template
Then apparently the oc client expands the parameters internally, and feeds the result into the POST:
curl -k -v -XPOST -H "Content-Type: application/json" -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Accept: application/json, */*" -H "Authorization: Bearer ...." https://example.com/oapi/v1/namespaces/openshift/processedtemplates
Run the oc command with the option --loglevel=10. This will show you what REST API calls it makes underneath and thus you can work out what you need to do to do the same thing with just the REST API. Do note that certain things may be partly done in the oc client, rather than delegating to a REST API endpoint call.
I did this, and at the very end of the output from the CLI, I saw this:
service "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
deploymentconfig "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
When piping to oc create -f -, then, the CLI must be inspecting the resulting template and creating each object in the objects array. No evidence of those calls was output to my command window, other than the two "created" statements.
So to fully automate this through the REST API, we would still need to parse that objects array returned by processtemplates and POST to the appropriate endpoints, correct?
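That last step could be sketched as follows, assuming the processedtemplates response was saved to processed.json and jq is available; the kind-to-path mapping here is a deliberate simplification, since core objects like Service live under /api/v1 while OpenShift resources like DeploymentConfig live under /oapi/v1 in 3.5:

```shell
# Iterate over the objects array of a processed template and POST each
# object to the target namespace.
NS=conspiracyspace
API=https://example.com

jq -c '.objects[]' processed.json | while read -r obj; do
  # Naive kind -> endpoint pluralization (Service -> services); a real
  # script needs a kind -> API-group map to choose /api/v1 vs /oapi/v1.
  plural=$(echo "$obj" | jq -r '.kind' | tr '[:upper:]' '[:lower:]')s
  echo "$obj" | curl -k -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ..." \
    --data @- "${API}/api/v1/namespaces/${NS}/${plural}"
done
```

So yes: short of a bulk-create endpoint, the client is responsible for fanning the objects array out to the individual resource endpoints, which is exactly what oc create -f - does internally.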