unable to access a service from another pod, using xmlhttprequest object - kubernetes

So, I wrote an API that listens on the path /api/v1/books and is deployed as a Deployment on my k8s cluster, and I created a Service (restapi-service) so that it can be called from other pods.
Now I created another Deployment (restapi-ui-deployment) that just has an .html page and is deployed on nginx, which eventually calls the Service created earlier to get the response.
Now, the issue is that when I exec into the pods of restapi-ui-deployment I can successfully curl http://restapi-service:8081/api/v1/books. But if I try to do the same thing from the deployed .html page, I get
GET http://restapi-service:8081/api/v1/books net::ERR_NAME_NOT_RESOLVED
Below is the code that is being deployed as restapi-ui-deployment
var xmlObj = new XMLHttpRequest();

if (xmlObj != null) {
    xmlObj.open("GET", "http://restapi-service:8081/api/v1/books", true);
    xmlObj.onreadystatechange = processResponse;
    xmlObj.send(null);
}
else {
    console.log("There was an error getting the object.");
}

function processResponse() {
    // Only inspect the status once the request has completed.
    if (xmlObj.readyState == 4 && xmlObj.status == 200) {
        console.log("Got the response successfully");
        var response = xmlObj.responseText;
    }
    else if (xmlObj.readyState == 4) {
        console.log("There was an issue getting the response.");
    }
}

I am afraid that you are confused about the way your application works. The XMLHttpRequest originates in a web browser, and therefore outside of the Kubernetes cluster, not from the nginx inside your cluster (nginx only serves the HTML page).
Kubernetes DNS is not available outside of the cluster, nor would a connection to a ClusterIP work from the outside.
Solution: create an appropriate Ingress and call that from your frontend, or provide a proxy on the nginx that delivers the frontend (that way the request origin really is your nginx).
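For example, with either approach in place the page can request a relative path on its own origin instead of the cluster-internal name. A minimal sketch, assuming the Ingress or nginx proxy maps /api/v1 on the frontend's host to restapi-service on port 8081:

var xmlObj = new XMLHttpRequest();
// Same-origin request; the Ingress/proxy forwards it to restapi-service:8081,
// so the browser never has to resolve the cluster-internal DNS name.
xmlObj.open("GET", "/api/v1/books", true);
xmlObj.onreadystatechange = processResponse;
xmlObj.send(null);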

Related

Assign Domain Name For Same IP, Different Nodeports Minikube

I'm trying to set up a local environment for microservices using minikube. My cluster consists of 4 pods, and the minikube IP for all 4 of them is the same. However, each service runs on a unique NodePort.
E.g. 172.42.159.12:12345 & 172.42.159.12:23456
Ingress generates them as
http://172.42.159.12:12345
http://172.42.159.12:23456
http://172.42.159.12:34567
http://172.42.159.12:45678
They all work fine when using the IP to access them, and they work fine when using a LoadBalancer when deploying to a cloud environment.
But I want this to work on my minikube, and I can't use /etc/hosts to assign domain names to each service because it does not accept the NodePorts being passed in.
Any help on this is really appreciated.
So I found a solution for this.
The only way I found to do it is with a third-party app called Fiddler.
How To:
Download and run Fiddler
Open Fiddler => Rules => Customize Rules
Scroll down to find static function OnBeforeRequest(oSession: Session)
Pass in the following:
if (oSession.HostnameIs("your-domain.com")) {
    // Skip the gateway and send matching requests straight to the minikube IP and NodePort.
    oSession.bypassGateway = true;
    oSession["x-overrideHost"] = "minikube_ip:your_port";
}
Save File

Run kubernetes build from terraform

I'm trying a simple test: building a simple nginx on Kubernetes from Terraform.
This is my first time working with Terraform.
This is the basic Terraform file:
provider "kubernetes" {
host = "https://xxx.xxx.xxx.xxx:8443"
client_certificate = "${file("~/.kube/master.server.crt")}"
client_key = "${file("~/.kube/master.server.key")}"
cluster_ca_certificate = "${file("~/.kube/ca.crt")}"
username = "xxxxxx"
password = "xxxxxx"
}
resource "kubernetes_service" "nginx" {
metadata {
name = "nginx-example"
}
spec {
selector {
App = "${kubernetes_pod.nginx.metadata.0.labels.App}"
}
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "nginx" {
metadata {
name = "nginx-example"
labels {
App = "nginx"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
port {
container_port = 80
}
}
}
}
I'm getting the following error after running terraform apply.
Error: Error applying plan:
1 error(s) occurred:
kubernetes_pod.nginx: 1 error(s) occurred:
kubernetes_pod.nginx: the server has asked for the client to provide credentials (post pods)
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
I have admin permissions on kubernetes and everything is working correctly.
But for some reason I'm getting that error.
What am I doing wrong?
Thanks
Regarding @matthew-l-daniel's question:
When I'm only using the username/password I get this error:
Error: Error applying plan:
1 error(s) occurred:
kubernetes_pod.nginx: 1 error(s) occurred:
kubernetes_pod.nginx: Post https://xxx.xxx.xxx.xxx:8443/api/v1/namespaces/default/pods:
x509: certificate signed by unknown authority
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
I tried using the server name or the server IP and got the same error every time.
When using the certs I got the error from the original post, regarding the "credentials".
I forgot to mention that this is an OpenShift installation. I don't believe it will have any impact in the end, but I thought I should mention it.
The solution was rather simple: I was using the master crt and key from OpenShift in Terraform.
Then I tested it using the admin crt and key from OpenShift and it worked, as sketched below.
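In the provider block, that change would look roughly like the following sketch, assuming certificate-only auth; the admin.crt and admin.key file names are illustrative, point them at wherever the OpenShift admin client certificate and key actually live:

provider "kubernetes" {
  host = "https://xxx.xxx.xxx.xxx:8443"

  # Admin client certificate and key exported from OpenShift
  # (the file paths here are illustrative).
  client_certificate     = "${file("~/.kube/admin.crt")}"
  client_key             = "${file("~/.kube/admin.key")}"
  cluster_ca_certificate = "${file("~/.kube/ca.crt")}"
}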
Aside from the official Kubernetes provider documentation suggesting that only certificate or basic (user/pass) auth should be required, this sounds like an OpenShift issue. Have you been able to obtain any logs from the OpenShift cluster?
Some searching links the message you are seeing to some instability bugs within Kubernetes wherein the kubelet does not properly register after a reboot. I would manually confirm the node shows as Ready in OpenShift before you attempt a deployment, as until this occurs Terraform will not be able to interact with it.
If in fact the node is not Ready, Terraform is just surfacing the underlying error passed back from OpenShift.
Separately, the error you are seeing when trying to authenticate using purely certificate parameters is indicative of a misconfiguration. A similar question was raised on the Kubernetes GitHub, and the suggestion there was to investigate the Certificate Authority loaded onto the cluster.

DCOS Marathon-LB returns 503

I deployed application 1 on service port 10101. It's an external-facing app with the label HAPROXY_0_VHOST=vhost1.xxx.xxx, and it works with no problems.
Then I deployed a similar application 2 on service port 10102, with HAPROXY_1_VHOST=vhost2.xxx.xxx. I read Marathon-LB's documentation, and this is my understanding of how to deploy 2 apps on different vhosts. However, curl http://vhost2.xxx.xxx returns HTTP/1.0 503 Service Unavailable.
I confirmed that application 2 is running normally by checking the result from curl marathon-lb.marathon.mesos:10102 on DCOS master node.
Did I configure the vhost incorrectly? Or is something else wrong?
Figured this out: the app for vhost2 should be labeled HAPROXY_0_VHOST=vhost2.xxx.xxx instead of HAPROXY_1_VHOST=vhost2.xxx.xxx; the index refers to the app's own port index, not to which app it is. The documentation is not clear here.
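For reference, a rough sketch of the relevant part of the second app's Marathon definition with the corrected label; the id and the HAPROXY_GROUP value here are illustrative assumptions:

{
  "id": "/app-2",
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "vhost2.xxx.xxx"
  }
}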

Publicly exposing a WCF restful service via http from Service Fabric

I am trying to expose a WCF-based restful service via http and am so far unsuccessful. I'm trying on my local machine first to prove it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on, I see an endpoints entry, but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see even though I have an endpoint set up? I did not expect the GUID and other numbers in the path. This makes me suspect that SF is not seeing my service as publicly accessible and is instead maybe only set up for internal access within the app. If I dig down into my service manifest I see this, as expected:
<Resources>
  <Endpoints>
    <Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
  </Endpoints>
</Resources>
But how do I know if the service itself is mapped to it? When I use the crazy long URL above and try a simple method of my service I get an HTTP 202 response and no response data, as expected. If I then change the method name to one that doesn't exist I get the same thing, not the expected HTTP 404. I've tried using both my machine name and localhost. Same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener((context) =>
        new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            endpointResourceName: "ResolverEndpoint"
        )
    )};
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
Essential changes to your code are the use of the public name of your cluster as the endpoint URL and an additional WebHttpBehavior on that endpoint.
The endpoint resource specified in the service manifest is shared by all of the replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that replicas from different partitions end up in the same process. In order to differentiate the messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a singleton-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name, the node name or IP address from the NodeContext, and then provide the path you want the listener to open at (see the sketch after the code below).
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
Please note that you can then have only one application, one service and one partition on a node, and when you are testing locally, keep the instance count of that service at 1. When deployed in the actual cluster you can use an instance count of -1.
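To illustrate the singleton-partition approach described above, here is a rough sketch for a stateless service; it reuses the "ResolverEndpoint" resource and types from the question, and the "/resolver" path segment is an arbitrary, illustrative choice:

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener(context =>
    {
        // Port comes from the "ResolverEndpoint" resource in the service manifest,
        // the host name or IP from the node this instance is placed on.
        var endpoint = context.CodePackageActivationContext.GetEndpoint("ResolverEndpoint");
        var host = context.NodeContext.IPAddressOrFQDN;

        // Fixed, predictable address instead of the generated partition/replica path.
        var listenAddress = new EndpointAddress($"http://{host}:{endpoint.Port}/resolver");

        return new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            address: listenAddress);
    })};
}

With a fixed path like this, the service becomes reachable at a predictable URL such as http://<machine-name>:80/resolver.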

Kubernetes - nginx angularjs POD to access nodejs REST API url (another POD) within the cluster dynamically

Using a Kubernetes (Google Container Engine) setup, within the same Google Cloud cluster, I have a frontend service (nginx + AngularJS) and a REST API service (NodeJS API). I don't want to expose the NodeJS API service on a public domain, so its ServiceType is set to ClusterIP only. How do we get NODE_API_SERVICE_HOST and NODE_API_SERVICE_PORT ({SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT) inside the AngularJS program?
(function() {
    'use strict';
    angular
        .module('mymodule', [])
        .config(["RestangularProvider", function(RestangularProvider) {
            var apiDomainHost = process.env.NODE_API_SERVICE_HOST;
            var apiDomainPort = process.env.NODE_API_SERVICE_PORT;
            RestangularProvider.setBaseUrl('https://' + apiDomainHost + ':' + apiDomainPort + '/node/api/v1');
        }]);
})();
But this will not work (ReferenceError: process is not defined), as this AngularJS code is executed on the client side.
My Dockerfile simply inherits the nginx stop/start controls.
FROM nginx
COPY public/angular-folder /usr/share/nginx/html/projectSubDomainName
Page 43 of 62 in http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes explains that we can invoke the command sh:
"containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
Is this sustainable across pod restarts? If so, how do I get this variable in AngularJS?
But this will not work (ReferenceError: process is not defined), as this AngularJS code is executed on the client side.
If the client is outside the cluster, the only way it will be able to access the NodeJS API is if you expose it to the client's network, which is probably the public internet. If you're concerned about the security implications of that, there are a number of different ways to authenticate the service, such as using nginx auth_basic.
"containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
Is this sustainable across pod restarts? If so, how do I get this variable in AngularJS?
Yes, the service IP & port are stable, even across pod restarts. As for how to communicate the NODE_API_SERVICE_{HOST,PORT} variables to the client, you will need to inject them from a process running server side (within your cluster) into the response (e.g. directly into the JS code, or as a JSON response), as sketched below.
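One way to apply that suggestion is sketched below, assuming some server-side step (a template, an entrypoint script, or a small config endpoint inside the cluster) writes the resolved values into a config.js that the page loads before the Angular code; the host and port values shown are placeholders:

// config.js, generated server side inside the cluster from
// NODE_API_SERVICE_HOST / NODE_API_SERVICE_PORT (the values below are placeholders):
window.appConfig = {
    apiHost: "10.3.240.17",
    apiPort: "8080"
};

// The Angular config then reads the injected values instead of process.env:
angular
    .module('mymodule')
    .config(["RestangularProvider", function(RestangularProvider) {
        RestangularProvider.setBaseUrl(
            'https://' + window.appConfig.apiHost + ':' + window.appConfig.apiPort + '/node/api/v1');
    }]);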