OpenShift routes are not redirecting to the correct service

I have deployed a few services of my application on OpenShift, e.g. app-ui, app-backend, app-store, and main.
I have defined separate routes for these services to access externally.
app-ui -- ui.test-dev.***.net
app-backend -- backend.test-dev.***.net
app-store -- store.test-dev.***.net
main -- test-dev.***.net
I have defined these hostnames using the host property in each route's OpenShift YAML file.
The issue is that when I try to access app-ui, I get the response from app-backend; similarly, when I check app-store or the main app, I get a random response from any of these services. I'm not sure why traffic is not routed to the correct service based on the route. Can anyone please help me with this?

Validate the following in the route:
Path
Service (it should be the service of the app you are directing traffic to)
Target port (it should be the port of the app you want to expose)
How to check the above?
oc describe route <route-name>
In the output, check all of the above to troubleshoot; a sample Route is sketched below for comparison.
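For comparison, here is a rough sketch of what a Route bound to a single service could look like (the service name and target port are placeholders based on the question, not your actual values):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-ui
spec:
  host: ui.test-dev.***.net
  to:
    kind: Service
    name: app-ui        # the service of the app this route should reach
  port:
    targetPort: 8080    # the port (name or number) that app-ui actually exposes
If each route points to its own service and target port like this, traffic should no longer land on the wrong application.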
Good luck

I think you have to validate both the Route and the Service, to check whether each is pointing to the right target (the Service via its pod selector).
For this you can run the following commands:
oc get routes <route-name>
oc describe routes <route-name>
oc describe service <service-name>
Example (these are the resource names I used in my OpenShift environment, so don't let them confuse you):
Step 1:
oc get routes <route-name>
Step 2:
oc describe routes dev-route
Here you can validate the Service and Endpoints values.
Step 3:
oc describe service farnodes-web-svc
Here you can validate the Selector, Type, Port, and TargetPort values, as in the sketch below.
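As a rough sketch only (the selector label and port numbers are assumptions for illustration, not taken from your cluster), the relevant parts of such a Service look like this:
apiVersion: v1
kind: Service
metadata:
  name: farnodes-web-svc
spec:
  type: ClusterIP
  selector:
    app: farnodes-web    # must match the labels on the pods you want to reach
  ports:
    - port: 8080         # port the service exposes inside the cluster
      targetPort: 8080   # port the container actually listens on
If the selector does not match any pods, the Endpoints value in step 2 will be empty and the route cannot reach the right application.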
Follow all the steps above and check the output to troubleshoot.
Post back over here.
Good luck!!!

Related

Assign Domain Name For Same IP, Different Nodeports Minikube

I'm trying to set up a local environment for microservices using Minikube. My cluster consists of 4 pods, and the Minikube IP is the same for all 4 of them. However, each service runs on a unique NodePort.
E.g. 172.42.159.12:12345 and 172.42.159.12:23456
Ingress generates them as
http://172.42.159.12:12345
http://172.42.159.12:23456
http://172.42.159.12:34567
http://172.42.159.12:45678
They all work fine when accessed via the IP, and they work fine when using a LoadBalancer in a deployed cloud environment.
But I want this to work on my Minikube, and I can't use /etc/hosts to assign domain names to each service because it does not accept the NodePorts being passed in.
Any help on this is really appreciated.
So I found a solution for this.
The only way to do it is with a third-party app called Fiddler.
How to:
Download and run Fiddler
Open Fiddler => Rules => Customize Rules
Scroll down to find static function OnBeforeRequest(oSession: Session)
Add the following inside it:
if (oSession.HostnameIs("your-domain.com")) {
    oSession.bypassGateway = true;
    oSession["x-overrideHost"] = "minikube_ip:your_port";
}
Save the file

Publicly exposing a WCF restful service via http from Service Fabric

I am trying to expose a WCF-based RESTful service via HTTP and am so far unsuccessful. I'm trying on my local machine first to prove that it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine-name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on I see an endpoints entry but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see even though I have an endpoint set up? I did not expect the GUIDs and other numbers in the path. This makes me suspect that SF is not treating my service as publicly accessible and has instead perhaps only set it up for internal access within the app. If I dig down into my service manifest I see this, as expected:
<Resources>
  <Endpoints>
    <Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
  </Endpoints>
</Resources>
But how do I know whether the service itself is mapped to it? When I use the crazy long URL above and try a simple method of my service, I get an HTTP 202 response and no response data, as expected. If I then change the method name to one that doesn't exist, I get the same thing, not the expected HTTP 404. I've tried using both my machine name and localhost, with the same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener((context) =>
        new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            endpointResourceName: "ResolverEndpoint"
        )
    )};
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
The essential changes to your code are using the public name of your cluster as the endpoint URL and adding a WebHttpBehavior to that endpoint.
The endpoint resource specified in the service manifest is shared by all replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that more than one replica from different partitions ends up in the same process. To differentiate messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a single-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name, the node name or IP address from the NodeContext, and then provide the path you want the listener to open at.
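As a rough sketch only (untested, and assuming a WcfCommunicationListener overload that accepts an explicit EndpointAddress; the /resolver path is illustrative, not taken from your project), that approach could look like this:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener(context =>
    {
        // Port comes from the "ResolverEndpoint" resource in the service manifest.
        var endpointResource = context.CodePackageActivationContext.GetEndpoint("ResolverEndpoint");
        int port = endpointResource.Port;

        // Host name (or IP) of the node this instance is placed on.
        string host = context.NodeContext.IPAddressOrFQDN;

        // Illustrative fixed path instead of the partition/replica/GUID path.
        var address = new EndpointAddress(string.Format("http://{0}:{1}/resolver", host, port));

        return new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            address: address);
    })};
}
As mentioned above, you will still need a WebHttpBehavior on the endpoint for RESTful dispatch, and a fixed address like this only works while a single instance is listening on the node.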
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
Please note that with this approach you can have only one application, one service, and one partition on a node, and when you are testing locally, keep the instance count of that service at 1. When deployed to the actual cluster you can use an instance count of -1.

Exception while running the batch using File System Adapter

I am getting the error "There is no service with namespace = 'http://schemas.microsoft.com/dynamics/2008/01/services' and external name = 'ItemService'" while running the batch using the File System adapter. I went to different forums, and as per the suggestions I made sure that the namespace name is correct on the Services node in the AOT. I cannot understand why the system is unable to find the service with the given namespace. Any suggestions?
One cause of this problem can be an extra slash at the end of the namespace on the service node: for example, 'http://schemas.microsoft.com/dynamics/2008/01/services/' will not match 'http://schemas.microsoft.com/dynamics/2008/01/services'.
Remove it and it should work, as I describe here: AIF: There is no service with namespace = ‘http://yournamespace’ and external name = ‘aService’.
You might also want to do a refresh in the Services form and redeploy your services after that.

Cannot start Windows Azure VM programmatically

I'm performing the REST API operation Start Role (http://msdn.microsoft.com/en-us/library/jj157189.aspx).
In the URL https://management.core.windows.net/{subscription-id}/services/hostedservices/{service-name}/deployments/{deployment-name}/roles/{role-name}/Operations we have replaced {service-name}, {deployment-name}, and {role-name} with the name of the VM.
As a result we get the following message:
"ResourceNotFound: The resource service name hostedservices is not supported."
The List Hosted Services operation (http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx) shows that we have 2 VMs as hosted services.
The Get Role operation (http://msdn.microsoft.com/en-us/library/jj157193.aspx) also gives info about each of the VMs.
Thanks in advance.
You are using:
{subscription-id}/services/hostedservices/{service-name}/deployments/{deployment-name}/roles/{role-name}/Operations
But the correct Uri is:
{subscriptionID}/services/hostedservices/{serviceName}/deployments/{deploymentName}/roleInstances/{roleInstanceName}/Operations
See the difference? The operation is addressed to roleInstances/{roleInstanceName}, not roles/{role-name}.
I haven't worked with this particular operation, but a few things:
service-name: it should be the name of the hosted service (the one with .cloudapp.net), which is what you see when you list your hosted services.
deployment-name: generally speaking, it's a GUID returned by the Get Deployment operation (http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).
role-name: the role name is also returned when you do a Get Deployment operation. You should use that. I'm not sure if it is the same as the name of your VM.
Can you retry your operation after changing these values?
In my case, the deployment name is the name of the first VM I created in this cloud service. So if I added 3 machines to the same cloud service, all of them would have the same deployment name: the name of the first machine.

Trouble adding a new service

I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service to create my new service. (That document is somewhat out of date in that "excluded components" no longer exists.)
In any case, my service shows up as running with a gateway and a node when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client, my service is not in the list. Where is this list maintained, and why is my service not on it?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is that maintained, and why are those other services on the list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was there once, but it was removed and the deployment repeated). What is prompting this request for a service that was not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports that suggest that the problem is with domain name mapping. I have verified that the server can find itself:
$ curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap_dev google group.
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS subjects for Cloud Foundry!
If you follow the document correctly, things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed and can be a point of confusion when following the steps mentioned in the article (just ignore that step entirely).
ERROR -- Failed registering with cloud controller, status=400
Well, this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for the node and gateway into ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and do they match between the two files ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and ./cloudfoundry/.deployments/devbox/config/**_gateway.yml**?
I would recommend that you first concentrate on getting the echo service listed in the vmc services output. Once that is done, replicate the steps (taking care to modify things like the token) to get your custom service working.
Cheers,
Ankit
You should follow this guide.
It worked for me.
Regards.