Spring Cloud gateway with Consul returns "Unable to find instance for mall-service-test"

Please help.
I am using Spring Cloud Consul 1.5.3 and Spring Boot 2.1.6. The default host and port that Spring Cloud Consul uses to connect to the Consul agent is localhost:8500, and that works fine.
However, when I change the Consul host from 'localhost' to the remote IP '192.168.1.89' and restart the gateway service, the remote Consul shows that the service check fails.
The check output is:
Get http://PCOS-2019NFFBBI:8503/actuator/health:
dial tcp: lookup PCOS-2019NFFBBI on 192.168.1.1:53: no such host
And when I visit the URL http://localhost:8010/mall-service-test/ to route to my service mall-service-test, the gateway shows the error:
org.springframework.cloud.gateway.support.NotFoundException:
Unable to find instance for mall-service-test
If I use a Zuul gateway instead of Spring Cloud Gateway, it returns the same error.
Here is the Spring Cloud Gateway application.yml:
server:
  port: 8010
spring:
  application:
    name: angelcloud-gateway
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: mall-service-test
          uri: lb://mall-service-test
          predicates:
            - Path= /mall-service-test/**
          filters:
            - StripPrefix= 1
    consul:
      host: http://192.168.1.89
      port: 8500
      healthCheckInterval: 15s

The Consul connection values need to go in bootstrap.yml.
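A minimal sketch of what that bootstrap.yml could look like, reusing the values from the question; the prefer-ip-address line is an extra assumption of mine (not part of the answer above), aimed at the "no such host" failure Consul reports for the machine hostname:
# bootstrap.yml - loaded before application.yml, so Consul is reachable during bootstrap
spring:
  application:
    name: angelcloud-gateway
  cloud:
    consul:
      host: 192.168.1.89   # host name or IP only; the http:// scheme is usually omitted
      port: 8500
      discovery:
        # assumption, not from the original answer: register the pod/host IP instead of
        # the machine hostname (PCOS-2019NFFBBI), which the Consul server cannot resolve
        prefer-ip-address: true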

Related

traefik v2 redirect TCP requests

We have the following situation.
We are using Kerberos on a Tomcat running in a Docker swarm; as the reverse proxy we use Traefik, also hosted in the Docker swarm.
Kerberos needs the SSL request passed through via a TCP router. This works, no problem. But now we also want to redirect requests on port 80 to the TCP router on port 443.
We have tried a lot, but with no success at all :-(
Here is the Traefik configuration:
labels:
  traefik.tcp.routers.example.entrypoints: https
  traefik.tcp.services.example.loadbalancer.server.port: '8443'
  traefik.tcp.routers.example.service: example
  traefik.tcp.routers.example.tls.passthrough: 'true'
  traefik.constraint-label: traefik-public
  traefik.tcp.routers.example.rule: HostSNI(`example.com`)
  traefik.docker.network: traefik-public
  traefik.enable: 'true'
  traefik.tcp.routers.example.tls: 'true'
regards
Meex
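One common Traefik v2 approach, sketched here as an assumption (the port-80 entrypoint name "http" and the router/middleware names are mine, not from the post), is a catch-all HTTP router plus a redirectscheme middleware:
labels:
  # catch-all router on the port-80 entrypoint (entrypoint name "http" is assumed)
  traefik.http.routers.http-catchall.rule: HostRegexp(`{host:.+}`)
  traefik.http.routers.http-catchall.entrypoints: http
  traefik.http.routers.http-catchall.middlewares: redirect-to-https
  # middleware that answers with an HTTP redirect to https://<same host>
  traefik.http.middlewares.redirect-to-https.redirectscheme.scheme: https
  traefik.http.middlewares.redirect-to-https.redirectscheme.permanent: 'true'
The redirect happens at the HTTP level on port 80, so the TLS-passthrough TCP router on 443 stays untouched; the client simply retries over HTTPS.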

Kubernetes (Istio) MongoDB enterprise cluster: HostUnreachable: Connection reset by peer

I have Istio 1.6 running in my k8s cluster. In the cluster I have also deployed a sharded MongoDB cluster with istio-injection disabled.
My app lives in a different namespace with istio-injection enabled, and if I try to connect to Mongo from the app pod I get this connection-reset-by-peer error:
root@mongo:/# mongo "mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?ssl=false"
MongoDB shell version v4.2.8
connecting to: mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?compressors=disabled&gssapiServiceName=mongodb&ssl=false
2020-06-18T19:59:14.342+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer
2020-06-18T19:59:14.358+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer
2020-06-18T19:59:14.358+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host 'mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017' :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-06-18T19:59:14.362+0000 F - [main] exception: connect failed
2020-06-18T19:59:14.362+0000 E - [main] exiting with code 1
But if I disable istio-injection for my app (pod), then I can successfully connect and use Mongo as expected.
Is there a workaround for this? I would like to have the istio-proxy injected into my app/pod and still use MongoDB.
Injecting databases with Istio is complicated.
I would start by checking your mTLS: if it's STRICT, I would change it to PERMISSIVE and check whether that works. It's well described here.
You see requests still succeed, except for those from the client that doesn’t have proxy, sleep.legacy, to the server with a proxy, httpbin.foo or httpbin.bar. This is expected because mutual TLS is now strictly required, but the workload without sidecar cannot comply.
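For reference, a minimal sketch of switching mTLS to PERMISSIVE with a PeerAuthentication resource; applying it mesh-wide in istio-system is an assumption on my part, so scope it to a namespace if that fits your setup better:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when applied in the Istio root namespace
spec:
  mtls:
    mode: PERMISSIVE        # accept both plaintext and mTLS traffic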
Is there a workaround for this? I would like to have the istio-proxy injected into my app/pod and still use MongoDB.
If changing mTLS doesn't work, then in Istio you can set up the database without injection and add it to the Istio registry using a ServiceEntry object, so that it can communicate with the rest of the Istio services.
To add your MongoDB database to Istio you can use a ServiceEntry.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
Example of ServiceEntry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-mongocluster
spec:
  hosts:
  - mymongodb.somedomain # not used
  addresses:
  - 192.192.192.192/24 # VIPs
  ports:
  - number: 27018
    name: mongodb
    protocol: MONGO
  location: MESH_INTERNAL
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
  - address: 3.3.3.3
If you have mTLS enabled you will also need a DestinationRule that defines how to communicate with the external service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mtls-mongocluster
spec:
  host: mymongodb.somedomain
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
Additionally, take a look at this documentation:
https://istiobyexample.dev/databases/
https://istio.io/latest/blog/2018/egress-mongo/

Kubernetes Cloud SQL sidecar connection timed out. How to check if credentials work?

I'm trying to set up the Cloud SQL Proxy Docker image for PostgreSQL as mentioned here.
I can get my app to connect to the proxy container, but the proxy times out. I suspect it's my credentials or the port, so how do I debug this and find out whether they work?
This is what I have on my project
kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=my-account-credentials.json
My deploy spec snippet:
spec:
  containers:
  - name: mara ...
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: ["/cloud_sql_proxy",
              "-instances=<MY INSTANCE NAME>=tcp:5432",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
The logs of my cloudsql-proxy show a timeout:
2019/05/13 15:08:25 using credential file for authentication; email=646092572393-compute@developer.gserviceaccount.com
2019/05/13 15:08:25 Listening on 127.0.0.1:5432 for <MY INSTANCE NAME>
2019/05/13 15:08:25 Ready for new connections
2019/05/13 15:10:48 New connection for "<MY INSTANCE NAME>"
2019/05/13 15:10:58 couldn't connect to <MY INSTANCE NAME>: dial tcp <MY PRIVATE IP>:3307: getsockopt: connection timed out
Questions:
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307. Is that normal and if not, how do I specify 5432?
How do I check if it is a problem with my credentials? My credentials file is from my service account 123-compute@developer.gserviceaccount.com,
and the service account shown when I go to my Cloud SQL console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem to be the same? Does that make a difference?
If I make the Cloud SQL instance available on a public IP, it works.
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307
The proxy listens locally on the port you specified (in this case 5432), and connects to your Cloud SQL instance via port 3307. This is expected and normal.
How do I check if it is a problem with my credentials?
The proxy returns an authorization error if the Cloud SQL instance doesn't exist, or if the service account doesn't have access. The connection timeout error means it failed to reach the Cloud SQL instance.
My credentials file is from my service account 123-compute@developer.gserviceaccount.com and the service account shown when I go to my Cloud SQL console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem to be the same?
One is just the name of the file, the other is the name of the service account itself. The name of the file doesn't have to match the name of the service account. You can check the name and IAM roles of a service account on the Service Account page.
2019/05/13 15:10:58 couldn't connect to <MY INSTANCE NAME>: dial tcp <MY PRIVATE IP>:3307: getsockopt: connection timed out
This error means that the proxy failed to establish a network connection to the instance (usually because a path from the current location doesn't exist). There are two common causes for this:
First, make sure there isn't a firewall or something blocking outbound connections on port 3307.
Second, since you are using Private IP, you need to make sure the resource you are running the proxy on meets the networking requirements.
The proxy connects out on port 3307. This is mentioned in the documentation:
Port 3307 is used by the Cloud SQL Auth proxy to connect to the Cloud SQL Auth proxy server. -- https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#troubleshooting
You may need to create a firewall rule like the following:
Direction: Egress
Action on match: Allow
Destination filters: IP ranges 0.0.0.0/0
Protocols and ports: tcp:3307 & tcp:5432

OpenShift route - Unable to connect to remote host: No route to host

I have deployed a gRPC service running on OpenShift Origin. It is backed by an OpenShift service, and the service is exposed with an OpenShift route. I am trying to make this pod available via a service and route that map the container port (50051) to the outside world on port 8080.
The image that the service is trying to expose has, in its Dockerfile:
EXPOSE 50051
The route has the following:
Service Port: 8080/TCP
Target Port: 50051
In the DeploymentConfig I specify the port with:
ports:
- containerPort: 50051
  protocol: TCP
However, when I try to access the application via the route and port, I get (from Java)
java.net.NoRouteToHostException: No route to host
And when I try to telnet the service IP:
telnet 172.30.197.247 8080
I am able to connect.
However, when I try to connect via the route, it doesn't work:
telnet my.route.com 8080
Trying ...
telnet: connect to address : Connection refused
When I use:
curl -kv my-svc.myproject.svc.cluster.local:8080
I can connect.
So it seems the service is working but the route is not.
I have been going through the troubleshooting guide on https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router
The router setup in OpenShift focuses on HTTP/HTTPS (SNI)/TLS (SNI). However, it appears that you can use an externalIP to expose non-web application ports from the cluster. Because gRPC is an over-the-wire protocol, you might need to go down this path.
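A minimal sketch of what such a Service could look like; the service name, the selector label, and the externalIP value are assumptions for illustration, not from the answer:
apiVersion: v1
kind: Service
metadata:
  name: grpc-external          # assumed name
spec:
  selector:
    app: my-grpc-app           # assumed pod label
  ports:
  - port: 8080                 # port exposed on the external IP
    targetPort: 50051          # gRPC container port from the question
    protocol: TCP
  externalIPs:
  - 192.168.10.50              # assumed cluster-reachable IP
A NodePort or LoadBalancer service is the same idea; the point is to bypass the HTTP-aware router for raw TCP/gRPC traffic.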
There are multiple things to check:
Does your route point to your service? Here is an example:
apiVersion: v1
kind: Route
spec:
  host: my.route.com
  to:
    kind: Service
    name: yourservice
    weight: 100
If that's not the case, the route and the service are not connected.
You can also check the router configuration. Connect to your router with oc rsh and check whether you find your route name in /var/lib/haproxy/conf/haproxy.config (the backend name format should be backend be_http_NAMESPACE_ROUTENAME). The server part below the backend part should contain the IP of your pod (you can obtain your pod IP with the oc get pods -o wide command).
If it's not there, the route is not registered in the router config. You can try to restart the router and recheck the haproxy.config file.
Can you connect to the pod IP from the router container?

Eureka Client Registration under some rules

I have a Eureka registry server and 3 other microservices. I am able to register any of the microservices when I set the Eureka default zone via the eureka.client.serviceUrl.defaultZone property. But I wanted to know whether there is another way, or some rules, to avoid a microservice auto-registering with its registry server.
For the main Eureka server I have:
# Configure this Discovery Server
eureka:
  instance:
    hostname: localhost
  client: # Not a client, don't register with yourself
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: true
server:
  port: 1111 # HTTP (Tomcat) port
And in the other client(s) I have properties like:
# Spring properties
spring.application.name = #nme#
# Discovery Server Access
eureka.client.serviceUrl.defaultZone= http://localhost:1111/eureka/
eureka.instance.leaseRenewalIntervalInSeconds= 20
#eureka.instance.leaseExpirationDurationInSeconds= 2
Here we have clearly stated the serviceUrl in the properties.
Thanks
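For reference, a minimal sketch (not from this thread) of the standard Spring Cloud Netflix client switch that stops an individual service from registering itself, shown here in YAML form; note it is a per-client setting rather than a server-side rule:
eureka:
  client:
    # do not register this service with the Eureka server
    register-with-eureka: false
    # optionally also skip fetching the registry from the server
    fetch-registry: false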