Accessing a K8s Service in an AWS private subnet not working

Scenario in AWS EKS:
Pods are running behind the service "web-ui-svc" in a public subnet.
Other pods are running behind the service "web-api-svc", which is in a private subnet.
I created an Ingress for "web-ui", say "web-ui-ing".
The Ingress "web-ui-ing" now routes traffic to "web-ui-svc".
Expected:
The "web-ui-svc" pods need to talk to "web-api-svc" on port 1000 for API calls.
I tried passing an env var like "web-api-svc:1000". It is not working, although I can curl it from inside the node.
Also, when I access the ingress URL of "web-ui-ing", the UI page comes up, but the connection to web-api-svc does not happen.
Note: both subnets are in the same VPC that the EKS cluster uses.
Do I need to configure a proxy to route traffic to the corresponding svc?
web-ui YAML (in AWS public subnet):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web-ui
  namespace: web
  name: web-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web-ui
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.network/net: "true"
        io.kompose.service: web-ui
    spec:
      containers:
        - env:
            - name: WEB_API_ENDPOINT
              value: "http://web-api:1000"
          image: ****.dkr.ecr.us-east-1.amazonaws.com/web/ui:latest
          name: web-ui
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
      nodeSelector:
        web_group: "web_group"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web-ui
  namespace: web
  name: web-ui
spec:
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
  type: NodePort
  selector:
    io.kompose.service: web-ui
status:
  loadBalancer: {}
web-api YAML (in AWS private subnet):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web-api
  namespace: web
  name: web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web-api
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.network/webui_net: "true"
        io.kompose.service: web-api
    spec:
      containers:
        - image: ****.dkr.ecr.us-east-1.amazonaws.com/test/webapi:latest
          name: web-api
          ports:
            - containerPort: 1000
          resources: {}
      restartPolicy: Always
      nodeSelector:
        web_api_group: "web_api_group"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web-api
  namespace: web
  name: web-api
spec:
  ports:
    - name: "1000"
      port: 1000
      targetPort: 1000
  selector:
    io.kompose.service: web-api
status:
  loadBalancer: {}
Do I need to create some kind of proxy here?
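For reference, a minimal in-cluster connectivity check (a sketch, assuming kubectl access and that the web-ui image ships a shell with curl; the DNS name follows from the Service name web-api in namespace web):

    # open a shell in a web-ui pod
    kubectl -n web exec -it deploy/web-ui -- sh
    # inside the pod, resolve and call the API service by its cluster DNS name
    nslookup web-api.web.svc.cluster.local
    curl -v http://web-api.web.svc.cluster.local:1000

If this resolves and connects, Service-to-Service routing is working; subnet placement generally does not matter for in-cluster traffic within the same VPC, and no proxy is needed.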

Related

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second one.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
    - targetPort: 8080
      port: 8080
      nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
    - targetPort: 8080
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
(You can connect to the other pod by its IP without a Service, but that is not recommended, since pod IPs change across pod updates.)
With the Service in place, you can connect to the product-app pods from composite-app using the hostname product-service.
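To verify the wiring, a quick check (a sketch; assumes the composite image has a shell with wget, as in busybox-based images):

    kubectl get svc product-service
    kubectl exec -it deploy/composite-deploy -- sh -c 'wget -qO- http://product-service:8080/product/123'

The short name product-service resolves from pods in the same namespace; from another namespace you would use product-service.default.svc.cluster.local.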

using prometheus pod to monitor a golang webapp pod

I have a Golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I set prometheus.io/port to 2112 in the service.yaml file, which is the port the Golang webapp listens on, but when I go to the Prometheus dashboard, it says the 2112 endpoint is down.
I'm following this guide and tried the solution from this thread, but the result still says the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
  selector:
    matchLabels:
      app: golang
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations; this thread gave me the clarification I needed. The annotations go under the metadata of the service that Prometheus should scrape, so in my case they belong in goapp's metadata:
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
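These annotations only take effect if the Prometheus configuration discovers service endpoints and filters targets on them. A rough sketch of the common community relabel_configs pattern for this convention (not tied to this exact setup; the prometheus-server-conf ConfigMap may differ):

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
        - role: endpoints
      relabel_configs:
        # keep only targets whose Service carries prometheus.io/scrape: "true"
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        # use prometheus.io/path as the metrics path (defaults to /metrics)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        # rewrite the scrape address to the port given in prometheus.io/port
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__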

Connect redis exporter and prometheus operator

I have a Redis cluster and a Redis exporter in two separate deployments in the same namespace of a Kubernetes cluster. I am using the Prometheus operator to monitor the cluster, but I cannot find a way to wire up the exporter and the operator. I have set up a Service targeting the Redis exporter (see below) and a ServiceMonitor (also below). If I port-forward to the Redis exporter service, I can see the metrics, and the Redis exporter logs show no issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: foo
  name: redis-exporter
  labels:
    app: redis-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-exporter
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis-exporter
    spec:
      containers:
        - name: redis-exporter
          image: oliver006/redis_exporter:latest
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: REDIS_ADDR
              value: redis-cluster.foo.svc:6379
          ports:
            - containerPort: 9121
My Service and ServiceMonitor:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: foo
  labels:
    app: redis
    k8s-app: redis-ext
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
    - name: redis-ext
      port: 9121
      protocol: TCP
      targetPort: 9121
  selector:
    app: redis-exporter
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  namespace: bi-infra
  labels:
    app: redis-external-exporter
    k8s-app: redis-monitor
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  namespaceSelector:
    matchNames:
      - foo
  endpoints:
    - port: redis-ext
      interval: 30s
      honorLabels: true
If I switch to a sidecar Redis exporter next to the Redis cluster, everything works properly. Has anyone faced this issue?
I was missing spec.endpoints.path on the ServiceMonitor.
Here is an example manifest from a tutorial on adding new scraping targets and troubleshooting:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-pili
  namespace: monitoring
  labels:
    app: pili-service-monitor
spec:
  selector:
    matchLabels:
      # Target app service
      app: pili
  endpoints:
    - interval: 15s
      path: /metrics   # <-- this field was missing
      port: uwsgi
  namespaceSelector:
    matchNames:
      - default
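Applied to the ServiceMonitor from the question, only the endpoints entry needs to change (a sketch; everything else stays as posted):

    endpoints:
      - port: redis-ext
        path: /metrics
        interval: 30s
        honorLabels: true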

Deploy same image two different namespaces same port

I have a single-node k8s cluster. I have two namespaces, call them n1 and n2. I want to deploy the same image, on the same port, but in different namespaces.
How do I do this?
namespace YAMLs:
apiVersion: v1
kind: Namespace
metadata:
  name: n1
and
apiVersion: v1
kind: Namespace
metadata:
  name: n2
service YAMLs:
apiVersion: v1
kind: Service
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: my-app-n1
and
apiVersion: v1
kind: Service
metadata:
  name: my-app-n2
  namespace: n2
  labels:
    app: my-app-n2
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: my-app-n2
deployment YAMLs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
        - name: waiter
          image: waiter:v1
          ports:
            - containerPort: 80
and
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n2
  labels:
    app: my-app-n2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n2
  template:
    metadata:
      labels:
        app: my-app-n2
    spec:
      containers:
        - name: waiter
          image: waiter:v1
          ports:
            - containerPort: 80
waiter:v1 corresponds to this repo: https://hub.docker.com/r/adamgardnerdt/waiter
Surely I can do this, since namespaces are supposed to represent different environments (e.g. nonprod vs. prod)? So surely I can deploy identically into two different "environments", a.k.a. namespaces?
For the Services you have specified namespaces, which is correct.
For the Deployments you should also specify namespaces, otherwise they will go to the default namespace.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
        - name: waiter
          image: waiter:v1
          ports:
            - containerPort: 80
"I want to deploy the same image, on the same port but in different namespaces."
You are already doing that with your configs, except that the Deployment objects should also refer to the correct namespaces (as mentioned in the answer from Ijaz Ahmad Khan). The apps are then available to other services in their namespaces under the DNS names my-app-n1 and my-app-n2 respectively.
Because waiter is a web server, I assume you would like to access both instances of it from the internet. Hence, you should:
change the type of both Services to ClusterIP,
add one Ingress object per namespace, each containing a host name, e.g. myapp.com and staging.myapp.com respectively (see the sketch below),
put a load balancer in front of your cluster: the load balancer will use the Ingress objects to know which hostname matches which service (your cloud provider should create the load balancer automatically).
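A minimal sketch of one such Ingress (networking.k8s.io/v1 schema; the host name is an assumption for illustration, and your controller may additionally require an ingressClassName):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-n1
      namespace: n1
    spec:
      rules:
        - host: myapp.com   # hypothetical host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app-n1
                    port:
                      number: 80

The n2 variant would be identical apart from the namespace, host name, and service name.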

Kubernetes API external endpoint fails on connection

I have created this Kubernetes file to deploy two APIs in a cluster on GCloud. I have tried two different values for "type" on the Service objects.
First I set the service type to NodePort and couldn't connect to it; after that I tried LoadBalancer, but even with the external IP and the endpoints I'm not able to access either API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxxxxxxxx
  labels:
    app: xxxxxxxx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxxxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxxx
    spec:
      containers:
        - name: xxxxx
          image: xxxxxxxxx
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: xxxxxxxxx
spec:
  selector:
    app: xxxxxxxx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yyyyyy
  labels:
    app: yyyyyy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yyyyyy
  template:
    metadata:
      labels:
        app: yyyyyy
    spec:
      containers:
        - name: yyyyyy
          image: yyyyyy
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: yyyyyy
spec:
  selector:
    app: yyyyyy
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
Could anyone help me on this issue?
Regards.
There are a lot of examples in the GKE documentation of deploying a Service (type: LoadBalancer) and having it redirect traffic to a Deployment.
Please follow these tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
Also, your question doesn't list any errors or events. Please take a look at the kubectl describe output for the Service. If the load balancer isn't coming up, there might be an error such as having run out of IP addresses in your GCP project.
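A quick way to surface those events (a sketch; <service-name> is a placeholder for your actual Service name):

    # EXTERNAL-IP stays <pending> while the load balancer is provisioning or failing
    kubectl get svc

    # the Events section at the bottom usually explains a failed provisioning
    kubectl describe svc <service-name>

    # confirm the Service actually has backing pods
    kubectl get endpoints <service-name>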