We are creating a deployment in which the command needs the IP of a pre-existing service that points to a StatefulSet. Below is the manifest file for the deployment. Currently we are manually entering the service's external IP in this deployment manifest; we would now like it to auto-populate at runtime. Is there a way to achieve this dynamically, using environment variables or another mechanism?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-api
spec:
  selector:
    matchLabels:
      app: app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
      - name: app-api
        image: asia-south2-docker.pkg.dev/rnd20/app-api/api:09
        command: ["java","-jar","-Dallow.only.apigateway.request=false","-Dserver.port=8084","-Ddedupe.searcher.url=http://10.10.0.6:80","-Dspring.cloud.zookeeper.connect-string=10.10.0.6:2181","-Dlogging$.file.path=/usr/src/app/logs/springboot","/usr/src/app/app_api/dedupe-engine-components.jar",">","/usr/src/app/out.log"]
        livenessProbe:
          httpGet:
            path: /health
            port: 8084
            httpHeaders:
            - name: Custom-Header
              value: ""
          initialDelaySeconds: 60
          periodSeconds: 60
        ports:
        - containerPort: 4016
        resources:
          limits:
            cpu: 1
            memory: "2Gi"
          requests:
            cpu: 1
            memory: "2Gi"
NOTE: The IP in question here is the internal load balancer IP, i.e. the external IP of the Service, and the Service lives in a different namespace. Below is the manifest for that Service:
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - name: container
    port: 80
    targetPort: 8080
    protocol: TCP
You could use the following command instead:
command:
- /bin/bash
- -c
- |-
  set -exuo pipefail
  ip=$(dig +search +short servicename.namespacename)
  exec java -jar -Dallow.only.apigateway.request=false -Dserver.port=8084 -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 -Dlogging$.file.path=/usr/src/app/logs/springboot /usr/src/app/app_api/dedupe-engine-components.jar > /usr/src/app/out.log
It first resolves the IP address using dig (if you don't have dig in your image, you need to substitute it with something else you have), then execs your original java command.
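Applied to the manifests above (Service app in namespace app), the lookup would look roughly like the sketch below. One caveat: the cluster DNS resolves the service name to its ClusterIP rather than the internal load balancer's external IP, but both front the same pods, so for in-cluster callers the result is usually equivalent:

ip=$(dig +search +short app.app)    # resolves app.app.svc.cluster.local to the Service's ClusterIP
exec java -jar -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 /usr/src/app/app_api/dedupe-engine-components.jar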
As of today I'm not aware of any "native" Kubernetes way to provide IP meta information directly to the pod.
If you are sure the Service exists before the Pod starts, and you deploy into the same namespace, you can read its address from environment variables. It's documented here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables.
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see makeLinkVariables) that are compatible with Docker Engine's "legacy container links" feature.
For example, the Service redis-master which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Note: those won't update after the container is started.
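For illustration only: if a Service named app existed in the same namespace as this Deployment (not the case in the question, where it lives in another namespace), the injected variables could be consumed roughly like this. APP_SERVICE_HOST and APP_SERVICE_PORT are the names the kubelet would derive from that hypothetical Service:

command:
- /bin/bash
- -c
- |-
  # APP_SERVICE_HOST/APP_SERVICE_PORT are injected by the kubelet for a same-namespace Service named "app"
  exec java -jar \
    -Dserver.port=8084 \
    -Ddedupe.searcher.url=http://${APP_SERVICE_HOST}:${APP_SERVICE_PORT} \
    -Dspring.cloud.zookeeper.connect-string=${APP_SERVICE_HOST}:2181 \
    /usr/src/app/app_api/dedupe-engine-components.jar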
Related
I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
      - name: email-service-application
        image: some_image
        ports:
        - containerPort: 8000
          hostPort: 8000
          protocol: TCP
        envFrom:
        - secretRef:
            name: project-secrets
        imagePullPolicy: IfNotPresent
So to access this Deployment from outside the cluster I'm using a Service as well.
I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running.
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
  - 192.168.0.10
  selector:
    run: internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    protocol: TCP
So the problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until the timeout.
I've checked the state of the pods, the application logs, the matching of the selector/template labels in the Service and Deployment manifests, and the namespaces, but all of that is fine and working properly.
(There is also a secret config, but it deployed and is also working fine.)
Thanks...
Making reference to jordanm's solution: revert the Service to ClusterIP and then use port-forward with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd.
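As a minimal sketch, the reverted Service from the question could look like this, assuming you simply drop the externalIPs field and keep the rest:

apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  selector:
    run: internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    protocol: TCP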
I have this in a selenium-hub-service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
  sessionAffinity: None
When I run kubectl describe service in the terminal, I get the endpoint of the kubernetes service as 192.168.49.2:8443. I then take that and point the browser to 192.168.49.2:30001, but the browser is not able to reach that endpoint. I was expecting to reach the Selenium hub.
When I run minikube service selenium-srv --url, which gives me http://127.0.0.1:56498, and point the browser to it, I can reach the hub.
My question is: why am I not able to reach it through the nodePort?
I would like to do it the nodePort way because I know the port beforehand, and if the kubernetes service endpoint remains constant it is easy to point my tests at a known endpoint when I integrate them with an Azure pipeline.
EDIT: output of kubectl get service:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          4d
selenium-srv   NodePort    10.96.34.117   <none>        4444:30001/TCP   2d2h
Posted community wiki based on this Github topic. Feel free to expand it.
The information below assumes that you are using the default driver docker.
Minikube on macOS behaves a bit differently than on Linux. On Linux you have special interfaces used for Docker and for connecting to the minikube node port, like this one:
3: docker0:
...
inet 172.17.0.1/16
And this one:
4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
There is no such solution implemented on macOS. Check this:
This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.
Also:
there is no bridge0 on Macos, and it makes container IP unreachable from host.
That means you can't connect to your service using IP address 192.168.49.2.
Check also this article: Known limitations, use cases, and workarounds - Docker Desktop for Mac:
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Per-container IP addressing is not possible
The docker (Linux) bridge network is not reachable from the macOS host.
There are a few ways to set up minikube so that NodePort services are reachable at the localhost address on Mac, like this one:
minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767
You can also use the minikube service command, which will return a URL to connect to a service.
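A short sketch, assuming the docker driver and the fixed NodePort 30001 from the question; either publish the port on localhost when starting minikube, or ask minikube for a tunnelled URL:

# publish the node port on 127.0.0.1 at start-up (docker driver only)
minikube start --driver=docker --ports=127.0.0.1:30001:30001
# or get a reachable URL for the service
minikube service selenium-srv --url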
Is your deployment running on port 4444?
Try this:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141
        ports:
        - containerPort: 4444
        resources:
          limits:
            memory: "1000Mi"
            cpu: ".5"
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None
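If you want to keep the fixed port 30001 from the question, you could additionally pin the nodePort (it must fall within the cluster's --service-node-port-range, 30000-32767 by default); a sketch of the ports section:

  ports:
  - port: 4444
    targetPort: 4444
    nodePort: 30001   # fixed node port from the question
    name: port0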
If you want to use Chrome nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome-debug:3.141
        ports:
        - containerPort: 5555
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        env:
        - name: HUB_HOST
          value: "selenium-hub"
        - name: HUB_PORT
          value: "4444"
Testing Python code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def check_browser(browser):
    driver = webdriver.Remote(
        command_executor='http://<IP>:<PORT>/wd/hub',
        desired_capabilities=getattr(DesiredCapabilities, browser)
    )
    driver.get("http://google.com")
    assert "google" in driver.page_source
    driver.quit()
    print("Browser %s checks out!" % browser)

check_browser("CHROME")
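To fill in <IP> and <PORT> against the NodePort service above, one option (a sketch, assuming kubectl access to the cluster) is:

NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc selenium-hub -o jsonpath='{.spec.ports[0].nodePort}')
# command_executor then becomes http://$NODE_IP:$NODE_PORT/wd/hub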
This YAML tries to deploy a simple ArangoDB architecture in k8s. I know there are operators for ArangoDB, but it is a simple PoC to understand the k8s pieces and later integrate this DB with other apps.
The problem is that this YAML file applies correctly, but I don't get any IP:PORT to connect to; however, when I run that Docker image locally it works.
# create: kubectl apply -f ./arango.yaml
# delete: kubectl delete -f ./arango.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nms
  name: arangodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arangodb-pod
  template:
    metadata:
      labels:
        app: arangodb-pod
    spec:
      containers:
      - name: arangodb
        image: arangodb/arangodb:3.5.3
        env:
        - name: ARANGO_ROOT_PASSWORD
          value: "pass"
        ports:
        - name: http
          containerPort: 8529
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  namespace: nms
  name: arangodb-svc
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
  - targetPort: 8529
    protocol: TCP
    port: 8529
    targetPort: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nms
  name: arango-storage
  labels:
    app: arangodb-pod
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
Description seems clear:
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)    AGE
arangodb-svc   LoadBalancer   10.0.150.245   51.130.11.13   8529/TCP   14m
I am executing kubectl apply -f arango.yaml against AKS but I cannot access any IP:8529. Any recommendations?
I would like to simulate these commands:
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD=pass -d --name arangodb-instance arangodb/arangodb:3.5.3
docker start arangodb-instance
You must allow the NodePort 31098 at the NSG level in your VNet configuration and attach that NSG rule to the AKS cluster.
Also, please update the service manifest with the changes you went through with the help in the comments.
- targetPort: 8529
  protocol: TCP
  port: 8529
  targetPort: http   # <-- duplicate/wrong field; the manifest won't be parsed correctly
The above manifest is wrong. For NodePort (--service-node-port-range=30000-32767) the manifest should look something like this:
spec:
  type: NodePort
  selector:
    app: arangodb-pod
  ports:
  # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - name: http
    port: 8529
    targetPort: 8529
    # Optional field
    nodePort: 31044
You can connect at public-NODE-IP:NodePort from outside AKS.
For a Service of type LoadBalancer, your manifest should look like:
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
  - name: http
    protocol: TCP
    port: 8529
    targetPort: 8529
For LoadBalancer you can connect with LoadBalancer-External-IP:external-port.
However, in both of the above cases an NSG whitelist rule must be in place. You should whitelist your local machine's IP, or the IP of whichever machine you are accessing it from.
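As a sketch of such an NSG rule with the Azure CLI (the resource group, NSG name, and source IP are placeholders; 31098 is the NodePort mentioned above):

az network nsg rule create \
  --resource-group <node-resource-group> \
  --nsg-name <aks-node-nsg> \
  --name allow-my-ip-to-arango-nodeport \
  --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes <your-public-ip>/32 \
  --destination-port-ranges 31098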
You have to use an ingress controller, or you could go with a LoadBalancer-type Service and assign a static IP if you prefer. Both will work.
I'm having trouble setting up an Ingress that is open only to some specific IPs. I checked the docs and tried a lot of things, but IPs outside the source range can still access it. It's a Zabbix web interface on an Alpine image with nginx. I set up a Service on NodePort 80, then used an Ingress to set up a load balancer on GCP. It's all working and the web interface is fine, but how can I make it accessible only to the desired IPs?
My firewall rules are OK and it's only accessible through the load balancer IP.
Also, I have a specific namespace for this deployment.
Cluster version 1.11.5-gke.5
EDIT: I'm using the GKE standard ingress, GLBC.
My template is configured as follows; can someone help enlighten me on what is missing?
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-web
  namespace: zabbix-prod
  labels:
    app: zabbix
    tier: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zabbix-web
        app: zabbix
    spec:
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          defaultMode: 420
          secretName: cloudsql-instance-credentials
      containers:
      - command:
        - /cloud_sql_proxy
        - -instances=<conection>
        - -credential_file=/secrets/cloudsql/credentials.json
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        imagePullPolicy: IfNotPresent
        name: cloudsql-proxy
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 2
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /secrets/cloudsql
          name: credentials
          readOnly: true
      - name: zabbix-web
        image: zabbix/zabbix-web-nginx-mysql:alpine-3.2-latest
        ports:
        - containerPort: 80
        env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: <user>
              name: <user>
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: <pass>
              name: <pass>
        - name: DB_SERVER_HOST
          value: 127.0.0.1
        - name: MYSQL_DATABASE
          value: <db>
        - name: ZBX_SERVER_HOST
          value: <db>
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /index.php
            port: 80
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: "zabbix-web-service"
  namespace: "zabbix-prod"
  labels:
    app: zabbix
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: "zabbix-web"
  type: "NodePort"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    ingress.kubernetes.io/service.spec.externalTrafficPolicy: local
    ingress.kubernetes.io/whitelist-source-range: <xxx.xxx.xxx.xxx/32>
spec:
  tls:
  - secretName: <tls-cert>
  backend:
    serviceName: zabbix-web-service
    servicePort: 80
You can whitelist IPs by configuring Ingress and Cloud Armor:
Switch to project:
gcloud config set project $PROJECT
Create a policy:
gcloud compute security-policies create $POLICY_NAME --description "whitelisting"
Change default policy to deny:
gcloud compute security-policies rules update 2147483647 --action=deny-403 \
--security-policy $POLICY_NAME
At a lower priority number than the default rule (so it is evaluated first), whitelist all the IPs you want to allow:
gcloud compute security-policies rules create 2 \
--action allow \
--security-policy $POLICY_NAME \
--description "allow friends" \
--src-ip-ranges "93.184.17.0/24,151.101.1.69/32"
There is a maximum of ten IP ranges per rule.
Note that you need valid CIDR ranges; for that you can use an IP Range to CIDR converter.
View the policy as follows:
gcloud compute security-policies describe $POLICY_NAME
To throw away an entry:
gcloud compute security-policies rules delete $PRIORITY --security-policy $POLICY_NAME
or the full policy:
gcloud compute security-policies delete $POLICY_NAME
Create a BackendConfig for the policy:
# File backendconfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: <namespace>
  name: <name>
spec:
  securityPolicy:
    name: $POLICY_NAME
$ kubectl apply -f backendconfig.yaml
backendconfig.cloud.google.com/backendconfig-name created
Add the BackendConfig to the Service:
apiVersion: v1
kind: Service
metadata:
  namespace: <namespace>
  name: <service-name>
  labels:
    app: my-app
  annotations:
    cloud.google.com/backend-config: '{"ports": {"80":"backendconfig-name"}}'
spec:
  type: NodePort
  selector:
    app: hello-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
Use the right selectors and point the receiving port of the Service to the BackendConfig created earlier.
Now Cloud Armor will add the policy to the GKE service.
Visible in https://console.cloud.google.com/net-security/securitypolicies (after selecting $PROJECT).
AFAIK, you can't restrict IP addresses through GLBC or on the GCP L7 load balancer itself. Note that GLBC is also a work in progress as of this writing.
ingress.kubernetes.io/whitelist-source-range works great, but only when you are using something like the nginx ingress controller, because nginx itself can restrict IP addresses.
The general way to restrict/whitelist IP addresses is using VPC firewall rules (which it seems you are already doing). Essentially, you restrict/whitelist the IP addresses at the network where your K8s nodes are running.
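A sketch of such a VPC firewall rule with gcloud; the network name, node tag, and source range are placeholders you would replace with your own values:

gcloud compute firewall-rules create allow-office-to-zabbix \
  --network <vpc-network> \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:80,tcp:443 \
  --source-ranges 203.0.113.0/24 \
  --target-tags <gke-node-tag>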
One of the best options to accomplish your goal is using firewall rules since you can't restrict IP addresses through the Global LB or on GCP L7 LB itself. However, another option if you are using Ingress on your Kubernetes cluster, it is possible to restrict access to your application based on dedicated IP addresses.
One possible use case would be that you have a development setup and don’t want to make all the fancy new features available to everyone, especially competitors. In such cases, IP whitelisting to restrict access can be used.
This can be done by specifying the allowed client IP source ranges through the ingress.kubernetes.io/whitelist-source-range annotation.
The value is a comma-separated list of CIDR blocks.
For example:
10.0.0.0/24, 1.1.1.1/32.
Please get more information here.
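For example, with the nginx ingress controller the Ingress from the question could be annotated roughly like this (a sketch; the nginx.ingress.kubernetes.io/ prefix is the one the nginx controller reads, and it does not apply to GLBC):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,1.1.1.1/32"
spec:
  backend:
    serviceName: zabbix-web-service
    servicePort: 80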
For anyone who stumbles on this question via Google like I did, there is now a solution. You can implement this via a BackendConfig from the cloud.google.com Kubernetes API in conjunction with a GCE CloudArmor policy.
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig
I have a single Kubernetes Service called MyServices which fronts four Deployments. Each Deployment runs as a single pod and each pod has its own port number.
As mentioned, all the pods are running behind one Kubernetes Service.
I am able to call the services through the external IP address of that Kubernetes Service and the port number.
Example: 92.18.1.1:3011/MicroserviceA or 92.18.1.1:3012/MicroserviceB
I am now trying to develop an orchestration layer that calls these services and gets a response from them. However, I am trying to figure out a way in which I do NOT need to specify every microservice's port number, and instead can call them through their endpoint/ServiceName. Example: 192.168.1.1/MicroserviceA
How can I achieve the above?
From an architecture perspective, is it a good idea to deploy all microservices behind a single Kubernetes Service (like my current approach), or does each microservice need its own Service?
Below is the Kubernetes deployment file (I removed the manifests for microservices C and D since they are identical to A and B):
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: microservice
  ports:
  - name: microserviceA
    protocol: TCP
    port: 3011
    targetPort: 3011
  - name: microserviceB
    protocol: TCP
    port: 3012
    targetPort: 3012
  - name: microserviceC
    protocol: TCP
    port: 3013
    targetPort: 3013
  - name: microserviceD
    protocol: TCP
    port: 3014
    targetPort: 3014
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceAdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3011
      imagePullSecrets:
      - name: regcred
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceBdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3012
There is a way to discover all the ports of Kubernetes services.
You could use kubectl get svc, as seen in "Source IP for Services with Type=NodePort":
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services <yourService>)
Regarding "I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName":
Then you need to expose those services through one entry point, typically a reverse proxy like NGiNX.
The idea is to expose said services on the default ports (80 or 443) and reverse-proxy them to the actual URL and port number.
Check "Service Discovery in a Microservices Architecture" for the general idea.
And "Service Discovery for NGINX Plus with etcd" for an implementation (using NGiNX Plus, so it could be non-free).
Or "Setting up Nginx Ingress on Kubernetes" for a more manual approach.