I have been experimenting with network policies, and now pods can no longer communicate with each other, even though I have deleted all the policies.
Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    env: staging
Service A
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
      - name: service-a
        image: busybox:1.33.1
        command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-a
  ports:
  - port: 8080
Service B
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
      - name: service-b
        image: busybox:1.33.1
        command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-b
  ports:
  - port: 8080
Testing Communication
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b
Expected behaviour is that they can contact each other, but instead there is a timeout. So I check whether there are any network policies left.
kubectl -n staging get networkpolicy
>No resources found in staging namespace.
What I have tried
I have deleted the namespace, recreated it and recreated the two services.
I have gone through all namespaces looking for network policies to delete them, but there are none!
Before I started experimenting with the network policies everything worked fine, but now I cannot get things working again. For the network plugin (CNI) I am using Cilium.
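Since the CNI is Cilium, policies can also exist as Cilium-specific CRDs, which kubectl get networkpolicy does not list. Assuming the Cilium CRDs are installed, those can be ruled out as well with:
kubectl get ciliumnetworkpolicies --all-namespaces
kubectl get ciliumclusterwidenetworkpolicies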
I am pretty dumb; I simply forgot to write the port the second time around. It should be:
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b 8080
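To rule out a Service or selector problem on top of that, it is also worth confirming that service-b actually has endpoints backing it (plain kubectl, nothing specific to this setup):
kubectl -n staging get endpoints service-b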
Related
I'm learning K8s, so bear with me as a noob.
I'm running a single-node K3s cluster at home, and have successfully deployed the traefik/whoami application using the command below, but would like to deploy it via ArgoCD.
cat apps/whoami/whoami.yaml | envsubst | kubectl apply -f -
The manifest I created.
---
apiVersion: v1
kind: Namespace
metadata:
  name: k3s-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deploy
  namespace: k3s-test
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:v1.8.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: k3s-test
  labels:
    service: whoami
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: k3s-test
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: whoami.${DOMAIN_NAME}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami-svc
            port:
              number: 80
I want to publish my code to GitHub so that ArgoCD can sync it, but I don't want to expose information that is not necessarily secret, yet not necessarily public either. Currently, my domain name is set as an environment variable (because I don't want to commit mydomain.com to my GitHub repo) and I'm using envsubst when running kubectl apply. Does ArgoCD have similar functionality? I found this GitHub issue showing ArgoCD probably doesn't have variable interpolation, but is there an alternative? Or do I need to store my domain name as a full-on K8s secret?
I'm following the Kubernetes tutorial from Les Jackson but I'm stuck around 04:40:00. I always get a 404 returned from my Ingress Nginx Controller. I followed everything he does, but I can't get it to work.
I also read that this could have something to do with IIS, so I stopped the default website, which also runs on port 80.
The apps running in the containers are .NET Core.
Commands-depl & cluster ip
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Platforms-depl & cluster ip
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
Ingress-srv
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-srv
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: commands-clusterip-srv
            port:
              number: 80
I also added this to my hosts file:
127.0.0.1 acme.com
And I applied this from the nginx documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
kubectl get ingress
kubectl describe ing ingress-srv
Dockerfile CommandService
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx
Am I missing something?
I found my solution. There was a process with PID 4 listening on 0.0.0.0:80. I could stop it by running NET stop HTTP in an admin cmd.
I noticed that running kubectl get services -n=ingress-nginx returned an ingress-nginx-controller, which is fine, but its EXTERNAL-IP was empty. Running kubectl get ingress also didn't show an ADDRESS. Now they both show "localhost" as the value for EXTERNAL-IP and ADDRESS.
Reference: Port 80 is being used by SYSTEM (PID 4), what is that?
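For reference, the process holding the port can be identified beforehand with standard Windows tooling from an admin cmd (PID 4 is the System process, which fronts http.sys listeners):
netstat -ano | findstr :80
tasklist /fi "PID eq 4"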
So this can occur for several reasons:
Pods or containers are not working - try kubectl get pods -n <your namespace> to see if any are not in 'Running' status.
Assuming they are running, try kubectl describe pod <pod name> -n <your namespace> to see the events on your pod, just to make sure it's running properly.
I have noticed you are not exposing ports in your deployments. Please update your deployments like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Hope this helps!
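After applying the updated manifests, one quick sanity check (assuming everything is in the default namespace) is to confirm that the Services actually have endpoints, i.e. that their selectors match the pods:
kubectl get endpoints platforms-clusterip-srv commands-clusterip-srv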
Setup
I have a federated k8s setup in which each cluster has its own master and workers.
In the federation, each cluster has a different domain for accessing its image registry (e.g. myregistry-1, myregistry-2).
In other words, each cluster has its own registry.
Question
I don't want to change the domain for each cluster. Basically, I would like to create a common endpoint that maps to the inner registry that is internal to each cluster.
Example: the deployment below is applied on all clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.default:5000/nginx:1.14.2
        ports:
        - containerPort: 80
I tried to implement "Services without selectors": I created an Endpoints object and updated deployment.yaml, but it didn't work.
harbor.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-service
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
harbor-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor-service
subsets:
- addresses:
  - ip: <INTERNAL_IP_OF_REGISTRY>
  ports:
  - port: 5000
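For what it's worth, a quick way to verify that the selector-less Service actually forwards to the registry IP (assuming it was created in the default namespace) is to test it from a throwaway pod:
kubectl run registry-test --rm -it --restart=Never --image=busybox:1.33.1 -- nc -vz harbor-service.default.svc.cluster.local 5000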
I have to deploy two deployments on my Kubernetes cluster that use the same service to communicate, but the two deployments are located in two different namespaces:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: namespace1
  labels:
    app: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: eu.gcr.io/direct-variety-20998876/test1:dev
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        env:
          ...
and an identical second one, but in another namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: namespace2
  labels:
    app: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: eu.gcr.io/direct-variety-20998876/test1:prod
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        env:
          ...
So I have to create a common service for both deployments that spans the two namespaces.
I tried:
kind: Service
apiVersion: v1
metadata:
  name: apps-service
  namespace: ???
spec:
  selector:
    app: ???
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
  type: NodePort
Until now I have created one service per app in its specific namespace, but is there a way to create a single service that manages both deployments (and then associate a single ingress with it)?
So many thanks in advance
First, I would like to provide some general explanations.
As we can see in the Ingress documentation:
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
Ingress Controller can be deployed in any namespace and is often deployed in a namespace separate from the application namespace.
Ingress resource (Ingress rules) should be deployed in the same namespace as the services they point to.
It is possible to have one ingress controller for multiple ingress resources.
Deploying an Ingress resource in the same namespace as the Services it points to is the most common approach (I recommend this approach).
However, there is a way to have the Ingress in one namespace and the Services in other namespaces, using ExternalName Services.
I will create an example to illustrate how it may work.
Suppose I have two Deployments (app1, app2) deployed in two different Namespaces (namespace1, namespace2):
$ cat app1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
  namespace: namespace1
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - image: nginx
        name: nginx

$ cat app2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
  namespace: namespace2
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - image: nginx
        name: nginx
And I exposed these Deployments with ClusterIP Services:
$ cat svc-app1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app1
  name: app1
  namespace: namespace1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app1

$ cat svc-app2.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app2
  name: app2
  namespace: namespace2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app2
We want to have a single Ingress resource in a separate Namespace (default).
First, we need to deploy Services of type ExternalName that map a Service to a DNS name.
$ cat external-app1.yml
kind: Service
apiVersion: v1
metadata:
  name: external-app1
spec:
  type: ExternalName
  externalName: app1.namespace1.svc

$ cat external-app2.yml
kind: Service
apiVersion: v1
metadata:
  name: external-app2
spec:
  type: ExternalName
  externalName: app2.namespace2.svc
Then we can deploy Ingress resource:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: external-app1
          servicePort: 80
      - path: /app2
        backend:
          serviceName: external-app2
          servicePort: 80
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/app-ingress created
Finally, we can check if it works as expected:
$ curl 34.118.X.207/app1
app1
$ curl 34.118.X.207/app2
app2
NOTE: This is a workaround and may work differently with different ingress controllers. It is usually better to have two or more Ingress resources in different namespaces.
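As a rough sketch of that more typical layout (hypothetical names, same v1beta1 API as above, one Ingress per application namespace pointing at the local Service):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: namespace1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 80
An analogous Ingress in namespace2 would route /app2 to the app2 Service there, and a single ingress controller can serve both.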
Given the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: server
  selector:
    app: nginx
How would one configure the Service and Deployment here (or, if needed, an Ingress object) so that when a Pod takes more than n seconds to return an HTTP response, the Service will try the request on another nginx-deployment Pod?
Kubernetes Services are based on simple iptables rules.
Traffic is NAT'ed only to the destination pod. There is no layer where you can adjust things like timeouts or quality of service on top of it.
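If per-request timeouts and retries are needed, they generally have to come from a proxy in front of the pods rather than from the Service. A sketch of how that could look with the NGINX Ingress Controller (assuming it is installed; the timeout and retry values are illustrative only):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Give a pod a few seconds to answer, then retry the request on another upstream pod
    nginx.ingress.kubernetes.io/proxy-read-timeout: "5"
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502 http_503"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
Note that NGINX can only retry a request on another pod if it has not yet started sending the response to the client.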