Deployed port is not getting exposed in Kubernetes for Nexus

I am working on creating a Nexus repo through Kubernetes. While browsing I came across this site (https://blog.sonatype.com/kubernetes-recipe-sonatype-nexus-3-as-a-private-docker-registry). I was able to create the service, deployment, and pod, and everything was created without an issue. But I was not able to open it.
This is my nexus.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: nexus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexusvolume-local
  namespace: nexus
  labels:
    app: nexus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nexus
  namespace: nexus
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - image: sonatype/nexus3
          imagePullPolicy: Always
          name: nexus
          ports:
            - containerPort: 8081
            - containerPort: 5000
          volumeMounts:
            - mountPath: /nexus-data
              name: nexus-data-volume
      volumes:
        - name: nexus-data-volume
          persistentVolumeClaim:
            claimName: nexusvolume-local
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-service
  namespace: nexus
spec:
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
    - port: 5000
      targetPort: 5000
      protocol: TCP
      name: docker
  selector:
    app: nexus
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus
  annotations:
    ingress.kubernetes.io/proxy-body-size: 100m
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        # CHANGE ME
        - docker.testnexusurl.com
        - nexus.testnexusurl.com
      secretName: nexus-tls
  rules:
    # CHANGE ME
    - host: nexus.testnexusurl.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nexus-service
              servicePort: 80
    # CHANGE ME
    - host: docker.testnexusurl.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nexus-service
              servicePort: 5000
When I run
kubectl describe service nexus-service -n nexus
Name: nexus-service
Namespace: nexus
Labels: <none>
Annotations: <none>
Selector: app=nexus
Type: ClusterIP
IP: 10.111.212.3
Port: http 80/TCP
TargetPort: 8081/TCP
Endpoints: 172.17.0.11:8081
Port: docker 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.11:5000
Session Affinity: None
Events: <none>
I tried opening it by using the port, but I get an error that it can't be reached.
Can someone help me with this?
Thanks in advance.

Looking at the above YAML file, you have set the namespace to "nexus"; try changing the namespace to "default" and running it again.
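Independent of the namespace, a quick way to check whether the Service itself responds is to port-forward to it from your machine (resource names as in the manifest above; this is only a sanity check, not a fix):
kubectl port-forward -n nexus service/nexus-service 8081:80
# in a second terminal, the Nexus UI should answer on the forwarded port
curl -I http://localhost:8081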

Related

Azure AKS Application Gateway 502 bad gateway

I have been following the tutorial here:
MS Azure
This is fine. However, when deploying a local config file I get a "502 Bad Gateway" error. This config has been fine and has worked as expected before.
Can anyone see anything obvious with this? At this point I don't know where to start.
What I am trying to achieve is to use the ingress controller that is Application Gateway, then add deployments and apply additional ingress rules.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: one-api
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: two-api
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: one-api
                port:
                  number: 80
          - path: /two-api
            pathType: Prefix
            backend:
              service:
                name: two-api
                port:
                  number: 80
Output of: kubectl describe ingress strata-2022
Name:             strata-2022
Labels:           app=my-docker-apps
Namespace:        default
Address:          51.142.191.83
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /          one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
              /two-api   two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations:      kubernetes.io/ingress.class: azure/application-gateway
Events:           <none>
Commands used to create AKS using Azure CLI.
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
// Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
// This line is from the tutorial -- this works as expected
//kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
// This is what i ran. It works locally
kubectl apply -f nano new-deploy.yaml
// Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and I could identify the following issue right after running the same az aks create command: All the instances in one or more of your backend pools are unhealthy.
Since this appeared to indicate that the backend pools are unreachable, it was strange at first so I tried to look at the logs of one of the pods based on the hello-app images you were using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that maybe in the Docker image that you are using, nothing is configured to listen on port 80, which is the port you are using in your kubernetes resources definition.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly fine and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
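As a quick way to confirm which port a container actually serves on, one can port-forward straight to the Deployment and curl it locally (the deployment name matches the manifests above; the rest is just a sketch):
kubectl port-forward deployment/one-api 8080:8080
# in another terminal
curl http://localhost:8080   # returns the "Hello, world!" response if the app listens on 8080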
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: one-api
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: two-api
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: one-api
                port:
                  number: 8080
          - path: /two-api
            pathType: Prefix
            backend:
              service:
                name: two-api
                port:
                  number: 8080
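After applying the updated file, the Application Gateway address reported by the Ingress should answer on both paths (use whatever file name you saved the YAML as; the address comes from kubectl get ingress):
kubectl apply -f new-deploy.yaml
kubectl get ingress strata-2022
curl http://<ADDRESS>/
curl http://<ADDRESS>/two-api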

Kubernetes Traefik ingress rule for local host

I am trying to connect the Traefik dashboard to localhost. My manifest below will bring up localhost:port, but I only get 404 errors. Not sure how to set up the ingress to work locally. The base code was set up to run on an AWS NLB; I am trying to set this up to run locally. The manifest below contains the deployment and service for the Traefik Kubernetes install.
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      # hostNetwork: true
      serviceAccountName: traefik-ingress-lb
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:v2.5
          imagePullPolicy: IfNotPresent
          name: traefik-ingress-lb
          args:
            - --serversTransport.insecureSkipVerify=true
            - --providers.kubernetesingress=true
            - --providers.kubernetescrd
            - --entryPoints.traefik.address=:1080
            - --entryPoints.https.address=:443
            - --entrypoints.https.http.tls=true
            - --entryPoints.https.forwardedHeaders.insecure=true
            - --entryPoints.https2.address=:4443
            - --entrypoints.https2.http.tls=true
            - --entryPoints.https2.forwardedHeaders.insecure=true
            - --entryPoints.turn.address=:5349
            - --entrypoints.turn.http.tls=true
            - --entryPoints.turn.forwardedHeaders.insecure=true
            - --api
            - --api.insecure
            - --accesslog
            - --log.level=INFO
            - --pilot.dashboard=false
            - --entryPoints.http.address=:80
            - --entrypoints.http.http.redirections.entryPoint.to=https
            - --entrypoints.http.http.redirections.entryPoint.scheme=https
            - --entrypoints.http.http.redirections.entrypoint.permanent=true
          resources:
            limits:
              memory: 3072Mi
              cpu: 1.5
            requests:
              memory: 1024Mi
              cpu: 1
---
apiVersion: v1
kind: Service
metadata:
  name: lb
  namespace: kube-system
  annotations:
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: https2
      port: 4443
      targetPort: 4443
    - name: turn
      port: 5349
      targetPort: 5349
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: dashboard
      port: 1080
      targetPort: 1080
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik
  namespace: kube-system
  # annotations:
  #   ingress.kubernetes.io/whitelist-x-forwarded-for: "true"
spec:
  entryPoints:
    - https
    # - web
  routes:
    - kind: Rule
      match: "PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
      # middlewares:
      #   - name: internal-ip-whitelist
      #     namespace: traefik
      services:
        - kind: Service
          name: dashboard
          namespace: kube-system
          passHostHeader: true
          port: 1080
Your IngressRoute points to the dashboard service in namespace 'kube-system', but the traefik dashboard service is deployed in namespace 'traefik'.
Therefore the route is not working, leading to the 404 in traefik.
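One minimal way to line them up, assuming the Deployment stays in kube-system as above, is to create the dashboard Service in kube-system too, so the existing IngressRoute reference resolves (a sketch, not the only option; you could instead move the IngressRoute into the traefik namespace):
apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: kube-system   # same namespace as the IngressRoute's service reference
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: dashboard
      port: 1080
      targetPort: 1080
This also ensures the Service actually selects the Traefik pods, since a Service only matches pods in its own namespace.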

Kubernetes Ingress can not access container by path

I am new to Kubernetes. I configured an Ingress and want to access the container via the minikube IP plus a path, but it fails to connect.
However, I can access it by using a host instead of a path, so I think the problem might be the Ingress.
I have no idea what to do; I hope someone can help me. Thanks.
Here are my Deployment, Service and Ingress YAML files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: portainer
  template:
    metadata:
      labels:
        app: portainer
    spec:
      containers:
        - name: portainer
          image: portainer/portainer:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
    spec:
      containers:
        - name: rancher
          image: rancher/server:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-service
spec:
  selector:
    app: portainer
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: rancher-service
spec:
  selector:
    app: rancher
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
    - http:
        paths:
          - path: /portainer
            backend:
              serviceName: portainer-service
              servicePort: 80
          - path: /rancher
            backend:
              serviceName: rancher-service
              servicePort: 80
Add this to your Ingress in the metadata section:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
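For context, a sketch of how that might sit in the metadata of the Ingress above; with this annotation the NGINX ingress controller rewrites the matched path (for example /portainer) to / before forwarding to the service:
metadata:
  name: path-based-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /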

Expose a Redis cluster with a Kubernetes StatefulSet to the internet

I created a StatefulSet that deploys a Redis image to GCP on Kubernetes. The challenge I am having is exposing it using a single domain name, such that the pods can be accessed as redis.com/first, redis.com/second, redis.com/third.
Here are the YAML files.
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  serviceName: 'redis-service'
  replicas: 3
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: app-redis
          image: redis
          args:
            - /etc/redis/redis.conf
          volumeMounts:
            - mountPath: /etc/redis
              name: redis-config
              readOnly: false
            - name: redis-storage
              mountPath: /data
              readOnly: false
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 150m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
          livenessProbe:
            exec:
              command: ['redis-cli', 'ping']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 2
      volumes:
        - name: redis-config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: redis-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Headless service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-redis
  name: redis-service
  namespace: default
spec:
  ports:
    - name: server-port
      port: 80
      protocol: TCP
      targetPort: 6379
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
Loadbalancer
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-service
  name: app-redis
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 6379
  selector:
    app: app-redis
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xxx
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xxx
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: default
data:
  redis.conf: |
    dbfilename "dump.rdb"
    dir /data
    save 3600 1
    save 300 10
    save 60 100
    appendonly yes
    appendfilename "appendonly.aof"
Storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: redis-storage
provisioner: kubernetes.io/gce-pd
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'false'
spec:
  rules:
    - host: app-redis.tk
      http:
        paths:
          - path: /
            backend:
              serviceName: app-redis
              servicePort: 80
Each pod in the StatefulSet will need to have a service linking to it.
This service will need to be created with:
selector:
  statefulset.kubernetes.io/pod-name: <POD_NAME>
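For example, a per-pod Service for the first replica might look like this (the name redis-service-0 is only chosen to match the Ingress sketch below):
apiVersion: v1
kind: Service
metadata:
  name: redis-service-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0   # pod created by the app-redis StatefulSet
  ports:
    - port: 6379
      targetPort: 6379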
Then you will be able to set ingress and use it to redirect traffic based on path:
...
spec:
  rules:
    - http:
        paths:
          - path: /app-redis-0
            backend:
              serviceName: redis-service-0
              servicePort: 6379
          - path: /app-redis-1
            backend:
              serviceName: redis-service-1
              servicePort: 6379
          - path: /app-redis-2
            backend:
              serviceName: redis-service-2
              servicePort: 6379
...
You can read about Exposing StatefulSets in Kubernetes and Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

Cannot access sidecar container in Kubernetes

I have the following hello world deployment.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello:v0.0.1
          imagePullPolicy: Always
          args:
            - /hello
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: hello
  type: NodePort
And I have the ingress controller deployed with a sidecar container:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alb-ingress-controller
    spec:
      containers:
        - name: server
          image: alb-ingress-controller:v0.0.1
          imagePullPolicy: Always
          args:
            - /server
            - --ingress-class=alb
            - --cluster-name=AAA
            - --aws-max-retries=20
            - --healthz-port=10254
          ports:
            - containerPort: 10254
              protocol: TCP
        - name: alb-sidecar
          image: sidecar:v0.0.1
          imagePullPolicy: Always
          args:
            - /sidecar
            - --port=5000
          ports:
            - containerPort: 5000
              protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: alb-ingress
      serviceAccount: alb-ingress
---
apiVersion: v1
kind: Service
metadata:
  name: alb-ingress-controller-service
spec:
  ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: alb-ingress-controller
  type: NodePort
And I have the Ingress here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: test-alb
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: hello-service
              servicePort: 80
          - path: /alb-sidecar
            backend:
              serviceName: alb-ingress-controller-service
              servicePort: 80
I would expect to access /alb-sidecar the same way I access /hello, but only the /hello endpoint works for me; I keep getting 502 Bad Gateway for the /alb-sidecar endpoint. The sidecar container is just a simple web app listening on /alb-sidecar.
Do I need to do anything different when the sidecar container runs in a different namespace, or how would you run a sidecar next to the ALB ingress controller?
If you created the Deployment alb-ingress-controller and the Service alb-ingress-controller-service in another namespace, you need to create another Ingress resource in that exact namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  namespace: alb-namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: alb-service
spec:
  rules:
    - http:
        paths:
          - path: /alb-sidecar
            backend:
              serviceName: alb-ingress-controller-service
              servicePort: 80
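To check that the controller sees both Ingress resources and that the Service in the other namespace actually has endpoints, something like this can help (alb-namespace is just the example name used above):
kubectl get ingress --all-namespaces
kubectl get endpoints alb-ingress-controller-service -n alb-namespace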