I am able to configure Nginx for NodePort and access it, but when I configure a Load Balancer and Target Group for Nginx I cannot access it. I think I am doing something wrong, and I cannot troubleshoot it as I am quite new to Kubernetes.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
    svc: test-nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30000
Load Balancer and Target group configuration
You need to use an ingress controller to do the same thing.
It will automatically create the load balancer and target group from the manifest itself.
You can watch videos and tutorials on this.
Your image attachment indicates that your ALB was attached to the VPC default security group. Your EC2 instance security group will need an inbound rule allowing traffic from that default security group for connectivity. You can automate this process by using the AWS Load Balancer Controller.
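As a sketch, that inbound rule can be added with the AWS CLI (both security group IDs below are placeholders; the port is the NodePort from the service above):

```shell
# Hypothetical IDs: replace sg-INSTANCE with your worker-node security group
# and sg-DEFAULT with the VPC default security group attached to the ALB.
# This allows the ALB to reach the instances on NodePort 30000.
aws ec2 authorize-security-group-ingress \
  --group-id sg-INSTANCE \
  --protocol tcp \
  --port 30000 \
  --source-group sg-DEFAULT
```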
Related
I'm writing a Helm chart to deploy a web service to EKS.
I need to deploy a load balancer for the pods running the web application.
I created a service that deploys a Network Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  selector:
    app: "MyApp"
  type: LoadBalancer
  ports:
    - name: http
      port: {{ .Values.app.port }}
      protocol: TCP
      targetPort: {{ .Values.app.port }}
I need sticky sessions for this application.
I tried adding sessionAffinity: ClientIP to the chart, but this failed: the LoadBalancer would not be created.
Without it the LoadBalancer is created, but sticky sessions in the Target Group remain disabled.
What is the correct way to configure sticky sessions for an NLB via a Helm chart?
This is impossible due to the order in which AWS creates these resources.
Sticky sessions are an attribute of the NLB's Target Group.
When you create the LoadBalancer, you can't edit the Target Group's attributes.
After creation, the Target Group's attributes can be modified.
Kubernetes does not implement any annotation to change this flag.
Conclusion: after creating the LoadBalancer, run an awscli script to change the sticky-session flag.
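For example, a sketch of such a script (the target group ARN is a placeholder; NLB target groups use source-IP stickiness):

```shell
# Placeholder ARN: look up the real one with `aws elbv2 describe-target-groups`.
TG_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123"

# Enable source-IP stickiness on the target group after the
# LoadBalancer service has created it.
aws elbv2 modify-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip
```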
Is there a way to keep the same external IP that the current load balancer has, even when I make a new deployment?
When I delete the deployment connected to the load balancer, the load balancer still remains, so is it possible to connect a new deployment to that existing load balancer?
Yes, you can set loadBalancerIP in the service object's YAML file.
Try the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  loadBalancerIP: 78.11.24.19
  type: LoadBalancer
Please refer to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer for more info. Note that loadBalancerIP support depends on the cloud provider; if the provider does not support the feature, the field is ignored.
Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then, edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080
4. To create a Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer's DNS name.
11. Finally, in a web browser, test your custom domain over HTTPS:
https://yourdomain.com
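Step 10 can be done with a Route 53 alias record, for example (the hosted zone ID, domain, and load balancer DNS name below are placeholders; the alias HostedZoneId must be the ELB's canonical hosted zone ID for your region):

```shell
# Placeholders: Z1234567890ABC is your Route 53 hosted zone,
# a1b2c3.us-east-1.elb.amazonaws.com is the load balancer DNS name from
# `kubectl get service`, and the AliasTarget HostedZoneId is the
# region-specific canonical zone ID for ELB.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yourdomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "a1b2c3.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```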
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
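A minimal sketch of such an Ingress, assuming the ALB Ingress Controller (now the AWS Load Balancer Controller) is installed; the Ingress name, backend service, and certificate ARN are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Terminate TLS at the ALB on port 443.
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # Placeholder: the ACM certificate ARN for your domain.
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc-123
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: main-api-svc
                port:
                  number: 80
```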
I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an IP address that is externally available with a dedicated port for each tenant
I don't want to expose the nodes directly to the internet
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
    - port: 8111
      targetPort: 8111
      protocol: UDP
      name: my-udp
    - port: 8222
      targetPort: 8222
      protocol: TCP
      name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy an NGINX ingress controller on your AWS cluster.
Change your service my-service type from NodePort to ClusterIP.
Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Do the same for the ConfigMap udp-services:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally using the nginx-controller IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).
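For reference, a sketch of the two full ConfigMaps (the names and the ingress-nginx namespace are the usual defaults for the NGINX ingress controller; adjust them to your installation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # Maps TCP port 8222 on the controller to my-service port 8222.
  "8222": your-namespace/my-service:8222
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # Maps UDP port 8111 on the controller to my-service port 8111.
  "8111": your-namespace/my-service:8111
```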
The description provided by @ffledgling is what you need.
But I have to mention that if you want to expose ports, you have to use a load balancer or expose the nodes to the Internet. For example, you can expose a node to the Internet and allow access only to some necessary ports.
I'm trying to expose my Deployment to a port which I can access through my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service whose external IP never gets assigned (it stays pending indefinitely).
The second results in a port of bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and is not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30356
      protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.