Say I have the following YAML representing a service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:91371:certificate/0a389f-4086-4db6-9106-b587c90a3
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  labels:
    app: main-api-prod
  name: main-api-prod
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
  selector:
    app: main-api-prod
After I run:
kubectl apply -f <file>
I then run:
kubectl get svc -o json | grep hostname
and we see:
"hostname":
"a392f200796b8b0279bdd390c-228227293.us-west-2.elb.amazonaws.com"
My question is: is there a way to tell kubectl to use my own ID in the hostname? In other words, I would like to tell it to use "abc" instead of "a392...", so it would be:
"abc-228227293.us-west-2.elb.amazonaws.com"
As an aside, if anyone knows what "228227293" represents, please let me know; I know that it is not our AWS account ID, that's for sure.
When you create a service of type LoadBalancer, a new load balancer is created by your cloud provider (here AWS). Load balancer naming is the responsibility of the cloud provider; I don't think you can tell Amazon how to generate the load balancer name, and it may depend on the load balancer type (ALB, internal ALB, NLB, ...).
But you can use external-dns (https://github.com/kubernetes-incubator/external-dns). When configured for a DNS provider, for example AWS Route 53, it can automatically create DNS aliases for your load balancer. But you won't be able to create a name in the amazonaws.com domain.
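For illustration, here is a minimal sketch of that approach, assuming you run external-dns in the cluster and own a Route 53 hosted zone for a hypothetical domain example.com; the annotation tells external-dns which record to create and point at the generated ELB hostname:
apiVersion: v1
kind: Service
metadata:
  name: main-api-prod
  annotations:
    # Hypothetical record in your own zone; external-dns aliases it
    # to the a392...elb.amazonaws.com name that AWS generated.
    external-dns.alpha.kubernetes.io/hostname: abc.example.com
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: main-api-prod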
I have deployed RabbitMQ in Kubernetes using a service with the load balancer type. When creating a service, an external IP is created. Could you please tell me if I can bind another deployment to this IP with other ports? Thanks.
It is possible: you just have to create a service with multiple ports, for example:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: other-port   # port names must be valid DNS labels (no spaces)
    port: <port-number>
    targetPort: <target-port>
  selector:
    app: app
  type: LoadBalancer
And you will get output similar to this:
$ kubectl get svc
service-name   LoadBalancer   <Internal-IP>   <External-IP>   80:30870/TCP,443:32602/TCP,<other-port>:32388/TCP
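With the placeholders above, every port is then served on the same external address, e.g.:
# one load balancer address, several ports
curl http://<External-IP>:80
curl https://<External-IP>:443
curl http://<External-IP>:<port-number>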
Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
Is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then, edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{account-id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
4. To create a Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer name.
11. Finally, in a web browser, test your custom domain over HTTPS:
https://yourdomain.com
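As an extra check from the command line, you could confirm the certificate is served (assuming your DNS record already points at the load balancer):
# -v prints the TLS handshake, -I sends a HEAD request
curl -vI https://yourdomain.com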
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
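As a rough sketch only (not the walkthrough's exact manifest), an ALB Ingress terminating TLS might look like this; the host, backing service, and certificate ARN are placeholders to substitute:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # Placeholder ARN; use your own ACM certificate here.
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:{region}:{account-id}:certificate/{id}
spec:
  rules:
  - host: yourdomain.com
    http:
      paths:
      - backend:
          serviceName: main-api-svc
          servicePort: 80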
In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes I'm facing the following challenges:
docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:
they could all be part of one pod and therefore communicate via localhost
they could all have a common label with matching key:value pairs and a Service, but how does one handle ports?
I need to expose an internal port on a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:
something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-nodeport
spec:
  type: NodePort
  ports:
  - name: "3000-30001"
    port: 3000
    nodePort: 30001
  selector:
    app: frontend
status:
  loadBalancer: {}
Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?
As you researched, you do indeed have two approaches:
If your containers are to be scaled together, then place them inside the same pod and communicate through localhost over separate ports. This is less likely your case, since this approach suits a containerized app that resembles processes on one physical box more than separate services/servers.
If your containers are to be scaled separately, which is more probably your case, then use services. With services, in place of localhost (from the previous point) you will use either just the service name as it is (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) if the services are accessed across namespaces. As opposed to the previous point, where you had to use different ports for your containers (since you address localhost), here you can reuse the same port across multiple services, since each service:port pair is unique. With a service you can also remap ports from containers if you wish to do so.
Since you asked this as an introductory question, two words of caution:
Service resolution works from the standpoint of a pod/container. To test it, you need to exec into an actual container (or proxy from the host); this is a common point of confusion. To be on the safe side, test service:port accessibility from within a container, not from the master (see the sketch after these cautions).
Finally, just to mimic the docker-compose setup for inter-container networking, you don't need to expose a NodePort or anything else; the service layer in Kubernetes will take care of the DNS handling. NodePort has a different purpose.
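For instance, a quick sanity check (pod and service names here are hypothetical) could look like:
# exec into any running container in the same namespace...
kubectl exec -it my-app-pod -- sh
# ...then resolve and call the service from inside it:
nslookup my-service
wget -qO- http://my-service:8080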
I need to expose an internal port on a certain NodePort. What does such a service config look like?
You are on a good track; here is a nice overview to get you started, and a reference example relevant to your question is given below:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP
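With that manifest applied, the service should be reachable from outside the cluster on any node's address at the fixed NodePort, for example:
# <node-ip> is the IP of any worker node
curl http://<node-ip>:30036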
Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?
The first one is, say, an API server; we'll call it svc-my-api. It will use pod(s) labeled app: my-api, communicate with the pod's port 80, and be accessible by other pods (in the same namespace) as host svc-my-api and port 8080:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-api
  labels:
    app: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host svc-my-database and port 3306:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-database
  labels:
    app: my-database
spec:
  selector:
    app: my-database
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
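To illustrate how the two services are consumed, from a shell inside one of the API containers (assuming wget and a mysql client are available in the image) you could run:
wget -qO- http://svc-my-api:8080          # reaches an api pod via the first service
mysql -h svc-my-database -P 3306 -u app   # reaches the database via the second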
1.- You can add some parameters to your pod resource (or any other resource that is going to create a pod), as follows:
...
spec:
  hostname: foo-{1..4} # keep in mind this line
  subdomain: bar       # and this line
  containers:
  - image: busybox
...
Note: imagine you just created 4 pods, with hostnames foo-1, foo-2, foo-3 and foo-4. These are separate pods; you can't actually write foo-{1..4}, it is just shorthand for demo purposes.
If you now create a service with the same name as the subdomain, you would be able to reach the pod from anywhere in the cluster by hostname.service-name.namespace.svc.cluster.local.
Example:
apiVersion: v1
kind: Service
metadata:
  name: bar # my subdomain is called "bar", so is this service
spec:
  selector:
    app: my-app
  ports:
  - name: foo
    port: 1234
    targetPort: 1234
Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.
At this point, look what happens (from any pod, within the cluster):
/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server: 10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local
2.- The second part of your question is almost correct. This is a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: my-app
  type: NodePort
This service listens on port 80, so it is reachable on port 80 from within the cluster. It is also exposed on each node on a port from the NodePort range (30000-32767 by default), chosen at random since no nodePort is specified. If that port happens to be 30001, the same service is then available on port 30001 of every node from the outside world. Either way, requests are finally forwarded to port 8080 of the container.
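Concretely, with hypothetical addresses and an assigned NodePort of 30001:
# from inside the cluster, via the service's own port:
wget -qO- http://svc-nodeport:80
# from outside the cluster, via any node's public IP and the NodePort:
curl http://<node-ip>:30001
# both paths end up at port 8080 in the container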
I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an IP address that is externally available with a dedicated port for each tenant
I don't want to expose the nodes directly to the internet
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: UDP
    name: my-udp
  - port: 8222
    targetPort: 8222
    protocol: TCP
    name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy an NGINX ingress controller on your AWS cluster.
Change your service my-service type from NodePort to ClusterIP.
Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Do the same for the ConfigMap udp-services:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally using the nginx controller's address: <ip>:8222 (TCP) and <ip>:8111 (UDP).
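Note that the ports must also be opened on the ingress controller's own Service, and (depending on the install) the controller must be started with the --tcp-services-configmap/--udp-services-configmap flags pointing at those ConfigMaps. A sketch of the Service excerpt, with names matching the example above:
# excerpt of the ingress-nginx controller Service
spec:
  type: LoadBalancer
  ports:
  - name: my-tcp
    port: 8222
    protocol: TCP
    targetPort: 8222
  - name: my-udp
    port: 8111
    protocol: UDP
    targetPort: 8111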
The description provided by @ffledgling is what you need.
But I have to mention that if you want to expose ports, you have to use a load balancer or expose nodes to the Internet. For example, you can expose a node to the Internet and allow access only to some necessary ports.
I'm trying to expose my Deployment to a port which I can access through my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the Docker container image both expose port 8000, and the pod is labeled with app: bot.
The first results in a service whose provisioning never finishes: the external IP never gets assigned.
The second results in ports of bot:8000 TCP and bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
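For example, with the bot service above, the --url flag prints the address instead of opening a browser:
minikube service bot --url
# e.g. http://192.168.99.100:30356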