I'm looking at the documentation for ECS contexts in Docker, but I can't seem to find more than a couple of articles.
Seems like a great idea, but I'm now wondering how to pick the VPC of the ECS cluster, and who knows what else...
Have you seen this page? For example here you can read how to pick a VPC.
Specifically:
Use x-aws-vpc as a top-level element in your Compose file to set the ARN of a VPC when deploying a Compose application.
This link includes examples of how you would use these extensions. For example, for the VPC and the cluster name it would be:
x-aws-vpc: "vpc-25435e"
x-aws-cluster: "ClusterName"
services:
  app:
    image: nginx
    ports:
      - 80:80
I want to connect to my Postgres DB. I use the deployment's NodePort IP for the host field, along with data from the config file:
data:
  POSTGRES_DB: postgresdb
  POSTGRES_PASSWORD: my_password
  POSTGRES_USER: postgresadmin
But I get an error. What am I doing wrong? If you need more info, let me know.
Unless you are connected to your cluster through a VPN (or direct connect), you can't access 10.121.8.109. It's a private IP address and is only reachable by apps and services within your VPC.
You need to make your NodePort service reachable from outside. Run kubectl get service to find the node port it was assigned, then connect to a node's external IP address on that port.
Rather than using a NodePort service, you are better off using a LoadBalancer type service, which gives you more flexibility in managing this, especially in a production environment, though it will cost a little more. The likelihood of a node's IP address changing is high, whereas a load balancer or ingress would handle this for you through a fixed DNS name. So weigh the pros and cons of each service type based on your workload.
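For the LoadBalancer route, here is a minimal sketch of what such a service could look like; the label app: postgres and the standard port 5432 are assumptions about your deployment:
kind: Service
apiVersion: v1
metadata:
  name: postgres-external
spec:
  type: LoadBalancer       # the cloud provider (or MetalLB on bare metal) assigns a public IP
  selector:
    app: postgres          # assumed label on your Postgres pods
  ports:
  - protocol: TCP
    port: 5432             # port clients use on the external IP
    targetPort: 5432       # port the Postgres container listens on
Once kubectl get service postgres-external shows an address under EXTERNAL-IP, that address, not the pod's private 10.x one, is what belongs in the host field.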
I am looking to have a dynamic etcd cluster running inside my k8s cluster. The best way I can think of doing it dynamically (no hardcoded addresses, names, etc.) is to use DNS discovery, with the internal k8s DNS (CoreDNS).
I can only find scattered information about the SRV records created for services in k8s, and some explanations of how etcd DNS discovery works, but no complete how-to.
For example:
how does k8s name SRV entries?
should they be named with a specific way for etcd to be able to find them?
should any special CoreDNS setting be set?
Any help on that would be greatly appreciated.
references:
https://coreos.com/etcd/docs/latest/v2/clustering.html#dns-discovery
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
how does k8s name SRV entries?
Via Service.spec.ports[].name, which is why almost everything in Kubernetes has to be a DNS-friendly name: a lot of the time, Kubernetes really does put those names into DNS for you. Each named port of a Service gets an SRV record of the form _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local.
A Pod that has dig or a new enough nslookup will then show you:
$ dig SRV kubernetes.default.svc.cluster.local.
and you'll see the names of the ports that the kubernetes Service is advertising.
should they be named with a specific way for etcd to be able to find them?
Yes, as one can see in the page you linked to, they need to be named one of these four:
_etcd-client
_etcd-client-ssl
_etcd-server
_etcd-server-ssl
So something like this on the Kubernetes side:
ports:
- name: etcd-client     # becomes the _etcd-client SRV entry
  port: 2379
  targetPort: whatever  # whatever port your etcd container listens on for clients
- name: etcd-server     # becomes the _etcd-server SRV entry
  port: 2380
  targetPort: whatever  # whatever port it listens on for peers
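For etcd's DNS discovery, those SRV records usually need to resolve to the individual peers rather than to a single virtual IP, which is what a headless service gives you. Here is a minimal sketch, assuming the pods are labelled app: etcd and live in the default namespace (both assumptions):
apiVersion: v1
kind: Service
metadata:
  name: etcd
spec:
  clusterIP: None          # headless: SRV records point at the individual pods
  selector:
    app: etcd              # assumed label on the etcd pods
  ports:
  - name: etcd-client      # published as _etcd-client._tcp.etcd.default.svc.cluster.local
    port: 2379
  - name: etcd-server      # published as _etcd-server._tcp.etcd.default.svc.cluster.local
    port: 2380
etcd would then be pointed at the domain, e.g. --discovery-srv=etcd.default.svc.cluster.local, and no special CoreDNS settings should be needed beyond the defaults.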
I'm pretty sure this is a basic use case when running apps on Kubernetes, but so far I haven't been able to find a tutorial, nor to figure out from the documentation, how to make it work.
I have an application which listens on port 9000. When run on my localhost, I can access it through a web browser at localhost:9000. When run in a Docker container on my VPS, it's also accessible at myVPSAddress:9000. Now the question is: how do I deploy it on Kubernetes running on that very same Virtual Private Server and expose the application so it is reachable in the same way as when deployed with Docker? I can access the application from within the VPS on the cluster address, but not on the IP address of the server itself. Can somebody show me a basic Dockerfile with a description of what it does, or some idiot-proof way to make this work? Thanks
While one would think this is a very basic use case, it is not quite that simple for people running their own Kubernetes clusters on bare-metal servers (the way you are on your VPS).
The recommended way of exposing an application to "the world" is to use Kubernetes services; see this piece of documentation about exposing services. You define a Kubernetes service of type NodePort or of type LoadBalancer.*
Here is what a dead simple NodePort service looks like (note that the type has to be set explicitly, because the default type is ClusterIP):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort          # expose the service on a port of every node
  selector:
    app: MyApp            # matches the pods running your application
  ports:
  - protocol: TCP
    port: 9000            # the service's port inside the cluster
    targetPort: 9376      # the port your container listens on
This will expose the pods selected by app: MyApp (listening on port 9376 inside their containers) as a service on port 9000 within the cluster and, because the type is NodePort, also on an automatically assigned port in the 30000-32767 range on every node of your VPS cluster. Run kubectl get service my-service to see which node port was allocated.
Assuming your nodes have a public IP (which from your question I assume they do), you can then safely do curl <node-ip>:<node-port>.
Because a random high port is usually not ideal to expose to users, people use services of type LoadBalancer instead. This service type provides a dedicated IP for each of your services instead of a node port.
These services are first-class citizens on cloud-managed clusters such as Google's GKE, but if you run your own Kubernetes cluster (set up using, say, kubeadm), then you need to deploy a load-balancer provider yourself. I've used the excellent MetalLB and it works flawlessly once it has been set up, but you do need to set it up yourself. If you want DNS names for your services as well, you should also look at ExternalDNS.
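For completeness, here is a sketch of the same service once a load-balancer provider such as MetalLB is in place (this assumes MetalLB has already been installed and configured with an address pool):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: LoadBalancer      # MetalLB (or a cloud provider) assigns a dedicated external IP
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 9000            # port users connect to on the load balancer IP
    targetPort: 9376      # port your container listens on
kubectl get service my-service will then show the assigned address under EXTERNAL-IP, and the app is reachable on port 9000 of that address.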
* A caveat here is that you can also assign an external IP to a service (via its externalIPs field) if you can somehow make that IP routable, but unless the network is under your control this is usually not a feasible approach, and I'd recommend looking at an LB provider instead.
I would like to configure traefik via docker-compose to expose more than one port of a component service.
For instance, when serving an ember-cli app, how can I expose both the main port and the live-reload port for development?
If you need to bind multiple ports of a container, you have to use the traefik.<segment_name>.* labels described here in the documentation. For example, something like this could be used in docker-compose.yml:
labels:
- traefik.ember.port=8080
- traefik.ember.frontend.rule=Host:mydomain.com
- traefik.reload.port=37531
- traefik.reload.frontend.rule=Host:mydomain.com;PathPrefixStrip:/reload
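For context, here is a sketch of how this could look in a complete docker-compose.yml with Traefik 1.x; the service name ember and its image name are assumptions, and the labels are the same as above:
version: "3"
services:
  traefik:
    image: traefik:1.7
    command: --docker            # enable the Docker provider
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  ember:
    image: my-ember-app          # hypothetical image for the ember-cli app
    labels:
      - traefik.ember.port=8080
      - traefik.ember.frontend.rule=Host:mydomain.com
      - traefik.reload.port=37531
      - traefik.reload.frontend.rule=Host:mydomain.com;PathPrefixStrip:/reload
With this, requests for mydomain.com are routed to port 8080 in the container, and requests for mydomain.com/reload (with the prefix stripped) go to the live-reload port 37531.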
I am trying to connect to a Docker container on Google Container Engine (GKE) from my local machine over the internet using TCP. So far I have used Kubernetes services, which give an external IP address, so the local machine can connect to the container on GKE through the service. When we create a service, we can specify only individual ports, not a port range. Please see my-ros-service.yaml below. In this case, we can access the container on port 11311 from outside of GCE.
However, some applications that run in my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port numbers the application will use, and cannot create the Kubernetes services before I run the application.
So far I have managed to connect to the container by creating many services, each with a different port, while the application is running. But this is not a realistic way to solve the problem.
My question is that:
How can I connect, from outside of GCE and through a Kubernetes service, to an application that exposes dynamic ports in a Docker container?
If possible, can we create a service that exposes dynamic ports for incoming connections before starting the application that runs in the container?
Any advice or information you could provide would be greatly appreciated.
Thank you in advance.
my-ros-service.yaml
kind: Service
apiVersion: v1beta1
id: my-ros-service
port: 11311
selector:
  name: my-ros
containerPort: 11311
createExternalLoadBalancer: true
I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.