Where do I find the host IP address for an app deployed in minikube

I'm deploying a Spring Boot app in minikube that connects to a database running on the host. Where do I find the IP address that the app can use to get back to the host? For Docker I can use ifconfig and get the IP address from the docker0 entry. ifconfig shows another device with IP address 172.18.0.1. Would that be how my app would get back to the host?

I think I understood you correctly, and this is what you are asking for.
Minikube is started as a VM on your machine, so you need to know the IP that Minikube starts with. You can find it with minikube status or minikube ip; the output might look like:
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.1
This will only give you the IP address of Minikube, not of your application.
In order to connect to your app from outside Minikube, you need to expose it as a Service.
An example of such a Service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  ports:
  - nodePort: 31317
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: webapp
You can see the results:
$ kubectl get services -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
postgres   ClusterIP   10.0.0.140   <none>        5432/TCP         32m   app=postgres
webapp     NodePort    10.0.0.235   <none>        8080:31317/TCP   2s    app=webapp
You will be able to connect to the webapp from inside the cluster using 10.0.0.235:8080, or from outside the cluster using the Minikube IP and port 31317.
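For example, with the Minikube IP and NodePort shown above (assuming the app answers HTTP):
$ curl http://192.168.99.1:31317/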
I also recommend going through the Hello Minikube tutorial.

It was the 172.18.0.1 IP address. I passed it to the Spring app running in minikube with a configmap like this:
kubectl create configmap springdatasourceurl --from-literal=SPRING_DATASOURCE_URL=jdbc:postgresql://172.18.0.1:5432/bookservice
The app also needed SPRING_DATASOURCE_DRIVER_CLASS_NAME to be set in a configmap, and the credentials SPRING_DATASOURCE_PASSWORD and SPRING_DATASOURCE_USERNAME to be set as secrets.
More information on configmaps and secrets is here.
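For example, the remaining settings could be created like this (a sketch; the driver class is the standard PostgreSQL one, while the secret name and credential values here are placeholders):
kubectl create configmap springdatasourcedriver --from-literal=SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
kubectl create secret generic springdatasourcecreds --from-literal=SPRING_DATASOURCE_USERNAME=dbuser --from-literal=SPRING_DATASOURCE_PASSWORD=changeme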

Related

Load balancer error during Kubernetes 3 tier configuration [duplicate]

I am trying to deploy nginx on Kubernetes; the Kubernetes version is v1.5.2.
I have deployed nginx with 3 replicas; the YAML file is below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
Now I want to expose its port 80 on port 30062 of the node, so I created the service below:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
  - name: http
    port: 80
    nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer
The service is working as it should, but the external IP is showing as pending, both on the Kubernetes dashboard and in the terminal.
It looks like you are using a custom Kubernetes cluster (created with minikube, kubeadm or the like). In this case, there is no LoadBalancer integration (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller.
With an Ingress Controller you can set up a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
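For illustration, a minimal Ingress for the nginx Service above might look like this (a sketch; the hostname is made up, and apiVersion extensions/v1beta1 matches the era of the question, while newer clusters use networking.k8s.io/v1):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-ils-service
          servicePort: 80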
If you are using Minikube, there is a magic command!
$ minikube tunnel
Hopefully someone can save a few minutes with this.
Reference link
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
If you are not using GCE or EKS (you used kubeadm), you can add an externalIPs spec to your Service YAML. You can use the IP associated with your node's primary interface, such as eth0. You can then access the service externally using the external IP of the node.
...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
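Combined with the Service from the question above, the full manifest might look like this (192.168.0.10 stands in for your node's actual IP):
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
  ports:
  - name: http
    port: 80
    nodePort: 30062
  selector:
    app: nginx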
I created a single-node k8s cluster using kubeadm. When I tried port forwarding and kubectl proxy, the external IP still showed as pending.
$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s
In my case I've patched the service like this:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
After this, it started serving over the public IP:
$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s
To access a service on minikube, you need to run the following command:
minikube service [-n NAMESPACE] [--url] NAME
More information here: Minikube GitHub
When using Minikube, you can get the IP and port through which you can access the service by running:
minikube service [service name]
E.g.:
minikube service kubia-http
If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.
Step 1: Install MetalLB in your cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Step 2: Configure it by using a configmap
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105  # Update this with your nodes' IP range
Step 3: Create your service to get an external IP (it would be a private IP, though); see the sketch below.
FYR: [screenshots comparing the service before and after the MetalLB installation]
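A LoadBalancer Service will then get an address from the pool configured above; a minimal sketch (the app name my-app and its ports are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-app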
If running on minikube, don't forget to mention the namespace if you are not using the default one:
minikube service << service_name >> --url --namespace=<< namespace_name >>
Following @Javier's answer, I decided to go with patching up the external IP for my load balancer:
$ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'
This will replace the 'pending' state with a patched-up IP address you can use for your cluster.
For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.
This is not the cleanest way to do it; I needed a temporary solution. Hope this helps somebody.
If you are using minikube, then run the commands below from the terminal:
$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245
or simply
$ curl http://$(minikube ip):31245
In case someone is using MicroK8s: you need a network load balancer.
MicroK8s comes with MetalLB; you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.
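You can also pass the address pool directly to skip the interactive prompt (the range below is only an example):
microk8s enable metallb:10.64.140.43-10.64.140.49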
In Kubernetes, the general way to expose an application running on a set of Pods as a network service is called a Service. There are four types of Service in Kubernetes.
ClusterIP
The Service is only reachable from within the cluster.
NodePort
You'll be able to reach the Service from outside the cluster using NodeIP:NodePort. The default node port range is 30000-32767; this range can be changed by defining --service-node-port-range at cluster creation time.
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
ExternalName
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
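For illustration, a minimal ExternalName Service pointing at that host might look like this (the Service name external-db is made up):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: foo.bar.example.com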
Only LoadBalancer fills in the EXTERNAL-IP column, and it only works if the Kubernetes cluster is able to assign an IP address for that particular Service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer Services.
I hope this clears up your doubt.
You can patch in the IP of the node where the pods are hosted (the node's private IP); this is the easy workaround.
Taking reference from the posts above, the following worked for me:
kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx Private IP of Physical Server - Node - where deployment is done "]}}'
Adding a solution for those who encountered this error while running on amazon-eks.
First of all, run:
kubectl describe svc <service-name>
And then review the events field in the example output below:
Name:                     some-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector:                 app=some
Type:                     LoadBalancer
IP:                       10.100.91.19
Port:                     <unset>  80/TCP
TargetPort:               5000/TCP
NodePort:                 <unset>  31022/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age  From                Message
  ----     ------                  ---  ----                -------
  Normal   EnsuringLoadBalancer    68s  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  67s  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Review the error message:
Failed to ensure load balancer: could not find any suitable subnets for creating the ELB
In my case, the reasons that no suitable subnets were found for creating the ELB were:
1: The EKS cluster was deployed on the wrong subnet group - internal subnets instead of public-facing ones.
(*) By default, Services of type LoadBalancer create public-facing load balancers if no service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.
2: The subnets weren't tagged according to the requirements mentioned here.
Tagging VPC with:
Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared
Tagging public subnets with:
Key: kubernetes.io/role/elb
Value: 1
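For example, with the AWS CLI the tags might be applied like this (the subnet ID and cluster name are placeholders):
aws ec2 create-tags --resources subnet-0abc1234 --tags Key=kubernetes.io/cluster/yourEKSClusterName,Value=shared
aws ec2 create-tags --resources subnet-0abc1234 --tags Key=kubernetes.io/role/elb,Value=1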
If you are using bare metal, you need the NodePort type:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
LoadBalancer works by default on other cloud providers like DigitalOcean, AWS, etc.
kubectl edit service ingress-nginx-controller
spec:
  type: NodePort
  externalIPs:
  - xxx.xxx.xxx.xx  # using the public IP
Use NodePort:
$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2 --port=5000
$ kubectl expose deployment user-login --type=NodePort --name=user-login-service
$ kubectl describe services user-login-service
(Note down the port)
$ kubectl cluster-info
(IP -> get the IP where the master is running)
Your service is accessible at (IP):(port)
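For example, if cluster-info reported 192.168.99.100 and the NodePort were 31000 (both hypothetical values), you could verify with:
curl http://192.168.99.100:31000/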
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and remains in pending status, and the Service will work the same way as a NodePort-type Service.
minikube tunnel
The solution below worked in my case.
First of all, try this command:
minikube tunnel
If it's not working for you, follow the steps below.
I restarted the minikube container:
minikube stop
then
minikube start
After that, re-run the dashboard:
minikube dashboard
After it finishes, execute:
minikube tunnel
I had the same problem.
Windows 10 Desktop + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2
Following the official docs, I resolved it with:
PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service servizio-php.
PS E:\docker\apache-php> kubectl get service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1      <none>        443/TCP          33h
servizio-php   LoadBalancer   10.98.218.86   127.0.0.1     8080:30270/TCP   4m39s
Then open the browser at http://127.0.0.1:8080/
Same issue:
os>kubectl get svc right-sabertooth-wordpress
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP
os>minikube service list
|-------------|----------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------------|--------------------------------|
| default | kubernetes | No node port |
| default | right-sabertooth-mariadb | No node port |
| default | right-sabertooth-wordpress | http://192.168.99.100:30454 |
| | | http://192.168.99.100:30427 |
| kube-system | kube-dns | No node port |
| kube-system | tiller-deploy | No node port |
|-------------|----------------------------|--------------------------------|
It is, however, accessible via http://192.168.99.100:30454.
There are three types for exposing your service:
NodePort
ClusterIP
LoadBalancer
When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online.
Note: not a domain name, but a DNS entry.
So the LoadBalancer type does not work in our local minikube environment.
For those who are using minikube and trying to access a service of kind NodePort or LoadBalancer:
We don't get an external IP to access the service on the local system, so a good option is to use the minikube IP.
Use the command below to get the URL once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.
Check the kube-controller logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.
If you are not on a supported cloud (AWS, Azure, GCloud, etc.) you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it's still in beta.
Deleting the existing service and creating the same service anew solved my problem. My problem was that the load-balancing IP I defined was already in use, so the external endpoint stayed pending. When I changed to a new load-balancing IP it still didn't work.
Finally, deleting the existing service and creating a new one solved my problem.
For your use case, the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.
I was getting this error on Docker Desktop. I just quit it and turned it on again (Docker Desktop). It took a few seconds, then it worked fine.
Deleting all the older services and creating a new one resolved my issue. The IP was bound to an older service. Just try "$ kubectl get svc" and then delete each old svc one by one: "$ kubectl delete svc <svc-name>".
Maybe the subnet in which you are deploying your service does not have enough IPs.
If you are trying to do this in your on-prem cloud, you need an L4LB service to create the LB instances.
Otherwise you end up with the endless "pending" message you described. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M
You can use open-source tools to solve this problem; the video provides some guidance on how the automation process should work.

NodePort doesn't work in OpenShift CodeReady Container

I installed the latest OpenShift CodeReady Containers on a CentOS VM, and then ran a TCP server app written in Java on OpenShift. The TCP server is listening on port 7777.
I ran the app and exposed it as a service with NodePort; everything seems to run well. The pod port is 7777, and the service port is 31777.
$ oc get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE                 NOMINATED NODE   READINESS GATES
tcpserver-57c9b44748-k9dxg   1/1     Running   0          113m   10.128.0.229   crc-2n9vw-master-0   <none>           <none>
$ oc get svc
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
tcpserver-ingres   NodePort   172.30.149.98   <none>        7777:31777/TCP   18m
Then I got the node IP; the command shows it as 192.168.130.11, and I can ping this IP from my VM successfully.
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
crc-2n9vw-master-0 Ready master,worker 26d v1.14.6+6ac6aa4b0 192.168.130.11 <none> Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa) 4.18.0-147.0.3.el8_1.x86_64 cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
Now I run a client app on my VM. Because I can ping the OpenShift node IP, I assumed the client app would connect successfully. Instead, the connection times out: my client fails to connect to the server running on OpenShift.
Please give your advice on how to troubleshoot the issue, or any ideas about it.
I understand your problem. As per what you described, I can see your node port is 31777.
The best way to debug this problem is to go step by step.
Step 1:
Check if you are able to access your app server using your pod IP and port, i.e. curl 10.128.0.229:7777/endpoint, from one of the nodes within your cluster. This helps you check whether the pod is working or not, even though kubectl describe pod gives you everything.
Step 2:
After that, on the node where the pod is deployed, i.e. 192.168.130.11, try to access your app server using curl localhost:31777/endpoint. If this works, the NodePort is accessible, i.e. your service is working fine without any issues.
Step 3:
After that, try to connect to your node using curl 192.168.130.11:31777/endpoint from the VM running your client. Just to let you know, 192.168.x.x is a private IP range, so I am assuming your client is within the same network and able to talk to 192.168.130.11:31777. Otherwise, make sure you open the respective port 31777 of 192.168.130.11 to the VM IP that runs the client.
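Putting the three checks together (the IPs and ports come from your question; /endpoint is a placeholder path):
# Step 1: from a node inside the cluster, hit the pod directly
curl 10.128.0.229:7777/endpoint
# Step 2: on the node hosting the pod (192.168.130.11), test the NodePort locally
curl localhost:31777/endpoint
# Step 3: from the client VM, test node IP plus NodePort
curl 192.168.130.11:31777/endpoint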
This is a small process for debugging issues with services and pods. The best approach, though, is to use an Ingress and an ingress controller, which will let you talk to your app server with a URL instead of an IP address and port numbers. However, even with an Ingress and ingress controller, the best way to verify that all the parts are working as expected is to follow these steps.
Please feel free to let me know about any issues.
Thanks for the prompt answer.
Regarding Step 1: I don't know where I could run "curl 10.128.0.229:7777/endpoint" inside the cluster, but I checked the status of the pod by going inside it; port 7777 is listening as expected.
$ oc rsh tcpserver-57c9b44748-k9dxg
sh-4.2$ netstat -nap | grep 7777
tcp6 0 0 127.0.0.1:7777 :::* LISTEN 1/java
Regarding Step 2: running "curl localhost:31777/endpoint" on the node where the pod is deployed failed.
$ curl localhost:31777/endpoint
curl: (7) Failed to connect to localhost port 31777: Connection refused
That means it seems that port 31777 is not opened by OpenShift.
Do you have any ideas how to check why 31777 is not opened by OpenShift?
More information about service definition:
apiVersion: v1
kind: Service
metadata:
  name: tcpserver-ingress
  labels:
    app: tcpserver
spec:
  selector:
    app: tcpserver
  type: NodePort
  ports:
  - protocol: TCP
    port: 7777
    targetPort: 7777
    nodePort: 31777
Service status:
$ oc describe svc tcpserver-ingress
Name:                     tcpserver-ingress
Namespace:                myproject
Labels:                   app=tcpserver
Annotations:              <none>
Selector:                 app=tcpserver
Type:                     NodePort
IP:                       172.30.149.98
Port:                     <unset>  7777/TCP
TargetPort:               7777/TCP
NodePort:                 <unset>  31777/TCP
Endpoints:                10.128.0.229:7777
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

How do you access a pod inside a Kubernetes Cluster in Windows?

I have MariaDB running inside a Kubernetes node in Minikube in a VirtualBox VM on Windows. I want to communicate with the MariaDB pod so that I can read a table and visualize the contents in Tableau. In order to do this I need to expose the pod outside of Minikube, and also be able to access it through VirtualBox.
I have not exposed the pod yet, but if I understand correctly I need to write a NodePort Service to expose it outside of Minikube:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: mariadb
    chart: mariadb-6.4.0
    component: master
    controller-revision-hash: my-release-mariadb-master-7b7cc7895
    release: my-release
    statefulset.kubernetes.io/pod-name: my-release-mariadb-master-0
If I did not have Minikube inside VirtualBox, I should now be able to connect to the pod through the service. But in my case, how would one "open up" VirtualBox so that I can communicate with Minikube and then the NodePort?
Thank you for any help!
In order to open the exposed service, the minikube service command can be used:
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get svc
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
hello-minikube   10.0.0.102   <nodes>       8080/TCP   7s
kubernetes       10.0.0.1     <none>        443/TCP    13m
$ minikube service hello-minikube
Opening kubernetes service default/hello-minikube in default browser...
This command will open the specified service in your default browser.
You can also get the URL using:
$ minikube service hello-minikube --url
http://192.168.99.100:31167

Kubernetes Service not being assigned an (external) IP address

There are various answers to very similar questions around SO that all show what I expect my deployment to look like; however, mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node and a replication controller, and a single pod template has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251
The dashboard is available at 10.49.106.251:30000
I am deploying a service with a YAML file, but the service is never assigned an external IP - the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way results in no possible connection to the service. I should also point out that the 10 replicated pods all match the selector.
The results of running kubectl get svc for the default setup, and after updating the external IP, are below:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-service   NodePort   10.108.61.233   <none>          8080:32406/TCP   1m
hello-service   NodePort   10.108.61.233   10.49.106.251   8080:32406/TCP   1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service   NodePort   10.108.61.233   <nodes>   8080:32406/TCP   1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also met the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I found a kubectl command which can help:
kubectl port-forward service/service-name 9092
where 9092 is the port to forward, so that I can access applications inside the cluster from my local development environment.
The important note is that this is not a 'production'-grade solution; it works well as a temporary hack to get at the cluster's insides.
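The local and remote ports can also differ; for instance, to forward a free local port to a different service port (service/service-name is the placeholder from above):
kubectl port-forward service/service-name 15432:5432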
Using NodePort means a port is opened on every node of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP) it will be http://[the node IP]:32406/. This will hit your minikube node, and the request will be routed to one of your pods in round-robin fashion.
I had the same problem when trying to deploy a simple hello-world image locally with Kubernetes v1.9.2.
After two weeks of attempts, it seems that Kubernetes exposes nginx web server applications internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80

Accessing Kubernetes pods/services through one IP (Preferably Master Node)

I have a local Kubernetes installation with a master node and two worker nodes. Is there a way to access all services/pods that will be installed on Kubernetes through the master node's IP?
What I mean is: say I have a test service running on port 30001 on each worker, and I want to access this service like http://master-node:30001. Any help is appreciated.
You can use "the proxy verb" to acces nodes, pods, or services through the master. Only HTTP and HTTPS can be proxied. See these docs and these docs.
There are some ways to do it:
Define a NodePort Kubernetes Service
Use kubefwd or the port-forward command
Use the proxy command (only supports HTTP & HTTPS)
In this answer, I explain how to define a NodePort Service.
The NodePort service is explained as below (Service - Kubernetes):
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Here is an example of the NodePort service for PostgreSQL:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
  type: NodePort
  selector:
    app: postgres
The port field stands for both the service port and the default target port. There is also a nodePort field that allows you to choose the port used to access the service from outside the cluster (via the node's IP and the nodePort).
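For example, to pin the node port explicitly rather than letting Kubernetes pick one (30864 here simply mirrors the output below):
spec:
  ports:
  - port: 5432
    nodePort: 30864
  type: NodePort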
To view the assigned nodePort (if you don't specify it in the manifest), you can run the command:
kubectl get services -n postgres
The output should look similar to:
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
postgres   NodePort   10.96.156.75   <none>        5432:30864/TCP   6d9h   app=postgres
In this case, the nodePort is 30864; this is the port used to access the service from outside the cluster.
To find out the node's IP, the command to use is:
kubectl get nodes -o wide
The output should look similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
homedev-control-plane Ready master 30d v1.19.1 172.18.0.2 <none> Ubuntu Groovy Gorilla (development branch) 5.9.1-arch1-1 containerd://1.4.0
If what you need is the IP only:
kubectl get nodes -o wide --no-headers | awk '{print $6}'
In this case, the node's IP is 172.18.0.2. Hence, to connect to the Postgres instance in the local Kubernetes cluster from your host machine, the command would look like this:
psql -U postgres -h 172.18.0.2 -p 30864 -d postgres