Cloud Composer unable to connect to Cloud SQL Proxy service - kubernetes

We launched a Cloud Composer cluster and want to use it to move data from Cloud SQL (Postgres) to BQ. I followed the notes on doing this from these two resources:
Google Cloud Composer and Google Cloud SQL
https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
We launch a pod running the cloud_sql_proxy and a service to expose the pod. The problem is that Cloud Composer cannot see the service; when testing with an ad-hoc query, it reports the error:
could not translate host name "sqlproxy-service" to address: Name or service not known
Trying the service IP address instead results in the page timing out.
The -instances value passed to cloud_sql_proxy works when used in a local environment or Cloud Shell. The log files seem to indicate that no connection is ever attempted:
me@cloudshell:~ (my-proj)$ kubectl logs -l app=sqlproxy-service
2018/11/15 13:32:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2018/11/15 13:32:59 using credential file for authentication; email=my-service-account@service.iam.gserviceaccount.com
2018/11/15 13:32:59 Listening on 0.0.0.0:5432 for my-proj:my-ds:my-db
2018/11/15 13:32:59 Ready for new connections
I see a comment here https://stackoverflow.com/a/53307344/1181412 that possibly this isn't even supported?
YAML
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service
  namespace: default
  labels:
    app: sqlproxy
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlproxy
  labels:
    app: sqlproxy
spec:
  selector:
    matchLabels:
      app: sqlproxy
  template:
    metadata:
      labels:
        app: sqlproxy
    spec:
      containers:
      - name: cloudsql-proxy
        ports:
        - containerPort: 5432
          protocol: TCP
        image: gcr.io/cloudsql-docker/gce-proxy:latest
        imagePullPolicy: Always
        command: ["/cloud_sql_proxy",
                  "-instances=my-proj:my-region:my-db=tcp:0.0.0.0:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials

The information you found in the answer you linked is correct - ad-hoc queries from the Airflow web server to cluster-internal services within the Composer environment are not supported. This is because the web server runs on App Engine flex using its own separate network (not connected to the GKE cluster), which you can see in the Composer architecture diagram.
Since that is the case, your SQL proxy must be exposed on a public IP address for the Composer Airflow web server to connect to it. For any services/endpoints listening on RFC1918 addresses within the GKE cluster (i.e. not exposed on a public IP), you will need additional network configuration to accept external connections.
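One option, for example, is to switch the proxy Service in the question from ClusterIP to a LoadBalancer so it gets an external IP. A minimal sketch, assuming the same labels as above (note that exposing the proxy publicly means you should restrict who can reach it, e.g. with loadBalancerSourceRanges):

apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service
  namespace: default
  labels:
    app: sqlproxy
spec:
  type: LoadBalancer
  # hypothetical allow-list; replace with the address range the web server connects from
  loadBalancerSourceRanges:
  - 203.0.113.0/24
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy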
If this is a major blocker for you, consider running a self-managed Airflow web server. Since this web server would run in the same cluster as the SQL proxy you set up, there would no longer be any issues with name resolution.

Related

kubernetes apps available on localhost

I have local and dockerized apps which work excellently on localhost: Java backend at 8080, Angular at 4200, ActiveMQ at 8161, and Postgres at 5432.
Now I am also trying to kubernetize the apps so that they work on localhost.
As far as I know, Kubernetes assigns random IPs within the cluster. What should I do to make the apps reachable on localhost and able to listen to each other? Is there any way to make them start on those localhost ports automatically instead of using port forwarding for each service?
Every service and deployment has a similar structure:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image:
        ports:
        - containerPort: 8080
I tried port-forwarding, which works, but it requires a lot of manual work (opening a few new PowerShell windows and then doing the port forwarding by hand).
In the Kubernetes ecosystem, apps talk to each other through their Services.
If they are in the same namespace, they can address each other directly by service name; if not, they need to specify the full name, which includes the namespace:
my-svc.my-namespace.svc.cluster-domain.example
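For example, a quick way to check in-cluster name resolution from a throwaway pod (a sketch; backend.default assumes the manifests above were applied to the default namespace):
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup backend.default.svc.cluster.local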
Never mind, I found a way to do it automatically with port forwarding by simply running one script.
I wrote a .bat script with these steps (a sketch of the script follows the list):
apply all deployments from the deployments file
apply all services from the services file
wait 15 seconds to give the pods time to change state from Pending to Running
do port forwarding for each service; every forward runs in a new PowerShell window without exiting
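A rough sketch of what such a script might look like (file names, service names, and local ports are assumptions based on the question; adjust them to your setup):

REM port-forward.bat (hypothetical sketch)
kubectl apply -f deployments.yaml
kubectl apply -f services.yaml
REM give the pods time to move from Pending to Running
timeout /t 15
REM each forward runs in its own PowerShell window that stays open
start powershell -NoExit -Command "kubectl port-forward service/backend 8080:8080"
start powershell -NoExit -Command "kubectl port-forward service/frontend 4200:4200"
start powershell -NoExit -Command "kubectl port-forward service/activemq 8161:8161"
start powershell -NoExit -Command "kubectl port-forward service/postgres 5432:5432"

Since each kubectl port-forward call blocks, start opens a separate window per service so they all keep running.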

Kubernetes Pod can't connect to local postgres server with Hasura image

I'm following this tutorial to connect a hasura kubernetes pod to my local postgres server.
When I create the deployment, the pod's container fails to connect to postgres (CrashLoopBackOff and keeps retrying), but doesn't give any reason why. Here are the logs:
{"type":"pg-client","timestamp":"2020-05-03T06:22:21.648+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v1.2.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://USER:@localhost:5432/my_db
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
I'm using postgres://USER:@localhost:5432/MY_DB as the postgres URL - is "localhost" the correct address here?
I verified that the above postgres URL works when I try it locally (no password):
> psql postgres://USER:@localhost:5432/my_db
psql (12.2)
Type "help" for help.
> my_db=#
How else can I troubleshoot it? The logs aren't very helpful...
If I understood you correctly, the issue is that the Pod (from "inside" Minikube) cannot access Postgres installed on the host machine (the one that runs Minikube itself) via localhost.
If that is the case, please check this thread.
... the Minikube VM can access your host machine's localhost on 192.168.99.1 (127.0.0.1 from Minikube would still be Minikube's own localhost).
Technically, for the Pod, localhost is the Pod itself. The host machine and Minikube are connected via a bridge. You can find out the exact IP addresses and routes with ifconfig and route -n on your Minikube host.
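In practice that usually means pointing the connection string at the host's address as seen from inside Minikube, rather than at localhost. A minimal sketch of the env change in the Deployment above, assuming the default 192.168.99.1 bridge address mentioned in the quote (your address may differ, and Postgres on the host must also listen on that interface and allow the connection in pg_hba.conf):

env:
- name: HASURA_GRAPHQL_DATABASE_URL
  # assumed host address as seen from inside Minikube; verify with route -n
  value: postgres://USER:@192.168.99.1:5432/my_db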

How do I publish .NET Core to Digital Ocean Kubernetes

I am trying to publish a .NET Core Web App and a .NET Core API.
I have been googling and can't find a way to deploy one, let alone two, .NET Core apps to a DigitalOcean Kubernetes cluster. I have two nodes, have created a valid manifest, and have built a Docker image locally, and it seems to pass validation. But I can't actually deploy it. I'm new to Kubernetes, and anything I find seems to be related to Google's or Azure's Kubernetes offerings.
I don't, unfortunately, have more information than this.
I have one. The odd thing is that DO is actually smart not to have its own docs, since it doesn't have to: you can reuse Google's and Azure's Kubernetes documentation on your DO cluster. The key difference is mostly in the naming, I suppose; there could be more differences, but so far I haven't hit a single problem while applying instructions from GCP's docs.
https://nozomi.one is running on DO's k8 cluster.
Here's an awesome-dotnetcore-digitalocean-k8 for you.
Errors you may/will face:
Kubernetes - Error message ImagePullBackOff when deploy a pod
Utilising .NET Core appsettings in docker k8
Push the secret file like this (recommended only for staging or below, unless you have a super-secret way to deploy this):
kubectl create secret generic secret-appsettings --from-file=./appsettings.secrets.json
And then create a deployment configuration similar to this. Notice that we've added the appsettings at the last few lines:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxxxx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
    spec:
      containers:
      - name: xxxxx
        image: xxxxx/xxxxxx:latest
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Production"
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings
Deploying this manifest is as simple as:
kubectl create -f deployment.yaml
And if you want to test locally in docker first:
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/test-app:v1
All in all, everything above will help you to deploy your pods.
You need to understand that deploying a new project/app works in this systematic way:
Create a deployment, which is what pulls the image for you and creates pods that will be deployed to the nodes.
Create a service, which points the proper ports (and more; I've never tried to do more, lol) at your app(s).
This is what a service looks like:
apiVersion: v1
kind: Service
metadata:
  name: nozweb
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: nozweb
Always ensure that spec.selector.app specifically matches the following:
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
in your deployment configuration. That's how they link up.
Create an ingress (optional) that will act as a reverse proxy to your .NET Core app/project. This is optional because we've got Kestrel running!
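For completeness, a minimal sketch of such an Ingress (the name, host, and ingress class are assumptions; it presumes an NGINX ingress controller is installed in the cluster and routes traffic to the nozweb Service above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nozweb-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com   # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nozweb
            port:
              number: 80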

Kubernetes pod can't access other pods exposed by a service

New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running. Both show in the output from kubectl get [svc/pods]. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
selector:
app: postgres
type: ClusterIP
ports:
- port: 5432
protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!

IP Address assignment for the application running inside the pod in kubernetes

I run my application in one pod and the Mongo database in another pod.
For my application to start up successfully, it needs to know the IP address where Mongo is running.
My questions are below:
How do I find out the Mongo pod's IP address so that I can configure it in my application?
My application will run on some IP and port, and this is provided as part of some configuration file. But as these are containerized and Kubernetes assigns the Pod IP address, how can my application pick up this IP address as its own IP?
You need to expose MongoDB using a Kubernetes Service. With the help of Services, there is no need for an application to know the actual IP address of the Pod; you can use the service name to resolve MongoDB.
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
An example using mysql:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    name: mysql
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: wppassword
    ports:
    - containerPort: 3306
      name: mysql
If there is an application container in the same namespace trying to use the mysql container, it can directly use mysql:3306 to connect, without using the Pod IP address; and mysql.namespace_name:3306 if the app is in a different namespace.
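Applied to the question's setup, a minimal sketch of a MongoDB Service might look like this (the mongodb name and label are assumptions; the application would then connect to mongodb:27017, or mongodb.namespace_name:27017 from another namespace):

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  ports:
  - port: 27017
  selector:
    name: mongodb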