Connect to Google Cloud SQL from Container Engine with Java App - kubernetes

I'm having a tough time connecting to a Cloud SQL Instance from a Java App running in a Google Container Engine Instance.
I whitelisted the Container Engine instance's external IP in the Access Control settings of Cloud SQL. Connecting from my local machine works fine; however, I haven't managed to establish a connection from my app yet.
I'm configuring the Container as (cloud-deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APPNAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: APPNAME
    spec:
      imagePullSecrets:
        - name: APPNAME.com
      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 111.111.111.111 # the cloud sql instance ip
            - name: MYQL_ENV_MYSQL_PASSWORD
              value: THEPASSWORD
            - name: MYQL_ENV_MYSQL_USER
              value: THEUSER
          ports:
            - containerPort: 9000
              name: APPNAME
using the connection url jdbc:mysql://111.111.111.111:3306/databaseName, resulting in:
Error while executing: Access denied for user 'root'@'ip address of the instance' (using password: YES)
I can confirm that the Container Engine external IP is set on the SQL instance.
I don't want to use the Cloud Proxy Image for now as I'm still in development stage.
Any help is greatly appreciated.

You must use the Cloud SQL Proxy, as described here: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/README.md
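For reference, a minimal sketch of running the proxy as a sidecar container in the same Deployment; the instance connection name (MYPROJECT:MYREGION:MYINSTANCE) and the credentials Secret (cloudsql-instance-credentials) are placeholders you would create yourself:

    spec:
      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 127.0.0.1  # the app now connects through the local proxy
          ports:
            - containerPort: 9000
        - image: gcr.io/cloudsql-docker/gce-proxy:1.16
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy",
                    "-instances=MYPROJECT:MYREGION:MYINSTANCE=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials

The JDBC URL then becomes jdbc:mysql://127.0.0.1:3306/databaseName, and no IP whitelisting is needed.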

Related

Kubernetes on Docker Desktop - local connectivity issue using a service between two pods

I have created an ASP.NET Core C# Kubernetes Microservice (Named: 'DemoApi') with an Angular Frontend app (Named: 'DemoApp').
It works when I run both containers through Docker Desktop or Docker Swarm mode, but I seem to have connectivity issues when I run them through Kubernetes on Docker Desktop.
To clarify the issue, I can display both
the front-end application at http://localhost:30005
the back-end application at http://localhost/demo
but while running the pods using Kubernetes, the front-end application does not display the data it should get from the API back-end.
Despite my search, I could not find a similar problem to mine. What could be the problem? Any assistance would be greatly appreciated.
The following is a short summary of the steps I took:
Defined the applications to read one from the other.
a. DemoApi:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddCors(options =>
    {
        options.AddPolicy("AllowAngularDevClient",
            builder =>
            {
                builder
                    .WithOrigins("http://localhost:4200")
                    .AllowAnyHeader()
                    .AllowAnyMethod();
            });
        options.AddPolicy("AllowAngularClient",
            builder =>
            {
                builder
                    .WithOrigins("http://localhost")
                    .AllowAnyHeader()
                    .AllowAnyMethod();
            });
    });
}
b. DemoApp (under app.component.ts):
export class AppComponent {
    title = 'DemoApp';
    response = "No data loaded, yet";

    constructor(private http: HttpClient)
    {
        this.http.get('http://localhost/demo', { responseType: 'text' }).subscribe((response: any) => {
            console.log(response);
            this.response = response;
        });
    }
}
Built the applications into an image
a. Built the ASP.NET Core C# using the default Visual Studio docker support (Using the solution explorer, Right-click the dockerfile > Build Docker Image).
b. Created a dockerfile for the angular application as follow:
FROM node:14-alpine as build-step
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build --prod
FROM nginx:1.21.0-alpine
COPY --from=build-step /app/dist/DemoApp /usr/share/nginx/html
then ran the command docker build -t demoapp .
✔️ Worked using Local Docker for Desktop
docker run --name demoapi-container demoapi
docker run --name demoapp-container --publish 4200:80 demoapp
✔️ Worked using Local Docker Swarm Mode
docker service create --name demoapi-svc -p 80:80 demoapi
docker service create --name demoapp-svc -p 4200:80 demoapp
❌ Did not work using Local Kubernetes for Docker Desktop
DemoApi:
apiVersion: v1
kind: Pod
metadata:
  name: demo-api-pod
  labels:
    name: demo-api-pod
    app: demo-api
spec:
  containers:
    - name: demo-api-container
      image: demoapi:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demoapi
  labels:
    name: demo-api-svc
    app: demo-api
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: demo-api-pod
    app: demo-api
DemoApp:
apiVersion: v1
kind: Pod
metadata:
  name: demo-angular-pod
  labels:
    name: demo-angular-pod
    app: demo-angular
spec:
  containers:
    - name: demo-angular-container
      image: demoapp:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  labels:
    name: demo-angular-svc
    app: demo-angular
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30005
  selector:
    name: demo-angular-pod
    app: demo-angular
EDIT: I have completely re-configured everything, starting from scratch, thanks to @darwinawardee's suggestion:
In my DemoApi app I've added to the controller:
options.AddPolicy("AllowAngularKubernetesClient",
    builder =>
    {
        builder
            .WithOrigins("http://demoapp").WithOrigins("demoapp").WithOrigins("http://localhost:30005")
            .AllowAnyHeader()
            .AllowAnyMethod();
    });

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCors("AllowAngularKubernetesClient");
    [...]
}
At my DemoApp app I've added to app.component.ts:
constructor(private http: HttpClient)
{
    this.http.get('http://demoapi/demo', { responseType: 'text' }).subscribe((response: any) => {
        console.log(response + " -- from Kubernetes Cluster.");
        this.response = response;
    });
}
and re-built the images.
I've done the same process, yet the DemoApp does not get the information from the DemoApi.
[Screenshots attached: the logs from the entire process, and the result from Chrome's Inspect > Network tab.]
You are facing a network routing issue.
In Kubernetes, each deployment runs in a pod which is assigned its own IP address, as is the associated service for each (which load-balances traffic across pods). Kubernetes maintains an internal DNS for routing between services. A service can be reached by its full route servicename.namespace.svc.cluster-domain.example or, within the same namespace, simply via the servicename itself.
So for the traffic to reach the DemoApi, you'll need the DemoApp to query http://demoapi/demo rather than http://localhost/demo.
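As a quick way to confirm the in-cluster route works (a sketch, assuming both pods run in the default namespace and that the nginx image's busybox wget is available):

    kubectl exec demo-angular-pod -- wget -qO- http://demoapi/demo
    kubectl exec demo-angular-pod -- wget -qO- http://demoapi.default.svc.cluster.local/demo

If these return the expected payload, the service routing itself is fine.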

kubernetes fails to pull a private image [Google Cloud Container Registry, Digital Ocean]

I'm trying to set up GCR with Kubernetes and I'm getting Error: ErrImagePull
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
This happens although I have set up the secret correctly in the service account and added imagePullSecrets in the deployment spec.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: nodejs
  name: nodejs
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: nodejs
    spec:
      containers:
        - env:
            - name: MONGO_DB
              valueFrom:
                configMapKeyRef:
                  key: MONGO_DB
                  name: nodejs-env
            - name: MONGO_HOSTNAME
              value: db
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_PASSWORD
            - name: MONGO_PORT
              valueFrom:
                configMapKeyRef:
                  key: MONGO_PORT
                  name: nodejs-env
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_USERNAME
          image: "eu.gcr.io/xxx/nodejs"
          name: nodejs
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources: {}
      imagePullSecrets:
        - name: gcr-json-key
      initContainers:
        - name: init-db
          image: busybox
          command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
status: {}
I used this to add the secret, and it said it was created:
kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail@gmail.com
How can I debug this, any ideas are welcome!
It looks like the issue is caused by a lack of permissions on the related service account
XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com, which is missing the Editor role.
Also, we need to restrict the scope so that permissions only allow pushing and pulling images from Google Kubernetes Engine; for that, this account will need the Storage Admin or Storage Object Viewer permission, which can be assigned by following the instructions mentioned in this article [1].
Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option with the "storage-rw" scope [2].
[1] https://cloud.google.com/container-registry/docs/access-control
[2] https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine
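For illustration, a sketch of granting pull (read-only) access on the eu.gcr.io registry bucket with gsutil; the project id and the service account e-mail here are placeholders:

    # grant read access on the registry's backing storage bucket
    gsutil iam ch \
      serviceAccount:XXXXXXXXXXX-compute@developer.gserviceaccount.com:roles/storage.objectViewer \
      gs://eu.artifacts.PROJECT-ID.appspot.com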
If the VM instance for pushing or pulling images and the Container Registry storage bucket are in the same Google Cloud Platform project, the Compute Engine default service account is configured with appropriate permissions to push or pull images.
If the VM instance is in a different project or if the instance uses a different service account, you must configure access to the storage bucket used by the repository.
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
Please see the Container Registry access control documentation (linked above) for further reference.
The table below summarizes the permission needed to pull:

Action             Permission             Role                          Role Title
Pull (Read Only)   storage.objects.get    roles/storage.objectViewer    Storage Object Viewer
                   storage.objects.list

Also, please share the exact error message if you are having trouble with any of the steps.

How to create a mongo database per service with Docker

I am working towards having multiple services (NodeJS, Spring-boot) that each have their own MongoDB Database-server-per-service (eventually targeting GCP & K8s) so that I can keep the data separate. I will be using Docker compose to launch both the service and database together. However, when I run multiple services, naturally I get port collision. Here is a typical docker-compose file:
version: '3'
# Define the services/containers to be run
services:
  myapp: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify ports forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
  database: # name of the service
    image: mongo # specify image to build container from
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
I am looking for an example of how to do this. My thinking is that each compose file will have its own ports and each service will map to those ports internally?
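For example, something like this is what I have in mind (the second service name otherapp and its ports are invented); each compose file publishes different host ports while the containers keep their usual internal ports:

    version: '3'
    services:
      otherapp:
        build: ./
        ports:
          - "3001:3000"   # host port 3001 -> container port 3000
        depends_on:
          - database
      database:
        image: mongo
        volumes:
          - ./data:/data/db
        ports:
          - "27018:27017" # host port 27018 -> container port 27017

Inside each compose project the app would still reach its database as database:27017; only the published host ports differ.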
You can write a YAML deployment that declares all your containers in one pod (a Pod is a group of containers). Your deployment may look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: application
          image: application:version
          ports:
            - containerPort: 3000
        - name: database
          image: database:version
          ports:
            - containerPort: 27017
This only deploys it inside your cluster; you still need to expose it outside the cluster, and I recommend using an Ingress for that.
Here the database runs inside the same pod as the application. Alternatively, you can create two Deployments, one for the database and one for your app, in the same namespace.
You also need to build the Docker images manually or use a CI tool for that. You can manage environments (prod, pre-prod, dev, test) with namespaces; one namespace per environment gives you full isolation. To manage all of this, I recommend tools like Helm or kops.
There are a lot of differences between Kubernetes and Docker Compose, but the main one is design: in Kubernetes you have separate entities for each level of the application and can manage them individually, whereas in Docker Compose you configure everything as one set of services in one place, which usually makes specific things hard to manage.
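As a rough sketch of the two-Deployments approach (all names such as myapp-mongo are placeholders), each service would get its own MongoDB Deployment plus a ClusterIP Service and reach it by that Service name:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-mongo
      labels:
        app: myapp-mongo
    spec:
      selector:
        matchLabels:
          app: myapp-mongo
      template:
        metadata:
          labels:
            app: myapp-mongo
        spec:
          containers:
            - name: mongo
              image: mongo
              ports:
                - containerPort: 27017
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-mongo
    spec:
      selector:
        app: myapp-mongo
      ports:
        - port: 27017
          targetPort: 27017

The application then connects to mongodb://myapp-mongo:27017 inside the cluster, so separate services never collide on host ports.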

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Setup the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
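As a quick sanity check (assuming the pod is named my-service-pod and its image ships a shell with ls and printenv), you can verify the mount and the variable from inside the running container:

    kubectl exec my-service-pod -- ls /etc/gcp
    kubectl exec my-service-pod -- printenv GOOGLE_APPLICATION_CREDENTIALS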
That error means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure you enabled access to the Google Vision API?
You can check and enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
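If the API turns out to be disabled, it can also be enabled from the command line, for example:

    # enable the Cloud Vision API for the currently configured project
    gcloud services enable vision.googleapis.com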
Will it help if you add GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables described in Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"

Mysql 5.7 image on kubernetes terminates after every 2 to 3 weeks

I noticed that the mysql 5.7 image on Google Container Engine terminates itself every 2 to 3 weeks of running in my cluster. I configured a small cluster as a test environment: I have 3 nodes, one for the database, one for the API and the other for my Node.js front end.
This all works well after my configuration: I am able to create my database with its accompanying tables, stored procedures and our usual DB objects. My back ends all connect to the DB and my front ends are all up and running. Then suddenly, after a period I estimate at about 3 weeks, my back ends can no longer connect to my databases; they just report that they can't connect to the MySQL server. I dash to my cmd and check whether the mysql pod is running, and it actually is running, but I can't access my DB. I had to redeploy the mysql image, and luckily, thanks to my persistent volumes, I could still recover the DB files. The second time it occurred it kept saying there was no root user, which surprised me because I normally do all my DB design with that user. The third time it simply couldn't locate my DB any more. I'm also thinking it might be my deployment script, so I have attached it here as well for any suggestions:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          args:
            - "--ignore-db-dir=lost+found"
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
This is what I get in the logs:
W1231 11:59:23.713916 14792 cmd.go:392] log is DEPRECATED and will be removed in a future version. Use logs instead.
Initializing database
2017-12-31T10:57:23.236067Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-12-31T10:57:23.237652Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2017-12-31T10:57:23.237792Z 0 [ERROR] Aborting