I have an application with a Spring backend and an Angular frontend. I am using docker-compose, which creates two containers.
They share a bridge network, so I am able to test locally.
Now I want to deploy to Google Cloud.
Questions: (1) Do I need to create any GCP-specific YAML file?
(2) The cluster I created doesn't seem healthy (I am using GKE in this case); the workload reports:
Does not have minimum availability
(3) I haven't seen any examples where Spring and Angular are deployed individually using Cloud Run, but is this possible?
I desperately need to deploy my containers. Is there a way?
Edit:
The Spring backend is able to talk to Cloud SQL (answered in another post).
The Angular app is not running because it doesn't know the upstream host:
```
nginx-py-sha256-2
2021/07/14 15:21:13 [emerg] 1#1: host not found in upstream "appserver:2623" in /etc/nginx/conf.d/default.conf:2
```
In my docker-compose:

```yaml
services:
  # App backend service
  appserver:
    container_name: appserver
    image: appserver
  pui:
    container_name: nginx-py
    image: nginx-py
```

and my nginx.conf refers to it as appserver.
I push the image with:

```
docker push eu.gcr.io/myapp/appserver
```

What name should I use in nginx.conf so that it can resolve the upstream host? It would be nice if I could disable the prefix.
GCP Kubernetes workload "Does not have minimum availability" is unanswered, so this is not a duplicate.
You have a container for the backend and a frontend made of static files. The best pattern for that is:
Use Cloud Run for your backend.
Use Cloud Storage for the frontend (static files); make the bucket publicly accessible.
Use a load balancer that routes static requests to the bucket backend and API requests to Cloud Run.
And, of course, forget Docker Compose.
Note: if you have a container for your frontend with a web server (Angular server, NGINX, or something else), you can also deploy it on Cloud Run, but you would pay for processing for nothing. Cloud Storage is the better solution.
In both cases, a load balancer is recommended to avoid CORS issues. In addition, you will be able to add a CDN on the load balancer if your business needs it.
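As a hedged illustration of the Cloud Run part, here is a minimal service manifest sketch (Cloud Run accepts the Knative Serving schema; the name, image, and port below are taken from the question and are assumptions to adapt):

```yaml
# service.yaml -- a sketch, deployable with: gcloud run services replace service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: appserver                      # assumed service name, from the compose file
spec:
  template:
    spec:
      containers:
        - image: eu.gcr.io/myapp/appserver
          ports:
            - containerPort: 2623      # Cloud Run routes requests to this port
```

Once deployed this way, the frontend talks to the Cloud Run service's HTTPS URL (or the load balancer's route), not to the Compose hostname appserver, which doesn't exist outside the bridge network.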
Related
Does anyone know the pros and cons of installing the Cloud SQL Proxy (which allows us to connect securely to Cloud SQL) on a Kubernetes cluster as a service, as opposed to making it a sidecar of the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me, please?
The sidecar pattern is preferred because it is the easiest and most secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, so it relies on the user to restrict access to the proxy (typically by keeping it on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone who can reach that service connects to the database authorized as "user X".
You can see this warning in the Cloud SQL proxy example for running it as a service in k8s, or watch this video on Connecting to Cloud SQL from Kubernetes, which explains the reasoning as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
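As an illustration of that pattern, here is a minimal sketch of a Deployment with the proxy as a sidecar (the application image, proxy version, and instance connection name are assumptions to adapt):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: gcr.io/myproject/myapp:v1              # hypothetical application image
          env:
            - name: DB_HOST
              value: "127.0.0.1"                        # the app reaches the database via the proxy on localhost
            - name: DB_PORT
              value: "5432"
        - name: cloud-sql-proxy                         # the sidecar, in the same pod
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
          args:
            - "--port=5432"
            - "myproject:europe-west1:myinstance"       # hypothetical instance connection name
```

Because both containers share the pod's network namespace, the unauthenticated leg between the app and the proxy never leaves the pod.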
In Kubernetes, a pod is a group of one or more containers with shared storage and network. A sidecar is a utility container that runs in the same pod as the main application container; because it shares the same volumes and network, it can "help" or enhance how the application operates while remaining loosely coupled to it.
Sidecar pros: it scales with the number of pods, it can be injected automatically, and it is the pattern service meshes already use.
Sidecar cons: it is a bit harder to adopt, as developers can't just deploy their app but must deploy a whole stack per deployment. It consumes more resources and is harder to secure, because every pod must run its own copy of the helper (for example, a log aggregator pushing logs to a database or queue).
Refer to the documentation for more information.
My application has a backend and a frontend. The frontend is currently hosted in a Google Cloud Storage bucket, and I am migrating the backend from Compute Engine VMs to Kubernetes Engine Autopilot.
When migrating, does it make more sense to move everything to Kubernetes Engine, or would I be better off keeping the frontend in the bucket? The backend and frontend are different projects in different git repositories.
I am asking because I saw that it is possible to manage Kubernetes services' exposure even at the level of URL maps and load balancers, so I thought of perhaps entrusting the hosting of all my projects (backend and frontend) to Kubernetes, since I know that Kubernetes is a very complete and powerful solution.
There is no problem with keeping your frontend on Cloud Storage (or elsewhere) and your backend on Kubernetes (GKE).
It's not a "perfect pattern" because you can't handle and deploy every part of your application with Kubernetes alone, and you don't have end-to-end control-plane management.
You have, on one side, to deploy your frontend and configure your load balancer; on the other, to deploy your backend on Kubernetes with YAML.
In addition, your application is not portable to another Kubernetes cluster (it's not a pure Kubernetes deployment but a hybrid between Kubernetes and Google Cloud, so you are partly tied to Google Cloud). But if that's not a requirement, it's fine.
In the end, if you expose your app behind a load balancer with the frontend on Cloud Storage and the backend on GKE, the user will see no difference. If one day you want to package your frontend in a container and deploy it on GKE, keep the same load balancer (at least the same domain name) and your users won't notice the change!
No worries for now, you can keep going! (And it's cheaper this way: with Cloud Storage you don't pay for processing just to serve static resources.)
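To make the GKE half concrete, here is a minimal sketch of an Ingress that provisions the Google Cloud load balancer for the backend (the service name, path, and port are assumptions); the Cloud Storage bucket is attached to the same load balancer as a backend bucket in the URL map, which is configured outside Kubernetes:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"   # external HTTP(S) load balancer
spec:
  rules:
    - http:
        paths:
          - path: /api/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend-svc        # hypothetical Service fronting the backend pods
                port:
                  number: 8080
```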
I replaced the Eureka service with Spring Cloud Kubernetes Discovery to run in a Kubernetes cluster (microk8s), and it works fine in k8s without Eureka. But how can I use Spring Cloud Kubernetes Discovery for local debugging? For example, when I start my microservices locally without Kubernetes, how can I resolve them by name? Is it necessary to use a local discovery service like Eureka in that case, or is there some other way?
A simple way is to create a network of services via a docker-compose file: run containers for the applications your code needs to communicate with, and open the services you actually need to debug in an editor like VS Code.
Service discovery then happens through docker-compose, and Eureka or Spring Cloud discovery isn't required; see the sketch below.
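As a minimal sketch (the service names, images, and ports are placeholders), the dependencies run as containers while the service under debug runs in the IDE and reaches them through the published ports:

```yaml
# docker-compose.yml -- hypothetical local-debug setup
services:
  orders-service:             # assumed dependency you don't need to debug
    image: orders-service:local
    ports:
      - "8081:8081"           # published so the IDE-run service can call localhost:8081
  inventory-service:          # assumed second dependency
    image: inventory-service:local
    ports:
      - "8082:8082"
```

The service being debugged is then configured (for example, in a local Spring profile) to resolve its peers as localhost:8081 and localhost:8082 instead of asking a discovery server.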
I'm kind of a newbie at using GCP/Kubernetes. I want to deploy both a GRPC service and a client to GCP.
I have read a lot about it and have tried several things. There's something on Cloud Endpoints where you compile your proto file and write an api_config.yaml (https://cloud.google.com/endpoints/docs/grpc/get-started-grpc-kubernetes-engine).
That's not what I'm trying to do. I want to deploy a gRPC service with its .proto and expose its HTTP/2 public IP address and port. Then, deploy a gRPC client that interacts with that address and exposes REST endpoints.
How can I get this done?
To deploy a gRPC application to GKE/Kubernetes:
Learn about gRPC, follow one of the quickstarts at https://grpc.io/docs/quickstart/
Learn about how to build Docker images for your application.
Follow this Docker tutorial: https://docs.docker.com/get-started/part2/#conclusion-of-part-two
Once you have a Docker image, follow https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app tutorial to learn how to:
push a container image to Google Container Registry
create a GKE cluster
deploy the container image
expose it on the public internet via an IP address.
These should be good to start with.
Note that gRPC apps aren't much different from plain HTTP web server apps. As far as Kubernetes is concerned, they're just a container image with a port number. :)
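As a minimal sketch of the last two steps (the image name is hypothetical; 50051 is the conventional gRPC port), a Deployment plus a LoadBalancer Service that exposes the server on a public IP:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      containers:
        - name: server
          image: gcr.io/myproject/grpc-server:v1   # hypothetical image pushed to GCR
          ports:
            - containerPort: 50051
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  type: LoadBalancer          # allocates a public IP; clients dial <EXTERNAL-IP>:50051
  selector:
    app: grpc-server
  ports:
    - port: 50051
      targetPort: 50051
```

The REST-exposing client can be deployed the same way and pointed at the Service's external IP, or, from inside the cluster, simply at grpc-server:50051.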
I've followed the instructions in this link to set up Kong in a Kubernetes container on my local machine. I'm able to access APIs behind Kong through the Kubernetes (minikube) IP. Now I have the enterprise edition (trial version) of Kong. Without Kubernetes, I downloaded the Kong enterprise image and was able to run Kong on my local machine. But my question is how to set up an enterprise Kong installation in a Kubernetes container. I assume I have to tweak the image section in the .yaml to pull the enterprise Kong image, but I'm not sure how to do that. Can you help me with installing enterprise Kong on a Kubernetes container?
There are (at least) two answers to your question:
set up a private Docker registry -- even one inside your own Kubernetes cluster -- push the image to it, then point image: at the internal registry (a sketch follows below)
that assumes your enterprise purchase didn't come with access to an authenticated registry hosted by Mashape, which would absolutely be the preferred mechanism for that problem
or I think you can pre-load the Docker image onto the nodes via PodSpec:initContainers: in any number of ways: ftp, http, s3api, nfs, ... Because the initContainer runs before the Pod's containers, I would expect the kubelet to delay the image pull of the main container until the initContainer has finished. If I had a working cluster in front of me, I'd try it out, so take this one with a grain of salt.
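For the first option, a hypothetical sketch of what the tweaked Kong Deployment could look like (the registry host, secret name, tag, and database settings are all placeholders), assuming a pull secret created beforehand with kubectl create secret docker-registry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      imagePullSecrets:
        - name: kong-enterprise-registry    # assumed secret holding the private registry credentials
      containers:
        - name: kong
          image: registry.example.com/kong-enterprise-edition:latest   # hypothetical private image path
          env:
            - name: KONG_DATABASE
              value: "postgres"             # assumes a Postgres-backed install, as in the standard manifests
            - name: KONG_PG_HOST
              value: "postgres"             # assumed in-cluster Postgres Service name
          ports:
            - containerPort: 8000           # Kong proxy port
            - containerPort: 8001           # Kong admin API port
```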