I did some research but couldn't find any how-to for deploying Metabase on Cloud Run on GCP; I only found Q&A posts about problems people hit while deploying it.
My goal is to deploy Metabase on Cloud Run and use Postgres as its database. I have already deployed an app on Cloud Run and have a cloudbuild.yml pipeline in git that I use for building my app; I just want to add Metabase.
Any directions or solutions?
It is not recommended to deploy Metabase on Cloud Run because:
Computation is scoped to a request. You should only expect to be able
to do computation within the scope of a request: a container instance
does not have any CPU available if it is not processing a request.
Container runtime contract
Therefore you might face issues loading the application and database connection timeouts.
I think your best option is App Engine Flexible.
Install Metabase on Google Cloud with Docker – App Engine
It's very hard to use Metabase on Cloud Run, and running it on App Engine Flexible is more expensive (~$100/month). The Metabase entrypoint does a lot of work at startup, and if Cloud Run decides to create another container instance it will show the Metabase loading page, so you have to refresh manually several times until the new container is ready.
I worked around this problem by giving the JVM more memory (JAVA_OPTS=-Xmx4g) and limiting max instances to 1. I also set up a Cloud Scheduler job that sends a GET request to the Metabase service base URL every 10 minutes.
With this configuration Metabase stays alive and Cloud Run never creates more than one container, so the refresh problem no longer appears.
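For reference, a rough sketch of that setup with gcloud (the project, image, region and service URL below are placeholders; Metabase's own MB_DB_* settings for its Postgres application database would also go into --set-env-vars):

    # Deploy Metabase capped at a single instance with a bigger JVM heap
    gcloud run deploy metabase \
      --image gcr.io/MY_PROJECT/metabase \
      --platform managed \
      --region europe-west1 \
      --memory 4Gi \
      --max-instances 1 \
      --allow-unauthenticated \
      --set-env-vars "JAVA_OPTS=-Xmx4g"

    # Keep that instance warm with a GET request every 10 minutes
    gcloud scheduler jobs create http metabase-keepalive \
      --schedule "*/10 * * * *" \
      --uri "https://YOUR_METABASE_URL/" \
      --http-method GET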
For the moment I don't have problems with this configuration, but I think the best way to host Metabase is to run it on a Compute Engine VM.
I am testing Google Cloud Run by following the official instructions:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
Is it possible to deploy or use several containers as one service in Google Cloud Run? For example: DB server container, Web server container, etc.
Short answer: NO. You can't deploy several containers in the same service (as you can with a Pod on Kubernetes).
However, you can run several binaries in parallel in the same container -> this article was written by a Googler who works on Cloud Run.
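For illustration only (the binary names here are placeholders, not taken from the article), such an entrypoint script typically starts the helper in the background and keeps the HTTP server in the foreground on $PORT:

    #!/bin/sh
    # start.sh - run a helper process alongside the web server in one container

    # start an auxiliary process in the background (placeholder binary)
    ./my-helper-daemon &

    # start the main HTTP server in the foreground; Cloud Run routes traffic to $PORT
    exec ./my-web-server --port "${PORT:-8080}"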
In addition, keep in mind:
Cloud Run is a serverless product. It scales up and down (to 0) as it wants, mainly according to traffic. If the startup takes a long time and a new instance of your service is created, the request will take a long time to be served (and your user will wait).
You pay as you use; I mean, you are billed only while HTTP requests are being processed. Outside of a processing period, the CPU allocated to the instance is close to 0.
That implies that Cloud Run serves containers that handle HTTP requests. You can't run batch processing in the background, outside of any HTTP request.
Cloud Run is stateless. You have an ephemeral, in-memory writable directory (/tmp), but when the instance goes down, all that data goes with it. You can't run a DB server container that stores data. You can interact with external services (Cloud SQL, Cloud Storage, ...) but only store transient files locally.
To answer your question directly: I do not think it is possible to deploy a service made up of two different containers (a DB server container and a web server container). This is separate from scaling, where a service is automatically scaled to a number of instances of the same container.
However, you can deploy a container (a service) that contains multiple processes, although it might not be considered best practice, as mentioned in this article.
Cloud Run takes a user's container and executes it on Google infrastructure, and handles the instantiation of instances (scaling) of that container, seamlessly based on parameters specified by the user.
To deploy to Cloud Run, you need to provide a container image. As the documentation points out:
A container image is a packaging format that includes your code, its packages, any needed binary dependencies, the operating system to use, and anything else needed to run your service.
In response to incoming requests, a service is automatically scaled to a certain number of container instances, each of which runs the deployed container image. Services are the main resources of Cloud Run.
Each service has a unique and permanent URL that will not change over time as you deploy new revisions to it. You can refer to the documentation for more details about the container runtime contract.
As a result of the above, Cloud Run is primarily designed to run web applications. If you are after a microservice architecture consisting of different servers, each running in its own container, you will need to deploy multiple services. I understand that you want to use Cloud Run as a database server, but perhaps you may be interested in Google's managed database solutions, like Cloud SQL, Datastore, Bigtable or Spanner.
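As a sketch of that split (service names, images and the Cloud SQL instance are placeholders), each container becomes its own Cloud Run service and the database moves to a managed product such as Cloud SQL:

    # One Cloud Run service per container image
    gcloud run deploy web-frontend \
      --image gcr.io/MY_PROJECT/web-frontend \
      --platform managed \
      --region europe-west1

    # A second service for the API, wired to a managed Cloud SQL instance
    gcloud run deploy api-backend \
      --image gcr.io/MY_PROJECT/api-backend \
      --platform managed \
      --region europe-west1 \
      --add-cloudsql-instances MY_PROJECT:europe-west1:my-db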
Is there any advantage if I use Cloud Run instead of deploying a normal service/container in GKE?
I will try to add my perspective.
This answer does not cover running containers on Cloud Run for GKE. The reason is that we wanted an almost zero-cost solution for a legacy PHP website. Cloud Run fit perfectly, and we had an easy time both porting the code and learning Cloud Run.
We needed to do something with a legacy PHP website. This website was running on Windows Server 2012, IIS and PHP 7.0x. The cost was over $100.00 per month - mostly for Windows licensing fees for a VM in the cloud. The site was not accessed very much but was needed for various business reasons.
A decision was made on Thursday (4/18/2019) that we needed to learn Google Cloud Run, so we decided to port this site to a container and try to run the container in Google Cloud. Nothing like a real-world example to learn the details.
Friday, we ported the PHP code to Apache. Very easy process. We did not worry about SSL as we intend to use Cloud Run SSL.
Saturday we started to learn Cloud Run. Within an hour we had the Hello World PHP example running. Link.
Within two hours we had the containerized website running in Cloud Run. Again, very simple.
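For anyone curious, the container for a site like this can be very small. A minimal sketch (the base image tag and source path are our assumptions, not an official recipe) that makes Apache listen on Cloud Run's port 8080:

    FROM php:7.3-apache
    # Cloud Run sends traffic to port 8080, so switch Apache from 80 to 8080
    RUN sed -i 's/80/8080/g' /etc/apache2/ports.conf /etc/apache2/sites-available/000-default.conf
    COPY src/ /var/www/html/
    EXPOSE 8080

Build it with gcloud builds submit and deploy it with gcloud run deploy, just like in the quickstart linked above.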
Then we learned how to configure Cloud Run SSL with our DNS server.
End result:
Almost zero cost for a PHP website running in Cloud Run.
Approximately 1.5 days of effort to port the legacy code and learn Cloud Run.
Savings of about $100.00 per month (no Windows IIS server).
We do not have to worry about SSL certificates from now on for this site.
For small websites that are static, Cloud Run is a killer product. The learning curve is very small even if you do not know Google Cloud; you just need to configure gcloud for container builds and deployment. This means developers can be independent of needing to master GCP.
There are many distinctions between using Cloud Run to expose a service and running it natively in GKE. The primary one is that Cloud Run provides more of a serverless infrastructure: basically, you declare that you want to expose a service and then let GCP do the rest. Contrast this with creating a Kubernetes cluster and then defining your service in pods. With a manually created GKE cluster, the nodes and environment are always on, which means that you are billed for them regardless of utilization. With Cloud Run, your service is merely available and you are only billed for actual consumption. If your service is not being called, your costs are zero. Another advantage is that you don't have to predict your utilization needs and allocate sufficient nodes; scaling happens automatically for you.
See also these presentations from Google Next 19:
Migrating from a Monolith to Microservices (Cloud Next '19)
What's New in Serverless Compute? (Cloud Next '19)
Run Containers on GCP's Serverless Infrastructure (Cloud Next '19)
Run Cloud Functions Everywhere (Cloud Next '19)
Container Once, Serverless Anywhere (Cloud Next '19)
I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, if it is the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by the agents here.
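For example, the count is passed as a deployer property when the stream is deployed from the Data Flow shell (the stream definition and app name below are just placeholders):

    dataflow:> stream create --name mystream --definition "http | log"
    dataflow:> stream deploy --name mystream --properties "deployer.log.count=3"

Here deployer.log.count=3 asks for three instances of the log app; with the local deployer those are three local processes, while other deployer implementations (Kubernetes, Cloud Foundry, YARN) spread them over their own infrastructure.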
I want to host my Spring Boot / MongoDB application, developed with Java 8, in the cloud (in Europe and, if possible, in Germany - a requirement from the customer).
I did some research and found a lot of possibilities.
The ones that I think fit best are
Microsoft Azure and
AWS
Honestly, I don't know how to start. Does anyone know of a good tutorial for getting started - e.g. for installing MongoDB, and then for uploading my jar file?
I would then start my application with java -jar myApp.jar.
Is there a good how-to link?
If you're open to using Kubernetes, you could look at the examples at https://github.com/nhatthai/spring-mongodb-minikube or https://github.com/elizabetht/kubernetes-mongo-docker-spring-boot. You could use Azure's AKS, as I'm guessing you wouldn't want to spend much time on cluster management (AWS's EKS offering is still in preview at the moment). If you go this route, you can test on a local cluster with minikube.
It sounds like you're looking for a cloud provider, but you might instead want to use Kubernetes as a cloud-agnostic orchestrator for your application (on this, you might want to look at Is Kubernetes + Docker + AWS = Azure + Service Fabric?).
This is just a suggestion - you could instead go for something provider-specific, e.g. using Azure's Cosmos DB: https://github.com/Azure-Samples/azure-cosmos-db-mongodb-spring
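If you do try the Kubernetes route, a minimal manifest looks roughly like this (the image name, registry and database name are placeholders; it assumes a Service called mongo already fronts your MongoDB pod or external instance):

    # deployment.yaml - minimal sketch of the Spring Boot app on Kubernetes
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: spring-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: spring-app
      template:
        metadata:
          labels:
            app: spring-app
        spec:
          containers:
          - name: spring-app
            image: myregistry.azurecr.io/spring-app:latest
            ports:
            - containerPort: 8080
            env:
            # Spring Boot picks this up as spring.data.mongodb.uri
            - name: SPRING_DATA_MONGODB_URI
              value: "mongodb://mongo:27017/mydb"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: spring-app
    spec:
      type: LoadBalancer
      selector:
        app: spring-app
      ports:
      - port: 80
        targetPort: 8080

kubectl apply -f deployment.yaml works the same on minikube or AKS, which is the cloud-agnostic point above.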
We need to use Jenkins to test some web apps that each need:
a database (postgres in our case)
a search service (ElasticSearch in our case, but only sometimes)
a cache server, such as redis
So far, we've just had these services running on the Jenkins master, but this causes problems when we want to upgrade the Postgres, ES or Redis versions. Not all apps can move in lock step, and we want to run the tests against the new versions before committing to moving an app in production.
What we'd like to do is have these services provided on a per-job-run basis, each one running in its own container.
What's the best way to orchestrate these containers?
How do you start up these ancillary containers and tear them down, regardless of whether the job succeeds or not?
How do you prevent port collisions between, say, the database in a run of a job for one web app and the database in a job for another web app?
Check out docker-compose and write a docker-compose file for your tests.
The newer networking features of Docker (private networks) will help you isolate builds running in parallel.
However, start by learning docker-compose as if you only had one build at a time. Once you're confident with that, look into the more advanced Docker documentation around networking.
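A minimal sketch of such a compose file (the image versions are examples - pick the ones each app needs):

    # docker-compose.yml - per-job test services
    version: "3"
    services:
      db:
        image: postgres:11
        environment:
          POSTGRES_PASSWORD: test
      search:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
        environment:
          discovery.type: single-node
      cache:
        image: redis:5

If each build uses a unique compose project name, e.g. docker-compose -p "job-${BUILD_NUMBER}" up -d at the start and a matching docker-compose -p "job-${BUILD_NUMBER}" down -v in a cleanup/post step, every run gets its own private network and containers, and since no ports are published to the host there is nothing to collide. Running the test runner itself as another service in the same compose file keeps everything on that private network.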