I want to deploy a web API on Google Cloud and, for test purposes, I would just put the API key in the app.yaml file as an environment variable. Is this a security issue?
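That is, something like this in app.yaml (the runtime, key name, and value are just placeholders):

# app.yaml (App Engine standard); key name and value are placeholders
runtime: python39
env_variables:
  MY_API_KEY: "not-a-real-key"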
It's generally problematic to persist secrets to files. Even if app.yaml were inaccessible from the runtime service, you'd still face the risk of the key being exposed in build logs, or of inadvertently committing app.yaml to e.g. GitHub.
For "testing", you can run generally run an App Engine locally. This isn't a perfect replica of the production service but it should be sufficient for testing.
A purpose-built solution for managing secrets is Google's Secret Manager. SDKs (encouraged) and the underlying REST API (discouraged) are available.
Related
I have a simple Spring Boot REST app that uses MongoDB Atlas as the database, and I have a couple of environment variables to pass to the project for the connection URL. I have defined this in src/main/resources/application.properties, which is the standard Spring location for storing such properties. Here's the property name and value.
spring.data.mongodb.uri=mongodb+srv://${mongodb.username}:${mongodb.password}@....
I use VSCode for local dev, with a launch.json file (not committed to my GitHub repo) to pass these values, and I can run the app locally. I was able to deploy this app successfully to Heroku, set these two values in the Heroku console in my app settings, and it all works fine on Heroku too. I am trying to deploy the same app to GCP App Engine, but I could not find an easy way to pass these values. All the help articles seem to indicate that I need to use some GCP datastore and some cloud-provider-specific code in my app, or some kind of GitHub Actions script. That seems a little involved, and I wanted to know if there is an easy way of passing these values to the app via GCP settings (just like in Heroku) without polluting my repo with cloud-provider-specific code or yml files.
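For reference, here's the same property in application.yml form; I believe Spring resolves the placeholders from environment variables via relaxed binding (e.g. MONGODB_USERNAME maps to mongodb.username), which is presumably why the Heroku settings worked:

# application.yml equivalent of my application.properties entry;
# the placeholders resolve from JVM system properties or environment
# variables (e.g. MONGODB_USERNAME / MONGODB_PASSWORD)
spring:
  data:
    mongodb:
      uri: "mongodb+srv://${mongodb.username}:${mongodb.password}@...."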
I would like to ask what people use to provision an ephemeral preview environment in AWS EKS for the service under test. In addition, I am curious how you provision any dependent services (such as a database).
E.g. I am working on a back-end service and would like to deploy an isolated ephemeral version of this service, packaged from my feature branch, including the database. Furthermore, I would also like a copy of the front-end service in my isolated environment to test my back-end against.
Any thoughts would be appreciated
Thanks
Sachin
You can roll your own solution by wiring together a CI/CD tool (Jenkins, CircleCI, Buildkite, GitHub Actions, etc.) to build and deploy a preview environment, triggered via webhooks on your source repository. This would have to include building the modified code, deploying that code to some staging environment, and then, of course, seeding that environment with some type of data.
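As a rough illustration of the trigger side with GitHub Actions (the deploy and seed scripts here are hypothetical placeholders):

# Hypothetical workflow: build and deploy a preview environment per pull
# request; deploy-preview.sh and seed-preview-db.sh are placeholder scripts.
name: preview-environment
on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image from the feature branch
        run: docker build -t myservice:pr-${{ github.event.number }} .
      - name: Deploy an isolated preview to EKS
        run: ./scripts/deploy-preview.sh "pr-${{ github.event.number }}"
      - name: Seed the preview database
        run: ./scripts/seed-preview-db.sh "pr-${{ github.event.number }}"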
There is a bit of nuance to getting this right. You should check out https://ephemeralenvironments.io/, which is a good template of what needs to go into these environments.
A lot of other folks use services that provide this as a SaaS platform; Shipyard.build, Release, and Velocity.tech are a few of your options.
Disclaimer: I'm on the Operations team at Shipyard
Hope this helps!
We run Google Cloud Functions (Python), which have to be deployed from a Google Cloud Source Repository. Since all the code is stored on GitHub, we resort to first mirroring GitHub into Source Repository. Although this only requires a few mouse clicks, it becomes a burden to repeat over 3+ projects (dev, staging, production) times 5+ repos (5+ apps).
I am looking to automate the mirroring config, preferably by adding it to the Terraform automation we already use, as part of a hands-off project configuration. Does the Google API support this mirroring automation? So far on my Google Cloud expedition, everything has been available in their API!
I have failed to find Terraform examples, though, and would appreciate a tip.
Come to think of it, if I can take Source Repository out of the equation, that would be just fine with me too. After all, I only use it as a pass-through / empty shell.
The Cloud Source Repositories API includes a Repo resource that has a MirrorConfig object where you could enter your GitHub URL, webhook, and credentials to automate this procedure. I would initially test it with the create method, but if you have an existing Cloud Source Repository, I believe the patch method will also be worth exploring.
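For experimentation, a Repo resource body for the create call might look roughly like this (rendered as YAML for readability; the API itself takes JSON, the IDs are placeholders, and the exact writable fields are worth verifying against the v1 reference):

# Hypothetical Repo resource body for projects.repos.create / patch;
# field names follow the v1 MirrorConfig reference, values are placeholders
name: projects/my-project/repos/my-mirrored-repo
mirrorConfig:
  url: https://github.com/my-org/my-repo.git
  webhookId: "1234567"    # ID of the webhook listening for pushes
  deployKeyId: "abcdef"   # ID of the SSH deploy key used for fetching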
Additionally, there is an open feature request to connect a repository via the Cloud Build GitHub App, which I recommend you star and follow, as it could further ease your automation needs.
I successfully set up a CI/CD pipeline following this tutorial.
It shows clearly how to make Google Cloud Build and Kubernetes work with one environment: production.
For simplicity, this tutorial uses a single environment (production) in the env repository, but you can extend it to deploy to multiple environments if needed.
Right, but some details are missing: is there one kubernetes.yaml file per environment? What about Kubernetes namespaces?...
More precisely, what would be the way to handle multiple environments (staging...)?
There could be a bazillion ways of doing environments, but what I understand from this line:
env repository: contains the manifests for the Kubernetes Deployment
is that the default master/production branch maps to the production environment. You can then create, for example, testing and staging branches, where you test and stage your changes, and later on port them to the master branch.
In fact, if you keep reading that document, it tells you:
The env repository can have several branches that each map to a specific environment (you only use production in this tutorial) and reference a specific container image, whereas the app repository does not.
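To make the branch-per-environment idea concrete, the env repository's kubernetes.yaml on, say, a staging branch might differ from the production branch only in the pinned image (and the namespace, if you use one per environment). The names and image path below are placeholders:

# Hypothetical kubernetes.yaml on the "staging" branch of the env repository
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: staging   # one namespace per environment is a common choice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # each environment branch pins the image it should run
          image: gcr.io/my-project/myapp:feature-build-sha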
One more thing: if you have access to GitLab and Kubernetes, you can implement this without Google GKE and Cloud Build.
I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab pipeline runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests, etc. Now I'd like to start with CD, and to do that I need a manual acceptance step in the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking of is a Kubernetes deployment for the pipeline that would be spun up once you try to access it (so we don't waste cluster resources) and destroyed once the reviewer accepts the pipeline, or after some time threshold.
So the deployment would run in the same cluster as the GitLab runner (or a different one?) and would be accessible by a unique URI (we're mostly talking about web-server apps), e.g. https://pipeline-58949526.git.mydomain.com
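In .gitlab-ci.yml terms, I imagine something roughly like this (the deploy and teardown scripts are placeholders I'd still need to write):

# Rough sketch; assumes a "review" stage is declared in stages:, and
# deploy.sh / teardown.sh are hypothetical scripts I'd have to provide
deploy_review:
  stage: review
  script:
    - ./deploy.sh "pipeline-$CI_PIPELINE_ID"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://pipeline-$CI_PIPELINE_ID.git.mydomain.com
    on_stop: stop_review

stop_review:
  stage: review
  variables:
    GIT_STRATEGY: none   # nothing to check out for teardown
  script:
    - ./teardown.sh "pipeline-$CI_PIPELINE_ID"
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop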
While it all makes sense to me in theory, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
If you want to see how to automate CI/CD with multiple environments on GKE, using GitOps for promotion between environments and preview environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.