Pass environment variable values to Spring Boot profiles (application.properties file) when deploying from GitHub - mongodb

I have a simple Spring Boot REST app that uses MongoDB Atlas as the database, and I have a couple of environment variables to pass to the project for the connection URL. I have defined this in src/main/resources/application.properties, which is the standard Spring location for storing such properties. Here's the property name and value:
spring.data.mongodb.uri=mongodb+srv://${mongodb.username}:${mongodb.password}#....
I use VS Code for local dev with a launch.json file (not committed to my GitHub repo) to pass these values, and I can run the app locally. I was able to deploy this app successfully to Heroku, set up these two values in the Heroku console under my app settings, and it all works fine on Heroku as well. I am now trying to deploy the same app to GCP App Engine, but I could not find an easy way to pass these values. All the help articles seem to indicate that I need to use some GCP datastore and cloud-provider-specific code in my app, or some kind of GitHub Actions script. That seems a bit involved, and I wanted to know if there is an easy way of passing these values to the app via GCP settings (just like in Heroku) without polluting my repo with cloud-provider-specific code or YAML files.
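For what it's worth, App Engine does have a Heroku-like mechanism for this: an env_variables block in app.yaml, which Spring Boot's relaxed binding will pick up (an environment variable MONGODB_USERNAME resolves the ${mongodb.username} placeholder). It does mean one provider-specific file in the repo, though, and committing secret values there is its own problem. A minimal sketch with placeholder values:

# app.yaml - runtime and values shown here are placeholders
runtime: java17
env_variables:
  MONGODB_USERNAME: "my-user"
  MONGODB_PASSWORD: "my-secret"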

Related

Skaffold config dependencies with profiles

I have a microservice application in one repo that communicates with another service that's managed by another repo.
This is not an issue when deploying to the cloud; however, when developing locally the other service needs to be deployed too.
I've read this documentation: https://skaffold.dev/docs/design/config/#remote-config-dependency and it seems like a clean solution, but I only want it to depend on the git skaffold config when deploying locally (i.e. the current context is "minikube").
Is there a way to do this?
Profiles can be automatically activated based on criteria such as environment variables, kube-context names, and the Skaffold command being run.
Profiles are processed after the config dependencies are resolved, though. But you could have your remote config include a profile that is activated by kubeContext: minikube.
An alternative is to have several skaffold.yaml files: one for prod, one for dev.
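A minimal sketch of the profile approach the first answer suggests: the remote repo's skaffold.yaml carries a profile that auto-activates on the minikube context (apiVersion, image name, and manifest paths here are illustrative, not from the question):

# skaffold.yaml in the remote (dependency) repo
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: other-service
build:
  artifacts:
    - image: other-service        # hypothetical image name
profiles:
  - name: local-dev
    activation:
      - kubeContext: minikube     # applies only when developing against minikube
    deploy:
      kubectl:
        manifests:
          - k8s/local/*.yaml      # assumed local-only manifests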

Programmatically Connecting a GitHub repo to a Google Cloud Project

I'm working on a Terraform project that will set up all the GCP resources needed for a large project spanning multiple GitHub repos. My goal is to be able to recreate the cloud infrastructure from scratch completely with Terraform.
The issue I'm running into is that, in order to set up build triggers in GCP with Terraform, the GitHub repo that sets off the trigger first needs to be connected. Currently, I've only been able to do that manually via the Google Cloud Build dashboard. I'm not sure whether this is possible via Terraform or with a script, but I'm looking for any solution I can automate this with. Once the projects are connected, updating everything with Terraform works fine.
TL;DR: How can I programmatically connect a GitHub repo to a GCP project instead of using the dashboard?
Currently there is no way to programmatically connect a GitHub repo to a Google Cloud Project. This must be done manually via Google Cloud.
My workaround is to manually connect an "admin" project, build containers and save them to that project's Artifact Registry, and then deploy the containers from that registry into the programmatically generated project.
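Once a repo has been connected manually, the triggers themselves can be managed from Terraform. A minimal sketch using the google_cloudbuild_trigger resource (owner, repo name, and branch regex are placeholders):

resource "google_cloudbuild_trigger" "ci" {
  name     = "ci-on-push"
  filename = "cloudbuild.yaml"   # build config in the connected repo

  github {
    owner = "example-org"        # hypothetical GitHub org
    name  = "example-repo"       # hypothetical repo name
    push {
      branch = "^main$"
    }
  }
}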

How to set up one GitHub repo for multiple Google Cloud Functions?

I would like to set up one GitHub repo that will hold all my backend Google Cloud Functions, but I have two questions:
How can I set it up so that GCP knows that there are multiple Cloud Functions to be deployed?
If I only change the code for one Cloud Function, how can I set things up so that GCP deploys only the modified function and does NOT redeploy the unchanged ones?
When saving your Cloud Functions to a GitHub repo, just make sure the .gitignore file has the proper settings: exclude node_modules and any other folders and files you don't want to commit.
Usually all Cloud Functions are deployed through a single index.js file, so you need to make sure you have that file and everything you import into it.
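For instance (this layout is an assumption, not from the answer), index.js might just re-export each function from its own module:

// index.js - hypothetical layout; each export becomes a deployable function
exports.your_function_name = require('./src/yourFunction').your_function_name;
exports.another_function = require('./src/anotherFunction').another_function;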
If you want to deploy a single function you can use this command:
firebase deploy --only functions:your_function_name
If you want a more structured solution, you can read this article to learn more about it.
As I understand it, you would like to store the Cloud Functions code in GitHub, and I assume you would like to trigger deployment on some action (push or pull request). At the same time, you would like only the modified Cloud Functions to be redeployed, leaving the others without redeployment.
Every Cloud Function in your set is (re)deployed separately; for each of them you might need to provide plenty of specific parameters, and those parameters differ from function to function. Details of those parameters are described on the gcloud functions deploy command documentation page.
Most likely the --entry-point (the name of the Cloud Function as defined in source code) will be different for each of them. Thus, you might have some kind of a "loop" through all the Cloud Functions, deploying each with its own parameters (including the name, entry point, etc.); see the sketch after this answer.
Such a "loop" (or "set") may be implemented using Cloud Build, Terraform, both of them together, or some other tool.
An example of how to deploy only modified Cloud Functions is provided in the "How can I deploy google cloud functions in CI/CD without redeploying unchanged" SO question/answer. That example can be extended to an arbitrary number of Cloud Functions. If you don't want to use Terraform, a similar mechanism (based on the same idea) can be implemented using pure Cloud Build.
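For concreteness, the "loop" could be as simple as a shell script driving gcloud functions deploy once per function (the function names, runtime, and trigger flags below are assumptions, not from the question):

# deploy.sh - hypothetical loop; one gcloud call per function
for fn in funcA funcB funcC; do
  gcloud functions deploy "$fn" \
    --entry-point "$fn" \
    --runtime nodejs20 \
    --trigger-http \
    --source .
done

This redeploys everything; the SO answer referenced above adds the change detection so that only modified functions go through the loop.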

Pushing liberty app + server to Bluemix

I want to deploy a Liberty application along with its server config to Bluemix. I found these options listed in the documentation:
https://console.bluemix.net/docs/runtimes/liberty/optionsForPushing.html#options_for_pushing
My question is: should we always push the app + server to keep the server config, or is it push app + server the first time, with subsequent pushes containing only app files? Will the server config be retained?
You need to push the app + server every time.
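For illustration (the server and app names here are placeholders, and the zip path may vary by install), the "packaged server" option from those docs amounts to zipping the server, config included, and pushing that archive each time:

# package the Liberty server (config + apps) into a zip
wlp/bin/server package defaultServer --include=usr
# push the packaged server to Bluemix
cf push myLibertyApp -p wlp/usr/servers/defaultServer/defaultServer.zip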
There are a number of ways to deploy Liberty on the IBM Cloud - the recommended place to get started is on the App Service console:
https://console.bluemix.net/developer/appservice/starter-kits
The documentation has options for Kubernetes / CF deployment to the cloud and recommends using the IBM Cloud Dev CLI tooling, which containerizes your app to run locally and gives you the option to push the image up when you're ready.
In addition, the starter kits set up an example of how you can incorporate DevOps into your app. When you push changes to your Git repo, it will trigger a hook that runs the app through your testing pipelines and deploys it to the cloud.
The idea of using containers is that you package your application with a consistent, reproducible environment, so you can orchestrate and scale the application when necessary.

Switching databases as Azure Project Develops, but Azure Website Keeps pointing to Linked resource

I think I may be missing something and hope you can advise.
I have been developing a project using VS2013 with EF6. I use Visual Studio each time I want to deploy the latest version of the system to my Azure Website.
The Azure Website has a linked database resource (SQL Azure database).
This has been going great. However, yesterday I decided to create a virtual machine and move the SQL database to a dedicated Azure Virtual Machine. So I did this, and now I have a new database as well as the old linked-resource one.
So I'm ready to publish the app and set the new database settings for the VM.
I changed the connection string in the publish wizard and published, being sure to have the right settings, i.e. use this connection string at runtime, execute Code First migrations, etc.
However, it took me a while to realise that the app on the cloud server I just published to is still pointing to the OLD linked-resource Azure database.
I'm not sure what else I have to do; I thought it was only a matter of changing the publish setting for the database connection string.
Am I missing something? Should I delete the linked resource in the Azure Website settings, and if I do, would that make it work? It's just weird because, like I say, I'm publishing the site again with new settings. Or does Azure read the portal's publish settings and somehow override which database I want it to point to?
Please advise, many thanks
John
PS: I can connect fine to the new database from my local Management Studio. I get no errors; I'm just not sure how to tell Azure to use the connection string in the publish profile other than what I am doing.
The "linked resource" in the Windows Azure management portal should have no impact on your application's functionality. It is really just a way to help you understand / visualize the resources your application is using.