How to set up one GitHub repo for multiple Google Cloud Functions? - github

I would like to set up one GitHub repo that will hold all of my backend Google Cloud Functions. But I have two questions:
How can I set it up so that GCP knows that there are multiple Cloud Functions to be deployed?
If I only change the code for one Cloud Function, how can I set it up so that GCP deploys only the modified function and does NOT redeploy the unchanged ones?

When saving your Cloud Functions to a GitHub repo, just make sure your .gitignore file has the proper settings: exclude node_modules and any other folders and files you don't want to commit.
Usually all Cloud Functions are deployed through a single index.js file, so you need to make sure you have that file and everything you import into it.
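For illustration, a minimal index.js that exports more than one function could look like the sketch below (this assumes the Firebase Functions SDK; the function names are made-up placeholders). Each exported name becomes its own deployable Cloud Function.

// index.js - a minimal sketch; the function names are placeholders
const functions = require("firebase-functions");

// Each exported name is deployed as a separate Cloud Function.
exports.createUser = functions.https.onRequest((req, res) => {
  res.send("createUser called");
});

exports.sendEmail = functions.https.onRequest((req, res) => {
  res.send("sendEmail called");
});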
If you want to deploy a single function you can use this command:
firebase deploy --only functions:your_function_name
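To deploy several specific functions in one command, you can list them comma-separated (the names below are placeholders):
firebase deploy --only functions:createUser,functions:sendEmail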
If you want a more structured solution, you can read this article to learn more about it.

As I understand it, you would like to store Cloud Functions code in GitHub, and I assume you would like to trigger deployment on some action - a push or a pull request. At the same time, you would like only the modified Cloud Functions to be redeployed, leaving the others untouched.
Every Cloud Function in your set is (re)deployed separately - I mean that for each of them you might need to provide plenty of specific parameters, and those parameters differ from function to function. Details of those parameters are described on the gcloud functions deploy command documentation page.
Most likely the --entry-point (the name of the Cloud Function as defined in the source code) will be different for each of them. Thus, you might have some kind of a "loop" over all Cloud Functions, deploying each with its own parameters (including the name, entry point, etc.).
Such "loop" (or "set") may be implemented by using Cloud Build or Terraform or both of them together or some other tool.
An example of how to deploy only the modified Cloud Functions is provided in the How can I deploy google cloud functions in CI/CD without redeploying unchanged SO question/answer. That example can be extended to an arbitrary number of Cloud Functions. If you don't want to use Terraform, a similar mechanism (based on the same idea) can be implemented using pure Cloud Build.
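To illustrate the "redeploy only what changed" idea without Terraform, one hedged sketch (assuming each function lives in its own functions/<name>/ directory and that BASE_SHA holds the commit of the previous successful deployment) is to drive the same loop from git diff:

# Redeploy only the functions whose directory changed since the last deployed commit.
changed=$(git diff --name-only "$BASE_SHA" HEAD | grep '^functions/' | cut -d/ -f2 | sort -u)
for fn in $changed; do
  gcloud functions deploy "$fn" \
    --entry-point "$fn" \
    --runtime nodejs18 \
    --trigger-http \
    --source "./functions/$fn"
done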

Related

Pass environment variable values to spring boot profiles (application.properties file) when deploying from Github

I have a simple Spring Boot REST app that uses MongoDB Atlas as the database, and I have a couple of environment variables to pass to the project for the connection URL. I have defined this in src/main/resources/application.properties, which is the standard Spring location for storing such properties. Here's the property name and value:
spring.data.mongodb.uri=mongodb+srv://${mongodb.username}:${mongodb.password}#....
I use VSCode for local development and a launch.json file, which is not committed to my GitHub repo, to pass these values, and I can run the app locally. I was able to deploy this app successfully to Heroku, set these two values in the Heroku console in my app settings, and it all works fine on Heroku as well. I am trying to deploy the same app to GCP App Engine, but I could not find an easy way to pass these values. All the help articles seem to indicate that I need to use some GCP datastore and some cloud-provider-specific code in my app, or some kind of GitHub Actions script. That seems a little bit involved, and I wanted to know if there is an easy way of passing these values to the app via GCP settings (just like in Heroku) without polluting my repo with cloud-provider-specific code or YAML files.

Automate mirroring GitHub to GCP Source Repository?

We run Google Cloud Functions (Python), which need to be deployed from Google Cloud Source Repository. Since all the code is stored on GitHub, we resort to first mirroring GitHub into Source Repository. Although this only requires a few mouse clicks, it becomes a burden to repeat across 3+ projects (dev, staging, production) times 5+ repos (5+ apps).
I am looking to automate the mirroring configuration, preferably by adding it to the Terraform automation we already use, as part of a hands-off project configuration. Does the Google API support automating this mirroring? So far on my Google Cloud expedition everything has been available in their API!
I have failed to find Terraform examples, though, and would appreciate a tip.
Come to think of it, if I can take Source Repository out of the equation, that would be just fine with me too. After all, I only use it as a pass-through / empty shell.
The Cloud Source Repositories API includes a Repo resource that has a MirrorConfig object where you could enter your GitHub URL, webhook and credentials to automate this procedure. I would initially test it with the create method, but if you have an existing Cloud Source Repository, I believe the patch method will also be worth exploring.
Additionally, there is an open feature request to connect a repository via the Cloud Build GitHub App, which I recommend you star and follow, as it could further ease your automation needs.

How to complete CI/CD pipelines with Azure DevOps for Azure API Management

I need help to better understand how to create a complete CI/CD pipeline with Azure DevOps for APIM. I have already explored the tools and read the docs:
https://github.com/Azure/azure-api-management-devops-resource-kit
But I still have questions. My scenario:
An APIM dev instance with APIs, operations and other settings created, as well as an APIM prod instance that has been created but is empty.
I ran the extractor tool and got the templates (not all of them; I still need the master/linked templates), and this is where my doubt lies. I also already have two repos (dev and prod).
How can I create the master template, and how will my changes from the dev environment be automatically applied to prod?
I didn't use the policyXMLBaseUrl parameter, as I am not sure what path to insert there, although it seems #miaojiang inserted a folder from Azure Storage.
After some searching and several attempts I deployed APIs and operations from one environment to another, but we still don't have a fully automated scenario where I make a change in one instance and it is automatically available. Is it necessary to edit policies and definitions directly in the repositories, or to run the extractor tool again?

serverless deploy: Stop watching after CloudFormation has the update

I'm using Bitbucket Pipelines to do CD for a Serverless app. I want to use as few "build minutes" as possible for each deployment. The lifecycle of the serverless deploy command, when using AWS as the backing provider, seems to be:
Push the package to CloudFormation. (~60 seconds)
Sit around watching the logs from CloudFormation until the deployment finishes. (~20-30 minutes)
Because of the huge time difference, I don't want to do step two. So my question is simple: how do I deploy a serverless app so that it only does step one and returns success or failure based on whether or not CloudFormation successfully accepted the new package?
I've looked at the docs for serverless deploy and I can't see any options to enable that. Also, there already seem to be AWS-specific options in the serverless deploy command, so maybe this is an option the Serverless team will consider if there is no other way to do this.
N.B. As for "how will you know if CloudFormation fails?" - I would rather set up notifications to come from CloudFormation directly. The build can just have the responsibility of pushing to CloudFormation.
I don't think you can do it with serverless deploy. You can try the serverless package command, which stores the package in the .serverless folder, or you can specify a path using --package. Packaging creates a CloudFormation template file, e.g. cloudformation-template-update-stack.json. You can then call the CreateStack API action to create the stack. It will return the stack ID without waiting for all the resources to be created.
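A hedged sketch of that idea using the AWS CLI (the stack name and package path are placeholders, and note this only covers the CloudFormation part of what serverless deploy normally does):

serverless package --package ./artifacts
aws cloudformation create-stack \
  --stack-name my-service-dev \
  --template-body file://artifacts/cloudformation-template-update-stack.json \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
# create-stack returns the StackId immediately; CloudFormation keeps working in the background.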

Clarity on GitHub structure for Terraform template

I am trying to understand the ready-made template for IBM Cloud at https://cam-proxy-ng.ng.bluemix.net/cam/instances/#!/deployTemplateEditorWithNoParam/7921d773a240309379cf2c31c8004c9a, which is Node.js on a single VM.
When we go to the source code in Git referenced by this template, https://github.com/camc-experimental/terraform-modules/blob/master/ibmcloud/virtual_guest/, there is a createVirtualGuest.tf file. I am trying to understand why the virtual guest creation is on Git and not in the .tf template in the Bluemix console. Why are there two files containing code for creating the virtual guest?
This has to do with the structure of a Terraform template. You can define fragments of a resource orchestration in so-called modules, which are stored as separate files, and then refer to them from within the template.
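For illustration only, a master template typically pulls in such a module with a block like the one below (the module source points at the repo mentioned in the question, but the input variable name is a made-up assumption):

module "virtual_guest" {
  # The module code lives in its own directory/repo and is pulled in by Terraform.
  source = "github.com/camc-experimental/terraform-modules//ibmcloud/virtual_guest"

  # Input variables (the name here is hypothetical) are passed to the module.
  hostname = "nodejs-single-vm"
}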
The way the CAM service currently works, you can only work on the master template within the service. Modules that are referenced cannot be edited in the service and are pulled in from GitHub.
This is not ideal - the service should allow browsing and editing modules too - but that functionality is currently not supported.