Clarity on GitHub structure for Terraform template - ibm-cloud

I am trying to understand the readymade template for IBM Cloud at https://cam-proxy-ng.ng.bluemix.net/cam/instances/#!/deployTemplateEditorWithNoParam/7921d773a240309379cf2c31c8004c9a
which is Node.js on a Single VM.
When we go to the Git source referenced by this template, https://github.com/camc-experimental/terraform-modules/blob/master/ibmcloud/virtual_guest/, there is a createVirtualGuest.tf file. I am trying to understand why the virtual guest creation is on GitHub and not in the .tf template on the Bluemix console. Why are there two files containing code for the creation of the virtual guest?

This has to do with the structure of a Terraform template. You can define fragments of a resource orchestration in so-called modules, which are stored as separate files, and then refer to them from within the template.
The way the CAM service currently works, you can only work on the master template within the service. Modules that are referenced cannot be edited in the service and are pulled in from GitHub.
This is not ideal; the service should allow browsing and editing modules too, but that function is currently not supported.
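To make the relationship concrete, here is roughly what the reference from the master template to that GitHub module looks like; a minimal sketch, and the input variable names below are only illustrative (check the module's variables.tf for the real ones):

module "single_virtual_guest" {
  # Pulls createVirtualGuest.tf (and the rest of the module) from GitHub;
  # the double slash points at a subdirectory of the repository.
  source = "github.com/camc-experimental/terraform-modules//ibmcloud/virtual_guest"

  # Illustrative inputs - the actual variable names are defined by the module.
  hostname   = "nodejs-vm"
  domain     = "example.com"
  datacenter = "dal10"
}

The master template you edit in CAM contains only this call (plus providers and variables); the resource block that actually creates the virtual guest lives in the module on GitHub, which is why the code appears in two places.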

Related

Deploying plugins from the Hub using Terraform

I've created my Data Fusion instance, network, pipelines, secrets, etc. through Terraform, but I still have one fundamental gap: my pipelines use plugins that are present in the Hub but not enabled by default, like Python and KinesisStreamingSource. I've found Terraform code that will allow me to upload plugins, but it assumes I have the jars, which to me suggests that solution is more targeted at custom plugins.
Am I missing something fundamental here? Is there a magic API/Terraform command to do a one-step deploy of one of the stock plugins from the Hub into my Data Fusion instance? I'm convinced I'm doing this wrong, as nobody else seems to be having this same issue.
Any help at all is appreciated :)
I believe that the module hub_artifact can be used to deploy an artifact (plugin) from the GCS bucket of a Data Fusion instance's hub.
However, you first need to stage your plugins in a GCS bucket.
I used the same repository with the namespace submodule to create namespaces and preferences.
You can also find GitHub repositories published by Google for Data Fusion plugins: for example, the HTTP plugin has its own GitHub page.
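Putting those two steps together, a rough sketch might look like the following; the module source path and its input names are assumptions on my part (check the variables.tf of the module you found), and only the GCS upload uses a documented resource:

# 1. Stage the plugin jar (and its JSON spec) in a GCS bucket.
resource "google_storage_bucket_object" "http_plugin_jar" {
  bucket = "my-datafusion-plugins"                        # hypothetical bucket
  name   = "http-plugin-1.2.2/http-plugin-1.2.2.jar"
  source = "${path.module}/plugins/http-plugin-1.2.2.jar"
}

# 2. Point the hub_artifact-style module at the staged files.
#    The source path and inputs below are assumptions, not the module's real interface.
module "http_plugin" {
  source = "./modules/hub_artifact"                       # hypothetical local copy

  project          = "my-project"
  instance         = "my-datafusion-instance"
  artifact         = "http-plugin"
  artifact_version = "1.2.2"
  bucket_name      = google_storage_bucket_object.http_plugin_jar.bucket
}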
I hope this helps!

CI/CD for multi-tenant application with single repository but multiple clients

I have a database-driven application with a single code base that is configured for multiple clients using database settings and config files.
The main code base consists of common/core code/files used by all clients, plus some client-specific code/files. The two types of files sit in different folders of the same repository.
We have been planning to integrate CI/CD using GitHub and Jenkins. I am new to Jenkins.
In GitHub, we have a single repository that contains all the code/files. I want to use Jenkins to deploy to different client environments, but ensure that only the files related to a specific client are deployed to that client's environment.
What would be the best way, or what possible solutions are there, for this?
Edit: Basically, I want to deploy only the client-specific files to the corresponding client environments.
Any and all suggestions would be highly appreciated.

How to set up one GitHub repo for multiple Google Cloud Functions?

I would like to set up one GitHub repo that will hold all of my backend Google Cloud Functions. But I have two questions:
How can I set it up so that GCP knows that there are multiple Cloud Functions to be deployed?
If I only change the code of one Cloud Function, how can I set it up so that GCP will only deploy the modified function and NOT redeploy the unchanged ones?
When saving your Cloud Functions to a GitHub repo, just make sure your .gitignore file has the proper settings: exclude node_modules and any other folders and files you don't want to commit.
Usually all Cloud Functions are deployed through a single index.js file, so you need to make sure you have that file and everything you import into it.
If you want to deploy a single function, you can use this command:
firebase deploy --only functions:your_function_name
If you want a more structured solution, you can read this article to learn more about it.
As I understand it, you would like to store Cloud Functions code in GitHub, and I assume you would like to trigger deployment on some action such as a push or pull request. At the same time, you would like only the modified Cloud Functions to be redeployed, leaving the others untouched.
Every Cloud Function in your set is (re)deployed separately; for each of them you may need to provide plenty of specific parameters, and those parameters differ from function to function. Details of those parameters are described in the gcloud functions deploy command documentation page.
Most likely the --entry-point (the name of the Cloud Function as defined in the source code) will differ for each of them. Thus, you might need some kind of "loop" through all the Cloud Functions, deploying each one with its own parameters (name, entry point, etc.).
Such a "loop" (or "set") may be implemented using Cloud Build, Terraform, both of them together, or some other tool.
An example of how to deploy only modified Cloud Functions is provided in the SO question/answer How can I deploy google cloud functions in CI/CD without redeploying unchanged. That example can be extended to an arbitrary number of Cloud Functions. If you don't want to use Terraform, a similar mechanism (based on the same idea) can be implemented with pure Cloud Build.
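For illustration, the hash-based idea behind that answer can be sketched in Terraform roughly like this; the function names, runtime and directory layout are made up, and only the pattern matters (one archive object per function, keyed by the archive hash, so an unchanged function produces no diff):

locals {
  # Hypothetical set of functions, one source directory each under ./functions/<name>/
  functions = {
    "func-a" = { entry_point = "funcA" }
    "func-b" = { entry_point = "funcB" }
  }
}

resource "google_storage_bucket" "deploy" {
  name     = "my-functions-deploy-bucket"   # hypothetical bucket name
  location = "US"
}

# Zip each function's source directory.
data "archive_file" "source" {
  for_each    = local.functions
  type        = "zip"
  source_dir  = "${path.module}/functions/${each.key}"
  output_path = "${path.module}/build/${each.key}.zip"
}

# The archive hash is part of the object name, so an unchanged function
# yields an identical object and nothing gets redeployed.
resource "google_storage_bucket_object" "source" {
  for_each = local.functions
  bucket   = google_storage_bucket.deploy.name
  name     = "${each.key}-${data.archive_file.source[each.key].output_md5}.zip"
  source   = data.archive_file.source[each.key].output_path
}

resource "google_cloudfunctions_function" "fn" {
  for_each              = local.functions
  name                  = each.key
  runtime               = "nodejs18"
  entry_point           = each.value.entry_point
  trigger_http          = true
  source_archive_bucket = google_storage_bucket_object.source[each.key].bucket
  source_archive_object = google_storage_bucket_object.source[each.key].name
}

With this layout, terraform plan shows changes only for the functions whose zip contents actually changed.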

How to complete CI/CD pipelines with Azure DevOps for Azure API Management

I need help to better understand how to create a complete CI/CD pipeline with Azure DevOps for APIM. I have already explored the tools and read the docs:
https://github.com/Azure/azure-api-management-devops-resource-kit
But I still have questions. My scenario:
An APIM dev instance with APIs, operations and other settings created, as well as an APIM prod instance that exists but is empty.
I ran the extract tool and got the templates (not all of them; I still need the master/linked templates), and this is where my doubt sits. I also already have two repos (dev and prod).
How can I create the master template, and how will my changes from the dev environment be automatically applied to prod?
I didn't use the policyXMLBaseUrl parameter and am not sure what path to insert there, although it seems #miaojiang inserted a folder from Azure Storage.
After some searching and a few attempts I deployed APIs and operations from one environment to another, but we still don't have a fully automated scenario where I make a change in one instance and it automatically becomes available. Is it necessary to edit policies and definitions directly in the repositories, or to run the extract tool again?

Best practice for scripting Azure resource creation

I'm creating a test environment in Azure. I want to have an accurate script of the configuration so it's easy to replicate for other test, pre-prod and prod environments later on. The environment has an existing subscription, and I want the entire hierarchy of resources, from resource groups through to Web Apps, to be created by script.
I'm currently rolling my own script in PowerShell utilising AzureRm. This is working well, but I can't help feeling I'm reinventing the wheel. What is the existing method for creating an entire Azure environment by script?
Yes, that method is called Azure Resource Manager templates. Quote:
With Resource Manager, you can create a template (in JSON format) that defines the infrastructure and configuration of your Azure solution. By using a template, you can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are deployed in a consistent state. When you create a solution from the portal, the solution automatically includes a deployment template. You do not have to create your template from scratch because you can start with the template for your solution and customize it to meet your specific needs. You can retrieve a template for an existing resource group by either exporting the current state of the resource group, or viewing the template used for a particular deployment. Viewing the exported template is a helpful way to learn about the template syntax.
Reference: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#template-deployment
Edit: you can use PowerShell, the Azure CLI, Azure CLI 2.0, or the Azure SDK to deploy those templates (or simply the Azure portal; search for "Template Deployment").
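For comparison with the Terraform-based questions above: the same "whole environment as a script" goal can also be expressed in HCL with the azurerm provider, as an alternative to ARM JSON. A minimal sketch, with names and SKUs purely illustrative:

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "rg-myapp-test"
  location = "West Europe"
}

resource "azurerm_app_service_plan" "test" {
  name                = "plan-myapp-test"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "test" {
  name                = "app-myapp-test"   # must be globally unique
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  app_service_plan_id = azurerm_app_service_plan.test.id
}

Either way (ARM template or Terraform), the idea is the same: the environment is declared once and re-applied consistently for test, pre-prod and prod.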