I am trying to connect Terraform to IBM Cloud and got mixed up with the SoftLayer and IBM Cloud credentials.
I followed the instructions on the IBM sites to connect my Terraform to IBM Cloud, and I am confused because I may need both SL and IBM Cloud connection information such as API keys.
I cannot run terraform init and/or plan because some information is missing. Now I am asked for the organization (var.org), and sometimes I am asked for the SL credentials. Our account was started in January 2019; I am sure we never worked with SL at all and have only heard of the API key from IBM Cloud.
Does anyone have an example of what terraform.tfvars should look like to work properly with IBM Cloud Kubernetes Service, VPC, and classic infrastructure?
Thank you very much.
Jan
I recommend starting with these two tutorials, which deal with a LAMP stack on classic virtual servers and with Kubernetes and other services. Both provide step-by-step instructions and guide you through the process of setting up Terraform-based deployments.
They provide the necessary code in GitHub repos. For the Kubernetes sample's credentials.tfvars you only need the API key:
ibmcloud_api_key = "your api key"
For public_key, a string containing the public key should be provided rather than a path to a file that contains the key.
$ cat ~/.ssh/id_rsa.pub
ssh-rsa CCCde...
Then in Terraform:
resource "ibm_compute_ssh_key" "test_ssh_key" {
  # label is a required argument for this resource
  label      = "test_ssh_key"
  public_key = "ssh-rsa CCCde..."
}
Alternatively you can use a key that you created earlier:
data "ibm_compute_ssh_key" "ssh_key" {
label = "yourexistingkey"
}
resource "ibm_compute_vm_instance" "onprem_vsi" {
ssh_key_ids = ["${data.ibm_compute_ssh_key.ssh_key.id}"]
}
Here is what you will need to run an init or plan for IBM Cloud Kubernetes Service clusters with Terraform.
In your .tf file:
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

provider "ibm" {
  ibmcloud_api_key      = var.ibmcloud_api_key
  iaas_classic_username = var.classic_username
  iaas_classic_api_key  = var.classic_api_key
}
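The provider block references three input variables, so the configuration also needs matching variable declarations. A minimal sketch (the descriptions are just suggestions):
variable "ibmcloud_api_key" {
  description = "IBM Cloud platform API key"
}

variable "classic_username" {
  description = "Classic infrastructure (SoftLayer) username"
}

variable "classic_api_key" {
  description = "Classic infrastructure (SoftLayer) API key"
}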
In your shell, set the following environment variables:
export IBMCLOUD_API_KEY=<value of your IBM Cloud API key>
export CLASSIC_API_KEY=<value of your IBM Cloud classic (i.e. SL) API key>
export CLASSIC_USERNAME=<value of your IBM Cloud classic username>
Run your init as follows:
terraform init
Run your plan as follows:
terraform plan \
  -var ibmcloud_api_key="${IBMCLOUD_API_KEY}" \
  -var classic_api_key="${CLASSIC_API_KEY}" \
  -var classic_username="${CLASSIC_USERNAME}"
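Since the question asked for a terraform.tfvars example: instead of passing -var flags, the same values can be placed in a terraform.tfvars file, which Terraform picks up automatically (all values below are placeholders):
ibmcloud_api_key = "your-ibm-cloud-api-key"
classic_username = "your-classic-username"
classic_api_key  = "your-classic-api-key"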
Related
I am trying to connect to GCP Pub/Sub services using IAM authentication. I want to do this without the .json file.
I tried using
GoogleCredentials credential = GoogleCredentials.getApplicationDefault(() -> httpTransport);
But this also requires the application_default_credentials.json file to get authenticated.
Basically, I want to get authorized using the GCP IAM API to use the GCP Pub/Sub services.
Plan: I want to run sample Java code locally, connect to a GCP Pub/Sub instance, and test it; after testing, deploy the same code to a GCP container and test Pub/Sub again there.
To connect to the GCP Pub/Sub instance from my local system, I want to do that via the IAM authentication mechanism.
Any reference? Please help.
Thanks
Application default credentials can be set up with the gcloud CLI. Once you install gcloud, gcloud auth application-default login will allow all the client libraries and Spring Cloud GCP modules to get your credentials from the environment.
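For example (a small sketch; the second command is optional and just verifies that a token can be minted from the stored credentials):
# Log in once; this stores Application Default Credentials locally
# (typically under ~/.config/gcloud/application_default_credentials.json)
gcloud auth application-default login

# Optional: verify an access token can be obtained from those credentials
gcloud auth application-default print-access-token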
I want to create an action for a trigger in IBM Cloud Functions, but instead of only adding the code in the console, I want the action to be deployed from code in a GitHub repository. How can I do this?
Thanks in advance.
I don't believe (or at least have not seen anywhere in the docs) that you can just point Cloud Functions at a GitHub repo. With that said, you could do the following:
Make sure the ibmcloud CLI is installed and you have the Cloud Functions plugin installed as well: ibmcloud plugin install cloud-functions
ibmcloud login - You need a valid session in the CLI. Use an IBM Cloud API key or a Service ID key that has the right IAM access to deploy to a Cloud Functions namespace in IBM Cloud: ibmcloud login --apikey "YOUR_API_KEY".
ibmcloud target -r eu-gb - You need to target the region where the Cloud Function will live.
ibmcloud target -g test-resource-group - Once logged in, you need to make sure you target the right resource group where the Cloud Function will be pushed to.
If you are lazy like me, you can roll all three of the above commands into one like so: ibmcloud login --apikey "YOUR_API_KEY" -r "eu-gb" -g "test-resource-group"
ibmcloud functions namespace target test-function-namespace - Finally, after logging in, you need to use the cloud-functions plugin to target the namespace where the Cloud Function will be pushed.
There are multiple ways to deploy the Cloud Function. For example, using the CLI to push the Cloud Function directly, or using a manifest.yaml file as config.
Using IBM Cloud CLI
Creating a trigger, assuming test-action has already been created (see the sketch after the command for creating it):
ibmcloud functions trigger create test-trigger --feed test-action
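If test-action does not exist yet, it can be created from a local file first. A sketch, reusing the source path and runtime from the manifest example below:
ibmcloud functions action create test-action ./src/actions/action.js --kind nodejs:12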
Using Manifest File
The example below triggers the test-action Cloud Function every 15 minutes using a Cloud Functions trigger.
manifest.yaml
project:
  namespace: _
  packages:
    test-package:
      actions:
        test-action:
          function: ./src/actions/action.js
          runtime: nodejs:12
      triggers:
        test-trigger:
          feed: /whisk.system/alarms/interval
          inputs:
            minutes: 15
      rules:
        rile-test-trigger:
          trigger: test-trigger
          action: test-action
To deploy this, you essentially just run:
ibmcloud functions deploy -m ./manifest.yaml
Both options can be wired into a CD tool like Travis or Jenkins to automatically deploy the latest changes from GitHub to the cloud.
I'm trying to figure out how to update the JSON config files in my .NET Core web service, based on the deployed resources using Terraform.
I have an existing Azure DevOps pipeline, which builds/deploys a .NET Core web service to an Azure App Service.
In moving to Terraform, I'll be creating a CosmosDb database, Azure Search service, Event Grid, etc. for dev/test/prod environments.
I have a handle on creating these in Terraform, but I'm not clear how to take the outputs from these resources (like the CosmosDb location, key, and database id) and inject these into my JSON config files in my deployed web service.
Has anyone done this sort of thing, and can show a Terraform example? Thanks!
You don't actually inject those into your config file; you set them as app settings in your App Service, and those override the matching keys in your config file.
So if you have:
{
  "CosmosDb": {
    "Key": ""
  }
}
In your Terraform you would do the following:
resource "azurerm_app_service" "test" {
  name                = "example-app-service"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "${azurerm_app_service_plan.test.id}"

  app_settings = {
    "CosmosDb:Key" = "${azurerm_cosmosdb_account.db.primary_master_key}"
  }
}
So you would reference your other Terraform resources to pull out the values you need and put those in the app settings section of your App Service in Terraform.
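One caveat from general .NET Core configuration behavior (an addition, not part of the answer above): on Linux App Service plans, colons are not allowed in environment variable names, so the hierarchy separator becomes a double underscore, which .NET Core maps back to ":":
app_settings = {
  # ":" is not valid in env var names on Linux; .NET Core treats "__" as ":"
  "CosmosDb__Key" = "${azurerm_cosmosdb_account.db.primary_master_key}"
}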
For one process I need the credentials that are in Object Storage in IBM Bluemix. How can I get these credentials? Where can I see them?
Thank you very much!
credentials_1 = {
    'auth_url': '',
    'project': '',
    'project_id': '',
    'region': '',
    'user_id': '',
    'domain_id': '',
    'domain_name': '',
    'username': '',
    'password': '',
    'container': '',
    'filename': ''
}
There are a few different Object Storage services on the IBM Cloud. Here is a comparison.
In a project with Object Storage, the credentials can be found in the Credentials section and/or the Service Credentials section of the service instance.
You can create a project with the Object Storage service using the App Service console: https://console.bluemix.net/developer/appservice/create-project?services=Object-Storage
You could also use the cf CLI and run the command cf service-key serviceInstanceName serviceKeyName to get the credentials. You would have to run cf service-keys serviceInstanceName to get a list of the service keys beforehand.
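A sketch of that flow end to end (the instance name myObjectStorage and key name myKey are placeholders):
# List service instances, then the keys of one instance, then one key's credentials
cf services
cf service-keys myObjectStorage
cf service-key myObjectStorage myKey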
AWS Service Catalog uses CloudFormation templates for provisioning the products/environments.
I tried provisioning the product with the help of the AWS documentation example, in which AWS provides a CloudFormation template for creating an EC2 instance with public access.
I provisioned the product (i.e. created the EC2 instance), but now I need the IP address of the EC2 instance that was created through CloudFormation.
Could anyone help me with the AWS CLI command/AWS PowerShell command to get the output section of the product?
I got the answer myself. Finally I got the IP address of the provisioned product:
# Provision the product, passing the CloudFormation KeyName parameter
$newProduct = New-SCProvisionedProduct -ProvisionedProductName $productName -ProductId $productId -ProvisioningArtifactId $artifactId -ProvisionToken testToken -ProvisioningParameter @( @{key="KeyName";value="test"} )

# Fetch the provisioning record and read the IP address from its outputs
# (index 1 assumes the IP is the second output of the CloudFormation stack)
$envInfo = Get-SCRecord -Id $newProduct.RecordId
$envIP = $envInfo.RecordOutputs[1].OutputValue
Write-Host $envIP
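For the AWS CLI half of the question, a rough equivalent sketch (the record ID is a placeholder, and the PublicIP output key is an assumption that depends on the CloudFormation template's Outputs section):
# Read the outputs of the provisioning record
aws servicecatalog describe-record --id rec-xxxxxxxxxxxx \
  --query "RecordOutputs[?OutputKey=='PublicIP'].OutputValue" --output text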