Cloud Foundry Deployment in Travis - ibm-cloud

I know I can do this: https://docs.travis-ci.com/user/deployment/cloudfoundry
Now in .travis.yml, it will have
deploy:
  edge: true
  provider: cloudfoundry
  username: hulk_hogan@example.com
  password: supersecretpassword
  api: https://api.run.pivotal.io
  organization: myawesomeorganization
  space: staging
Although the password can be encrypted by running
travis encrypt --add deploy.password
I don't want to put the username and password (even encrypted) in the yml file. Is there another way for Travis to deploy apps to Cloud Foundry (or IBM Bluemix)?

There are several ways of passing credentials with Cloud Foundry. Putting them in your .yml file is just one option.
You can set them manually with the command cf set-env, as explained here: https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#view-env
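For example, a minimal sketch of setting a user-defined variable with the cf CLI (the app name my-app and the variable name are placeholders):
cf set-env my-app SMTP_PASSWORD supersecretpassword
cf restage my-app
A restage (or at least a restart) is needed before the running app picks up the new value.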
If you are afraid of the CLI, Bluemix also allows you to create user-defined environment variables from its GUI: https://github.com/ibm-cds-labs/simple-data-pipe/wiki/Create-a-user-defined-environment-variable-in-Bluemix#use-the-bluemix-user-interface
I don't want to put the username and password (even encrypted) in the yml file
FYI, the .yml file does not leave your computer/CI server and is just read once by Cloud Foundry.

Related

How can I create an IBM Cloud Function that deploys from code on a GitHub repo?

I want to create an action for a trigger in IBM Cloud Functions, but instead of only adding the code in the console, I want the action to deploy from code in a GitHub repository. How can I do this?
Thanks in advance.
I don't believe (or at least I have not seen anywhere in the docs that) you can just point Cloud Functions at a GitHub repo. With that said, you could do the following:
Make sure the ibmcloud CLI is installed and that the Cloud Functions plugin is also installed: ibmcloud plugin install cloud-functions
ibmcloud login - You need a valid session in the CLI, either by logging in interactively or by using an IBM Cloud API key or a Service ID key that has the right IAM access to deploy to a Cloud Functions namespace in IBM Cloud: ibmcloud login --apikey "YOUR_API_KEY".
ibmcloud target -r eu-gb - You need to target the right region where the Cloud Function will live.
ibmcloud target -g test-resource-group - Once logged in, you need to make sure you target the right resource group where the Cloud Function will be pushed to.
If you are lazy like me then you can roll all 3 of the above commands into 1 like so: ibmcloud login --apikey "YOUR_API_KEY" -r "eu-gb" -g "test-resource-group"
ibmcloud functions namespace target test-function-namespace - Finally, after logging in you need to use the cloud-functions plugin to target the right namespace where the Cloud Function will be pushed.
There are multiple ways to deploy the Cloud Function. For example, using the CLI to push the Cloud Function, or using a manifest.yaml file as config.
Using IBM Cloud CLI
Creating a trigger assuming test-action is already created.
ibmcloud functions trigger create test-trigger --feed test-action
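If the goal is the same 15-minute schedule as the manifest shown next, a rough CLI equivalent would be a trigger fed by the alarms package plus a rule connecting it to the action (same placeholder names as below; double-check the flags against the IBM Cloud Functions CLI docs):
ibmcloud functions trigger create test-trigger --feed /whisk.system/alarms/interval --param minutes 15
ibmcloud functions rule create rule-test-trigger test-trigger test-action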
Using Manifest File
The example below triggers the test-action Cloud Function every 15 minutes using a Cloud Functions trigger.
manifest.yaml
project:
  namespace: _
  packages:
    test-package:
      actions:
        test-action:
          function: ./src/actions/action.js
          runtime: nodejs:12
      triggers:
        test-trigger:
          feed: /whisk.system/alarms/interval
          inputs:
            minutes: 15
      rules:
        rule-test-trigger:
          trigger: test-trigger
          action: test-action
To deploy this you essentially just:
ibmcloud functions deploy -m ./manifest.yaml
Both options can essentially be wired into a CD tool like Travis or Jenkins and can automatically deploy the latest changes from GitHub to the cloud.
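For instance, a minimal .travis.yml sketch of that wiring could look like the following; the CLI installer URL, the IBMCLOUD_API_KEY environment variable (defined in the Travis repository settings rather than in the file), and the region/resource group/namespace names are assumptions to adapt:
language: node_js
node_js:
  - 12
install:
  - curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
  - ibmcloud plugin install cloud-functions
deploy:
  provider: script
  script: >-
    ibmcloud login --apikey "$IBMCLOUD_API_KEY" -r eu-gb -g test-resource-group &&
    ibmcloud functions namespace target test-function-namespace &&
    ibmcloud functions deploy -m ./manifest.yaml
  on:
    branch: main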

Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it is using GCP Container Registry
I do not imagine it should be much different, but I keep on getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this.
When I created the service connection, I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the ArtifactRegistryWriter role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter to the Docker task (Docker@2) needs to be in the form: PROJECT-ID/REPO/IMAGE
I was able to get it working with the documentation for Container Registry.
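Putting those two points together, a hedged sketch of the pipeline step (the service connection name gar-connection and the project/repo/image values are placeholders):
steps:
  - task: Docker@2
    displayName: Build and push image to Artifact Registry
    inputs:
      # Docker Registry service connection whose registry URL is https://us-west2-docker.pkg.dev/my-project-id
      containerRegistry: gar-connection
      # PROJECT-ID/REPO/IMAGE
      repository: my-project-id/my-repo/my-image
      command: buildAndPush
      Dockerfile: '**/Dockerfile'
      tags: |
        $(Build.BuildId)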
My issue was with the repository name.
ALSO, the main difference when using Artifact Registry is the role you need to give the IAM service account. Use ArtifactRegistryWriter; StorageAdmin will be useless.

Add username and password to a yaml config file with Azure Pipelines

I am trying to deploy a Prometheus exporter with Azure DevOps; however, the configuration has a username and a password which I wish to populate through Azure Pipelines, as I don't want to store the credentials in my repository. My YAML config looks like this:
version: 3
max_repetitions: 25
timeout: 10s
auth:
  security_level: authPriv
  username: admin
  password: password123
You'd need to use a token-replace step or just a script that does that; there is nothing built-in that would handle it for you. Another alternative would be some sort of "offloading": move these values to secrets and reference them from there, and/or use some sort of key vault.
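For illustration, a minimal sketch of the script approach, assuming the exporter config sits at config/snmp.yml with __USERNAME__/__PASSWORD__ placeholder tokens and that exporterUsername/exporterPassword are secret pipeline variables (all of those names are made up):
steps:
  - bash: |
      # replace the placeholder tokens with the real credentials at build time
      sed -i "s/__USERNAME__/${EXPORTER_USERNAME}/g; s/__PASSWORD__/${EXPORTER_PASSWORD}/g" config/snmp.yml
    displayName: Inject exporter credentials
    env:
      # secret variables are not exposed to scripts automatically, so map them explicitly
      EXPORTER_USERNAME: $(exporterUsername)
      EXPORTER_PASSWORD: $(exporterPassword)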

How to inject a username and password stored in Vault for use in a Jenkinsfile (pipeline as code)?

I have my username and password stored in a Vault server. While using a Jenkins pipeline I want to use those credentials in my Jenkinsfile to run an Ansible play that will use those credentials on the target machine to log in and perform tasks. How can I do that in the Jenkinsfile?
Well, I figured it out... the official documentation itself is wrong.
Correct usage is described here: https://issues.jenkins-ci.org/browse/JENKINS-45685

jhipster application-prod.yml stores sensitive info on github

I want to keep my SMTP login information private, and it's a hassle to edit application-prod.yml every time I have to deploy to production.
What is the correct method to avoid storing sensitive details in application-prod.yml on GitHub?
Many different ways depending on where and how you deploy to:
Don't store application-prod.yml in git and don't package it in your jar
Don't store secrets in application-prod.yml, use environment variables or command line options (see the sketch after this list). See Spring Boot doc.
Encrypt secrets using git-crypt in application-prod.yml
Store secrets in external Spring Cloud Config Server (e.g. JHipster Registry) or HashiCorp Vault
and many other ways...
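As an illustration of the second option, the SMTP credentials can be pulled from environment variables instead of being hard-coded; the spring.mail.* keys below are the standard Spring Boot mail properties, so adjust them to whatever your application-prod.yml actually contains:
# application-prod.yml
spring:
  mail:
    username: ${SMTP_USERNAME}
    password: ${SMTP_PASSWORD}
At deploy time you export SMTP_USERNAME and SMTP_PASSWORD on the production host (or pass --spring.mail.username=... / --spring.mail.password=... on the command line), and nothing sensitive ends up in Git.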