How to change credentials for a given service in Cloud Foundry - mlab

I would like to know how (and whether it's possible) to change the URI credentials of a service in Cloud Foundry. More specifically, for the mLab service (free plan) on Pivotal Cloud Foundry.
Background
I created and pushed a nodejs app to Pivotal Cloud Foundry.
This app is bound to a mLab service using free plan.
When the mLab service was created using the Pivotal website, it automatically created a database with a user and password.
Opening the app settings inside the Pivotal website, I can see the following environment variables. Note the mongo URI inside credentials and the name inside the mlab entry.
{
  "staging_env_json": {},
  "running_env_json": {},
  "system_env_json": {
    "VCAP_SERVICES": {
      "mlab": [
        {
          "label": "mlab",
          "provider": null,
          "plan": "sandbox",
          "name": "users",
          "instance_name": "users",
          "binding_name": null,
          "credentials": {
            "uri": "mongodb://CloudFoundry_someusergenerated:apasswordgeneratedautomatically@somehost.mlab.com:someport/CloudFoundry_database_name"
          }
        }
      ]
    }
  },
  "application_env_json": {
    "VCAP_APPLICATION": {
      "cf_api": "https://donotuseapi.run.pivotal.io",
      "application_name": "website",
      "application_uris": [
        "xxx.cfapps.io"
      ],
      "name": "website",
      "space_name": "space1",
      "uris": [
        "xxx.cfapps.io"
      ]
    }
  }
}
The connection with the database works fine with this default user and password. In order to get the MongoDB URI from the environment variables, I am using the npm package cfenv:
const appEnv = require('cfenv').getAppEnv();
const env = process.env;

const keys = {
  mongodb: {
    // getServiceURL looks up the named service in VCAP_SERVICES
    // and returns its credentials.uri
    dbURI: appEnv.getServiceURL(env.MONGO_SERVICE_NAME)
  }
};
In my manifest.yaml file I have MONGO_SERVICE_NAME set to match the service name in the environment variables.
---
applications:
- name: website
  memory: 128M
  disk_quota: 256M
  random-route: true
  buildpack: nodejs_buildpack
  health-check-type: port
  env:
    MONGO_SERVICE_NAME: 'users'
Again, the db connection works fine.
===
Then I opened the mLab website for this particular database and created a new database user.
Now I want to update the credentials.uri from VCAP_SERVICES (environment variables) for this particular service to use the new user and password.
As far as I could see, the cf update-service CLI command is not meant for that, so I am wondering whether this is a limitation of Cloud Foundry, Pivotal, or mLab. I would bet this limitation is due to the fact that I am using a Pivotal trial account and the mLab free plan, but my question stays the same even if I upgrade plans.
Thanks,

As a user, it's not possible for you to change VCAP_SERVICES entries that are generated by a service broker. These entries are fixed and cannot be changed.
If you cf create-service'd a service, then it was created by a service broker and you get exactly what the service provider gives you. As was mentioned in the comments, depending on the service broker you may be able to pass arguments to the broker with cf create-service -c. Check the documentation of your service provider to see if there is an option to influence the credentials generated in the way that you would like.
If your service provider does not provide options to do what you want, you can create a service key instead of binding your service to an app. This will give you a set of credentials to your service that will last for the duration of the service key. The service key credentials are not automatically passed into an app, but you can feed them into your app in a variety of ways.
You can pass them in through environment variables.
You can create a user provided service (cf cups).
You can pass them through in application config or through a config server.
With all of these options, you are passing the credentials through to the app yourself, so you could in theory adjust or alter them before they reach your app; a command sketch follows below.
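For illustration, a minimal command sketch of the service-key route, assuming the mLab broker supports service keys (the instance name users and app name website come from your environment variables; the user-provided service name and the new credentials are hypothetical):
cf create-service-key users users-key          # ask the broker for a standalone set of credentials
cf service-key users users-key                 # print the generated credentials JSON
cf cups users-cups -p '{"uri":"mongodb://newuser:newpass@somehost.mlab.com:someport/CloudFoundry_database_name"}'
cf bind-service website users-cups             # the cups credentials now show up in VCAP_SERVICES
cf restage website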
Hope that helps!

Related

injected db credentials change when I deploy new app version to cloud

I deploy a web app to a local cloudfoundry environment. As a database service for my DEV environment I have chosen a Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the Web UI I created an instance with the name myapp-test-database and mentioned it in the CF Manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists, I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then there should be a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
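For illustration, with the names from your manifest, a rotation looks like this:
cf unbind-service myapp-test myapp-test-database   # many brokers revoke the old credentials here
cf bind-service myapp-test myapp-test-database     # ...and mint a fresh set here
cf restage myapp-test                              # restage so the app sees the new VCAP_SERVICES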
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials you can use a service key to work around this. Service keys are like bindings but not associated with an application in Cloud Foundry.
The downside of the service key is that it's not automatically exposed to your application, like a binding, through $VCAP_SERVICES. To work around this, you can pass the service key creds into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
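For example, with the app name from your manifest:
cf push myapp-test --strategy rolling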
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf CLI v7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.

IBM Cloud: How to bind Db2 Warehouse to Code Engine app?

I have an existing instance of Db2 Warehouse on Cloud which is deployed to an org and space. Now, I would like to bind that service to an app for deployment with IBM Cloud Code Engine.
ibmcloud ce application bind --name henriks-app --service-instance myDb2
myDb2 does not exist as an IAM resource because it is a Cloud Foundry resource. How would I bind the two together? It seems that I would need to create some form of custom wrapper.
The best way to manually connect services to your Code Engine application is to add service credentials to a Code Engine secret, and then attach that secret to your application using environment variables or volume mounting.
While you're correct that Db2 Warehouse isn't a typical IAM-Enabled service type, based on the IBM Cloud Db2 Warehouse docs, it's possible to create a client connection with Db2 Warehouse using an IAM Service ID & API Key.
Here's how I'd "bind" the Db2 instance to a Code Engine app:
1. Create a new service ID from the IAM Service IDs page.
2. Under "Assign Access" > "Access service ID additional access" > "IAM Service", you'll find "Db2 Warehouse" as an option, and you can configure exact permissions from there (e.g. which instance(s) to grant permissions to, which roles, etc.).
3. Finish the configuration by clicking "Assign access".
4. Using the CLI, log in to your account and generate a new API key, e.g. ibmcloud iam service-api-key-create mydb2key SERVICE_ID_NAME --output JSON > mydb2.json, where SERVICE_ID_NAME is the name of the service ID created in Step 1.
5. Target your Code Engine project, then create a new secret from the API key JSON, e.g. ibmcloud ce secret create --name mydb2 --from-file MYDB2=mydb2.json.
6. Attach the secret to your application as an environment variable, e.g. ibmcloud ce app update --name myapp --env-from-secret mydb2.
After the app update goes through, your application will have access to an environment variable named MYDB2, which will have the value of a JSON object string containing your API Key.
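As a minimal sketch of consuming that variable (assuming a Node.js app; the apikey field name matches the JSON emitted by service-api-key-create):
// Hedged sketch: parse the API key JSON that Code Engine injects as MYDB2.
const raw = process.env.MYDB2;
if (!raw) throw new Error('MYDB2 is not set; was the secret attached to the app?');
const creds = JSON.parse(raw);
console.log('IAM API key for Db2 is', creds.apikey ? 'present' : 'missing');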
You'll find more information about creating secrets and using secrets with applications in the Code Engine docs.

SAP Cloud Platform : Basic Authentication showing when accessing service in WEB IDE app. Why?

I'm using the same destination on a number of apps, which are connecting fine.
Created a new app (using the same SAP WEB IDE template).
The Service is retrieved fine when selecting New/OData service from the project menu, proving my Destination credentials are fine.
Now, when I run the app, I'm getting a basic authentication window. Cancelling this means I can't connect to the metadata of the service and therefore can't retrieve any data.
https://webidetesting0837185-s0015641139trial.dispatcher.hanatrial.ondemand.com/SAPUI5-ABAP-SFI/sap/opu/odata/sap/ZSV_SURVEY_SRV/$metadata?sap-language=EN 401 (Unauthorized)
My username and password are not being accepted even though they're correct.
Any ideas?
If your user/password is not accepted, I think you are missing some configuration in the backend; check the logs, e.g. ST22 or SLG1, for authorization issues. Also check whether your destinations in the Cloud Connector work properly.
To solve this in general without using basic authentication, you need to work with SAP CP's destination service, retrieving the data from onPremise or via AppToAppSSO as the Type/Mode of the destination, OR work with the API Service on SAP CP. For the first way (destination service), reference the destination in your SAPUI5 app instead of using relative paths, via neo-app.json, like this:
{
  "routes": [
    {
      "path": "/destinations/SFSF_ODATA_PROXY",
      "target": {
        "type": "destination",
        "name": "sap_hcmcloud_core_odata"
      },
      "description": "SFSF Proxy OData"
    }
  ],
  "cacheControl": [
    {
      "directive": "public",
      "maxAge": 0
    }
  ]
}
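Your UI5 code then consumes the service through that route rather than a hardcoded backend path; a minimal sketch (the service path after the route prefix is illustrative):
// Inside a controller: create the OData model against the neo-app.json
// route; the dispatcher resolves it via the destination, so no
// credentials live in the app itself.
var oModel = new sap.ui.model.odata.v2.ODataModel("/destinations/SFSF_ODATA_PROXY/odata/v2/");
this.getView().setModel(oModel);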
Make sure you enter the credentials for your backend (and not for your CP account, for example). You can also try to maintain the credentials in the destination itself by setting the AuthenticationType to BasicAuthentication.
I have already solved this issue by changing the Authentication to BasicAuthentication.

how to view the cf rest api requests submitted from the bluemix web console?

Most of my experience with Bluemix so far has been using the web management console. I would now like to start using the Cloud Foundry REST API.
I've had a look through the cf REST API documentation for creating a service instance and see this:
{
  "space_guid": "bbbeed31-f908-477a-aab9-8cdcd19e1348",
  "name": "my-service-instance",
  "service_plan_guid": "fe173a83-df28-4891-8d91-46334e04600d",
  "parameters": {
    "the_service_broker": "wants this object"
  },
  "tags": [
    "accounting",
    "mongodb"
  ]
}
I have no idea what I need to set for the tags or parameters for a Bluemix service. How can I find this out for each Bluemix service?
When I instantiate a service using the Bluemix web console, is it possible to view the REST API requests that are submitted in the background so that I can reverse-engineer the API calls?
You won't be able to see the commands sent by the BlueMix console directly, but you could replicate the commands with the Cloud Foundry CLI and set an environment variable of CF_TRACE=true to output all requests to STDOUT. You can also set it as CF_TRACE=/path/to/file.
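For example (the service offering and plan here are illustrative):
CF_TRACE=true cf create-service mlab sandbox my-service-instance
CF_TRACE=/tmp/cf-trace.log cf create-service mlab sandbox my-service-instance   # same, but logged to a file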
The UUIDs will differ in your environment. If you're going to use the API, you'll need to look things up by name, find their UUIDs, and then use them in subsequent requests. I've been working on something similar, which really should have been implemented as a Terraform provider: https://github.com/EngineerBetter/cf-converger
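A hedged sketch of that lookup flow with cf curl (the space name is illustrative; the uppercase placeholders are GUIDs you would substitute from the earlier responses):
cf curl "/v2/spaces?q=name:dev"   # read resources[0].metadata.guid from the response
cf curl /v2/service_instances -X POST -d '{"space_guid":"SPACE_GUID","name":"my-service-instance","service_plan_guid":"PLAN_GUID"}'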

Cannot connect to secured Azure Service Fabric Cluster via Powershell or Visual Studio

I've created a Service Fabric Application currently consisting of two Reliable Services and a Reliable Actor. For development, I created an SQL Server and database in Azure, and hardcoded the connection string into my application, which I was running on my local SF cluster. This worked fine, and I was able to run my application locally whilst manipulating the database in the cloud.
I now want to publish my service to the cloud, and run it all remotely (so that I can set up and test the Web API it exposes), and this is where the problems start.
Following Azure docs:
Create a Service Fabric cluster in Azure using Azure Resource Manager
Connect to a secure cluster
Configure secure connections to a Service Fabric cluster from Visual Studio
Service Fabric cluster security scenarios
Publish an application to a remote cluster by using Visual Studio
Add or remove certificates for a Service Fabric cluster in Azure
I have taken the following steps:
Used Powershell (with ServiceFabricRPHelpers cmdlets) to create a KeyVault resource group, and within that a KeyVault.
Used New-SelfSignedCertificate with -DnsName set to api.mydomain.co.uk, which I have already purchased and created a CNAME record for api leading to mycluster.northeurope.cloudapp.azure.com:19000 (though of course it doesn't exist at this stage of the process), followed by Export-PfxCertificate to create the .pfx file. The .pfx was then imported to cert:\CurrentUser\TrustedPeople and cert:\CurrentUser\My.
Called Invoke-AddCertToKeyVault to add the newly generated certificate to my KeyVault.
Used the SetupApplications.ps1 script to configure AAD.
Placed all resulting strings etc. into azuredeploy.json and azuredeploy.parameters.json, resolved errors (some of which seemed to contradict the documentation..), and successfully deployed the cluster. It is now visible on my Azure Portal.
Assigned User Roles (admin to myself) from the classic portal.
Used Invoke-AddCertToKeyVault to (this time create and) add a second, "admin client" certificate to the cluster (as opposed to the first which was a cluster certificate).
So, with all of that done, I believe I should have done everything I need to in order to be able to connect to the cluster to publish via VS2015, and access the management interface from api.mydomain.co.uk:19080. Alas, that doesn't happen...
Connection to the database within the resource group of my cluster still works from VS via the SQL Server Explorer using SQL authentication. However, any attempt to communicate with the cluster itself using AAD or X509-based authentication results in a wait while it tries to connect, and then a failure. A few examples:
Trying to connect to the management console says it's blocked, which implies to me it is there, but all the documentation ends before telling me how to access it.
Attempting to connect using Connect-ServiceFabricCluster also fails, and searching the error messages hasn't given me any indication of what to do.
After spending two days absorbing all of this and trying to get it working, I'm all out of ideas on what to try and change. Can anyone find a problem in what I have done, or suggest anything I could try?
If you need more details from me then please just ask!
I too had a nightmare attempting to deploy a secure cluster, using much of the same documentation you have tried to consume. After spending days getting my hands dirty I managed to finally get it working.
Here is my own helper and template: SecureCluster
The key things to watch are:
Make sure your client and cluster certificates are both in your key vault and referenced within your ARM template under the OSProfile of the VM scale set (I noticed in your example that you were adding the client admin certificate after modifying the ARM template):
"osProfile": {
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"computernamePrefix": "[parameters('vmNodeType0Name')]",
"secrets": [
{
"sourceVault": {
"id": "[parameters('sourceVault')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "[parameters('clusterCertificateUrl')]"
},
{
"certificateStore": "My",
"certificateUrl": "[parameters('adminCertificateUrl')]"
}
]
}
]
},
This will make sure all your certificates are installed onto each node within the cluster.
Next is to make sure that the Service Fabric extension within the scale set also has your certificate:
"extensions": [
{
"name": "[concat(parameters('vmNodeType0Name'),'_ServiceFabricNode')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": false,
"protectedSettings": {
"StorageAccountKey1":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[parameters('vmNodeType0Name')]",
"dataPath": "D:\\\\SvcFab",
"durabilityLevel": "Bronze",
"certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
}
},
"typeHandlerVersion": "1.0"
}
},
Finally, under the Service Fabric resource section within the ARM template, make sure you specify which certificate to use for node-to-node security and which for client-to-node security.
certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
},
"clientCertificateCommonNames": [],
"clientCertificateThumbprints": [{
"CertificateThumbprint": "[parameters('adminCertificateThumbPrint')]",
"IsAdmin": true
}],
You should then be able to securely connect to the cluster in the way you are attempting to. Although one thing I have found is that the URL shouldn't be prefixed with "http" within the publish profile, and when trying to browse to the explorer you will need the URL to be https://[n]:19080/Explorer/index.html
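For reference, a hedged PowerShell sketch of the secure connection (substitute your own endpoint and the thumbprints from your cluster and admin client certificates):
Connect-ServiceFabricCluster -ConnectionEndpoint api.mydomain.co.uk:19000 `
    -X509Credential `
    -ServerCertThumbprint <cluster certificate thumbprint> `
    -FindType FindByThumbprint -FindValue <admin client certificate thumbprint> `
    -StoreLocation CurrentUser -StoreName My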
Hopefully you will find this of some help.