SAP Cloud Platform: basic authentication prompt appears when accessing a service in a Web IDE app. Why?

I'm using the same destination in a number of apps, which all connect fine.
I created a new app (using the same SAP Web IDE template).
The service is retrieved fine when selecting New > OData Service from the project menu, which proves my destination credentials are fine.
Now, when I run the app, I get a basic authentication window. Cancelling it means I can't reach the service metadata and therefore can't retrieve any data.
https://webidetesting0837185-s0015641139trial.dispatcher.hanatrial.ondemand.com/SAPUI5-ABAP-SFI/sap/opu/odata/sap/ZSV_SURVEY_SRV/$metadata?sap-language=EN 401 (Unauthorized)
My username and password are not being accepted even though they are correct.
Any ideas?

If your user/password is not accepted, I think you are missing some configuration in the backend; check the logs, for example ST22 or SLG1, for authorization issues. Also check whether your destinations in the Cloud Connector work properly.
To solve this in general without basic authentication, you need to work with SAP CP's destination service, retrieving the data on-premise or via AppToAppSSO as the destination's type/mode, or work with an API service on SAP CP. For the first way (destination service), reference the destination in your SAPUI5 app's neo-app.json instead of using relative paths, like this:
{
  "routes": [
    {
      "path": "/destinations/SFSF_ODATA_PROXY",
      "target": {
        "type": "destination",
        "name": "sap_hcmcloud_core_odata"
      },
      "description": "SFSF Proxy OData"
    }
  ],
  "cacheControl": [
    {
      "directive": "public",
      "maxAge": 0
    }
  ]
}
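The app then consumes the service through that route path instead of a hard-coded relative URL. As a minimal sketch (not from the original answer), a manifest.json data source could point at the proxied path like this; the data source name and the uri suffix are placeholder assumptions for an OData v2 service:
{
  "sap.app": {
    "dataSources": {
      "mainService": {
        "uri": "/destinations/SFSF_ODATA_PROXY/odata/v2/",
        "type": "OData",
        "settings": {
          "odataVersion": "2.0"
        }
      }
    }
  }
}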

Make sure you enter the credentials for your backend (and not for your CP account, for example). You can also try to maintain the credentials in the destination itself by setting the authentication type to BasicAuthentication.
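For illustration, a destination maintained in the SAP CP cockpit carries properties along these lines; every value below is a placeholder and the exact property set depends on your scenario:
# All values are placeholders; adjust for your backend
Name=SAPUI5-ABAP-SFI
Type=HTTP
URL=http://my-backend-host:8000
ProxyType=OnPremise
Authentication=BasicAuthentication
User=MYBACKENDUSER
Password=mybackendpassword
WebIDEEnabled=true
WebIDEUsage=odata_abap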

I have already solved this issue by changing Authentication to Basic Authentication.

Related

How to change credentials for a given service in Cloud Foundry

I would like to know how (and whether it's possible) to change the URI credentials for a service in Cloud Foundry. More specifically, the mLab service (free plan) on Pivotal Cloud Foundry.
Background
I created and pushed a nodejs app to Pivotal Cloud Foundry.
This app is bound to a mLab service using free plan.
When the mLab service was created through the Pivotal website, it created a database with a user and password automatically.
Opening the app settings on the Pivotal website, I can see the following environment variables. Please notice the mongo uri inside credentials and the name inside the mlab block.
{
  "staging_env_json": {},
  "running_env_json": {},
  "system_env_json": {
    "VCAP_SERVICES": {
      "mlab": [
        {
          "label": "mlab",
          "provider": null,
          "plan": "sandbox",
          "name": "users",
          "instance_name": "users",
          "binding_name": null,
          "credentials": {
            "uri": "mongodb://CloudFoundry_someusergenerated:apasswordgeneratedautomatically@somehost.mlab.com:someport/CloudFoundry_database_name"
          }
        }
      ]
    }
  },
  "application_env_json": {
    "VCAP_APPLICATION": {
      "cf_api": "https://donotuseapi.run.pivotal.io",
      "application_name": "website",
      "application_uris": [
        "xxx.cfapps.io"
      ],
      "name": "website",
      "space_name": "space1",
      "uris": [
        "xxx.cfapps.io"
      ]
    }
  }
}
The connection with the database works fine with this default user and password. To get the MongoDB URI from the environment variables, I am using the npm package cfenv:
// cfenv parses VCAP_SERVICES / VCAP_APPLICATION from the environment
const appEnv = require('cfenv').getAppEnv();
const env = process.env;

const keys = {
  mongodb: {
    // look the service up by the name configured in manifest.yaml
    dbURI: appEnv.getServiceURL(env.MONGO_SERVICE_NAME)
  }
};
In my manifest.yaml file I have MONGO_SERVICE_NAME set to match the service name from the environment variables.
---
applications:
- name: website
  memory: 128M
  disk_quota: 256M
  random-route: true
  buildpack: nodejs_buildpack
  health-check-type: port
  env:
    MONGO_SERVICE_NAME: 'users'
Again, the db connection works fine.
===
Then I opened the mLab website for this particular database and created a new database user.
Now I want to update the credentials.uri in VCAP_SERVICES (the environment variables) for this particular service to use the new user and password.
As far as I could see, the cf update-service CLI command is not meant for that, so I am wondering whether this is a limitation of Cloud Foundry, Pivotal, or mLab. I could bet that this limitation is due to my using a Pivotal trial account and the mLab free plan; however, my question stays the same even if I upgrade plans.
Thanks,
As a user, it's not possible for you to change VCAP_SERVICES entries that are generated by a service broker; these are fixed.
If you created a service with cf create-service, then it was created by a service broker and you get exactly what the service provider gives you. As was mentioned in the comments, depending on the service broker you may be able to pass arguments to the broker with cf create-service -c. Check your service provider's documentation to see whether there is an option to influence the generated credentials in the way you would like.
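For example, a broker that supports creation parameters would accept something like the following; the JSON key here is purely hypothetical, and mLab's broker may not offer any:

# Pass broker-specific arguments at creation time (only if the broker supports them)
cf create-service mlab sandbox users -c '{"username": "myuser"}'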
If your service provider does not provide options to do what you want, you can create a service key instead of binding your service to an app. This gives you a set of credentials that last for the lifetime of the service key. Service key credentials are not automatically passed into an app, but you can feed them into your app in a variety of ways (see the sketch after this list).
You can pass them in through environment variables.
You can create a user provided service (cf cups).
You can pass them through in application config or through a config server.
With all of these options, you are passing the credentials through to the app so you could in theory adjust or alter them before they get to your app.
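A sketch of the service-key plus user-provided-service route, reusing the instance and app names from the question; the key name, the ups name, and the credential values are placeholders:

# Create a key against the existing instance and print its credentials
cf create-service-key users my-key
cf service-key users my-key

# Wrap the adjusted credentials in a user-provided service
cf cups users-db -p '{"uri":"mongodb://newuser:newpass@somehost.mlab.com:someport/CloudFoundry_database_name"}'

# Bind it to the app and restage so VCAP_SERVICES picks it up
cf bind-service website users-db
cf restage website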
Hope that helps!

Using passport-http on Hyperledger composer REST API

I would like to know whether it is possible to use passport-http to secure the REST API of Hyperledger Composer generated with the composer-rest-server, and what the export COMPOSER_PROVIDERS='{}' configuration would be.
The idea is to use the identities previously generated and assigned to participants with Composer to authenticate the GET and POST requests on the API.
If it is possible, how would the userID and userSecret be passed: as a special HTTP header, in the body, or as a simple basic auth header?
I've not tried it, but it should be possible. The Composer REST server uses the open-source Passport authentication middleware, so it's a matter of configuration. Multiple Passport strategies can be selected, allowing clients of the REST server to select a preferred authentication mechanism.
The strategy for passport-http is here -> https://github.com/jaredhanson/passport-http
You can try something like:
export COMPOSER_PROVIDERS='{
  "basic": {
    "provider": "basic",
    "module": "passport-http",
    "clientID": "REPLACE_WITH_CLIENT_ID",
    "clientSecret": "REPLACE_WITH_CLIENT_SECRET",
    "authPath": "/auth/local",
    "callbackURL": "/auth/local/callback",
    "successRedirect": "/",
    "failureRedirect": "/login"
  }
}'
I assume you know how to configure your passport-http strategy. Also check out RESTful Node.js Application with passport-http, which walks through an app consuming REST endpoints (the example is right near the end).
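To answer the question about how credentials are passed: with passport-http's BasicStrategy, the user ID and secret travel in a standard Authorization: Basic header, not in the body. A hedged sketch, assuming the configuration above is exported first; the card name and endpoint are placeholders, and the -a true flag follows the composer-rest-server docs for enabling authentication:

# Start the REST server with authentication switched on
composer-rest-server -c admin@my-network -a true

# curl builds the Authorization: Basic header from -u
curl -u userID:userSecret http://localhost:3000/api/system/ping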

How to view the CF REST API requests submitted from the Bluemix web console?

Most of my experience with Bluemix so far has been through the web management console. I would now like to start using the Cloud Foundry REST API.
I've had a look through the CF REST API documentation for creating a service instance and see this:
{
  "space_guid": "bbbeed31-f908-477a-aab9-8cdcd19e1348",
  "name": "my-service-instance",
  "service_plan_guid": "fe173a83-df28-4891-8d91-46334e04600d",
  "parameters": {
    "the_service_broker": "wants this object"
  },
  "tags": [
    "accounting",
    "mongodb"
  ]
}
I have no idea what I need to set for the tags or parameters for a Bluemix service. How can I find this out for each Bluemix service?
When I instantiate a service using the Bluemix web console, is it possible to view the REST API requests that are submitted in the background, so that I can reverse engineer the API calls?
You won't be able to see the commands sent by the Bluemix console directly, but you can replicate them with the Cloud Foundry CLI and set the environment variable CF_TRACE=true to output all requests to STDOUT. You can also set it as CF_TRACE=/path/to/file.
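For instance; the service, plan, and instance names below are placeholders:

# Print every API request and response the CLI makes while creating a service
CF_TRACE=true cf create-service mongodb 100 my-mongo

# Or capture the trace in a file instead
CF_TRACE=/tmp/cf-trace.log cf services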
The GUIDs will differ between environments. If you're going to use the API, you'll need to look things up by name, find their GUIDs, and then use them in subsequent requests. I've been working on something similar that really should have been implemented as a Terraform provider: https://github.com/EngineerBetter/cf-converger
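cf curl is a handy way to do those lookups, since it reuses the CLI's existing authentication; the space and service names below are placeholders:

# Find the GUID of a space, then of a service, by name
cf curl "/v2/spaces?q=name:space1"
cf curl "/v2/services?q=label:mongodb"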

Cannot connect to secured Azure Service Fabric Cluster via Powershell or Visual Studio

I've created a Service Fabric Application currently consisting of two Reliable Services and a Reliable Actor. For development, I created an SQL Server and database in Azure, and hardcoded the connection string into my application, which I was running on my local SF cluster. This worked fine, and I was able to run my application locally whilst manipulating the database in the cloud.
I now want to publish my service to the cloud and run it all remotely (so that I can set up and test the Web API it exposes), and this is where the problems start.
Following Azure docs:
Create a Service Fabric cluster in Azure using Azure Resource Manager
Connect to a secure cluster
Configure secure connections to a Service Fabric cluster from Visual Studio
Service Fabric cluster security scenarios
Publish an application to a remote cluster by using Visual Studio
Add or remove certificates for a Service Fabric cluster in Azure
I have taken the following steps:
Used Powershell (with ServiceFabricRPHelpers cmdlets) to create a KeyVault resource group, and within that a KeyVault.
Used New-SelfSignedCertificate with -DnsName set to api.mydomain.co.uk, which I have already purchased and created a CNAME record for api leading to mycluster.northeurope.cloudapp.azure.com:19000 (though of course it doesn't exist at this stage of the process), followed by Export-PfxCertificate to create the .pfx file. The .pfx was then imported to cert:\CurrentUser\TrustedPeople and cert:\CurrentUser\My.
Called Invoke-AddCertToKeyVault to add the newly generated certificate to my KeyVault.
Used the SetupApplications.ps1 script to configure AAD.
Placed all resulting strings etc. into azuredeploy.json and azuredeploy.parameters.json, resolved errors (some of which seemed to contradict the documentation), and successfully deployed the cluster. It is now visible on my Azure Portal.
Assigned User Roles (admin to myself) from the classic portal.
Used Invoke-AddCertToKeyVault to (this time create and) add a second, "admin client" certificate to the cluster (as opposed to the first which was a cluster certificate).
So, with all of that done, I believe I have done everything needed to connect to the cluster to publish via VS2015 and to access the management interface at api.mydomain.co.uk:19080. Alas, that doesn't happen...
Connecting to the database within the same resource group as my cluster still works from VS via the SQL Server Explorer using SQL authentication; however, any attempt to communicate with the cluster itself using AAD or X509-based authentication results in a wait while it tries to connect, and then failure. A few examples:
Trying to connect to the management console says it's blocked, which implies to me it is there, but all the documentation ends before telling me how to access it.
Attempting to connect using Connect-ServiceFabricCluster also fails, and searching the error messages hasn't given me any indication of what to do.
After spending two days absorbing all of this and trying to get it working, I'm all out of ideas on what to try and change. Can anyone find a problem in what I have done, or suggest anything I could try?
If you need more details from me then please just ask!
I too had a nightmare attempting to deploy a secure cluster, using much of the same documentation you have been trying to consume. After spending days getting my hands dirty, I finally managed to get it working.
Here is my own helper and template: SecureCluster
The key things to watch are:
Make sure your client and cluster certificates are both in your Key Vault and referenced within your ARM template under the osProfile of the VM scale set (I noticed in your example that you added the client admin certificate after modifying the ARM template):
"osProfile": {
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"computernamePrefix": "[parameters('vmNodeType0Name')]",
"secrets": [
{
"sourceVault": {
"id": "[parameters('sourceVault')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "[parameters('clusterCertificateUrl')]"
},
{
"certificateStore": "My",
"certificateUrl": "[parameters('adminCertificateUrl')]"
}
]
}
]
},
This will make sure all your certificates are installed onto each node within the cluster.
Next is to make sure that the Service Fabric extension within the scale set also has your certificate:
"extensions": [
{
"name": "[concat(parameters('vmNodeType0Name'),'_ServiceFabricNode')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": false,
"protectedSettings": {
"StorageAccountKey1":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2":
"[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[parameters('vmNodeType0Name')]",
"dataPath": "D:\\\\SvcFab",
"durabilityLevel": "Bronze",
"certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
}
},
"typeHandlerVersion": "1.0"
}
},
Finally, under the Service Fabric resource section within the ARM template, make sure you specify which certificate to use for node-to-node security and which for client-to-node security:
certificate": {
"thumbprint": "[parameters('clusterCertificateThumbPrint')]",
"x509StoreName": "My"
},
"clientCertificateCommonNames": [],
"clientCertificateThumbprints": [{
"CertificateThumbprint": "[parameters('adminCertificateThumbPrint')]",
"IsAdmin": true
}],
You should then be able to securely connect to the cluster in the way you are attempting. One thing I have found, though, is that the URL shouldn't be prefixed with "http" in the publish profile, and to browse to the Explorer you will need the URL to be https://[n]:19080/Explorer/index.html
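For reference, a minimal PowerShell sketch of the kind of connect call this setup expects; the endpoint and both thumbprints are placeholders for your own values:

# Connect using the admin client certificate installed in CurrentUser\My
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.northeurope.cloudapp.azure.com:19000' `
    -X509Credential `
    -ServerCertThumbprint '<cluster certificate thumbprint>' `
    -FindType FindByThumbprint -FindValue '<admin client certificate thumbprint>' `
    -StoreLocation CurrentUser -StoreName My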
Hopefully you will find this of some help.

Get auth token for accessing Orion FI-LAB instance

I'm trying to make a request to the Orion broker using REST Client, for example an NGSI10 queryContext with a payload like this one:
{
  "entities": [
    {
      "type": "*",
      "isPattern": "false",
      "id": "Sevilla:01727449"
    }
  ]
}
and I always receive the same result:
Auth-token not found in request header
The Orion Context Broker I'm using is the FI-WARE Lab context broker, and I want to know how to make an authorized request to this CB using REST Client, if it is possible.
Thanks
The Orion instance at FI-LAB uses OAuth authentication, so you need to include a valid X-Auth-Token HTTP header in your requests to Orion.
Your application should implement OAuth and negotiate a valid token with the security framework. However, for debugging or quick testing you can use the following shell script to get a fresh X-Auth-Token:
https://github.com/fgalan/oauth2-example-orion-client/blob/master/token_script.sh
The script will ask you for your FI-LAB user and password.
Please have a look at https://wiki.fi-ware.org/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide#FI-LAB_context_management_platform for more detail on the Orion FI-LAB deployment.
EDIT: the recently published Orion Quick Start guide also includes an example of how to use the token_script.sh script, which can be useful.
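As an illustration of the whole flow; the entity is taken from the question, while the host placeholder, the default port 1026, and the assumption that token_script.sh prints only the token are mine:

# Obtain a token, then repeat the queryContext request with it attached
TOKEN=$(sh token_script.sh)
curl http://<orion-host>:1026/v1/queryContext \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -d '{ "entities": [ { "type": "*", "isPattern": "false", "id": "Sevilla:01727449" } ] }'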