I'm trying to import my blockchain business network to the cloud using IBM Bluemix.
I already have all the Docker containers up, and I have successfully accessed Composer Playground in the cloud, imported my .bna file, and imported my admin card.
But when I try to connect to the network I get the error below.
Error: Error trying login and get user Context. Error: Error trying to
enroll user or load channel configuration. Error: Enrollment failed
with errors [[{"code":400,"message":"Authorization failure"}]]
I also tried to create a new card using Playground directly and gave it admin privileges, but I got the same error.
The error means that Composer attempted to enrol the specified identity against the Fabric CA, but the identity is not registered with the Fabric CA, so you get the authorization error. First, review the Docker logs of your CA server, e.g. docker logs ca.org1.example.com, to get more information on the auth failure.
It's possible you tried to connect with a card that has no credentials set (certificate/key, as opposed to enrolment ID + secret). You say you 'accessed playground on cloud successfully' - I assume you mean you connected to the business network that you deployed (rather than imported), as 'admin' or some other 'network admin'. Is that correct?
When you issue an identity (say, while connected to the business network as an admin):
1) Add it to your wallet in Playground, then switch identity ('use now') to that ID. This activates the identity.
2) Go to 'My Business Networks' and connect with that identity's card - it will have credentials set in your local wallet (depending on where you're connecting from: local Playground or Playground inside your Bluemix environment).
3) Back in 'My Business Networks', you can use the 'export' icon alongside that user's business network card and save it to disk (with credentials) as a .card file. That card is the one you would share in order to connect to Playground elsewhere as that identity.
If you continue to have issues, I would remove the cards in question from your wallet location using composer card delete (having exported them first), import the exported .card (i.e. with credentials set) again, and then try to connect from Playground - see the commands sketched below.
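A minimal sketch of that export/delete/re-import cycle with the Composer CLI; the card name admin@my-network and the file admin.card are placeholders for your own:

# export the card (with credentials) to a .card file first
composer card export -c admin@my-network -f admin.card
# remove the problematic card from the local wallet
composer card delete -c admin@my-network
# re-import the exported card, credentials included
composer card import -f admin.card
# confirm the card is back in the wallet
composer card list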
I am trying to create a linked service in Azure Data Factory to an Azure Data Lake Storage Gen2 data store. Below is my linked service configuration:
I get the following error message when I test the connection:
Error code 24200 Details ADLS Gen2 operation failed for: Storage
operation '' on container 'testconnection' get failed with 'Operation
returned an invalid status code 'Forbidden''. Possible root causes:
(1). It's possible because some IP address ranges of Azure Data
Factory are not allowed by your Azure Storage firewall settings. Azure
Data Factory IP ranges please refer
https://learn.microsoft.com/en-us/azure/data-factory/azure-integration-runtime-ip-addresses.
I have found a very similar question here, but I'm not using Managed Identity as my authentication method. Perhaps I should be using that method. How can I overcome this error?
I tried to create a linked service to my Azure Data Lake Storage, and when I tested its connection, it gave me the same error as quoted above.
As indicated by the possible root causes in the error details, this occurs because of the Azure Data Lake storage account's firewall settings.
Navigate to your data lake storage account, go to Networking -> Firewalls and virtual networks.
Here, when public network access is either disabled or enabled only from selected virtual networks and IP addresses, linked service creation fails with the error message specified above.
Change it to Enabled from all networks, save the changes, and try creating the linked service again.
Now, when we test the connection before creating the linked service, it succeeds, and we can proceed to create it.
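If you prefer to make the same change from the command line, here is a minimal Azure CLI sketch; the account and resource group names are placeholders:

# allow traffic from all networks (equivalent to 'Enabled from all networks')
az storage account update \
  --name mydatalakestorage \
  --resource-group my-resource-group \
  --default-action Allow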
UPDATE:
If your data lake storage must keep public network access enabled only from selected virtual networks and IP addresses, you can still create a successful connection via a linked service using the following approach.
First, create an integration runtime in your Azure Data Factory.
In your Data Factory studio, navigate to Manage -> Integration runtimes -> New. Select Azure, Self-Hosted as the type of integration runtime.
Select Azure in the next window and click Continue, then enter the details for the integration runtime.
In the virtual network tab, enable the virtual network configuration and check the interactive authoring checkbox.
Now continue to create the Integration runtime. Once it is up and running, start creating the linked service for data lake storage.
In Connect via integration runtime, select the IR created above. To complete the creation, we also need to create a managed private endpoint (you will be prompted to create one).
Click Create new, set the account selection method to From Azure subscription, select the data lake storage you are creating the linked service to, and click Create.
Once you create this, a private endpoint request will be sent to your data lake storage account. Open the storage account, navigate to Networking -> Private endpoint connections. You can see a pending request. Approve this request.
Once this is approved, you can successfully create the linked service while your data lake storage allows access only from selected virtual networks and IP addresses.
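The approval can also be scripted; a hedged sketch with the Azure CLI, using placeholder names (look up the pending connection's name under Private endpoint connections first):

# approve the pending private endpoint request on the storage account
az storage account private-endpoint-connection approve \
  --account-name mydatalakestorage \
  --resource-group my-resource-group \
  --name <pending-connection-name>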
The error occurs because of firewall and network access restrictions. One way to overcome it is to add your client IP to the firewall and network settings of your storage account. Navigate to your data lake storage account, go to Networking -> Firewalls and virtual networks, and under the firewall option click "Add your client IP address". A CLI equivalent is sketched below.
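A minimal Azure CLI sketch of the same IP whitelisting, with placeholder names and an example IP:

# add your client IP to the storage account firewall
az storage account network-rule add \
  --account-name mydatalakestorage \
  --resource-group my-resource-group \
  --ip-address 203.0.113.25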
I am running a Kubeflow pipeline (Docker approach), and the cluster uses an endpoint to navigate to the dashboard. The cluster was created following the instructions in this link: Deploy Kubeflow. Everything was created successfully, the cluster generated the endpoints, and it is working perfectly.
Endpoint link would be something like this https://appname.endpoints.projectname.cloud.goog.
Every workload of the pipeline works fine except the last one. In the last workload, I try to submit a job to the Cloud ML Engine, but the logs show that the application has no access to the project. Here is the relevant part of the log:
ERROR:
(gcloud.ml-engine.versions.create) PERMISSION_DENIED: Request had
insufficient authentication scopes.
ERROR:
(gcloud.ml-engine.jobs.submit.prediction) User
[clustername@project_name.iam.gserviceaccount.com]
does not have permission to access project [project_name]
(or it may not exist): Request had insufficient authentication scopes.
From the logs, it's clear that this service account doesn't have access to the project itself. However, I tried to give this service account access to the Cloud ML service, but it still throws the same error.
Are there any other ways to give Cloud ML service credentials to this application?
Check two things:
1) GCP IAM: whether clustername-user@projectname.iam.gserviceaccount.com has the ML Engine Admin permission.
2) Your pipeline DSL: whether the Cloud ML Engine step calls apply(gcp.use_gcp_secret('user-gcp-sa')), e.g. https://github.com/kubeflow/pipelines/blob/ea07b33b8e7173a05138d9dbbd7e1ce20c959db3/samples/tfx/taxi-cab-classification-pipeline.py#L67
A gcloud sketch of the first check is below.
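For the first check, granting the role from the command line might look like the following gcloud sketch; the project and service account names are placeholders taken from the error message above, and roles/ml.admin is the legacy ML Engine Admin role:

# grant the cluster's service account ML Engine Admin on the project
gcloud projects add-iam-policy-binding project_name \
  --member serviceAccount:clustername-user@projectname.iam.gserviceaccount.com \
  --role roles/ml.admin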
I'm trying to get Google Cloud Storage working in my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
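For example, a minimal sketch with the gcloud CLI, assuming a hypothetical service account my-app@my-project.iam.gserviceaccount.com that already has the storage roles it needs:

# download a full service account key (keep this file secret)
gcloud iam service-accounts keys create /path/to/keyfile.json \
  --iam-account my-app@my-project.iam.gserviceaccount.com
# point the app at the key via the standard env var
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json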
I was able to figure this out. I had been following the Rails guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
service: GCS
credentials: <%= Rails.root.join("path/to/keyfile.json") %>
project: ""
bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key file and adding it to my Rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Google::Auth.get_application_default, as shown in the documentation, returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you use a dash for the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid), and that the "issuer" string is correct: it must be the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" \
  --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the object you need to sign. I think these are project-wide rights; unfortunately I couldn't get it to work with more fine-grained permissions (i.e. making one app admin of just a certain Storage bucket). Granting the two roles is sketched below.
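A sketch of granting both roles with gcloud, reusing the placeholders from the query above:

# allow the service account to call signBlob
gcloud projects add-iam-policy-binding <project-name> \
  --member serviceAccount:<sa_name> \
  --role roles/iam.serviceAccountTokenCreator
# give it ownership of the storage objects it signs for
gcloud projects add-iam-policy-binding <project-name> \
  --member serviceAccount:<sa_name> \
  --role roles/storage.admin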
I used this tutorial to deploy a business network on a free Bluemix cluster: https://ibm-blockchain.github.io/
I also deployed the REST server and communicate with it via web apps.
Everything went fine until yesterday, when the REST server became inaccessible.
I deleted everything on the cluster using the delete_all script available in the ibm-container-service repository.
I then followed the install procedure using the create_all script. I could access Composer Playground (port 31080) again, but was not really able to deploy an online business network using the "profile" hlfv1. It now asks for credentials at the bottom of the "deploy UI".
I don't know what to fill in. I tried to use ID + password; that way I was able to deploy, but I got an access error when clicking on "connect now". I was then able to start the REST server, but if I try to access it in the browser (port 31090), I get the feedback that I'm not authorized.
Any ideas?
And do you know which changes have been made in the last month that could cause these problems?
Thanks,
Phil
The tutorial you point to only covers Playground when used with a web browser connection, not a real Fabric. When you deploy to a real Fabric, you have to provide an initial identity that you want bound to an initial participant in the business network. The initial participant will be of type org.hyperledger.composer.system.NetworkAdmin and given the name of the initial identity you provide.
To get started, select the ID and Secret radio button. Then, for the Enrollment ID, enter admin, and for the Enrollment Secret, enter adminpw.
This is the name and secret of the bootstrap identity that exists in the fabric-ca server that has been deployed as part of the scripts.
By providing this information, that identity will be enrolled and its public certificate will be bound to a NetworkAdmin participant called admin. This admin identity will then have access to the business network, as only identities that are bound to a participant in the business network have any sort of access.
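For reference, the same bootstrap identity is used when deploying from the Composer CLI instead of Playground; a hedged sketch, assuming a Composer release of that era, a PeerAdmin@hlfv1 card from the install scripts, and a placeholder .bna file name:

# deploy the network, binding the 'admin'/'adminpw' identity to a NetworkAdmin participant
composer network deploy -c PeerAdmin@hlfv1 -a my-network.bna -A admin -S adminpw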
I have a test console app that successfully retrieves a cert from the local computer cert store and uses this cert to get a token from AAD.
However, when I run the same code inside a Windows service, AcquireTokenAsync() does not complete and breaks the execution, although the cert is retrieved from the store.
I did notice a private key error: PrivateKey = '_certCred.Certificate.PrivateKey' threw an exception of type 'System.Security.Cryptography.CryptographicException'
Any advice would be helpful
It would help to have the source code and information about the accounts you are using, so we could see where the certificate is stored, but based on your description:
It is possible that the account under which the service is running does not have access to (the private key of) the certificate you are trying to use.
One possibility is to configure the service to run as the System account and then select 'Allow the service to interact with desktop', as sketched below.
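A minimal sketch of that configuration from an elevated command prompt, with MyService as a placeholder service name (sc requires type= own alongside type= interact):

rem run the service as LocalSystem and allow desktop interaction
sc config MyService obj= LocalSystem type= own type= interact

Alternatively, and more safely, you can grant the service's actual account read access to the certificate's private key (certlm.msc -> certificate -> All Tasks -> Manage Private Keys), which avoids running the service as the System account.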