I'm trying to execute the spark+oozie+bluemix liberty example on the OozieWorkflowSparkGroovyBluemixDeploy branch against a BigInsights for Apache Hadoop basic cluster.
The error I get when I try to access the application from a browser:
There was an unexpected error (type=Internal Server Error, status=500).
javax.net.ssl.SSLKeyException: RSA premaster secret error
What is causing this issue?
The issue appears to be due to the webHDFS certificate not being in the Liberty truststore. See here for more information: Steps to configure a Bluemix Liberty application to add a certificate to the Liberty trust store using a cf CLI workflow?
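A minimal sketch of that workflow, with placeholder host, port, keystore path, and password (adjust to your cluster and how the app packages its truststore):

# Fetch the webHDFS endpoint's certificate from the cluster
echo | openssl s_client -connect <cluster-host>:<webhdfs-port> | openssl x509 -outform PEM > webhdfs.crt

# Import it into the truststore the Liberty application uses
keytool -importcert -noprompt -alias webhdfs -file webhdfs.crt -keystore <path-to>/truststore.jks -storepass <storepass>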
UPDATE:
The issue was due to Liberty not having the unlimited-strength JCE encryption policy files installed. Fixed with commit: https://github.com/IBM-Bluemix/BigInsights-on-Apache-Hadoop/commit/b78d12d5ea3ce5e43395cf8e7c1d094e1a9fc012
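A quick way to check whether a given JVM has the unlimited-strength policy in effect is the JDK's jrunscript tool (a sketch; 2147483647 means unlimited, 128 means the restricted default policy):

# Prints the maximum AES key length the JVM allows
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'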
I am creating a new Kubernetes service connection in Azure DevOps Server 2020 Update 1 via KubeConfig.
When I click Verify on the connection, verification fails with the generic error:
Failed to query service connection API: 'https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes'. Error Message: 'An error occurred while sending the request.'
Please note that the Kubernetes instance is in a different domain.
My suspicion is that the CA certs are not imported somewhere on the machine where Azure DevOps is hosted, but I am unsure where. The MS documentation is silent about that as well.
So far I've tried to:
Import the CA certs into the MMC under Trusted Publishers.
Import the CA certs into cacerts in JAVA_HOME via keytool.
Import the CA certs into azureTrustsStore.jks in JAVA_HOME via keytool.
For all three I've checked that the CA certs are imported correctly (the keytool commands had roughly the form sketched below), but to no avail. Could you please advise, or point me to the right method for doing this?
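Roughly (a sketch; the alias, file names, and the default changeit password are examples, and on JDK 8 cacerts lives under jre\lib\security instead):

# Import the cluster CA into the JVM default truststore
keytool -importcert -trustcacerts -alias k8s-ca -file ca.crt -keystore "%JAVA_HOME%\lib\security\cacerts" -storepass changeit

# Confirm the entry is present
keytool -list -alias k8s-ca -keystore "%JAVA_HOME%\lib\security\cacerts" -storepass changeit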
Additional Info:
While I cannot Verify and Save the connection, I can still Save it and then use it in the pipeline, and it works OK (it successfully connects and executes the command).
Connection issues can occur for many reasons, but the root cause is often related to one of these areas: network, authentication, or authorization. You may refer to Basic troubleshooting of cluster connection issues for detailed troubleshooting steps.
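One way to narrow down which of the three it is (a sketch, run from the machine hosting Azure DevOps Server; the URL is the one from the error message):

# -v prints the TLS handshake, so an untrusted CA or a hostname mismatch shows up immediately
curl -v https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes

# If the chain is the problem, retry against the CA bundle that signed the cluster certificate
curl --cacert ca.pem https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes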
I got the following error when trying to use a Notary client to get the digest of a signed image in my IBM Container Registry. Can anyone advise how to solve it?
# notary -s https://us.icr.io:4443 lookup us.icr.io/securek8s/hello-world latest
* fatal: unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.
BTW, I built the Notary client from https://github.com/theupdateframework/notary
Notary uses your credentials from your Docker login cache. The error message that you received suggests that your login to us.icr.io isn't valid. This usually means that your credentials have expired.
If you have the ibmcloud CLI and the container-registry plugin installed, you can refresh your login by making sure that you're targeting the US South registry (ibmcloud cr region-set us.icr.io) and then logging in with ibmcloud cr login.
If you don't have the CLI plugin installed, you can log in using Docker commands directly. For more information, see Automating access to IBM Cloud Container Registry in the IBM Cloud docs.
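A sketch of the full refresh sequence with the CLI plugin (registry target as in the answer above; image name taken from the question):

# Log in to IBM Cloud, target the US South registry, and refresh the Docker credentials
ibmcloud login
ibmcloud cr region-set us.icr.io
ibmcloud cr login

# Retry the lookup with the refreshed credentials
notary -s https://us.icr.io:4443 lookup us.icr.io/securek8s/hello-world latest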
I am trying to publish a Service Fabric service to my local cluster, but it never gets past this state:
There was an error during activation.Failed to configure certificate
permissions. Error: FABRIC_E_CERTIFICATE_NOT_FOUND
Do you know what this error is related to?
How can I fix it?
As the error says, Service Fabric is unable to find the required certificate in the certificate store. You can find info about the missing cert in the error events in Event Viewer:
%SystemRoot%\System32\Winevt\Logs\Microsoft-ServiceFabric%4Admin.evtx
Check using Certificate Manager whether this cert is present and not expired. You can also script the check (see the sketch below).
More info regarding the required certs can be found in this file:
C:\SfDevCluster\Data\_App\_Node_0\{AppNameFromSf}\App.1.0.xml
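A quick command-line check for the certificate reported in the event log (a sketch; certutil ships with Windows, and the thumbprint is a placeholder):

# Look up the certificate by thumbprint in the local machine Personal store
certutil -store My <thumbprint-from-event-log>

# List everything in the store if the thumbprint search comes up empty
certutil -store My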
I have a paid cluster with the IBM Cloud Kubernetes Service and a container / service deployed. I have a valid wildcard certificate which I imported into the Certificate Manager. Now I want to apply or deploy that certificate to my cluster:
bx cs alb-cert-deploy --secret-name henrik-xxxx --cluster henrik-bla-bla --cert-crn crn:v1:bluemix:public:cloudcerts:us-south:a/lotsofnumbers:certificate:morenumbers
The above command returns without an error. But when I check the certificate deployment with alb-cert-get, it reports "create_failed". I looked at the troubleshooting guide and tried to update and remove the certificate and secret, respectively. However, it seems the secret is still around and I cannot really remove it.
Are there command options I can use to get more diagnostic data? Any logs I can see? Any command I can use to clean up the environment?
There are several ways to debug the issue:
Use export BLUEMIX_TRACE=true; bx cs alb-cert-deploy ... to trace the command (a sketch follows below).
Use the Activity Tracker service and account-level events.
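For the first option, a minimal sketch using the names from the question (the CRN is elided as in the original command; check bx cs alb-cert-get --help for the exact flags of your CLI version):

# Enable client-side tracing and re-run the failing deploy
export BLUEMIX_TRACE=true
bx cs alb-cert-deploy --secret-name henrik-xxxx --cluster henrik-bla-bla --cert-crn <cert-crn>

# Re-check the deployment state
bx cs alb-cert-get --cluster henrik-bla-bla --secret-name henrik-xxxx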
In my case I could see the following in the Activity Tracker logs:
"responseData_str": "{\"code\":\"IAMERR403-01\",\"message\":\"Forbidden\"}",
It was part of an event related to:
"action_str": "cloudcerts.certificate.read",
"target": {
"name_str": "cloudcerts",
"id_str": "crn:v1:bluemix:public:cloudcerts:us-south:a/lotsofnumberhere::",
"typeURI_str": "certificate/read"
},
This points to an authorization issue: reading the certificate from Certificate Manager was forbidden for the identity performing the deployment.
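If that is the case, an IAM service-to-service authorization may be missing. A hedged sketch of granting it (the source and target service names and the role are assumptions; verify them against the IAM docs for your account):

# Allow the Kubernetes Service to read certificates from Certificate Manager
bx iam authorization-policy-create containers-kubernetes cloudcerts Reader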
I need some help with deploying a Service fabric app from Team Services to Azure.
I'm getting the following error from the agent in Team Services:
2018-06-22T13:17:13.3007613Z ##[error] An error occurred attempting to
import the certificate. Ensure that your service endpoint is
configured properly with a correct certificate value and, if the
certificate is password-protected, a valid password.
Error message: Exception calling "Import" with "3" argument(s):
"Cannot find the requested object.
Please advise.
Here is my Service Fabric Security page. I don't remember where I set up the password needed on the VSTS side, but I took note of it and believe it's correct.
Here is the Endpoint page on the VSTS side:
Issue resolved with the help of MS Support by creating a new certificate in the Key Vault and adding it to the Service Fabric cluster. Steps:
Azure Portal:
Home > Key vaults > YourKeyVault - Certificates: Generate/Import
Generate a new certificate with a CertificateName of your choosing and CN=CertificateName as the Subject.
Home > Key vaults > YourKeyVault - Certificates > CertificateName
Select the only version available and Download in PFX/PEM format.
PowerShell: convert the PFX to a Base64 string, CertificateBase64:
[System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("c:\YourCertificate.pfx"))
Home > YourServicefabric - Security: Add
Add the certificate you created as Admin Client by providing its thumbprint.
VSTS/TFS:
Build and release > Your pipeline: Edit
In the Deployment Process Service Fabric Environment, click Manage for Cluster Connection and add a new connection. Besides the other information, paste the previous CertificateBase64 into the Client Certificate field.
Check the Service Endpoint in VSTS:
Whether it has a properly Base64-encoded certificate, with a private key.
Whether the provided passphrase is correct.
Whether the service endpoint is configured as tcp://mycluster.region.cloudapp.azure.com:19000.
Whether the thumbprint is correct.
A sketch for verifying the certificate items locally follows below.
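A hedged way to verify the certificate items locally with openssl (file names and the passphrase are placeholders; this checks that the Base64 value decodes to a valid PFX, that the passphrase works, and what the thumbprint is):

# Decode the Base64 value stored in the service endpoint back into a PFX
base64 -d cert.b64 > cert.pfx

# Dump the contents; a private-key section confirms the key is embedded and the passphrase is correct
openssl pkcs12 -info -in cert.pfx -nodes -passin pass:<passphrase>

# Compare the SHA-1 fingerprint with the thumbprint configured for the cluster
openssl pkcs12 -in cert.pfx -clcerts -nokeys -passin pass:<passphrase> | openssl x509 -noout -fingerprint -sha1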