Framework error: code: 60 reason: SSL certificate problem: unable to get local issuer certificate in Solaris 11.3

pkg set-publisher: The origin URIs for 'solarisstudio' do not appear to point to a valid pkg repository.
Please verify the repository's location and the client's network configuration.
Additional details:
Unable to contact valid package repository: https://pkg.oracle.com/solarisstudio/release
Encountered the following error(s):
Transport errors encountered when trying to contact repository.
Reported the following errors:
Framework error: code: 60 reason: SSL certificate problem: unable to get local issuer certificate
URL: 'https://pkg.oracle.com/solarisstudio/release'

1. Make sure that the ca-certificates service is running on Solaris:
svcs -xv
If it is not running, try restarting it with the commands below:
svcadm disable svc:/system/ca-certificates:default
svcadm enable svc:/system/ca-certificates:default
2. If the above does not work, take a backup of all the certificates under /etc/certs/CA. Then check for a corrupted certificate by moving the certificates back into /etc/certs/CA one by one and restarting the ca-certificates service after each one. The certificate after which the service fails to start is the corrupted one.
Make sure that the certificates in that location have the following ownership:
sudo chown root:sys /etc/certs/CA/*.pem
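The one-by-one restore in step 2 can be sketched as a small loop. This is a hedged sketch, not an official procedure: the backup directory /var/tmp/CA.backup is a placeholder, and the Solaris-only service commands are guarded so the loop simply copies files where svcadm is unavailable.

```shell
# Placeholders: override CA_DIR/BACKUP_DIR to rehearse against scratch dirs.
CA_DIR="${CA_DIR:-/etc/certs/CA}"
BACKUP_DIR="${BACKUP_DIR:-/var/tmp/CA.backup}"
SVC="svc:/system/ca-certificates:default"

for cert in "$BACKUP_DIR"/*.pem; do
  [ -f "$cert" ] || continue              # empty backup dir: nothing to do
  cp "$cert" "$CA_DIR/" || continue
  if command -v svcadm >/dev/null 2>&1; then
    chown root:sys "$CA_DIR/${cert##*/}"  # ownership from the answer above
    svcadm restart "$SVC"
    # If the service no longer comes online, the last copied cert is suspect.
    if [ "$(svcs -H -o state "$SVC")" != "online" ]; then
      echo "suspect certificate: ${cert##*/}"
      break
    fi
  fi
done
```

Restarting after every single certificate is slow but unambiguous; it pinpoints exactly which file breaks the service.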

Related

Azure DevOps on-premises cannot verify Kubernetes service connection

I am creating a new Kubernetes service connection in Azure DevOps Server 2020 Update 1 via KubeConfig.
When I click Verify connection, it says Verification Failed with the generic error:
Failed to query service connection API: 'https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes'. Error Message: 'An error occurred while sending the request.'
Please note that the Kubernetes instance is in another domain.
I suspect the problem is that the certificates are not imported somewhere on the machine where Azure DevOps is hosted, but I am unsure where. The MS documentation is silent about that as well.
So far I've tried to:
Import CA certs into the MMC under Trusted Publishers.
Import CA certs under cacerts in JAVA_HOME via keytool.
Import CA certs into azureTrustsStore.jks in JAVA_HOME via keytool.
For all three I've checked that the CA certs were imported correctly, but to no avail. Could you please advise or point me to the right method?
Additional Info:
While I cannot Verify and Save the connection, I can still Save it and then use it in the pipeline, and it works OK (successfully connects and executes the command).
Connection issues can occur for many reasons, but the root cause is often related to one of these items: network, authentication, or authorization. You may refer to Basic troubleshooting of cluster connection issues for detailed troubleshooting steps.
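One way to separate the network case from the certificate case is to replay the verification request by hand from the machine hosting Azure DevOps. This is a hedged sketch: the CA bundle path is a placeholder you must point at the cluster's CA, and the URL is the one from the error message in the question.

```shell
# Placeholder path: export CA_BUNDLE to point at the cluster's CA certificate.
CA_BUNDLE="${CA_BUNDLE:-/path/to/cluster-ca.crt}"
URL="https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes"

# -v prints the TLS handshake, which distinguishes certificate failures from
# plain connectivity failures; the fallback message keeps this usable offline.
curl -sv --max-time 10 --cacert "$CA_BUNDLE" "$URL" \
  || echo "request failed: check DNS, firewall, and the CA bundle path"
```

If this succeeds from the Azure DevOps host but the Verify button still fails, the problem is more likely in the service-connection configuration than in the machine's trust store.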

How to make OpenSearch Dashboard allow self-signed certs for OpenID Connect URLs?

The problem is that the OpenID Connect URL I'm trying to reach uses self-signed certs. The plugin securityDashboards doesn't seem to like that:
Error: unable to verify the first certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1088:34)
    at TLSSocket.emit (events.js:198:13)
    at TLSSocket._finishInit (_tls_wrap.js:666:8)
  code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
...
Client request error: unable to verify the first certificate
Since this seems to be a JavaScript error, my first approach was to point npm to the same keystore that curl uses, which has no problem with the URL, via npm config set cafile /etc/ssl/certs/ca-certificates.crt
When that didn't work, I tried to disable SSL verification altogether just to see if it would work, via npm config set strict-ssl false
That failed too, so I read the docs about certificate validation and tried to set pemtrustedcas_filepath to the keystore above... didn't work.
Then I tried to download the cert and use pemtrustedcas_content, but that didn't work either.
Out of options. Thanks for any suggestion!
Setting opensearch_security.openid.root_ca: /etc/ssl/certs/ca-certificates.crt in opensearch_dashboards.yml worked for me.
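For reference, here is how the working setting from the answer above would appear as a config fragment (the path is the same system CA bundle the question already used with curl and npm):

```yaml
# opensearch_dashboards.yml — point the OpenID plugin at the system CA bundle
opensearch_security.openid.root_ca: /etc/ssl/certs/ca-certificates.crt
```

Note that this trusts the bundle for the OpenID Connect URL specifically, so TLS verification stays enabled rather than being switched off globally.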

Installation error with Linkerd service mesh in AKS

I have followed the getting-started instructions here: https://linkerd.io/2/getting-started/ for installing Linkerd, but I am not able to install the Linkerd CLI.
Please see the command below: curl -sL https://run.linkerd.io/install | sh
Please see the error below:
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option
Can anyone please help me solve this?
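If the self-signed certificate in the chain comes from a proxy that re-signs TLS traffic, one hedged approach is to keep verification on and hand curl the proxy's root CA via the --cacert option mentioned in the error text. The bundle path below is a placeholder, not a real default:

```shell
# Placeholder: export CA_BUNDLE to the root certificate of whatever is
# intercepting TLS on your network.
CA_BUNDLE="${CA_BUNDLE:-/usr/local/share/ca-certificates/corp-root.crt}"

# Download the installer to a file first instead of piping straight to sh,
# so a failed or partial download is never executed.
curl -sL --max-time 10 --cacert "$CA_BUNDLE" \
    https://run.linkerd.io/install -o /tmp/linkerd-install.sh \
  && sh /tmp/linkerd-install.sh \
  || echo "download failed: verify the CA bundle path and network access"
```

This keeps certificate checking intact, unlike the -k/--insecure option the error message also mentions.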
The installation instructions provided on the Linkerd website are indeed vague: they give instructions for Linux (shell) users as well as a brew install command for macOS users.
If you are interested in installing Linkerd on a Windows machine, the recommendation is to download the binary (.exe for Windows) directly from their releases page: https://github.com/linkerd/linkerd2/releases
After you have downloaded the binary, update your %PATH% environment variable to add the location of the binary; this will allow you to run linkerd directly from your command prompt.
Linkerd started supporting Windows with a Chocolatey package: https://chocolatey.org/packages/Linkerd2
To use it, make sure that you have Chocolatey installed and run:
choco install linkerd2
After the installation, verify that the install was successful with:
linkerd --help
You should see the list of commands available to the Linkerd CLI.

GitLab runner fails to use cache with MinIO

I installed a self-hosted Gitlab using the Helm chart on a Kubernetes cluster.
Everything is working fine except one thing: the cache.
In my .gitlab-ci.yml file I have
cache:
  paths:
    - .m2/repository/
    - target/
But when running the job I have this warning when trying to download the cache:
WARNING: Retrying...
error=Get https://minio.mydomain.com/runner-cache/gitlab-runner/project/6/default?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxx: x509: certificate signed by unknown authority
And when uploading I have:
WARNING: Retrying... error=received: 501 Not Implemented
Uploading cache.zip to https://minio.mydomain.com/runner-cache/gitlab-runner/project/6/default
FATAL: received: 501 Not Implemented
But the certificate is issued by Let's Encrypt, so it is not an unknown authority. When I go to minio.mydomain.com I can see that the connection is secure.
I've also checked that the runner is using the right credentials, and it is.
I'm kind of lost here. Any hint is welcome.
Thanks.
You need to add the CA to the image that is hosting the cache.
You can follow these instructions from this GitLab issue as a workaround:
Update the helper image so that the CA chain for the self-signed certificate is trusted:
FROM gitlab/gitlab-runner-helper:x86_64-latest
RUN apk add --no-cache ca-certificates
COPY ca.crt /usr/local/share/ca-certificates/ca.crt
RUN update-ca-certificates
RUN rm /usr/local/share/ca-certificates/ca.crt
docker build -t registry.gitlab.com/namespace/project/tools/gitlab-runner-helper:$SOME_TAG .
Override the helper image used by GitLab by updating the config.toml to use the image you just built, which has the correct CA trusted.
If you are using the Helm chart, you can set the helper image through the runner configuration (the helper_image key under [runners.kubernetes]) and define any required environment variables in envVars.
Hope this helps.
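A sketch of what the config.toml override could look like, assuming the Kubernetes executor; the image name is the example tag built above, not a real registry path:

```toml
# config.toml (runner using the Kubernetes executor)
[[runners]]
  [runners.kubernetes]
    # Custom helper image with the self-signed CA baked in
    helper_image = "registry.gitlab.com/namespace/project/tools/gitlab-runner-helper:some-tag"
```

With this in place, cache uploads and downloads run inside the custom helper image, so the MinIO certificate chain is trusted during both operations.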

Service Fabric not starting service Error: FABRIC_E_CERTIFICATE_NOT_FOUND

I am trying to publish a Service Fabric service to my local cluster, but it never goes out of this state:
There was an error during activation. Failed to configure certificate
permissions. Error: FABRIC_E_CERTIFICATE_NOT_FOUND
Do you know what is this error related to?
How can I fix it?
As the error says, SF is unable to find the required cert in the certificate store. You can find the missing cert's details in the error events in Event Viewer:
%SystemRoot%\System32\Winevt\Logs\Microsoft-ServiceFabric%4Admin.evtx
Check using Certificate Manager that this cert is present and not expired. You can use this script as well.
More info regarding the required certs can be found in this file:
C:\SfDevCluster\Data\_App\_Node_0\{AppNameFromSf}\App.1.0.xml
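As a quick cross-check (hedged sketch; assumes Windows with a POSIX-style shell such as Git Bash, and an elevated prompt), certutil can list the machine's personal store so you can compare thumbprints against those referenced in the file above. The guard lets the snippet no-op on non-Windows hosts:

```shell
# certutil.exe ships with Windows; elsewhere this just prints a note.
if command -v certutil.exe >/dev/null 2>&1; then
  CERT_LIST=$(certutil.exe -store My)   # LocalMachine\My certs with thumbprints
  printf '%s\n' "$CERT_LIST"
else
  CERT_LIST="certutil.exe not found (not a Windows host)"
  echo "$CERT_LIST"
fi
```

If the thumbprint from the manifest is missing or the cert has expired, importing or renewing it in LocalMachine\My should clear FABRIC_E_CERTIFICATE_NOT_FOUND.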