I have installed all the related certificates on my local system, but I am unable to figure out why this error keeps appearing. Could you please help with this issue?
After spending a lot of time investigating, I found that one of my certificates had expired. After installing a new cert on my local system, the error no longer appears.
I am creating a new Kubernetes service connection in Azure DevOps Server 2020 Update 1 via KubeConfig.
When I click Verify on the connection, it says Verification Failed with this generic error:
Failed to query service connection API: 'https://ekm.mpu.cz/k8s/clusters/c-qmcrb/api/v1/nodes'. Error Message: 'An error occurred while sending the request.'
Please note that the Kubernetes instance is in another domain.
My notion is that the certs are not imported somewhere on the machine where Azure DevOps is hosted, but I am unsure where. The MS documentation is silent about that as well.
So far I've tried to:
Import the CA certs into the MMC under Trusted Publishers.
Import the CA certs into cacerts in JAVA_HOME via keytool.
Import the CA certs into azureTrustsStore.jks in JAVA_HOME via keytool.
For all three I've checked that the CA certs were imported correctly, but to no avail. Could you please advise, or point me to the right method for doing this?
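For reference, here is roughly how I did the keytool imports and checks (a sketch; the alias, cert path, and store password are illustrative):

    # Import the CA cert into the JVM default trust store (alias and paths are examples)
    keytool -importcert -noprompt -alias my-internal-ca -file C:\certs\internal-ca.cer -keystore "$env:JAVA_HOME\lib\security\cacerts" -storepass changeit
    # Confirm the entry is present
    keytool -list -alias my-internal-ca -keystore "$env:JAVA_HOME\lib\security\cacerts" -storepass changeit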
Additional Info:
While I cannot Verify and Save the connection, I can still Save it and then use it in a pipeline, and it works OK (it successfully connects and executes the command).
Connection issues can occur for many reasons, but the root cause is often one of these: network, authentication, or authorization. You may refer to Basic troubleshooting of cluster connection issues for detailed troubleshooting steps.
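If you suspect the certificate side specifically, one simple first check is to inspect the chain that the endpoint from the error message presents, as seen from the machine hosting Azure DevOps (a sketch, assuming OpenSSL is available on that machine):

    # Show the certificate chain served by the cluster endpoint (type Q or Ctrl+C once it prints)
    openssl s_client -connect ekm.mpu.cz:443 -servername ekm.mpu.cz -showcerts
    # A non-zero "Verify return code" at the end points at a chain/trust problem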
Is it possible to have Wazuh Manager served through custom SSL certificates? The wazuh-certs-tool gives you self-signed certs, and every other way I've tried to serve it over SSL has failed.
The closest I've gotten to making this work: I had the dashboard served with a custom SSL certificate, and agents were connecting to it successfully and providing a heartbeat, but there were zero log flows or events. In that state, I saw API calls coming from what appeared to be a Java instance, erroring out with a complaint about the certificate it received. I also saw a keystore file located at /etc/wazuh-indexer. Do I need to add the root CA cert there as well?
It seems that the certificates your indexer expects do not match the certificates on your manager or dashboard.
If you follow the normal installation guide, it shows how and where to place the certificates created with the wazuh-cert-tool. But the certificates can be created from any other source, as long as they contain the expected information; you can check that information here.
I would recommend following the installation guide from scratch, to make sure you copy each expected certificate into its place and that the configuration files for your indexer, dashboard, and manager point to the correct files. All you would need to change is the creation of the certificates, so that you use your own custom certs.
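To check that a custom certificate carries the expected information and chains to the root CA the other components trust, you can inspect it with OpenSSL (a sketch; the file paths follow a common layout and may differ on your installation):

    # Print subject, issuer, and SANs of the indexer certificate
    openssl x509 -in /etc/wazuh-indexer/certs/indexer.pem -noout -subject -issuer -ext subjectAltName
    # Verify it chains to the root CA distributed to the manager and dashboard
    openssl verify -CAfile /etc/wazuh-indexer/certs/root-ca.pem /etc/wazuh-indexer/certs/indexer.pem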
In case of further doubt, do not hesitate to ask.
I have implemented SSO login to argocd through Active Directory.
When I try to access argocd, I get this error:
Your connection is not private
Attackers might be trying to steal your information from argo-cd.daa.pks.dell.com (for example, passwords, messages, or credit cards). Learn more
NET::ERR_CERT_AUTHORITY_INVALID
When I check the logs of the argocd pod, I see this error:
finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = no session information" grpc.code=Unauthenticated grpc.method=List grpc.service=application.ApplicationService grpc.start_time="2022-05-02T02:06:34Z" grpc.time_ms=5.178 span.kind=server system=grpc
But when I open Argo Workflows and try to open argocd from there, it works.
Please help me understand what the issue is.
You have to use a certificate trusted by a certificate authority (Let's Encrypt, for example, if you want to use it on the internet).
Example (sorry, it's in French, but you'll get the point):
https://blog.blaisot.org/letsencrypt-wildcard-part1.html
https://blog.blaisot.org/letsencrypt-wildcard-part2.html
However, if it's on your enterprise network, just request an SSL certificate from your certificate authority and use it (https://argo-cd.readthedocs.io/en/stable/operator-manual/tls/).
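Per the Argo CD TLS docs linked above, the API server reads its certificate from the argocd-server-tls secret; a minimal sketch, assuming your CA-issued cert and key are in tls.crt and tls.key:

    # Hand the CA-issued certificate to Argo CD's API server
    kubectl -n argocd create secret tls argocd-server-tls --cert=tls.crt --key=tls.key
    # On older versions the server may need a restart to pick it up
    kubectl -n argocd rollout restart deployment argocd-server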
You can also disable TLS to avoid this kind of error, if you want.
Bguess
I am trying to publish a Service Fabric service to my local cluster, but it never gets out of this state:
There was an error during activation. Failed to configure certificate permissions. Error: FABRIC_E_CERTIFICATE_NOT_FOUND
Do you know what this error is related to?
How can I fix it?
As the error says, SF is unable to find the required cert in the cert store. You can find the missing cert info in the error logs in Event Viewer:
%SystemRoot%\System32\Winevt\Logs\Microsoft-ServiceFabric%4Admin.evtx
Check using Certificate Manager that this cert is present and not expired. You can also use this script.
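A quick PowerShell check along these lines can confirm both (a sketch; replace the thumbprint with the one from the event log entry):

    # Look up the cert by thumbprint in the local machine store and show its expiry
    Get-ChildItem -Path Cert:\LocalMachine\My |
        Where-Object { $_.Thumbprint -eq 'REPLACE_WITH_THUMBPRINT' } |
        Select-Object Subject, NotAfter, HasPrivateKey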
More info regarding the required certs can be found in this file:
C:\SfDevCluster\Data\_App\_Node_0\{AppNameFromSf}\App.1.0.xml
I have a cluster that I've been using without issue for a few months. It has two certificates, created at about the same time, so neither is anywhere near expiry. Last week, builds stopped working due to a certificate error (CertificateNotMatched) on deployment. At the same time, I was unable to access the cluster or publish from Visual Studio. I checked the cert, and it appears not to be revoked or invalid in any way. The secondary certificate is still working fine.
What could cause a certificate to stop working spontaneously? Is there any fix, apart from just getting a new certificate when this occurs?
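For reference, this is roughly how I checked it (a sketch; mycert.cer is the certificate I exported from the node's store):

    # Verify the chain and fetch fresh revocation info for the exported certificate
    certutil -verify -urlfetch mycert.cer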