Azure pipeline - how to pass certificate to pipeline agent - azure-devops

I'm trying to push a Docker image to a registry on Nexus using an Azure pipeline.
My Nexus instance uses a self-signed certificate.
When I tried to push, I got the following error:
x509: certificate signed by unknown authority
Since I don't have root privileges on the pipeline agent,
I CANNOT (for example) create a 'command line script' task to run these commands:
openssl s_client -showcerts -connect myserver:port < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/local/share/ca-certificates/ca.crt
update-ca-certificates -f
How can I make the agent trust my self-signed certificate?

Generally, we can use Secure Files to store signing certificates. Secure files are defined and managed in the Library tab in Azure Pipelines.
Then we can use the Download Secure File task to consume secure files within a build or release pipeline. You can give that a try.
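For illustration, here is a minimal YAML sketch of that flow; the secure-file name nexus-ca.crt is a placeholder, and myserver:port stands for the registry host from the question. Note that Microsoft-hosted Linux agents do allow passwordless sudo, and Docker reads per-registry CA certificates from /etc/docker/certs.d/, so something like this may work even on a hosted agent:
steps:
- task: DownloadSecureFile@1
  name: nexusCa                       # exposes $(nexusCa.secureFilePath)
  inputs:
    secureFile: nexus-ca.crt          # placeholder: the CA cert uploaded to Secure Files
- script: |
    # Docker trusts per-registry CAs placed under /etc/docker/certs.d/<host:port>/
    sudo mkdir -p /etc/docker/certs.d/myserver:port
    sudo cp "$(nexusCa.secureFilePath)" /etc/docker/certs.d/myserver:port/ca.crt
  displayName: Trust the Nexus self-signed certificate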
If that doesn't work, I am afraid you will have to set up a self-hosted agent.

Related

Artifactory: x509 certificate signed by unknown authority

Our working environment is Azure DevOps. We currently use PowerShell scripts to upload our artifacts to Artifactory.
Right now we are trying to download those artifacts using the Artifactory service connection in Azure DevOps instead of the PowerShell script, and we are getting the error
[error] Post https://art..... : x509 certificate signed by unknown authority
How can we solve this issue?
It seems that you are accessing Artifactory via HTTPS with a self-signed certificate, so the Artifactory service connection does not trust the certs. I would recommend referring to this JFrog Wiki and adding the certs to the trusted directory of the JFrog CLI, which is used by most of the Artifactory Azure tasks.
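As a hedged sketch (your_artifactory_host is a placeholder; this assumes a recent JFrog CLI, which picks up extra trusted CAs from the security/certs directory under its home directory):
# Extract the server's certificate chain
openssl s_client -showcerts -connect your_artifactory_host:443 </dev/null \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > artifactory-ca.pem
# Drop it where the JFrog CLI looks for trusted CAs
mkdir -p ~/.jfrog/security/certs
mv artifactory-ca.pem ~/.jfrog/security/certs/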

Rundeck Job Notification Webhook Certificate Issue

I have a job that sends a notification on success or on failure. It uses the Webhook option. The webhook is a Rundeck API call that executes a job.
Here is my notification setup.
I checked rundeck.log; it has the following error:
ERROR services.NotificationService [quartzScheduler_Worker-6] - Notification failed [onsuccess,succeeded,238621]; URL https://client-dns/api/33/job/cd3b3a1b-90c9-4c99-bf29-46c5aad1b4ff/run?authtoken=6XpW50hvZoPUTtlwucKGJ7ERKOxeJCTR&option.rd_exec_id=238621: Unable to POST notification after 1 tries: success for execution 238621 (succeeded): Error making request: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
How can I fix this issue? I already have a certificate, but how can I tell Rundeck to use it? Thanks in advance to those who'll help me.
You need to add the webhook service certificate to the Java cacerts truststore to make it recognizable by Rundeck. Alternatively, if you are using Rundeck over SSL, you can add that certificate to the Rundeck truststore file in the following way:
Stop the Rundeck Service.
Extract the service certificate:
echo -n | openssl s_client -connect your_service_host:your_service_port | openssl x509 > cert.out
Add it to your Rundeck truststore file:
keytool -importcert -trustcacerts -file cert.out -alias my_service -keystore your/path/to/rundeck/truststore
Start Rundeck service.
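To verify the import, and to point the Rundeck JVM at the truststore if it is not picked up automatically (the RDECK_JVM_OPTS variable name is an assumption; check your rundeckd profile for where JVM options are set):
# Confirm the certificate landed in the truststore
keytool -list -keystore your/path/to/rundeck/truststore -alias my_service
# Standard JVM trust-store flag, set wherever rundeckd reads JVM options
export RDECK_JVM_OPTS="-Djavax.net.ssl.trustStore=your/path/to/rundeck/truststore"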

GitLab runner fails to use cache with MinIO

I installed a self-hosted GitLab instance using the Helm chart on a Kubernetes cluster.
Everything is working fine except one thing: the cache.
In my .gitlab-ci.yml file I have:
cache:
  paths:
    - .m2/repository/
    - target/
But when running the job I have this warning when trying to download the cache:
WARNING: Retrying...
error=Get https://minio.mydomain.com/runner-cache/gitlab-runner/project/6/default?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxx: x509: certificate signed by unknown authority
And when uploading I have:
WARNING: Retrying... error=received: 501 Not Implemented
Uploading cache.zip to https://minio.mydomain.com/runner-cache/gitlab-runner/project/6/default
FATAL: received: 501 Not Implemented
But the certificate is issued by Let's Encrypt, so it's not an unknown authority. When I browse to minio.mydomain.com I can see that the connection is secure.
I've also checked that the runner is using the right credentials, and it is.
I'm kind of lost here. Any hints are welcome.
Thanks.
You need to add the CA to the helper image that handles the cache.
You can follow these instructions from this GitLab issue for a workaround:
Update the helper image so that the CA chain for the certificate is trusted.
FROM gitlab/gitlab-runner-helper:x86_64-latest
RUN apk add --no-cache ca-certificates
COPY ca.crt /usr/local/share/ca-certificates/ca.crt
# update-ca-certificates merges ca.crt into the system bundle
RUN update-ca-certificates
# the merged bundle keeps the cert, so the standalone copy can be removed
RUN rm /usr/local/share/ca-certificates/ca.crt
docker build -t registry.gitlab.com/namespace/project/tools/gitlab-runner-helper:$SOME_TAG .
Override the helper image used by GitLab Runner by updating config.toml to use the image you just built, which has the correct CA trusted.
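For the Kubernetes executor, that override lives under [runners.kubernetes]; a minimal sketch, reusing the image reference from the build command above:
[[runners]]
  [runners.kubernetes]
    # helper image with the extra CA baked in
    helper_image = "registry.gitlab.com/namespace/project/tools/gitlab-runner-helper:$SOME_TAG"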
If you are using the Helm chart, you can set the KUBERNETES_HELPER_IMAGE environment variable and define it in envVars.
Hope this helps.

How do I connect to a secure cluster from YAML pipeline?

That's it. Plain and simple.
The first step in my pipeline is to remove services that are no longer supported. To do that I need to use Connect-ServiceFabricCluster to connect to the cluster, but that requires a certificate installed on the local machine. I won't have a local machine in a hosted pipeline, and installing the certificate on the hosted VM is a problem for security reasons.
So how do I connect?
1.
I don't know if you have tried the Azure CLI command sfctl cluster select, which allows you to specify a certificate; check here for more information.
In order to use the certificate in your pipeline, go to the Library under Pipelines, click Secure files, and add your certificate. Make sure Authorize for use in all pipelines is checked when adding it.
Then you can add a Download secure file task to download your certificate in your pipeline.
Then you can consume it in the next task by referring to the download location "$(Agent.TempDirectory)\yourcertificatefilename"; check here for more information:
sfctl cluster select --endpoint https://testsecurecluster.com:19080 --cert "$(Agent.TempDirectory)\yourcertificatefilename" --key ./keyfile.key
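Put together in pipeline YAML, the flow might look like this sketch (the secure-file name is a placeholder; --pem points sfctl at a combined PEM certificate, and --no-verify optionally skips server-certificate validation for self-signed endpoints):
steps:
- task: DownloadSecureFile@1
  name: clusterCert                    # exposes $(clusterCert.secureFilePath)
  inputs:
    secureFile: mycluster.pem          # placeholder: client certificate in PEM form
- script: |
    sfctl cluster select --endpoint https://testsecurecluster.com:19080 \
      --pem "$(clusterCert.secureFilePath)" --no-verify
  displayName: Connect to the Service Fabric cluster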
2.
If the above sfctl cluster select approach does not work, you can install the certificate that you already uploaded on the hosted agent with a PowerShell task:
Import-Certificate -FilePath "$(Agent.TempDirectory)\yourcertificatefilename" -CertStoreLocation cert:\LocalMachine\Root
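Wrapped in a pipeline task, that might look like the following sketch (inline script; note that Import-Certificate handles .cer/.crt files, while a .pfx would need Import-PfxCertificate instead):
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      Import-Certificate -FilePath "$(Agent.TempDirectory)\yourcertificatefilename" `
        -CertStoreLocation cert:\LocalMachine\Root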
3.
If the hosted agent raises security concerns, you can create your own self-hosted agent on your local machine and install the certificate on that on-premises agent.
To create a self-hosted agent:
You need to get a PAT scoped to Agent Pools; click here for detailed steps. You will need the PAT to configure your self-hosted agent later.
Then go to Project Settings, select Agent pools under Pipelines, create an agent pool if you do not have one, then select your pool, click New agent, and follow the steps to create your own agent (a sketch of the Linux-side commands follows below).
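On a Linux machine, the agent-side commands might look like this sketch (organization URL, pool, and agent names are placeholders):
# after downloading and extracting the agent package
./config.sh --url https://dev.azure.com/yourorganization \
            --auth pat --token $MY_PAT \
            --pool MyPool --agent MyAgent
# then run the agent interactively (or install it as a service)
./run.sh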
Hope the above is helpful!

Azure Management Certificate is forbidden (ForbiddenError)

My goal is to use a PowerShell script to get an instance of an Azure hosted service with Get-HostedService.
As preparation, I created a self-signed certificate and uploaded it as a management certificate via the Azure Management portal, according to http://msdn.microsoft.com/en-us/library/gg551722.aspx.
After that:
I can see my certificate and its thumbprint in PS cert:\CurrentUser\My.
The thumbprint matches the thumbprint shown in manage.windowsazure.com > Settings > Management Certificates.
My test script just tries to create a $service instance and goes like this:
Add-PSSnapin WAPPSCmdlets
$cert = Get-Item cert:\CurrentUser\My\<thumbprint of certificate>
$service = Get-HostedService <my service name> -Certificate $cert -SubscriptionId <my subscription id>
The third line (the Get-HostedService command) results in:
Get-HostedService : HTTP Status Code: ForbiddenError - HTTP Error Message: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription.
The machine (A) I am trying to do this on is behind a corporate firewall. According to our IT department, there is no event in the firewall log that would indicate a problem. The same script works on a different machine (B) that is behind another firewall (in a DMZ). Both machines have access to the internet.
I tried using the same certificate (exported from machine A and imported into machine B) as well as creating a new certificate for machine A.
On both machines, double-clicking the certificate (in the MMC snap-in) shows:
This CA Root certificate is not trusted. To enable trust, install this certificate in the Trusted Root Certification Authorities store
and
You have a private key that corresponds to this certificate
I tried to add the certificate to "Trusted Root CAs" but that did not help.
The certificate is
sha1 RSA 2048 bits
not expired
What can I do to solve the problem or get more debugging information?