Keycloak-gatekeeper: 'aud' claim and 'client_id' do not match - keycloak

What is the correct way to set the aud claim to avoid the error below?
unable to verify the id token {"error": "oidc: JWT claims invalid: invalid claims, 'aud' claim and 'client_id' do not match, aud=account, client_id=webapp"}
I sort of worked around this error by hardcoding the aud claim to be the same as my client_id. Is there a better way?
Here is my docker-compose.yml:
version: '3'
services:
  keycloak-proxy:
    image: "keycloak/keycloak-gatekeeper"
    environment:
      - PROXY_LISTEN=0.0.0.0:3000
      - PROXY_DISCOVERY_URL=http://keycloak.example.com:8181/auth/realms/realmcom
      - PROXY_CLIENT_ID=webapp
      - PROXY_CLIENT_SECRET=0b57186c-e939-48ff-aa17-cfd3e361f65e
      - PROXY_UPSTREAM_URL=http://test-server:8000
    ports:
      - "8282:3000"
    command:
      - "--verbose"
      - "--enable-refresh-tokens=true"
      - "--enable-default-deny=true"
      - "--resources=uri=/*"
      - "--enable-session-cookies=true"
      - "--encryption-key=AgXa7xRcoClDEU0ZDSH4X0XhL5Qy2Z2j"
  test-server:
    image: "test-server"

Since Keycloak version 4.6.0, the client ID is apparently no longer automatically added to the audience field 'aud' of the access token.
Therefore, even though the login succeeds, the client rejects the user.
To fix this you need to configure the audience for your clients (compare doc [2]).
Configure audience in Keycloak
1. Add a realm or configure an existing one.
2. Add the client my-app or use an existing one.
3. Go to the newly added "Client Scopes" menu [1].
4. Add a client scope 'good-service'.
5. Within the settings of 'good-service', go to the Mappers tab and create a Protocol Mapper 'my-app-audience' with:
   Name: my-app-audience
   Mapper Type: Audience
   Included Client Audience: my-app
   Add to access token: on
6. Configure the client my-app in the "Clients" menu: on the Client Scopes tab of the my-app settings, add the available client scope "good-service" to the assigned default client scopes.
If you have more than one client, repeat these steps for the other clients as well and add the good-service scope.
The intention behind this is to isolate client access. The issued access token will only be valid for the intended audience.
This is thoroughly described in Keycloak's documentation [1,2].
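To check that the mapper actually took effect, you can request a token and introspect it. The following is only a sketch: it assumes the Direct Access Grants flow is enabled for the client and uses a hypothetical test user; host, realm, client and secret are taken from the compose file in the question.
# Request a token for the 'webapp' client (the test user credentials are placeholders).
TOKEN=$(curl -s "http://keycloak.example.com:8181/auth/realms/realmcom/protocol/openid-connect/token" \
  -d grant_type=password -d client_id=webapp \
  -d client_secret=0b57186c-e939-48ff-aa17-cfd3e361f65e \
  -d username=testuser -d password=testpassword | jq -r .access_token)
# Introspect it; after the audience mapper is assigned, 'aud' should include your client.
curl -s "http://keycloak.example.com:8181/auth/realms/realmcom/protocol/openid-connect/token/introspect" \
  -u "webapp:0b57186c-e939-48ff-aa17-cfd3e361f65e" -d "token=${TOKEN}" | jq .aud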
Links to recent master version of keycloak documentation:
[1] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/client-scopes.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/oidc/audience.adoc
Links with git tag:
[1] https://github.com/keycloak/keycloak-documentation/blob/5e340356e76a8ef917ef3bfc2e548915f527d093/server_admin/topics/clients/client-scopes.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/f490e1fba7445542c2db0b4202647330ddcdae53/server_admin/topics/clients/oidc/audience.adoc

This is due to a bug: https://issues.jboss.org/browse/KEYCLOAK-8954
There are two workarounds described in the bug report; both appear to do basically the same thing as the accepted answer here, but they can be applied to the Client Scope role, so you don't have to apply them to every client individually.

If, like me, you want to automate the Keycloak configuration, you can use kcadm:
/opt/jboss/keycloak/bin/kcadm.sh \
create clients/d3170ee6-7778-413b-8f41-31479bdb2166/protocol-mappers/models -r your-realm \
-s name=audience-mapping \
-s protocol=openid-connect \
-s protocolMapper=oidc-audience-mapper \
-s config.\"included.client.audience\"="your-audience" \
-s config.\"access.token.claim\"="true" \
-s config.\"id.token.claim\"="false"

It works for me:
In my SecurityConfiguration class:
@Bean
public CorsConfigurationSource corsConfigurationSource() {
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    config.setAllowedOrigins(Arrays.asList("http://localhost:3000"));
    config.setAllowedMethods(Arrays.asList(CorsConfiguration.ALL));
    config.setAllowedHeaders(Arrays.asList(CorsConfiguration.ALL));
    source.registerCorsConfiguration("/**", config);
    return source;
}

Related

Using Keycloak for defining subjects in policies in Eclispe Ditto

My current use case is: I have a frontend application where a user is logged in via Keycloak. I would like to implement some parts of the Ditto HTTP API in this frontend (https://www.eclipse.org/ditto/http-api-doc.html).
For example I want to create policies (https://www.eclipse.org/ditto/basic-policy.html) for authorization. I've read in the documentation that one can use an OpenID Connect compliant provider and the form is : (https://www.eclipse.org/ditto/basic-policy.html#who-can-be-addressed).
There's a basic auth example at the bottom of the page; it seems to use the username in this case.
{
  "policyId": "my.namespace:policy-a",
  "entries": {
    "owner": {
      "subjects": {
        "nginx:ditto": {
          "type": "nginx basic auth user"
        }
      },
      ...
    }
My question is: What exactly would be the sub-claim if I want to use Keycloak? Is it also the username of the user I want to grant rights to? And how would I get this in my frontend where I want to specify the policy for sending it to Ditto afterwards?
UPDATE 1:
I tried to enable keycloak authentication in Ditto like suggested below and as stated here: https://www.eclipse.org/ditto/installation-operating.html#openid-connect
Because I'm running Ditto with Docker Compose, I added the following line as an environment variable in ditto/deployment/docker/docker-compose.yml in line 136: - Dditto.gateway.authentication.oauth.openid-connect-issuers.keycloak=http://localhost:8090/auth/realms/twin
This URL is the same as in the issuer claim of my token which I'm receiving from keycloak.
Now if I try to make for example a post request with Postman to {{basePath}}/things I get the following error:
<html>
<head>
<title>401 Authorization Required</title>
</head>
<body bgcolor="white">
<center>
<h1>401 Authorization Required</h1>
</center>
<hr>
<center>nginx/1.13.12</center>
</body>
</html>
I chose Bearer Token as Auth in Postman and pasted a fresh token. Basic Auth with the default ditto user is still working.
Do I have to specify the new subject/my user in Ditto before?
UPDATE 2:
I managed to turn basic auth in nginx off by commenting out "auth_basic" and "auth_basic_user_file" in nginx.conf!
It seems to be forwarded to Ditto now, because now I get the following error with Postman:
{
  "status": 401,
  "error": "gateway:jwt.issuer.notsupported",
  "message": "The JWT issuer 'localhost:8090/auth/realms/twin' is not supported.",
  "description": "Check if your JWT is correct."
}
UPDATE 3:
My configuration in gateway.conf looks now like this:
oauth {
  protocol = "http"
  openid-connect-issuers = {
    keycloak = "localhost:8090/auth/realms/twin"
  }
}
I also tried to add these two lines in the docker-compose.yml:
- Dditto.gateway.authentication.oauth.protocol=http
- Dditto.gateway.authentication.oauth.openid-connect-issuers.keycloak=localhost:8090/auth/realms/twin
Unfortunately I still had no luck, same error as above :/ It seems a user had a similar problem with Keycloak before (https://gitter.im/eclipse/ditto?at=5de3ff186a85195b9edcb1a6), but sadly no solution was mentioned.
EDIT: It turns out that I specified these variables in the wrong way; the correct solution is to add them as part of command: java ... more info here
UPDATE 4:
I tried to build Ditto locally instead of using the latest Docker images, and I think I'm one step further now; it seems my oauth config is working. Now I get:
{
  "status": 503,
  "error": "gateway:publickey.provider.unavailable",
  "message": "The public key provider is not available.",
  "description": "If after retry it is still unavailable, please contact the service team."
}
The error message from the log is:
gateway_1 | 2020-11-05 15:33:18,669 WARN [] o.e.d.s.g.s.a.j.DittoPublicKeyProvider - Got Exception from discovery endpoint <http://localhost:8090/auth/realms/twin/.well-known/openid-configuration>.
gateway_1 | akka.stream.StreamTcpException: Tcp command [Connect(localhost:8090,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
gateway_1 | Caused by: java.net.ConnectException: Connection refused
...
gateway_1 | java.util.concurrent.CompletionException: org.eclipse.ditto.services.gateway.security.authentication.jwt.PublicKeyProviderUnavailableException [message='The public key provider is not available.', errorCode=gateway:publickey.provider.unavailable, statusCode=SERVICE_UNAVAILABLE, description='If after retry it is still unavailable, please contact the service team.', href=null, dittoHeaders=ImmutableDittoHeaders [{}]]
...
gateway_1 | Caused by: org.eclipse.ditto.services.gateway.security.authentication.jwt.PublicKeyProviderUnavailableException [message='The public key provider is not available.', errorCode=gateway:publickey.provider.unavailable, statusCode=SERVICE_UNAVAILABLE, description='If after retry it is still unavailable, please contact the service team.', href=null, dittoHeaders=ImmutableDittoHeaders [{}]]
...
gateway_1 | Caused by: akka.stream.StreamTcpException: Tcp command [Connect(localhost:8090,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
gateway_1 | Caused by: java.net.ConnectException: Connection refused
My Keycloak is definitely running; I'm able to get tokens. If I open http://localhost:8090/auth/realms/twin/.well-known/openid-configuration, the URL from the first error message, I can see my openid-configuration from the Keycloak config.
Edit: It seems that my gateway container cannot reach my keycloak container, will try to figure this out.
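A quick check, sketched here with assumed names (the container name is whatever docker ps shows for the gateway, the Keycloak service name and internal port depend on the compose file, and curl must exist in the image):
# Try to fetch the discovery document from inside the gateway container.
docker exec -it docker_gateway_1 \
  curl -sf http://keycloak:8080/auth/realms/twin/.well-known/openid-configuration | head -c 200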
FINAL UPDATE:
The issue was that the Keycloak docker container was unreachable from the gateway docker container. I'm now using traefik:
Keycloak container has the following alias: keycloak.localhost
Oauth configuration in the gateway looks like this:
oauth {
  protocol = "http"
  openid-connect-issuers = {
    keycloak = "keycloak.localhost/auth/realms/twin"
  }
}
Now the gateway can find the Keycloak container via the alias, and I can still use the Keycloak admin UI from my localhost: http://keycloak.localhost:8090/auth/admin/
Additional info: Traefik Blog
What exactly would be the sub-claim if I want to use Keycloak?
Keycloak provides you with a JWT.
A JWT is an encoded and signed (not encrypted) JSON object which contains multiple fields called "claims". You can check what your token looks like by visiting https://jwt.io and pasting your token there. One of those fields is called sub; this is the sub claim.
To enable your keycloak authentication in eclipse ditto you need to add the issuer to the ditto configuration.
An example can be found here.
The address must match the URL in the issuer claim of your JWT token.
ditto.gateway.authentication {
  oauth {
    protocol = "http"
    openid-connect-issuers = {
      some-name = "localhost:8090/auth/realms/twin"
    }
  }
}
Is it also the username of the user I want to grant rights to?
In eclipse ditto there is not really a concept of "user names". Eclipse ditto authentication is based on authorization subjects. For the basic authentication example you provided, the authorization subject which is generated within ditto is nginx:ditto.
For JWT authentication the authorization subject is generated as a combination of the name for the open id connect issuer which you configured (in my case some-name) and the value of the sub claim. An authorization subject could look like this: some-name:8d078113-3ee5-4dbf-8db1-eb1a6cf0fe81.
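As a sketch of how such a subject ends up in a policy sent to Ditto (host, port, the resources block and the $JWT variable are assumptions; the subject is the example above):
# Create or replace a policy whose 'owner' entry uses the Keycloak-backed subject.
curl -s -X PUT "http://localhost:8080/api/2/policies/my.namespace:policy-a" \
  -H "Authorization: Bearer ${JWT}" -H "Content-Type: application/json" \
  -d '{
        "entries": {
          "owner": {
            "subjects": {
              "some-name:8d078113-3ee5-4dbf-8db1-eb1a6cf0fe81": { "type": "keycloak user" }
            },
            "resources": {
              "policy:/": { "grant": ["READ", "WRITE"], "revoke": [] },
              "thing:/":  { "grant": ["READ", "WRITE"], "revoke": [] }
            }
          }
        }
      }'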
And how would I get this in my frontend where I want to specify the policy for sending it to Ditto afterwards?
I'm not sure if I understand the question correctly. If you mean how to authenticate your frontend HTTP requests to eclipse ditto, you need to provide the JWT to eclipse ditto by adding it to the authorization header of your HTTP requests in the following form:
authorization: Bearer yourJWT
If you mean how you would know the sub claim of a JWT, you need to parse the JWT to a JSON object and then read the sub claim out of the payload section.
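If you want to see that without a library, here is a minimal shell sketch of extracting sub from a token (the token value is a placeholder; jq is assumed to be available, and a browser frontend would do the equivalent with atob and JSON.parse):
TOKEN="eyJ..."   # placeholder: your JWT from Keycloak
# The payload is the second dot-separated segment, base64url-encoded without padding.
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Pad to a multiple of 4, decode, and read the sub claim.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d | jq -r '.sub'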

How to configure Keycloak to work with Guacamole's OpenID plugin?

I'm trying to setup Apache Guacamole with KeyCloak as OpenID Connect Authorization Server.
Guacamole is redirecting me to KeyCloak, I can Log in with my user I created on KeyCloak and I get redirected back to Guacamole, but there it says that my token is invalid
08:08:11.477 [http-nio-4432-exec-7] INFO o.a.g.a.o.t.TokenValidationService - Rejected invalid OpenID token: Unable to process JOSE object (cause: org.jose4j.lang.UnresolvableKeyException: Unable to find a suitable verification key for JWS w/ header {"alg":"RS256","typ" : "JWT","kid" : "8ZNpgh_vnmG0HMMNYdOz1lw4ECoWxmsiUGte1mJfvyI"} due to an unexpected exception (javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty) while obtaining or using keys from JWKS endpoint at https://172.16.47.229:12345/auth/realms/Guacamole-test/protocol/openid-connect/certs): JsonWebSignature{"alg":"RS256","typ" : "JWT","kid" : "8ZNpgh_vnmG0HMMNYdOz1lw4ECoWxmsiUGte1mJfvyI"}->eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI4Wk5wZ2hfdm5tRzBITU1OWWRPejFsdzRFQ29XeG1zaVVHdGUxbUpmdnlJIn0.eyJleHAiOjE2MDIzOTczODgsImlhdCI6MTYwMjM5NjQ4OCwiYXV0aF90aW1lIjoxNjAyMzk2NDcwLCJqdGkiOiI5Y2RiZDVjZC01MDJhLTRjNmItYTM3Mi1jZDIxMTNjNTE1NTMiLCJpc3MiOiJodHRwczovLzE3Mi4xNi40Ny4yMjk6MTIzNDUvYXV0aC9yZWFsbXMvR3VhY2Ftb2xlLXRlc3QiLCJhdWQiOiJHdWFjYW1vbGUiLCJzdWIiOiI1YzQ3N2NiZC04ZjIzLTRlMjEtYmNhMi1kMzNlMTRhZGY0ZDYiLCJ0eXAiOiJJRCIsImF6cCI6Ikd1YWNhbW9sZSIsIm5vbmNlIjoiaTQyZDBpZTc4c2s0MjRjMHJzMmJvdTM4YnUiLCJzZXNzaW9uX3N0YXRlIjoiMjNlZjdhMTYtMDhhNS00YTNkLTgxYTItYTQ2ZmE1NmM1NjE3IiwiYWNyIjoiMCIsImlzX3N1cGVydXNlciI6IlRydWUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJ0ZXN0IHRlc3QiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJ0ZXN0dXNlciIsImdpdmVuX25hbWUiOiJ0ZXN0IiwiZmFtaWx5X25hbWUiOiJ0ZXN0IiwiZW1haWwiOiJ0ZXN0QHRlc3QuY29tIn0.eOhkDqcgfdJnO12PRDqLIHACRNVdVHoSDFjThHWc6Ug1gdoz9t_T2K7F_B6dJSbNygAJrGvc5BVRx9XCJH1fVFSYhpXVqCO0jrHm0XJKhw_kBce4x3ZluGAtktx614j9qFzUwZHXOkFAUGPtyPQKuRTfdzHqQUILLJhVdSRPmou40rX31-l7VwqWZk_Yp1JCdQsA61XvJcQrU_aiKivZFaDGiY5GrnpL8zcEwJcFemptVoGKrG63O_LjxDCxhLpO1C1fi8GjngMSfco9aAp4AaGpHWy8ofJAu-TWbLGf-UPLUhC3lf903-Q_BU3eehYxtMyN1eet0HeGm0x_gV_wvA
In KeyCloak I created a Client as follows (screenshot of the client settings not included; I will change the Valid Redirect URIs once I have it working):
And my guacamole.properties looks like this:
guacd-port: 4822
guacd-hostname: localhost
# OpenID Connect Properties
openid-authorization-endpoint: https://172.16.47.229:12345/auth/realms/Guacamole-test/protocol/openid-connect/auth
openid-jwks-endpoint: https://172.16.47.229:12345/auth/realms/Guacamole-test/protocol/openid-connect/certs
openid-issuer: https://172.16.47.229:12345/auth/realms/Guacamole-test
openid-client-id: Guacamole
openid-redirect-uri: http://172.16.47.229:4432/guacamole/
# Postgresql Properties
postgresql-hostname: 172.16.47.229
postgresql-port: 4444
postgresql-database: guacamoledb
postgresql-username: guacamoleuser
postgresql-password: test
What do I have to change for guacamole to accept the token?
Update: I found the configuration to be working, if I use KeyCloak with HTTP instead of HTTPS, but that is not desirable. I have now also configured Guacamole, or more precisely the tomcat that's hosting guacamole, to use https, but I still can not get it to work (without having to use HTTP for KeyCloak).
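One way to see which certificate Keycloak actually presents on that port (just a sketch; assumes openssl is available on the machine running Guacamole):
# Print subject, issuer and validity of the certificate served on the Keycloak HTTPS port.
openssl s_client -connect 172.16.47.229:12345 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates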
I've hit the same issue. Most probably you just have to provide a valid SSL certificate for your IdP (Keycloak).
A possible workaround was found here: How to configure Keycloak to work with Guacamole's OpenID plugin?
I've re-compiled guacamole-auth-openid extension with this change:
diff --git a/extensions/guacamole-auth-openid/src/main/java/org/apache/guacamole/auth/openid/token/TokenValidationService.java b/extensions/guacamole-auth-openid/src/main/java/org/apache/guacamole/auth/openid/token/TokenValidationService.java
index 5efb09dab..27d818ee5 100644
--- a/extensions/guacamole-auth-openid/src/main/java/org/apache/guacamole/auth/openid/token/TokenValidationService.java
+++ b/extensions/guacamole-auth-openid/src/main/java/org/apache/guacamole/auth/openid/token/TokenValidationService.java
@@ -79,6 +79,7 @@ public class TokenValidationService {
         // Create JWT consumer for validating received token
         JwtConsumer jwtConsumer = new JwtConsumerBuilder()
+                .setSkipSignatureVerification()
                 .setRequireExpirationTime()
                 .setMaxFutureValidityInMinutes(confService.getMaxTokenValidity())
                 .setAllowedClockSkewInSeconds(confService.getAllowedClockSkew())
And this solved the issue. I don't think it's applicable for production needs, but then again, self-signed certificates should not be used in production anyway.
With Guacamole 1.4.0 and Keycloak 15.0.2 I fixed the HTTPS issue by mounting a custom cacerts keystore in the Guacamole container. This custom keystore is just the OpenJDK 8 cacerts with the Let's Encrypt CA bundle https://letsencrypt.org/certs/isrgrootx1.pem imported, because my Keycloak and Guacamole instances use Let's Encrypt certificates.
On the host I had OpenJDK 8 installed. So Docker mount was
/etc/ssl/certs/java/cacerts:/usr/local/openjdk-8/jre/lib/security/cacerts
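A sketch of how an equivalent keystore can be built before mounting it (paths and the default changeit store password are assumptions; the certificate URL is the one mentioned above):
# Download the ISRG Root X1 certificate and import it into a copy of the host's cacerts.
curl -sO https://letsencrypt.org/certs/isrgrootx1.pem
cp /etc/ssl/certs/java/cacerts ./cacerts
keytool -importcert -keystore ./cacerts -storepass changeit \
  -alias isrgrootx1 -file isrgrootx1.pem -noprompt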

Error - Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity

I am connecting an On-premise S/4 HANA with SAP Cloud Platform trial account. I am using SAP Cloud SDK to fetch all Business Partners from S/4 HANA.
My Cloud Connector is set up
My Destination at sub-account level is set up and can ping my on-premise system
My service instances - XSUAA/Destination/Connectivity - are set up and bound to the application
But I have the following error
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
The code which I am using is:
final List<BusinessPartner> businessPartners =
    new DefaultBusinessPartnerService()
        .getAllBusinessPartner()
        .select(BusinessPartner.BUSINESS_PARTNER)
        .execute(destination);
It seems an AppRouter is recommended for authorization and access, so I tried implementing one, but my approuter shows "Not Found".
Approuter app name: approuter-demo
Below is the xs-app.json
{
  "routes": [
    {
      "source": "^/s4ext/(.*)",
      "target": "/s4ext/$1",
      "destination": "******"
    }
  ]
}
The Manifest file is as below:
---
applications:
- name: approuter-demo
  routes:
  - route: approuter-demo-*****trial.cfapps.eu10.hana.ondemand.com
  path: approuter
  memory: 128M
  env:
    TENANT_HOST_PATTERN: 'approuter-demo-(.*).cfapps.eu10.hana.ondemand.com'
    destinations: '[{"name":"******", "url" :"https://s4ext-***.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true }]'
  services:
  - xsuaa-demo
  - connectivity-demo
  - destination-demo
Kindly guide me. Thanks.
Your destination type might be wrong. The authorization header is set via the destination.
Try other types in SAP CP -> Connectivity.
Reading your question again I can identify two issues:
This error message in your log:
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
It may be that this error message is actually superfluous and indicates a problem where there is none. In your case this header is possibly not necessary and the SAP Cloud SDK should not try to add it. In any case, this will not influence the actual connection, so this error message is at most confusing, not harmful in the sense of altering functionality.
Still, I am asking you to add the stack trace of this exception to your question to be very sure here.
Your app router shows "Not Found":
Here I am missing more information. When does what exactly show "Not Found"? Is it that your browser cannot find your app router, or can your app router not find the target URL of the application?
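A few quick checks that usually narrow this down, sketched here (assumes the cf CLI is logged in to the trial space; the app name is taken from the manifest above):
cf apps                          # is approuter-demo started and does it have the expected route?
cf env approuter-demo            # are the xsuaa/connectivity/destination instances bound?
cf logs approuter-demo --recent  # does the router log the incoming request and its destination lookup?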

CloudRun: Debug authentication error from curl

I am spinning up a container (pod/Job) from a GKE.
I have set up the appropriate Service Account on the cluster's VMs.
Therefore, when I manually perform a curl to a specific Cloud Run service endpoint, the request succeeds (I get authorized and a 200 response).
However, when I try to automate this by setting an image to run in a Job as follows, I get a 401:
- name: pre-upgrade-job
  image: "google/cloud-sdk"
  args:
    - curl
    - -s
    - -X
    - GET
    - -H
    - "Authorization: Bearer $(gcloud auth print-identity-token)"
    - https://my-cloud-run-endpoint
Here are the logs on Stackdriver
{
  httpRequest: {
    latency: "0s"
    protocol: "HTTP/1.1"
    remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b"
    requestMethod: "GET"
    requestSize: "313"
    requestUrl: "https://my-cloud-run-endpoint"
    serverIp: "212.45.313.83"
    status: 401
    userAgent: "curl/7.59.0"
  }
  insertId: "29jdnc39dhfbfb"
  logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests"
  receiveTimestamp: "2019-09-26T16:27:30.681513204Z"
  resource: {
    labels: {
      configuration_name: "my-cloud-run-service"
      location: "us-east1"
      project_id: "my-gcp-project"
      revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb"
      service_name: "my-cloud-run-service"
    }
    type: "cloud_run_revision"
  }
  severity: "WARNING"
  textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating"
  timestamp: "2019-09-26T16:27:30.673565Z"
}
My question is how can I see if an "Authentication" header does reach the endpoint (the logs do not enlighten me much) and if it does, whether it is appropriately rendered upon image command/args invocation.
In your job you use the google/cloud-sdk container, which is a from-scratch installation of the gcloud tooling. It's generic, without any customization.
When you call $(gcloud auth print-identity-token) you ask for the identity token of the service account configured in the gcloud tool.
If we put these two paragraphs together: you are asking for an identity token from a generic/blank installation of the gcloud tool. You don't have a service account defined in your gcloud, so your token is empty (as @johnhanley said).
To solve this issue, add an environment variable like this:
env:
  - GOOGLE_APPLICATION_CREDENTIALS=<path to your credential.json>
I don't know where the credential.json of your running environment currently is. Try to echo this env var to find it and forward it correctly to your gcloud job.
If you are on Compute Engine or a similar system compliant with the metadata server, you can get a correct token with this command:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<URL of your service>"
UPDATE
Try to run your command through a shell, so that the $(gcloud ...) substitution is actually evaluated. Here is the update of your job:
- name: pre-upgrade-job
  image: "google/cloud-sdk"
  command: ["bash"]
  args:
    - -c
    - "curl -s -X GET -H \"Authorization: Bearer $(gcloud auth print-identity-token)\" https://my-cloud-run-endpoint"
Not sure that works. Let me know
In your Job, gcloud auth print-identity-token likely does not return any token.
The reason is that locally, gcloud uses your identity to mint a token, but in a Job you are not logged into gcloud.

JENKINS Authentication Fails

I am getting the following error while trying to trigger Jenkins job from any REST Client
Authentication required
<!-- You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't):
hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
</body> </html>
The request is triggered successfully when using curl from the terminal.
I am using the following syntax:
http://user:apiToken@jenkins.yourcompany.com/job/your_job/build?token=TOKEN
[ref: https://wiki.jenkins-ci.org/display/JENKINS/Authenticating+scripted+clients]
i.e. curl -X POST http://user:apiToken@jenkins.yourcompany.com/job/your_job/build?token=TOKEN
Check this "This build is parameterized " , select the credentials parameter from drop down.
Use this
curl -X POST http://jenkins.rtcamp.com/job/Snapbox/buildWithParameters --user "username:password"
It solved my authentication problem.
I hope it will help others too.
My development team's configuration settings were matrix-based security so I had to find my group and give my group workspace access.
1. Click on Manage Jenkins.
2. Click on Configure Global Security.
3. In matrix-based security, grant:
   Overall - Read
   Job - Build
   Job - Read
   Job - Workspace
Then
POST jobUrl/buildWithParameters HTTP/1.1
Host: user:token
Authorization: Basic dWdlbmxpazo4elhjdmJuTQ==
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded
Branch=develop
For me:
https://user:password@jenkins.mycompany.org/job/job_name/build?token=my_token
worked after disabling CSRF Protection in https://jenkins.mycompany.org/configureSecurity.
Hope this helps.
Try using the -u parameter to specify the credentials:
curl -u user:apiToken -X POST http://jenkins.yourcompany.com/job/your_job/build?token=TOKEN
I provided the Authorization header with the value Basic base64encoded(username:password) and it worked fine:
Authorization: Basic bmltbWljdjpqZX*********
Simply disable "CSRF Protection" in the global Security Options, because those URLs don't send post data identification.
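If you would rather keep CSRF protection enabled, here is a sketch of fetching a crumb first and sending it along (host, job and credentials are the placeholders used above; jq is assumed, and the exact header name is returned in .crumbRequestField):
# Ask the crumb issuer for a crumb, then pass it as a header when triggering the build.
CRUMB=$(curl -s -u user:apiToken "https://jenkins.mycompany.org/crumbIssuer/api/json" | jq -r '.crumb')
curl -s -X POST -u user:apiToken -H "Jenkins-Crumb: ${CRUMB}" \
  "https://jenkins.mycompany.org/job/job_name/build?token=my_token"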
The focal point is:
username:password@
curl -u user:apiToken -X POST "http://username:password@jenkins.yourcompany.com/job/your_job/build?key1=value1&key2=value2" ...
If you are encountering this problem with the Jenkins API client in Ruby:
I figured out that Jenkins is blocking all GET requests; use api_post_request instead.
Also, you have to generate an API token, because the normal password no longer works.
@client = JenkinsApi::Client.new(
  server_url: "",
  username: '',
  password: ""
)
SITE_FILE_PATH = 'artifact/target/site'.freeze
@jenkins_uri = ''
@jenkins_job_name = ''

def latest_test_file_path
  "/job/#{@jenkins_job_name}/job/master/lastSuccessfulBuild/#{SITE_FILE_PATH}/Test-Results/test-results.html"
end

puts @client.api_post_request(latest_test_file_path, {}, true).body
You can pass true as the third parameter if you want the raw response; the default (or passing false) will just return the response code.
Also make sure to construct the right path prefix; you can refer to the snippet above.