How do I properly expose my public signing key for CAS generated JWT tickets - jwt

UPDATE:
After further testing, it seems the RSA setup I thought was working actually isn't. Until CAS can support asymmetric keys for JWT tickets, this question is rendered irrelevant.
My use case:
CAS VERSION: 6.2.0-RC2
Using CAS for single sign-on across a number of applications. The backend identity provider is LDAP. The client of interest is an SPA that redirects to CAS for login. Upon successful login to CAS, a JWT is issued via a configured service provider. I have set up the service provider to sign the JWT with asymmetric RSA keys, and that all works. What I can't get to work is the "jwtTicketSigningPublicKey" actuator endpoint.
I want to publish the public key so that my SPA can dynamically fetch it for signature validation, letting me rotate the RSA keys if necessary without changing anything on the SPA side. I assumed this was the purpose of this feature, but when I hit the endpoint after exposing it as directed here, I get a 404.
My config:
Here is what my cas.config file looks like as it relates to this endpoint:
# Expose it
management.endpoints.web.exposure.include=jwtTicketSigningPublicKey
# Enable it
management.endpoint.jwtTicketSigningPublicKey.enabled=true
# Allow access to it
cas.monitor.endpoints.endpoint.jwtTicketSigningPublicKey.access=ANONYMOUS
I then bounce the CAS server and I can see the endpoint in the actuator links at http://mycas.com/cas/actuator like so:
"jwtTicketSigningPublicKey":{"href":"http://mycas.com/cas/actuator/jwtTicketSigningPublicKey","templated":false}
As the documentation describes, I can pass an optional service parameter to get the public key associated with a "per-service" key pair, which is what I have. I hit the endpoint like so:
http://mycas.com/cas/actuator/jwtTicketSigningPublicKey?service=http://example.org
At which point I receive a 404. I also get a 404 if I hit the endpoint without the service parameter, though that I would expect, since I don't actually have a globally defined RSA pair.
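One thing worth double-checking along the way: the service value should be URL-encoded when passed as a query parameter. A quick sketch of how a client would build the request URL (Python, purely illustrative; the host and service value are the ones from above):

```python
from urllib.parse import urlencode

# The service parameter selects the per-service key pair, so its value
# must match a registered service (here the example.org service below).
base = "http://mycas.com/cas/actuator/jwtTicketSigningPublicKey"
url = f"{base}?{urlencode({'service': 'http://example.org'})}"
print(url)
# http://mycas.com/cas/actuator/jwtTicketSigningPublicKey?service=http%3A%2F%2Fexample.org
```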
My attempt at a solution:
The most logical place I can imagine this public key being provided is in the service configuration, alongside where I am providing the private key. However, I can find no documented parameter for defining the public key. This is what I have tried, to no avail:
{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^http://.*",
  "name" : "Sample",
  "id" : 10,
  "properties" : {
    "@class" : "java.util.HashMap",
    "jwtAsServiceTicket" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "true" ] ]
    },
    "jwtSigningSecretAlg" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "RS256" ] ]
    },
    "jwtAsServiceTicketSigningKey" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "MyPrivateKeyGoesHere" ] ]
    },
    "jwtAsServiceTicketSigningPublicKey" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "MyPublicKeyGoesHere" ] ]
    }
  }
}
The signing key works and is a documented parameter, as is the signing secret algorithm. But the last "...SigningPublicKey" parameter was a complete shot in the dark, because I have not found any docs on the matter beyond what is defined here.
Summary:
What I am hoping to find with this question is someone familiar with this endpoint who knows how to configure it properly, so that the signing public key is available to my SPA.

Related

Auth0 - Customizing SAML Assertions not working

I'm using Auth0 as an IdP; my Service Provider requires that I add a custom attribute to the assertion.
I've tried doing this in the Dashboard (Dashboard > Applications > Applications > AddOns), following this article: https://auth0.com/docs/authenticate/protocols/saml/saml-configuration/customize-saml-assertions
I've added my_custom_attr to the mappings object.
However, when I 'Debug', my custom attribute isn't showing up in the assertion XML, and my Service Provider isn't receiving it. They're only receiving the default attributes: email, nickname, etc.
When using Auth0 as a SAML identity provider, you can customize the outgoing claims using mappings. Say you have a user profile that looks like this:
{
  "user_id": "auth0|qwer-1234-zxcv-0987",
  "email": "john.doe@example.com",
  "picture": "https://placeholder.img/user",
  "name": "John Doe"
}
If you need the picture attribute to be in the outgoing claims, you would do a mapping like this:
"mappings": {
"picture": "http://schemas.auth0.com/picture"
}
Note that each property name on the left side refers to a property in the Auth0 user profile. Each value on the right side becomes the name of the resulting SAML attribute in the assertion.
If you don't have a my_custom_attr property in the user profile, this mapping won't work. The workaround is to use an Auth0 Rule to add that value during the user log in time. You can read more about it here.
Here's an example.
function customizeMappings(user, context, callback) {
  // we are altering the user profile
  user.my_custom_attr = "My Custom Attribute";
  context.samlConfiguration.mappings = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/color": "my_custom_attr"
  };
  callback(null, user, context);
}
Note that using context.samlConfiguration.mappings in a Rule will override the configuration you've set in your SAML add-on. Therefore, all the mappings you set in the add-on will be lost if you're using a Rule to customize the SAML assertions.

When creating a REST resource, must the representation of that resource be used?

As an example, imagine a dynamic pricing system that you can ask for offers on moving boxes from one place to another. Since the price does not exist before you ask for it, it's not as simple as retrieving data from some database; an actual search process needs to run.
What I often see in such scenarios is a request/response based endpoint:
POST /api/offers
{
  "customerId" : "123",
  "origin" : {"city" : "Amsterdam"},
  "destination" : {"city" : "New York"},
  "boxes" : [ {"weight" : 100}, {"weight" : 200} ]
}
201:
{
  "id" : "offerId_123",
  "product" : {
    "id" : "product_abc",
    "name" : "box-moving"
  },
  "totalPrice" : 123.43
}
The request has nothing to do with the response except that one is required to find all information for the other.
The way I interpret "manipulation of resources through representations", I think this also applies to creation. Following that, I would say one should create the search process instead:
POST /api/offer-searches
{
  "request" : {
    "customerId" : "123",
    "origin" : {"city" : "Amsterdam"},
    "destination" : {"city" : "New York"},
    "boxes" : [ {"weight" : 100}, {"weight" : 200} ]
  }
}
201:
{
  "id" : "offerSearch_123",
  "request" : {
    "customerId" : "123",
    "origin" : {"city" : "Amsterdam"},
    "destination" : {"city" : "New York"},
    "boxes" : [ {"weight" : 100}, {"weight" : 200} ]
  },
  "offers" : [ {
    "id" : "offerId_123",
    "product" : {
      "id" : "product_abc",
      "name" : "box-moving"
    },
    "totalPrice" : 123.43
  } ]
}
Here the request and the response are the same object; during the process it is enhanced with results, but both are still representations of the same thing: the search process.
This has the advantage that the process can be "tracked": since it has an identity, it can be read again later. You could still have /api/offers/offerId_123 return the created offer, so clients don't have to go through the clutter of the search resource. But it also has quite a trade-off: complexity.
Now my question is, is this first, more RPC like approach something we can even call REST? Or to comply to REST constraints should the 2nd approach be used?
How does the approach compare to how we do things on the web?
For the most part, sending information to a server is realized using HTML forms. So we are dealing with a lot of requests that look something like
POST /efc913bf-ac21-4bf4-8080-467ca8e3e656
Content-Type: application/x-www-form-urlencoded
a=b&c=d
and the responses then look like
201 Created
Location: /a2596478-624f-4775-a490-09edb551a929
Content-Location: /a2596478-624f-4775-a490-09edb551a929
Content-Type: text/html
<html>....</html>
In other words, it's perfectly normal that (a) the representations of the resource are not the information that was sent to the server, but instead something the server computed from the information it was sent, and (b) not necessarily of the same schema as the payload of the request... not necessarily even the same media type.
In an anemic document store, you are more likely to be using PUT or PATCH. PATCH requests normally have a patch document in the request-body, so you would expect the representations to be different (think application/json-patch+json). But even in the case of PUT, the server is permitted to make changes to the representation when creating its resource (to make it consistent with its own constraints).
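To make the PATCH point concrete, here is a minimal sketch (Python for illustration; not a full RFC 6902 implementation, no array handling or pointer escaping) of applying an "add" operation. The patch document looks nothing like the representation it produces, which is exactly the point:

```python
import copy

def apply_add(doc, op):
    """Apply a single json-patch 'add' operation to a dict-of-dicts document."""
    assert op["op"] == "add"
    result = copy.deepcopy(doc)          # leave the original untouched
    *parents, leaf = op["path"].lstrip("/").split("/")
    target = result
    for key in parents:                  # walk down to the parent container
        target = target[key]
    target[leaf] = op["value"]
    return result

doc = {"id": "offerId_123", "totalPrice": 123.43}
op = {"op": "add", "path": "/product",
      "value": {"id": "product_abc", "name": "box-moving"}}
print(apply_add(doc, op))  # doc with a new "product" member
```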
And of course, when you are dealing with responses that contain a representation of "the action", or representations of errors, then once again the response may be quite dissimilar from the request.
TL;DR REST doesn't care if the representation of a bid "matches" the representation of the RFP.
You might decide it's a good idea anyway, but it isn't necessary to satisfy REST's constraints or the semantics of HTTP.

Security of cloudant query from OpenWhisk

I'm building an Angular SPA with a Cloudant data store on Bluemix.
Since the Bluemix implementation of OpenWhisk doesn't use VCAP services, I see 3 options for using OpenWhisk as the API provider for Cloudant queries from my Angular app:
Follow the pattern of passing credentials as seen here: https://github.com/IBM-Bluemix/openwhisk-visionapp (very interesting approach btw)
Include the credentials as though I'm running locally as seen here: https://github.com/IBM-Bluemix/nodejs-cloudant/blob/master/app.js
Use the HTTP API as seen here: https://docs.cloudant.com/api.html (which highlights the security problem of passing credentials).
Since my service is not intended for publishing (it's intended for my own app), I'm thinking option 2 is my "least of all evils" choice. Am I missing something? My thinking is that, while fragile to changes, it would be the most secure, since credentials aren't passed in the open; the serverless infrastructure would have to be hacked...
Thanks in advance!
(lengthy) Update: (apologies in advance)
I've gotten a little farther along but still no answer - stuck in execution right now.
To clarify, my objective is for the app to flow from Angular Client -> OpenWhisk -> Cloudant.
In this simplest use case, I want to pass startTime and endTime parameters, have OpenWhisk fetch all the records in that time range with all fields, and pass back selected fields. In my example, I have USGS earthquake data in a modified GeoJSON format.
Following information from the articles below, I've concluded that I can invoke the wsk command-line actions and use the bindings I've set up from within my JavaScript function, and therefore not pass my credentials to the database. This gives me a measure of security (I still question the REST endpoint of my OpenWhisk action, but I figure once I get my sample running I can think through that part of it).
My command line (that works):
wsk action invoke /my@orgname.com_mybluemixspace/mycfAppName/exec-query-find --blocking --result --param dbname perils --param query {\"selector\":{\"_id\":{\"$gt\":0},\"properties.time\":{\"$gt\":1484190609500,\"$lt\":1484190609700}}}
This successfully returns the following:
{
  "docs": [
    {
      "_id": "eq1484190609589",
      "_rev": "1-b4fe3de75d9c5efc0eb05df38f056a65",
      "dbSaveTime": 1.484191201099e+12,
      "fipsalpha": "AK",
      "fipsnumer": "02",
      "geometry": {
        "coordinates": [
          -149.3691,
          62.5456,
          0
        ],
        "type": "Point"
      },
      "id": "ak15062242",
      "properties": {
        "alert": null,
        "cdi": null,
        "code": "15062242",
        "detail": "http://earthquake.usgs.gov/earthquakes/feed/v1.0/detail/ak15062242.geojson",
        "dmin": null,
        "felt": null,
        "gap": null,
        "ids": ",ak15062242,",
        "mag": 1.4,
        "magType": "ml",
        "mmi": null,
        "net": "ak",
        "nst": null,
        "place": "45km ENE of Talkeetna, Alaska",
        "rms": 0.5,
        "sig": 30,
        "sources": ",ak,",
        "status": "automatic",
        "time": 1.484190609589e+12,
        "title": "M 1.4 - 45km ENE of Talkeetna, Alaska",
        "tsunami": 0,
        "type": "earthquake",
        "types": ",geoserve,origin,",
        "tz": -540,
        "updated": 1.484191127265e+12,
        "url": "http://earthquake.usgs.gov/earthquakes/eventpage/ak15062242"
      },
      "type": "Feature"
    }
  ]
}
The action I created in OpenWhisk (below) returns an Internal Server Error. I'm passing this input:
{
  "startTime": "1484161200000",
  "endTime": "1484190000000"
}
Here's the code for my action:
var openWhisk = require('openwhisk');
var ow = openWhisk({
  api_key: 'im really a host'
});

function main(params) {
  return new Promise(function(resolve, reject) {
    ow.actions.invoke({
      actionName: '/my@orgname.com_mybluemixspace/mycfAppName/exec-query-find',
      blocking: true,
      parameters: {
        dbname: 'perils',
        query: {
          "selector": {
            "_id": { "$gt": 0 },
            "properties.time": {
              "$gt": params.startTime,
              "$lt": params.endTime
            }
          }
        }
      }
    }).then(function(res) {
      // get the raw result
      var raw = res.response.result.rows;
      // build a trimmed-down result
      var result = [];
      raw.forEach(function(c) {
        result.push({ id: c.docs._id, time: c.docs.properties.time, title: c.docs.properties.title });
      });
      resolve({ result: result });
    });
  });
}
Here are the links to my research:
http://infrastructuredevops.com/08-17-2016/news-openwhisk-uniq.html
Useful because of the exec-query-find and selector syntax usage, but also handy for the update function I need to build for populating my data!
https://www.raymondcamden.com/2016/12/23/going-serverless-with-openwhisk
The article referenced by @csantanapr
Am I overlooking something?
Thanks!
I'm assuming what you are trying to do is access your Cloudant DB directly from your Angular client-side code in the browser.
If you don't need any business logic, or you can get away with using Cloudant features (design docs, views, map/reduce, etc.) and you are generating Cloudant API keys with specific access levels (i.e. write vs. read), then you don't need a server or serverless middleware/tier.
But now let's get real: most people need that tier, and if you are looking at OpenWhisk, then you are in luck, because this is very easy to do.
OpenWhisk on Bluemix supports VCAP service credentials, but in a different way.
Say you have a Bluemix org carlos@example.com and a space dev; that translates to the OpenWhisk namespace carlos@example.com_dev.
If you add a Cloudant service under the space dev in Bluemix, this generates service key credentials for this Cloudant account. These credentials give you superuser access, meaning you are admin.
If you want to use these Cloudant credentials in OpenWhisk, you can use the automatic binding generated with the cloudant package.
To do this using the OpenWhisk CLI, run wsk package refresh. This pulls the Cloudant credentials and creates a new package with the credentials bound as default parameters for all the Cloudant actions under that package. This is a modified version of option #1 above.
Another alternative is to bind the credentials manually to a package or an action as default parameters. This makes sense when you don't want to use the all-powerful admin credentials and have generated a Cloudant API key for a specific database. This is option #1 above.
I would not recommend putting the credentials in source code (option #2).
For option #3, what's insecure is passing your credentials as part of the URL, like https://username:password@user.cloudant.com; passing the username and password in the Authorization header over HTTPS is secure.
This is because, even when you are using secure transport (HTTPS), everything in the URI/URL is visible and anyone can see that value, but passing secrets in the body or a header is standard practice, as they are transferred after the secure connection is established.
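To illustrate the difference (a minimal Python sketch with made-up credentials): the same secret that would sit visibly inside the URL goes into an Authorization header instead:

```python
import base64

def basic_auth_header(username, password):
    # Base64 is encoding, not encryption -- this is only safe because the
    # header travels inside the TLS-encrypted connection, not in the URL.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Instead of https://username:password@user.cloudant.com ...
headers = basic_auth_header("username", "password")
print(headers)
```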
Then you create OpenWhisk actions that take the credentials as parameters and build the business logic for your backend.
Then, how do you access this backend from the browser? OpenWhisk has an experimental API Gateway feature that allows you to expose your actions as public APIs with CORS enabled.
Only a URL is exposed; your credentials, being default parameters, are never exposed.
If you want to see an example, check out Raymond Camden's blog post where he shows an Ionic/Angular app accessing his Cloudant database of cats:
https://www.raymondcamden.com/2016/12/23/going-serverless-with-openwhisk

How to decode keys from Keycloak openid-connect cert api

I'm trying to get the key from the Keycloak OpenID Connect certs endpoint that would allow me to validate a JWT token. The API to fetch the keys seems to work:
GET http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/certs
{
  "keys": [
    {
      "kid": "MfFp7IWWRkFW3Yvhb1eVrtyQQNYqk6BG-6HZFpl_JxI",
      "kty": "RSA",
      "alg": "RS256",
      "use": "sig",
      "n": "qDWXUhNtfuHNh0lm3o-oTnP5S8ENpzsyi-dGrjSeewxV6GNiKTW5INJ4hDQ7ZWkUFfJJhfhQWJofqgN9rUBQgbRxXuUvEkrzXQiT9AT_8r-2XLMwRV3eV_t-WRIJhVWsm9CHS2gzbqbNP8HFoB_ZaEt2FYegQSoAFC1EXMioarQbFs7wFNEs1sn1di2xAjoy0rFrqf_UcYFNPlUhu7FiyhRrnoctAuQepV3B9_YQpFVoiUqa_p5THcDMaUIFXZmGXNftf1zlepbscaeoCqtiWTZLQHNuYKG4haFuJE4t19YhAZkPiqnatOUJv5ummc6i6CD69Mm9xAzYyMQUEvJuFw",
      "e": "AQAB"
    }
  ]
}
But where is the key, and how do I decode it?
$.keys[0].n does not look like base64, and I cannot figure out what it is.
...if someone can tell me how to get the public key from that payload, that would be great!
Looking at https://github.com/keycloak/keycloak/blob/master/core/src/main/java/org/keycloak/jose/jwk/JWKParser.java, it seems the returned keys are encoded as a modulus and an exponent.
Look at the mentioned Java class to build a public key in Java, or https://github.com/tracker1/node-rsa-pem-from-mod-exp to get the public key in JavaScript.
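To sketch what that decoding looks like (Python, stdlib only): n and e are base64url-encoded big-endian integers without padding. Once decoded, you can hand them to any RSA library to construct a usable public key (for example, with the cryptography package, rsa.RSAPublicNumbers(e, n).public_key()):

```python
import base64

def b64url_to_int(value):
    """Decode a base64url JWK field (no padding) into a big-endian integer."""
    padded = value + "=" * (-len(value) % 4)   # restore stripped '=' padding
    return int.from_bytes(base64.urlsafe_b64decode(padded), "big")

# "e" from the certs payload above; "n" decodes the same way
exponent = b64url_to_int("AQAB")
print(exponent)  # 65537, the common RSA public exponent
```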
The type of the key (or keys) is JSON Web Key (JWK). A list of supported libraries is on the OpenID web page. I am using jose4j to retrieve keys from Keycloak.

Testing HATEOAS URLs

I'm developing a service that has a RESTful API. The API is JSON-based and uses HAL for HATEOAS links between resources.
The implementation shouldn't matter to the question, but I'm using Java and Spring MVC.
Some example requests:
GET /api/projects
{
  "_links" : {
    "self" : {
      "href" : "example.org/api/projects"
    },
    "projects" : [ {
      "href" : "example.org/api/projects/1234",
      "title" : "The Project Name"
    }, {
      "href" : "example.org/api/projects/1235",
      "title" : "The Second Project"
    } ]
  },
  "totalProjects" : 2
}
GET /api/projects/1234
{
  "_links" : {
    "self" : {
      "href" : "example.org/api/projects/1234"
    },
    "tasks" : [ {
      "href" : "example.org/api/projects/1234/tasks/543",
      "title" : "First Task"
    }, {
      "href" : "example.org/api/projects/1234/tasks/544",
      "title" : "Second Task"
    } ]
  },
  "id" : 1234,
  "name" : "The Project Name",
  "progress" : 60,
  "status" : "ontime",
  "targetDate" : "2014-06-01"
}
Now, how should I test GET requests to a single project? I have two options and I'm not sure which one is better:
Testing for /api/projects/{projectId} in the tests, replacing {projectId} with the id of the project the mock service layer expects/returns.
Requesting /api/projects/ first then testing the links returned in the response. So the test will not have /api/projects/{projectId} hardcoded.
The first option makes the tests much simpler, but it basically hardcodes the URLs, which is the thing HATEOAS was designed to avoid in the first place. The tests will also need to change if I ever change the URL structure for one reason or another.
The second option is more "correct" in the HATEOAS sense, but the tests will be much more convoluted; I need to traverse all parent resources to test a child resource. For example, to test GET requests to a task, I need to request /api/projects/, get the link to /api/projects/1234, request that and get the link to /api/projects/2345/tasks/543, and finally test that! I'll also need to mock a lot more in each test if I test this way.
The advantage of the second option is that I can freely change the URLs without changing the tests.
If your goal is testing a Hypermedia API, then your testing tools need to understand how to process and act on the hypermedia contained in a resource.
And yes, the challenge is how deep you decide to traverse the link hierarchy. Also, you need to account for non-GET methods.
If these are automated tests, one strategy would be to organize the tests into resource units, each testing only the links returned in the resource under test: a module for projects, and others for project, tasks, task, and so on. This does require some hard-coding of well-known URLs for each module, but allows you to manage the tests more easily around your resource model.
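As a sketch of what such a resource-unit test could look like, here is a small hypothetical link-resolving helper (Python for illustration, run against the projects representation from the question); a real test would fetch each resolved href and assert on the response:

```python
def get_link(hal_doc, rel, title=None):
    """Return the href for a relation in a HAL '_links' object.

    Handles both a single link object and an array of links,
    optionally disambiguated by title.
    """
    links = hal_doc["_links"][rel]
    if isinstance(links, list):
        for link in links:
            if title is None or link.get("title") == title:
                return link["href"]
        raise KeyError(f"no link with rel={rel!r} and title={title!r}")
    return links["href"]

# the GET /api/projects representation from the question
projects = {
    "_links": {
        "self": {"href": "example.org/api/projects"},
        "projects": [
            {"href": "example.org/api/projects/1234", "title": "The Project Name"},
            {"href": "example.org/api/projects/1235", "title": "The Second Project"},
        ],
    },
    "totalProjects": 2,
}

print(get_link(projects, "projects", "The Project Name"))
# example.org/api/projects/1234
```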
I don't know much about HATEOAS, but here's what I can say.
You may try swat, a Perl/curl-based DSL for web and REST service test automation. Swat was designed to simplify the URL "juggling" you are probably talking about here. A quick reference for how this could be done with swat (a straightforward way, but there are more elegant solutions):
$ mkdir -p api/project/project_id
$ echo '200 OK' > api/project/project_id/get.txt
$ nano api/project/project_id/hook.pm

modify_resource(sub {
  my $r = shift;                # this is the original route: api/project/project_id/
  my $pid = $ENV{project_id};
  $r =~ s{/project_id}{/$pid};  # dynamically rewrite the route to api/project/{project_id}
  return $r;
});

$ project_id=12345 swat http://your-rest-api # run the swat test suite!
More complicated examples can be found in the documentation.
(*) Disclosure - I am the tool author.
If you use Spring HATEOAS, you can use ControllerLinkBuilder (http://docs.spring.io/autorepo/docs/spring-hateoas/0.19.0.RELEASE/api/org/springframework/hateoas/mvc/ControllerLinkBuilder.html) for link creation in your tests, as described in http://docs.spring.io/spring-hateoas/docs/0.19.0.RELEASE/reference/html/#fundamentals.obtaining-links. With ControllerLinkBuilder, there are no hard-coded URLs.
ControllerLinkBuilderUnitTest.java (https://github.com/spring-projects/spring-hateoas/blob/4e1e5ed934953aabcf5490d96d7ac43c88bc1d60/src/test/java/org/springframework/hateoas/mvc/ControllerLinkBuilderUnitTest.java) shows how to use ControllerLinkBuilder in tests.