Our identity server uses IdentityServer3 and implements Sustainsys.Saml2 for SAML integration. We have been working to move from v1 to v2 of the Sustainsys.Saml2 NuGet packages. With v1, we explicitly set our audience restrictions like this:
_spOptions.SystemIdentityModelIdentityConfiguration.AudienceRestriction =
    new AudienceRestriction
    {
        AllowedAudienceUris =
        {
            _audience,
            new Uri(_entityId),
        },
    };
However, in v2.9.0 the SpOptions.SystemIdentityModelIdentityConfiguration property is no longer accessible.
Is there no longer a need to set the audience restriction? Or is there a different way to set it?
I'm not seeing anything in the docs... hopefully I'm not just blindly missing it.
v2 doesn't use System.IdentityModel, but instead the more modern Microsoft.IdentityModel NuGet packages. The corresponding settings are now found in SpOptions.TokenValidationParametersTemplate.
Some parameters, like the audience restriction, are set after the template is copied, but you can alter the values in the Unsafe.TokenValidationParametersCreated notification. The reason it is under "Unsafe" is that setting the wrong values in the TokenValidationParameters could remove important security checks.
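For illustration, a minimal sketch of both places (spOptions/options are assumed to be your existing option objects, and the exact delegate signature of the notification may differ between v2 releases, so verify against the version you're on):
// Most settings can be put on the template and are copied into each
// TokenValidationParameters instance:
// spOptions.TokenValidationParametersTemplate.<setting> = ...;

// The audience restriction is applied after the template is copied, so
// override it in the Unsafe notification instead. The delegate is assumed
// here to receive the parameters and the identity provider.
options.Notifications.Unsafe.TokenValidationParametersCreated +=
    (validationParameters, identityProvider) =>
    {
        validationParameters.ValidAudiences = new[]
        {
            audience.ToString(),  // the values the v1 AudienceRestriction held
            entityId,
        };
    };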
I went to \simple_salesforce and changed a line in api.py by hand from
DEFAULT_API_VERSION = '42.0'
to
DEFAULT_API_VERSION = '51.0'
But it feels incorrect to do it like this. Is there some other way?
There's a bit of text in the readme under "additional features":
SalesforceLogin, which takes in a username, password, security token,
optional version and optional domain
(...)
SFType class, which is
used internally by the getattr() method in the Salesforce() class
and represents a specific SObject type. SFType requires object_name
(i.e. Contact), session_id (an authentication ID), sf_instance
(hostname of your Salesforce instance), and an optional sf_version
So it looks like you can pass sf_version to the SalesforceLogin() call and it will be respected, or version to Salesforce(). Check the files and experiment. Maybe even make a pull request in simple_salesforce's Git repo so they update the default; version 42.0 was released over three years ago. It's perfectly fine to use a newer API to see more tables, get some performance improvements, and pick up bug fixes.
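For example (a sketch with placeholder credentials):
from simple_salesforce import Salesforce, SalesforceLogin

# Either pass `version` straight to Salesforce()...
sf = Salesforce(
    username='user@example.com',
    password='password',
    security_token='token',
    version='51.0',  # overrides DEFAULT_API_VERSION
)

# ...or pass `sf_version` to SalesforceLogin() and reuse the session:
session_id, instance = SalesforceLogin(
    username='user@example.com',
    password='password',
    security_token='token',
    sf_version='51.0',
)
sf = Salesforce(instance=instance, session_id=session_id)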
I'm using OpenZeppelin to make a crowdsale contract. All my tests (30 of them) pass with flying colours ;) and I can migrate on a local Ganache blockchain no problem.
When I try to deploy to Rinkeby I start having issues. My config in truffle.js is
rinkeby: {
    provider: rinkeybyProvider,
    network_id: 3,
    gas: 4712388,
    gasPrice: web3.utils.toWei("40", "gwei"),
    websockets: true,
    from: "0x9793371e69ed67284a1xxxx"
}
When I deploy on rinkeby I get:
"SplitWallet" hit a require or revert statement somewhere in its
constructor. Try: * Verifying that your constructor params satisfy
all require conditions. * Adding reason strings to your require
statements.
I have gone through and put messages in every revert in the constructor hierarchy, but I never see any of the messages. I thought it might be that my payees and shares arrays were different lengths, but no, they are the same (the only parameters that the constructor for a SplitWallet takes).
Things to note:
I have an Infura API key
I am using the truffle-wallet-provider, with just a private key (no mnemonic), to deploy
I am confused (because of the above) about how my deploy script can know about multiple (10) wallets on deployment. Usually (in Ganache) these are the 10 wallets Ganache generates for you, but here I am providing a single private key, so it shouldn't know about 10 wallets, just one: the address derived from the private key that is deploying the contract, no? (I'm talking about this:)
module.exports = async (
    deployer,
    network,
    [owner, purchaser, investor, organisation, ...accounts] // how does it know these??
)
This last point makes me wonder, because I printed out owner and purchaser and they don't match my public key at all, so I have no idea where they are coming from. And if they don't match, and the owner defaults to accounts[0], then that wallet may not be able to pay for the gas... perhaps?
Thanks
Rinkeby's network id is 4, not 3 (3 is Ropsten).
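With that one change, the block from the question would read (everything else unchanged):
rinkeby: {
    provider: rinkeybyProvider,
    network_id: 4, // Rinkeby; 3 is Ropsten
    gas: 4712388,
    gasPrice: web3.utils.toWei("40", "gwei"),
    websockets: true,
    from: "0x9793371e69ed67284a1xxxx"
}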
What I am trying to achieve
Protect a resource in Keycloak with policy like:
if (resource.status == 'draft') $evaluation.grant();
else $evaluation.deny();
Going by the official documentation and mailing list responses, it seems attribute-based access control is possible; however, I could not find a way to get it to work.
What I have tried
Using Authorization Services: I was unable to figure out where and how I can inject the attributes from the resource instance.
Using Authorization Context: I was hoping to get the policies associated with a resource and a scope so that I could evaluate them myself.
So far, I have gotten nowhere with either approach. To be honest, I have been overwhelmed by the terminology used in the Authorization Services.
Question
How can I use attributes of a resource instance while defining a policy in Keycloak?
I solved this problem in Keycloak 4.3 by creating a JavaScript policy because Attribute policies don't exist (yet). Here is an example of the code I got working (note that the attribute values are a list, so you have to compare against the first item in the list):
var permission = $evaluation.getPermission();
var resource = permission.getResource();
var attributes = resource.getAttributes();
if (attributes.status !== null && attributes.status[0] == "draft") {
    $evaluation.grant();
} else {
    $evaluation.deny();
}
Currently there is no way to do what you are looking to do. The ResourceRepresentation class only has (id, name, uri, type, iconUri, owner) fields, so you can use owner to determine ownership, as in the Keycloak example. I've seen a thread that talks about adding additional resource attributes, but I haven't seen a Keycloak JIRA for it.
Perhaps you could use Contextual Attributes in some way by setting what you need at runtime and writing a Policy around it.
var context = $evaluation.getContext();
var attributes = context.getAttributes();
// getValue() returns an attribute entry; read its first value as a string
var fooValue = attributes.getValue("fooAttribute");
if (fooValue != null && fooValue.asString(0).equals("something")) {
    $evaluation.grant();
}
I have some EC2 applications (in Node.js) that have many REST paths, and since they are still in development, the paths keep changing.
How can I set up API Gateway to map to multiple paths, instead of specifying each of them?
e.g.
my EC2 endpoints include:
my.ec2.com/api/test
my.ec2.com/api/test1
my.ec2.com/api/test2
my.ec2.com/api/user/time
my.ec2.com/api/user/time1
my.ec2.com/api/user/time2
Instead of setting up all the resources in API Gateway,
can I do something like:
api-gateway.amazon.com/api that points to my.ec2.com/api/
so that any call to http://api-gateway.amazon.com/api/test automatically resolves to http://my.ec2.com/api/test, and so on?
As of today, this is still impossible in the API Gateway console.
The solution is to use Swagger: I can use some scripts to generate the Swagger file and import it into API Gateway. In fact, it is a good solution because I can keep the Swagger definition under source control.
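For instance, a generated Swagger file could declare one entry per backend path with an HTTP proxy integration, using AWS's x-amazon-apigateway-integration extension (a sketch with the hostname from the question; the generator script would stamp out one such path block per endpoint):
{
  "swagger": "2.0",
  "info": { "title": "api", "version": "1.0" },
  "paths": {
    "/api/test": {
      "get": {
        "responses": { "200": { "description": "proxied response" } },
        "x-amazon-apigateway-integration": {
          "type": "http_proxy",
          "httpMethod": "GET",
          "uri": "http://my.ec2.com/api/test",
          "passthroughBehavior": "when_no_match"
        }
      }
    }
  }
}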
I'm struggling to figure out how I can do role-based authorization depending on which HTTP method a request uses. I use HTTP basic auth, and depending on the user's role and the HTTP method used, a request should succeed or fail.
Example:
a GET request to http://localhost/rest/ should always be allowed, even to non-authenticated users (anon access)
a PUT request to http://localhost/rest/ (same resource!) should only be allowed if user is authenticated
a DELETE request to http://localhost/rest/ (same resource!) should only be allowed if user is authenticated and has the role ADMINISTRATOR
My current (non-working) attempt of configuring shiro.ini looks like this:
/rest = authcBasic[PUT], roles[SERVICE_PROVIDER]
/rest = authcBasic[POST], roles[EXPERIMENTER]
/rest = authcBasic[DELETE], roles[ADMINISTRATOR]
/rest = authcBasic
Update
I've just found https://issues.apache.org/jira/browse/SHIRO-107 and updated my shiro.ini to be
/rest/**:put = authcBasic, roles[SERVICE_PROVIDER]
/rest/**:post = authcBasic, roles[EXPERIMENTER]
/rest/**:delete = authcBasic, roles[ADMINISTRATOR]
/rest/** = authcBasic
but it still doesn't work. It seems that only the last rule matches. Also, the commit comment seems to indicate that this only works with permission-based authorization. Is there no equivalent implementation for role-based authz?
I think HttpMethodPermissionFilter is the one you need to configure: http://shiro.apache.org/static/1.2.2/apidocs/org/apache/shiro/web/filter/authz/HttpMethodPermissionFilter.html This should enable you to map the HTTP method to Shiro's "create,read,update,delete" permissions as outlined in the javadoc for the class.
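A sketch of how that might look in shiro.ini ("rest" is the default name for HttpMethodPermissionFilter; the permission string and role names below are illustrative, and this doesn't reproduce the anonymous-GET requirement):
[roles]
# Map each role to the method-derived permissions it should hold:
ADMINISTRATOR = restResource:*
EXPERIMENTER = restResource:create
SERVICE_PROVIDER = restResource:update

[urls]
# The rest filter appends an action derived from the HTTP method to the
# given permission string, so DELETE here requires "restResource:delete",
# POST "restResource:create", PUT "restResource:update", GET "restResource:read".
/rest/** = authcBasic, rest[restResource]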
I had a similar situation with Shiro and my REST application. While there may be a better way (I hadn't seen SHIRO-107), my solution was to create a custom filter extending the authc filter (org.apache.shiro.web.filter.authc.FormAuthenticationFilter). You could do something similar extending the authcBasic filter or the roles filter (although I think extending authcBasic would be better, as it is probably the more complicated of the two to reproduce).
The method you want to override is protected boolean isAccessAllowed(ServletRequest request, ServletResponse response, Object mappedValue). Your argument (e.g. "ADMINISTRATOR") comes in via mappedValue as a String[], with the arguments split on commas.
Since I needed the possibility of both a method and a role, I ended up having my arguments look like "METHOD-ROLE". For example:
/rest/** = customFilter[DELETE-ADMINISTRATOR]
That let me split out the role required to perform a DELETE from the role required for a POST by doing something like:
/rest/** = customFilter[DELETE-ADMINISTRATOR,POST-EXPERIMENTER]
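To give an idea of the shape, here is a sketch of such a filter (illustrative names and simplified logic, not my original code):
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.web.filter.authc.FormAuthenticationFilter;

// Each mappedValue entry is expected to look like "METHOD-ROLE",
// e.g. "DELETE-ADMINISTRATOR".
public class MethodRoleAuthcFilter extends FormAuthenticationFilter {

    @Override
    protected boolean isAccessAllowed(ServletRequest request,
            ServletResponse response, Object mappedValue) {
        // Require authentication first (the inherited behaviour).
        if (!super.isAccessAllowed(request, response, mappedValue)) {
            return false;
        }
        String[] rules = (String[]) mappedValue;
        if (rules == null) {
            return true; // no method/role rules configured for this path
        }
        String method = ((HttpServletRequest) request).getMethod();
        for (String rule : rules) {
            String[] parts = rule.split("-", 2);
            if (parts.length == 2 && parts[0].equalsIgnoreCase(method)) {
                // A rule exists for this method: the role decides.
                return SecurityUtils.getSubject().hasRole(parts[1]);
            }
        }
        return true; // methods without a rule only need authentication
    }
}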
I think if you play with this, you'll be able to get the functionality you need.
BTW, I hadn't seen SHIRO-107, so I've not tried that technique and probably won't since I've already invented my own custom filter. However that may provide a cleaner solution than what I did.
Hope that helps!