ADFS - Claims - emailAddress urn format version mismatch - saml

So I was setting up ADFS on a Windows Server 2016 instance. I created a Relying Party Trust and was about to create two claim issuance policies, since our Service Provider has a NameID policy that needs to be met. The required policy is as follows:
<NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:emailAddress" AllowCreate="true"/>
So I added two claims. The second is a transform rule, which resolves to the following rule language:
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress");
The problem is that this generates a format of urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress rather than urn:oasis:names:tc:SAML:2.0:nameid-format:emailAddress as required by the policy, and I seemingly can't change it to SAML 2.0 because I can't manually edit the rule. Any ideas how to fix this?

Copy that rule's rule language, use it to create a new custom rule, and edit the format URN there.
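For example, the custom rule would be the same rule language as in the question, with only the Properties format value changed to the SAML 2.0 URN your SP demands:

```
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:2.0:nameid-format:emailAddress");
```

Delete the wizard-generated transform rule afterwards so the two rules don't both issue a nameidentifier claim.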

Related

Sustainsys.SAML2 uses http-redirect instead of http-post

I should say that I've just started exploring SAML authentication, and I've run into an authentication problem that I unfortunately cannot reproduce on my development machine, which confuses me even more.
I have the following configuration using OWIN:
var options = new Saml2AuthenticationOptions(false)
{
    Notifications = new Saml2Notifications
    {
        AuthenticationRequestCreated = (request, provider, dictionary) =>
        {
            request.Binding = Saml2BindingType.HttpPost;
        }
    },
    AuthenticationType = services.AuthenticationType,
    Caption = services.Caption,
    SPOptions = new SPOptions
    {
        EntityId = new EntityId(Path.Combine(services.RelyingPartyUri, "Saml2"))
    }
};
options.IdentityProviders.Add(new IdentityProvider(new EntityId(services.IdentityProviderConfiguration.IdentityProviderMetadataUri), options.SPOptions)
{
    AllowUnsolicitedAuthnResponse = true,
    Binding = Saml2BindingType.HttpPost,
    LoadMetadata = true,
    SingleSignOnServiceUrl = new Uri(services.IdentityProviderConfiguration.SingleSignOnUri)
});
app.UseSaml2Authentication(options);
The services variable contains configuration such as the metadata URI, SSO URI, etc.
This configuration works perfectly on my machine. I've inspected the login SAML request, and here is what I have there:
<saml2p:AuthnRequest
xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
ID="id10c4b76119b64952857d38c7581ca0b4"
Version="2.0"
IssueInstant="2018-12-04T14:29:00Z"
Destination="https://identity.provider/trust/saml2/http-post/sso/application"
ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
AssertionConsumerServiceURL="https://application/Saml2/Acs">
<saml2:Issuer>https://application/Saml2</saml2:Issuer>
</saml2p:AuthnRequest>
The authentication then works fine.
When I deploy this code to an external server for testing, it sometimes works as expected, but quite often I cannot authenticate the user, because the authentication mechanism uses http-redirect instead of http-post.
In that case I see the following login SAML request:
<saml2p:AuthnRequest
xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
ID="id10c4b76119b64952857d38c7581ca0b4"
Version="2.0"
IssueInstant="2018-12-04T14:29:00Z"
Destination="https://identity.provider/trust/saml2/http-redirect/sso/application"
ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
AssertionConsumerServiceURL="https://application/Saml2/Acs">
<saml2:Issuer>https://application/Saml2</saml2:Issuer>
</saml2p:AuthnRequest>
The difference is in the SSO URI used for authentication.
What I have done so far is check the configuration files to rule out a configuration issue. All configurations are valid, and services.IdentityProviderConfiguration.SingleSignOnUri contains a valid SSO URI with http-post. I've played around with different settings; as you can see in the code snippet, I set Binding to HttpPost, which I thought would solve the issue if SingleSignOnServiceUrl were being taken automatically from the IdP metadata. I also looked through the Sustainsys.SAML2 source code and couldn't find anything that would give me a clue.
Any help highly appreciated!
If you set LoadMetadata = true, the settings found in the metadata override your manual configuration. Evidently the IdP's metadata lists the endpoint https://identity.provider/trust/saml2/http-redirect/sso/application for the POST binding.
To fix this, ask the IdP to correct their metadata. Or set LoadMetadata = false and rely on in-code configuration; in that case you must also add the IdP's signing certificate in your code.
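A minimal sketch of the in-code alternative, adapted from the snippet in the question. The certificate path and the IdentityProviderEntityId setting are hypothetical, and SigningKeys.AddConfiguredKey should be verified against your Sustainsys.SAML2 package version; note that with LoadMetadata = false the EntityId must be the IdP's actual entity ID, not a metadata URL:

```csharp
// Sketch: manual IdP configuration with LoadMetadata = false,
// so nothing from the IdP metadata can override these settings.
var idp = new IdentityProvider(
    new EntityId(services.IdentityProviderConfiguration.IdentityProviderEntityId), // hypothetical setting
    options.SPOptions)
{
    AllowUnsolicitedAuthnResponse = true,
    Binding = Saml2BindingType.HttpPost,
    LoadMetadata = false,
    SingleSignOnServiceUrl = new Uri(services.IdentityProviderConfiguration.SingleSignOnUri)
};
// Without metadata, the IdP signing certificate must be supplied manually.
idp.SigningKeys.AddConfiguredKey(new X509Certificate2(@"C:\certs\idp-signing.cer")); // placeholder path
options.IdentityProviders.Add(idp);
```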

Connect to Kafka on Unix from Windows with Kerberos

I'm quite new to Kafka, so please bear with me. Here is my setup.
I have Kafka hosted on a Unix box, clustered, in a domain, say B.
The client is on Windows, and I am trying to connect to the Kafka cluster in domain B from domain A.
I have the keytab and krb5 files, and both are set up in the environment.
krb5.ini (set in the environment variable KRB5_CONFIG):
[logging]
    default = CONSOLE
    admin_server = CONSOLE
    kdc = CONSOLE

[libdefaults]
    renew_lifetime = 7d
    clockskew = 324000
    forwardable = true
    proxiable = true
    renewable = true
    default_realm = some.something.COM
    dns_lookup_realm = true
    dns_lookup_kdc = false
    default_tgs_enctypes = somethingelse
    default_tkt_enctypes = somethingelse

[appdefaults]
    renewable = true

[realms]
    some.something.COM = {
        kdc = some.something.COM
        admin_server = some.something.COM
    }
I have also set up the JAAS config (kafka.client.ini in my case, referenced by the environment variable KAFKA_CLIENT_KERBEROS_PARAMS). Below is the config:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="sample.keytab"
    storeKey=true
    useTicketCache=true
    serviceName="kafka"
    principal="svcacc@some.something.COM";
};
I downloaded apache kafka_2.12-0.10.2.1.tgz and am executing this command:
kafka-console-producer.bat --broker-list <broker list> --topic <mytopic> --security-protocol SASL_PLAINTEXT
No matter what I change, I keep getting the error below:
"security-protocol is not a recognised option"
Can someone please help me with this?
I also added the properties below to producer.properties, but nothing seems to change, and I'm not sure what I'm missing:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
I even tried setting this property in kafka-console-producer.bat, but with no luck:
set KAFKA_CLIENT_KERBEROS_PARAMS=-Djava.security.auth.login.config=..\..\config\kafka_Connection.ini
Looking forward to your inputs. Many thanks! (I have no control over the Kafka server as of now, nor will I be able to explain why it's hosted in domain B.)
Disclaimer: I'm not too familiar with Kafka, and that error message does not clearly hint at a Kerberos problem.
But given that this is a cross-realm situation, you will probably hit a Kerberos snag sooner or later...
From the MIT Kerberos documentation on the krb5.conf section [capaths]:
In order to perform direct (non-hierarchical) cross-realm
authentication, configuration is needed to determine the
authentication paths between realms.
A client will use this section to find the authentication path between
its realm and the realm of the server.
In other words, you get a Kerberos TGT (ticket-granting ticket) for a principal like wtf@USERS.CORP.DMN, but you need a Kerberos service ticket for kafka/brokerhost.some.where@SERVERS.CORP.DMN. Each realm has its own KDC servers, so your Kerberos client (the Java implementation in this case) must have a way to "hop" from one realm to the other.
Scenario 1 >> both realms are "brother" AD domains with mutual trust, and they use the default hierarchical relationship -- meaning that there is a "father" AD domain named CORP.DMN that is in the path from USERS to SERVERS.
Your krb5.conf should look like this...
[libdefaults]
    default_realm = USERS.CORP.DMN
    kdc_timeout = 3000
    ...
...
[realms]
    USERS.CORP.DMN = {
        kdc = roundrobin.siteA.users.corp.dmn
        kdc = roundrobin.bcp.users.corp.dmn
    }
    SERVERS.CORP.DMN = {
        kdc = dc1.servers.corp.dmn
        kdc = dc2.servers.corp.dmn
        kdc = roundrobin.bcp.servers.corp.dmn
    }
    CORP.DMN = {
        kdc = roundrobin.corp.dmn
        kdc = roundrobin.bcp.corp.dmn
    }
...assuming you have multiple AD Domain Controllers in each domain, sometimes behind DNS aliases doing round-robin assignment, plus another set of DCs on a separate site for BCP/DRP. It could be simpler than that :-)
Scenario 2 >> there is trust enabled but the relationship does not use the default, hierarchical path.
In that case you must define explicitly that "path" in a [capaths] section, as explained in the Kerberos documentation.
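As a rough sketch only, a [capaths] section for the example realms above might look like the following; the intermediate realm shown is an assumption, and your AD administrators would know the real trust path:

```
[capaths]
    USERS.CORP.DMN = {
        SERVERS.CORP.DMN = CORP.DMN
    }
    SERVERS.CORP.DMN = {
        USERS.CORP.DMN = CORP.DMN
    }
```

Each entry reads as: to reach the realm on the left side of the inner assignment from the outer realm, go through the realm(s) on the right.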
Scenario 3 >> there is no trust between the realms. You are screwed.
Or rather, you must obtain a different user that can authenticate in the same domain as the Kafka broker, e.g. xyz@SERVERS.CORP.DMN.
And maybe use a specific krb5.conf that states default_realm = SERVERS.CORP.DMN (I've seen weird behaviors from some JDK versions on Windows, for example).
Bottom line: you should request assistance from your AD administrators. Maybe they are not familiar with raw Kerberos conf, but they will know about the trust and about the "paths"; at that point it's just a matter of following the proper krb5.conf syntax.
Or maybe that conf has already been done by the Linux administrators, in which case you should ask for a copy of their standard krb5.conf to check whether there is cross-domain stuff in there.
And of course you should enable Kerberos debug traces in your Kafka producer:
-Dsun.security.krb5.debug=true
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
Just for the record, though not useful here: when using Kerberos over HTTP (SPNEGO) there's an additional flag, -Dsun.security.spnego.debug=true
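Incidentally, the "security-protocol is not a recognised option" error itself is a plain CLI issue, not Kerberos: as far as I can tell, the 0.10.x console producer does not accept a --security-protocol flag at all. The security settings belong in a properties file passed via --producer.properties, something like the following (broker list, topic, and file path are placeholders):

```
kafka-console-producer.bat --broker-list broker1:9092 --topic mytopic --producer.properties ..\..\config\producer.properties
```

where producer.properties contains the security.protocol and sasl.kerberos.service.name entries already shown in the question.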

Bearer was not authenticated: Signature validation failed

I am using IdentityServer 4 to protect my APIs (Implicit Flow), which are accessed by an Angular application. Everything works fine; however, at a certain point the access token suddenly becomes invalid, even before its expiry.
Configuration:
Here is the Identity Server Startup file:
var identityBuilder = services.AddIdentityServer().AddInMemoryStores().SetTemporarySigningCredential();
identityBuilder.AddInMemoryScopes(identitySrvConfig.GetScopes());
identityBuilder.AddInMemoryClients(identitySrvConfig.GetClients());
Protecting the APIs:
app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = identityOptions.Authority,
    ScopeName = "userProfile_api",
    RequireHttpsMetadata = false
});
Investigation:
The issue was that the bearer token was not authenticated:
Bearer was not authenticated. Failure message: IDX10501: Signature validation failed. Unable to match 'kid': 'e4f3534e5afd70ba74c245fe2e39c724', token
After some investigation, it appears that identity server is generating a new key which was causing the signature validation to fail.
In the log, I can see the two warning events at the end happening, and then I see "Repository contains no viable default key" and "a new key should be added to the ring".
Questions
Why would there be no key at any time when the key lifetime is almost 3 months, even though I am using temporary signing (SetTemporarySigningCredential) and I am not restarting the server?
Creating key {a2fffa4a-345b-4f3b-bae7-454d567a1aee} with creation date 2017-03-03 19:15:28Z, activation date 2017-03-03 19:15:28Z, and expiration date 2017-06-01 19:15:28Z.
How can I solve this issue?
Creating a self-signed certificate and removing the temporary signing credential on the identity server fixed the issue.
var signingCertificate = new X509Certificate2("ReplaceByCertificatePath", "ReplaceByPasswordCertificate");
var identityBuilder = services.AddIdentityServer().AddInMemoryStores().SetSigningCredential(signingCertificate);
identityBuilder.AddInMemoryScopes(IdentitySrvConfig.GetScopes());
identityBuilder.AddInMemoryClients(IdentitySrvConfig.GetClients());
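If you don't already have a signing certificate, one way to produce a self-signed one is with OpenSSL, packaged as a PFX that the X509Certificate2 constructor above can load. The file names, subject, and password here are placeholders to replace with your own values:

```shell
# Create a self-signed certificate and private key (valid 1 year).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -subj "/CN=idsrv-signing" -keyout signing.key -out signing.crt

# Bundle key and certificate into a password-protected PFX/PKCS#12 file.
openssl pkcs12 -export -out signing.pfx -inkey signing.key -in signing.crt \
    -passout pass:ReplaceByPasswordCertificate
```

The resulting signing.pfx path and password are what you pass to new X509Certificate2(...).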

Where do you set the identityserver3 endpoint urls?

Are the URLs for the endpoints in IdentityServer3 configurable?
How come, in the MVC example, the Authority is set to:
https://localhost:44319/identity
while the standalone webhost (minimal) sample has the authorization endpoint set to:
https://localhost:44333/connect/authorization
Has something been configured somewhere so that /identity will work?
Or is .../identity not the IdSrv3 endpoint at all, but rather only the API call, i.e.
https://localhost:44321/identity
which is what is called in the CallApiController... (I would change this example entirely to use different names, so that there's a clear difference between what is part of the app (Foo and Bar) and what is part of IdSrv3 (auth, claims, tokens and scopes) -- sigh.)
In any case:
When the standalone minimal webhost IdSrv3 is down, I get:
No connection could be made because the target machine actively refused it ... I wasn't sure what I was doing wrong, but I was sure I was doing something wrong. (I forgot to run IdSrv3.)
When it's up, on both paths (/identity and /connect/authorization) I get 404 Not Found.
And if I just give the root with a trailing slash, I get: Error, The client application is unknown or is not authorized, instead of being shown the login page...
So it seems the trailing-slash root is the correct way to go, which leaves me with my first question: how/why is the Authority in the MVC demo set to include the path /identity?
The IdentityServer URL is configured in the Startup.cs file.
In the MVC app, IdS is mapped under 'webroot'/identity. In the console app, IdS runs at the root of the selfhost, 'webroot'/:
app.Map("/identity", idsrvApp =>
{
    idsrvApp.UseIdentityServer(new IdentityServerOptions
    {
        SiteName = "Embedded IdentityServer",
        SigningCertificate = LoadCertificate(),
        Factory = new IdentityServerServiceFactory()
            .UseInMemoryUsers(Users.Get())
            .UseInMemoryClients(Clients.Get())
            .UseInMemoryScopes(Scopes.Get()),
        AuthenticationOptions = new IdentityServer3.Core.Configuration.AuthenticationOptions
        {
            EnablePostSignOutAutoRedirect = true,
            IdentityProviders = ConfigureIdentityProviders
        }
    });
});
The other URLs you mentioned can all be resolved via the discovery document: http://'webroot'/.well-known/openid-configuration
or, in the case of the MVC app: http://'webroot'/identity/.well-known/openid-configuration
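The pattern is easy to see if you build the discovery URL yourself: whatever path the server is mapped under (the selfhost root or /identity), the well-known document hangs directly off the authority. A tiny illustrative helper, not part of IdentityServer, just to show the rule:

```python
def discovery_url(authority: str) -> str:
    """Build the OpenID Connect discovery URL for a given authority."""
    # The discovery document lives at /.well-known/openid-configuration
    # directly under the authority, wherever the server is mapped.
    return authority.rstrip("/") + "/.well-known/openid-configuration"

print(discovery_url("https://localhost:44319/identity"))
print(discovery_url("https://localhost:44333/"))
```

Whatever the Authority is set to in the client is exactly what the discovery path is appended to, which is why the MVC sample's Authority must include /identity.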

How do I code Citrix web sites to use a Secure Gateway (CSG)?

I'm using Citrix's sample code as a base and trying to get it to generate ICA files that direct the client to use their Secure Gateway (CSG) provider. My configuration is that the ICA file's server address is replaced with a CSG ticket and traffic is forced to go to the CSG.
The challenge is that both the Citrix App Server (that's providing the ICA session on 1494) and the CSG have to coordinate through a Secure Ticket Authority (STA). That means that my code needs to talk to the STA as it creates the ICA file because STA holds a ticket that the CSG needs embedded into the ICA file. Confusing? Sure! But it's much more secure.
The pre-CSG code looks like this:
AppLaunchInfo launchInfo = (AppLaunchInfo)userContext.launchApp(appID, new AppLaunchParams(ClientType.ICA_30));
ICAFile icaFile = userContext.convertToICAFile(launchInfo, null, null);
I tried adding the SSL information to the ICA generation, but it was not enough. Here's that code:
launchInfo.setSSLEnabled(true);
launchInfo.setSSLAddress(new ServiceAddress("CSG URL", 443));
Now, it looks like I need to register the STA when I configure my farm:
ConnectionRoutingPolicy policy = config.getDMZRoutingPolicy();
policy.getRules().clear();
// Set the Secure Ticketing Authorities (STAs).
STAGroup STAgr = new STAGroup();
STAgr.addSTAURL(@"http://CitrixAppServerURL/scripts/ctxsta.dll");
// Create the Secure Gateway connection.
SGConnectionRoute SGRoute = new SGConnectionRoute(@"https://CSGURL");
SGRoute.setUseSessionReliability(false);
SGRoute.setGatewayPort(80);
SGRoute.setTicketAuthorities(STAgr);
// Add the SGRoute to the policy.
policy.setDefault(SGRoute);
This is based on code I found on the Citrix Forums; however, it breaks my ability to connect with the Farm and get my application list!
Can someone point me to an example of code that works? Or a reference document?
The code in the question is basically right, but I was trying too hard to inject configuration into the launching ICA generator.
Note: Using the WebInterface.conf file for guidance is a good way to determine the right config settings. Even if the code is right, the configuration is very touchy!
Most of the Citrix Secure Gateway (CSG) / Secure Ticket Authority (STA) magic happens when the policy for the initial connection to the farm is established. Specifically, in Global.asax.cs, you must have the following blocks of code:
1) you must have a valid STAGroup:
// Set the Secure Ticketing Authorities (STAs).
STAGroup STAgr = new STAGroup();
STAgr.addSTAURL(@"http://[STA URL]/scripts/ctxsta.dll");
2) then you must create a CSG connection (with the STA mapped):
// Create the Secure Gateway connection.
SGConnectionRoute SGRoute = new SGConnectionRoute(@"[CSG FQDN without HTTPS]");
SGRoute.setUseSessionReliability(false);
SGRoute.setGatewayPort(443);
SGRoute.setTicketAuthorities(STAgr);
3) you need to set the policy default
// Create a DMZ routing policy
ConnectionRoutingPolicy policy = config.getDMZRoutingPolicy();
policy.getRules().clear();
policy.setDefault(SGRoute);
4) you need to tell the launchInfo that you want to be CGP enabled:
launchInfo.setCGPEnabled(true);
WARNING: The SSL-enabled settings were a red herring.
There's another way to do this that is cleaner and more configurable. The code can be setup to use the webinterface.conf file that the default Citrix Web Interface uses.
The following code should replace all of the farmConfig, STAGroup, and ConnectionRoutingPolicy mess in the sample above.
InputStream inputStream = new FileInputStream(@"C:\temp\WebInterface.conf");
CtxConfig configInput = new CtxConfig(inputStream);
Map settingsMap = configInput.getSettingsMap();
WIConfiguration wiConfiguration = ConfigurationParser.buildWIConfiguration(settingsMap);
com.citrix.wing.config.Configuration config = new com.citrix.wing.config.Configuration();
config.setGlobalConfig(wiConfiguration.getGlobalConfig());
config.setMPSFarmConfigs(wiConfiguration.getMPSFarmConfigs());
config.setDMZRoutingPolicy(wiConfiguration.getDMZRoutingPolicy());
config.setClientProxyPolicy(wiConfiguration.getClientProxyPolicy());
// Create a StaticEnvironmentAdaptor instance.
WIASPNetStaticAdaptor staticEnvAdaptor = new WIASPNetStaticAdaptor(this);
// Create a WebPNBuilder instance.
WebPNBuilder builder = WebPNBuilder.getInstance();
Application["WebPNBuilder"] = builder;
// Create a WebPN instance from the configuration.
WebPN webPN = builder.createWebPN(config, staticEnvAdaptor);
Application["WebPN"] = webPN;
Another note on this problem, from using the JICA client with an internal certificate (non-trusted root):
The JICA client does not let you accept a certificate from a non-trusted root, so the certificate has to be added to the Java CA store. Adding it to the Windows store does no good!
Get your dev root CA certificate, then navigate to the bin directory of the latest Java install (typically under c:\program files\java\jre***).
Execute the following command:
keytool -import -trustcacerts -keystore "..\lib\security\cacerts" -file "c:\temp\root.cer" -alias myroot
I'll let you Google for the password, because you're supposed to changeit [sic].
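To confirm the import took, you can list the alias back out of the same store; this assumes the default store password just alluded to:

```
keytool -list -keystore "..\lib\security\cacerts" -alias myroot -storepass changeit
```

If the import succeeded, the output shows the alias with a trustedCertEntry type and the certificate's fingerprint.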