Good morning,
we have a use case that involves two different Windows environments, in which the applications should authenticate against a Keycloak server.
The first environment contains the authoritative AD, which is replicated once a day into the second environment, since the second environment must not query the first environment's AD directly (a firewall policy).
Now our thought is to use Keycloak as a “bridge” between the two environments, with a Keycloak server in the first environment reading the AD via user federation.
In the second environment, a Keycloak server would somehow replicate from the first one, ideally via HTTPS (which can pass the firewall), so that the applications and users in the second environment can authenticate with the same credentials as in the first.
Is there a way to achieve this or something similar?
thx in advance
Frank
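One way to approximate the “bridge” idea without replicating any data would be identity brokering: the Keycloak in the second environment is configured with the first Keycloak as an OIDC identity provider, so logins are delegated across the firewall over HTTPS at authentication time. A rough sketch with the Keycloak admin CLI, where the realm names, alias, client ID/secret, and URLs are all placeholders:
$ kcadm.sh config credentials --server https://keycloak-env2.example.com/auth --realm master --user admin
$ kcadm.sh create identity-provider/instances -r apps \
    -s alias=env1 -s providerId=oidc -s enabled=true \
    -s 'config.clientId=env2-broker' -s 'config.clientSecret=<secret>' \
    -s 'config.authorizationUrl=https://keycloak-env1.example.com/auth/realms/corp/protocol/openid-connect/auth' \
    -s 'config.tokenUrl=https://keycloak-env1.example.com/auth/realms/corp/protocol/openid-connect/token'
Note this is delegation rather than replication, so the HTTPS path between the two environments has to be reachable at login time.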
We have a bunch of Windows server applications that currently handle secrets as follows; our apps are in C#.
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing HashiCorp Vault. It seems easy enough to replace the encrypt-store-decrypt flow with storing the secret in Vault's KV engine and just grabbing it in our apps; that takes the certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
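For reference, the KV part itself is just a write and a read; a minimal sketch with the Vault CLI, where the mount, path, and field names are placeholders:
$ vault kv put secret/myapp/db connection_string='Server=db01;Password=example'
$ vault kv get -field=connection_string secret/myapp/db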
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities", there's no Nomad, Chef, Terraform, etc. We have Windows machines, in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
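For context, the AppRole flow that the trusted-orchestrator pattern relies on looks roughly like this; a minimal sketch with the Vault CLI, where the role and policy names are placeholders, and the assumption is that the homegrown orchestrator would fetch the SecretID and hand it to whichever machine runs the app:
$ vault auth enable approle
$ vault write auth/approle/role/my-app token_policies="my-app-secrets" secret_id_ttl=60m token_ttl=20m
$ vault read auth/approle/role/my-app/role-id
$ vault write -f auth/approle/role/my-app/secret-id
$ vault write auth/approle/login role_id="<role-id>" secret_id="<secret-id>"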
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain account they run under, because then we can just use kerb authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially, you create one "trusted platform" (trusted by your key vault service).
Your service can still have its own identity but delegate to the gMSA when it needs to retrieve the secrets.
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs; each corresponds to a security policy/profile, so any machine that holds that certificate can authenticate and retrieve the secrets it should have access to.
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into the HCP, of course
Azure auth won't work: it requires Managed Identities, which you can't assign to on-prem machines. Otherwise this might have been a great fit.
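For anyone following the same TLS certificate route, the server-side setup is short; a minimal sketch with the Vault CLI, where the role name, CA file, policy, and client cert/key paths are placeholders:
$ vault auth enable cert
$ vault write auth/cert/certs/profile-a certificate=@profile-a-ca.pem policies="profile-a-secrets" ttl=3600
$ vault login -method=cert -client-cert=machine.pem -client-key=machine.key name=profile-a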
My requirement is that I have an application authenticated via Keycloak. If my current Keycloak instance fails, I need a second instance running in parallel so there is no downtime.
Can someone provide a reference on how to create two or more instances, such that if one fails the other can continue providing authentication?
Thanks,
Radhakrishnan
A clustered installation of Keycloak would allow for high availability and tolerate the failure of a single instance. The Keycloak and Wildfly documentation referenced below explain the particulars:
https://www.keycloak.org/docs/latest/server_installation/#_clustering
http://docs.wildfly.org/18/High_Availability_Guide.html#JGroups_Subsystem
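As a rough illustration for the WildFly-based distribution those docs cover, standalone clustered mode boils down to pointing every node at the same shared database and starting each node with the HA profile; the node names and port offset below are just example values, and a load balancer in front of the nodes is assumed:
$ bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1
$ bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100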
What could it be for, and what logic could it contain?
I suppose there should be a connection to Keycloak, maybe checking that it can create roles and users. Am I right or wrong?
It may mean that you need to deploy Keycloak in its own container, separate from the apps.
How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
We tried to register a PCF Service broker (cf create-service-broker ...) in one space, then use it as a 'service instance' (cf create-service ...) in another space.
To illustrate the problem, consider the following work flow, from a HashiCorp Vault guide:
$ cf create-space examplespace
$ cf target -s examplespace
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}" --space-scoped
$ cf marketplace
service            plans    description
hashicorp-vault    shared   HashiCorp Vault Service Broker
# ...
$ cf create-service hashicorp-vault shared my-vault
The above works fine. The problem comes up when an app in a different space needs to consume the HashiCorp Vault API:
$ cf target -s myappspace
$ cf bind-service my-app my-vault
This last part fails.
Also, now that I'm in the space myappspace, cf marketplace does not show the new service broker.
Now, we have someone on our team with org-admin permissions.
I figured that we could just register the new service broker at the org level, using the enable-service-access subcommand:
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
$ cf enable-service-access my-vault -o WebOrg
This failed as well: even though he had admin permissions for the entire org, he got a permission-denied error.
If we then go on to register the service broker again in the second space, myappspace, that fails too.
All three of these methods failed, but there has to be some way to make a service from one space available to the others within an Org, if I have administrative permissions for that PCF Org.
How?
A similar (although more specific) type of this issue is documented in the following two github issues for PCF's cloud_controller_ng repository:
https://github.com/cloudfoundry/cloud_controller_ng/issues/935
https://github.com/cloudfoundry/cloud_controller_ng/issues/837
I've done the following research:
https://docs.cloudfoundry.org/services/managing-service-brokers.html#register-broker
https://docs.cloudfoundry.org/services/access-control.html
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
https://starkandwayne.com/blog/register-your-own-service-broker-with-any-cloud-foundry/
(We ran variations of every command on this page.)
The most similar of the existing questions on Stack Overflow were these:
WebSphere Message Broker - how to send a PCF message
Need help on Registering App on PCF with Spring Cloud Data Flow which is also on PCF
They don't seem to have much to do with name spacing issues in the PCF marketplace, or with PCF permissions management.
Note: At first I wanted to post this to serverfault.com, because this has more to do with the infrastructure for an application, rather than just programming. But, while serverfault.com has no tag for Pivotal Cloud Foundry, Stack Overflow has a pivotal-cloud-foundry tag with 588 uses, already.
How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
I don't think you can do this. You'd need to be a platform admin/operator. Then you'd need to register the service broker with the platform and mark that broker as accessible to select orgs and spaces. You could then create service instances and, if the broker permits, share them across spaces.
If you only have org/space permissions, you can only register the service broker with a specific space. It's then only visible in that space.
Without platform admin/operator permissions, I think the best you could do would be this:
register the broker in a specific space
create a service instance in that space
bind that to your apps in this space
create a service key for your app in the second space
switch to the second space
create a user provided service in that space and enter the service key info
Repeat steps 4-6 for each app in the second space (this ensures you get unique credentials per app; you could use one service key for all apps if you don't care about that). The commands for steps 4-6 are sketched below.
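A minimal sketch of steps 4-6 with the cf CLI; the key name, user-provided service name, and the credential fields passed to -p are placeholders that depend on what the broker actually returns in the service key:
$ cf create-service-key my-vault my-app-key
$ cf service-key my-vault my-app-key
$ cf target -s myappspace
$ cf create-user-provided-service my-vault-ups -p '{"address":"https://vault.example.com","token":"<from-service-key>"}'
$ cf bind-service my-app my-vault-ups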
Happy to be corrected, but I think that is the state of things as I write this.
Assuming you are using PCF 2.1 or above.
Service brokers must explicitly enable service instance sharing by setting a flag in their service-level metadata object. This allows service instances, of any service plan, to be shared across orgs and spaces.
This is from Enabling Service Instance Sharing
Looks like you have already followed the rest of the steps from Sharing Service Instances.
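Assuming the platform and the broker both allow sharing, the sharing step itself would look something like this (space names taken from the question; my-app is a placeholder):
$ cf target -s examplespace
$ cf share-service my-vault -s myappspace
$ cf target -s myappspace
$ cf bind-service my-app my-vault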
We have implemented Single Sign-On (SSO) using Kerberos in our production environment.
The configuration of our application is as below.
Operating System: Solaris 10
Application Server: WebSphere 7.0.0.11
Things are working fine for the parent domain (MAIL.COM), but users from the child domains (like CO.MAIL.COM, BO.MAIL.COM, ...) are unable to log in to the application.
We have the Kerberos configuration file with the child domain details as well. My question is: what changes need to be made in the WAS console (realm related, domain related, etc.)?
Thank you very much in advance..!!!
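For comparison, a krb5.conf that covers the parent realm plus one child realm typically needs entries along these lines (KDC hostnames are placeholders); whether the WAS console also needs the child realms added to its trusted-realm settings depends on your security configuration:
[libdefaults]
    default_realm = MAIL.COM

[realms]
    MAIL.COM = {
        kdc = kdc.mail.com
    }
    CO.MAIL.COM = {
        kdc = kdc.co.mail.com
    }

[domain_realm]
    .mail.com = MAIL.COM
    .co.mail.com = CO.MAIL.COM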