We have multiple environments, and identity providers and clients are currently inserted by manual human input when promoting changes up through the environments.
Is there a way to isolate the export/import of an identity provider or client? The manual input has introduced errors when migrating identity providers and clients up through the environments.
Thank you.
> Is there a way to isolate export/import of an identity provider or client?
I have faced the same issue; to solve it, I created a set of bash scripts based on the Admin REST API. For instance:
Get the clients: GET /{realm}/clients
Create the clients: POST /{realm}/clients
First, I call the GET endpoint and export its response (i.e., the clients) into a .json file that I later use as the payload for the POST endpoint.
The same logic applies to the identity providers. It is a bit cumbersome in the beginning to create those scripts, some of which I have already uploaded to my repo (I plan to upload a bunch more), but once they are working the process gets smoother.
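The same GET-then-POST approach can be sketched in a few lines of Python instead of bash. This is only a minimal sketch: the server URL, realm names, and admin credentials are placeholders, and newer Keycloak versions drop the /auth prefix from the path.

```python
# Minimal sketch: export clients from one realm and import them into another
# via the Keycloak Admin REST API. URL, realms and credentials are placeholders.
import json
import requests

KEYCLOAK_URL = "https://keycloak.example.com/auth"   # newer versions omit /auth
SOURCE_REALM = "dev"
TARGET_REALM = "staging"

def admin_token(username, password):
    # Obtain an admin access token using the built-in admin-cli client.
    resp = requests.post(
        f"{KEYCLOAK_URL}/realms/master/protocol/openid-connect/token",
        data={"grant_type": "password", "client_id": "admin-cli",
              "username": username, "password": password})
    resp.raise_for_status()
    return resp.json()["access_token"]

def export_clients(token):
    # GET /{realm}/clients
    resp = requests.get(f"{KEYCLOAK_URL}/admin/realms/{SOURCE_REALM}/clients",
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

def import_clients(token, clients):
    # POST /{realm}/clients, one client representation at a time
    for client in clients:
        client.pop("id", None)   # let the target server assign new internal ids
        requests.post(f"{KEYCLOAK_URL}/admin/realms/{TARGET_REALM}/clients",
                      headers={"Authorization": f"Bearer {token}"},
                      json=client).raise_for_status()

if __name__ == "__main__":
    token = admin_token("admin", "admin-password")
    clients = export_clients(token)
    json.dump(clients, open("clients.json", "w"), indent=2)   # keep the export
    import_clients(token, clients)
```

The identity providers can be exported and re-created the same way through the /admin/realms/{realm}/identity-provider/instances endpoints.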
You can apply the same logic but use the Keycloak Java API instead of bash scripts. The other option is to use Keycloak's realm export feature: export the realm, strip from the .json file all the content that you do not need, and use the remaining content afterwards with the realm import feature.
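If you go the realm export route, the trimming step can be as simple as loading the exported JSON and keeping only the sections you care about. The file name and the exact keys kept below are assumptions; adjust them to what you actually need.

```python
# Sketch: strip a full realm export down to just clients and identity providers
# before re-importing it elsewhere. "realm-export.json" is a placeholder name.
import json

with open("realm-export.json") as f:
    realm = json.load(f)

KEEP = {"realm", "clients", "identityProviders"}   # keys assumed to be of interest
trimmed = {k: v for k, v in realm.items() if k in KEEP}

with open("realm-trimmed.json", "w") as f:
    json.dump(trimmed, f, indent=2)
```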
I couldn't find information on how other people solve this, so maybe you can help me out.
What I have
Multiple services with REST APIs that are secured using OpenID Connect. Connections between the services work fine.
Now I have multiple developers who sometimes need to write and execute local scripts (Python, R, Bash, etc.) for quick analysis and testing.
What I want
I want to enable the developers to use the services as easily as possible, while still respecting security concerns.
What I tried
I defined the script itself as a client. Therefore I created a public client in my OIDC product, named something like 'developer-scripts'. Using a library which handles the OAuth dance, I can then execute the script connecting as that client. The first time, a browser window pops up and asks the user to authenticate, thereby authorizing the client to use the REST API on behalf of the user. After that, the tokens are cached and I can easily continue working on that script.
This simplified drawing tries to summarize what I just described.
That works perfectly fine, and regarding security I'm glad that credentials are no longer saved on the local computers, as they were before with e.g. Basic Authentication. Furthermore, I'm able to control access to the different services at a user level.
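For reference, the flow described above can be sketched with requests-oauthlib. The client id, endpoint URLs, and cache file are placeholders, and a real script would want PKCE and a proper local redirect listener instead of pasting the redirect URL by hand.

```python
# Rough sketch of the "script as a public client" flow described above.
# All URLs, the client id and the cache path are placeholders.
import json, os, webbrowser
from requests_oauthlib import OAuth2Session

os.environ.setdefault("OAUTHLIB_INSECURE_TRANSPORT", "1")  # only because the
                                                           # example redirect is plain http
CLIENT_ID = "developer-scripts"                    # public client, no secret
AUTH_URL  = "https://idp.example.com/auth"
TOKEN_URL = "https://idp.example.com/token"
REDIRECT  = "http://localhost:8123/callback"
CACHE     = os.path.expanduser("~/.developer-scripts-token.json")

def get_session():
    # Reuse cached tokens if we already went through the browser dance once.
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            return OAuth2Session(CLIENT_ID, token=json.load(f))
    oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT, scope=["openid"])
    url, _state = oauth.authorization_url(AUTH_URL)
    webbrowser.open(url)                                   # user authenticates here
    redirect_response = input("Paste the full redirect URL: ")
    token = oauth.fetch_token(TOKEN_URL,
                              authorization_response=redirect_response,
                              include_client_id=True)      # public client: no secret
    with open(CACHE, "w") as f:
        json.dump(token, f)
    return oauth

api = get_session()
print(api.get("https://service.example.com/api/some-resource").json())
```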
Other ideas, which didn't convince me:
every web service also has a public client which can then be used by the scripts (so the scripts aren't defined as clients anymore)
token generation is done somewhere else and the developer just adds the generated access/refresh token to the script
My problem
What concerns me about my current solution is the definition of that client. In the described case it would be either a generic client used by all developers for all scripts, or a new client for every developer who wants to write a local script. The latter seems like a lot of overhead; the former may be a security problem?
So finally I'm asking the question: Are there any known best practices for my described use case?
EDIT:
I found a small article by [Martin Fowler](https://martinfowler.com/articles/command-line-google.html) in which he explains how he obtains a token to use in a local script. But in his case he's using it for one specific use case, not as a general public client, so unfortunately it doesn't really answer my question.
I've been trying to implement a way to download all the changes made by a particular user in Salesforce using a PowerShell script and create a package from them. The changes could be anything, added or modified: Apex classes, profiles, Accounts, etc., based on the modifying user, component ID, timestamp, and so on. Below is the URL that documents the API, but it does not explain how to do this from a script.
https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_listmetadata.htm
Does anyone know how I can implement this?
Regards,
Kramer
Salesforce orgs other than scratch orgs do not currently provide source tracking, the feature that makes it possible to pinpoint user changes in metadata and extract only those changes. Source tracking is consumed by an SFDX/Metadata API client, such as Salesforce DX or CumulusCI (disclaimer: I'm on the CumulusCI team).
I would not try to implement a Metadata API client in PowerShell; instead, harness one of the existing tools to do so.
Salesforce orgs other than scratch orgs don't provide source tracking at present. To identify user changes, you can either
Attempt to extract all metadata and diff it against your version control, which is considerably harder than it sounds and is implemented by a variety of commercial DevOps tools for Salesforce (Gearset, Copado, etc.).
Have the user manually add components to a Change Set or Unmanaged Package, and use a Metadata API client as above to retrieve the contents of that package. (Little-known fact: a Change Set can be retrieved as a package!)
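If you go with the second option, the retrieval itself can be scripted around the Salesforce CLI rather than hand-rolling Metadata API calls. The sketch below drives sfdx from Python (it ports directly to PowerShell) and assumes the CLI is installed, an org alias of my-org is authenticated, and a change set named "My Change Set" exists; all of those are placeholders.

```python
# Sketch: retrieve the contents of a change set as a package using the
# (legacy) force:mdapi:retrieve command of the Salesforce CLI.
# Org alias, change set name, and output directory are placeholders.
import subprocess

subprocess.run(
    ["sfdx", "force:mdapi:retrieve",
     "--retrievetargetdir", "./retrieved",
     "--packagenames", "My Change Set",
     "--targetusername", "my-org",
     "--wait", "10"],
    check=True)
# The result lands in ./retrieved as a zip containing a normal metadata package.
```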
To emphasize: DevOps on Salesforce does not work like other platforms. Working with the Metadata API requires a fair amount of time investment and specialization. Harness the existing work of the Salesforce community where you can, but be aware that the task you are laying out may be rather more involved than you think, and it's not necessarily something you can just throw together from off-the-shelf components.
Hey guys, I wonder if anyone can help with this.
Now I am facing a problem at my company. We are developing a Magento 2 Community multistore for our customers.
The idea is to have several stores in the same Magento 2 installation, where each store belongs to an independent company. The problem is the integration with our ERP system. With the REST API we have full control over the installation, even without the master admin credential. If we run a request like this in Postman: https://magentostore.com/rest/V1/orders?searchCriteria
we get all the orders in the installation, across all stores. So the companies, with their own credentials, would have the same control, which is a serious security problem: the stores would have access to each other's data.
We have tried extensions for advanced permissions like Aitoc and Amasty, but they only work at the frontend level and have no effect on the REST API. We know that Magento was not made for this kind of thing, so my question is:
Is it possible to change the REST API to filter the queries by store? And where can I find these REST API queries?
Thank you so much.
You can override API calls using the webapi.xml file in your module; just point it at your own service interface and change the ACL if you want. In your service interface, inject the original one and add your filter before calling the original.
The second approach is to write a plugin on OrderRepositoryInterface and add the filter there (but the first solution is better, because that service is used not only in the API, so you may not want to restrict all calls).
I have a Newmips app (in the cloud) and I want to interact with my data. The only interaction I found in Newmips was a data export to CSV or Excel (for imports, I think I have to go through the studio).
I need to display my data with the tools I developed on my website (PHP).
Is there a CRUD (or at least a read) API, following a REST standard or not? Or is it possible to get the database connection details that would give me access to my data tables?
Newmips natively exposes REST API services for all the entities it manages.
To use them, you must define a client account (with a role/group) in the API credentials menu of the authentication module (use the drop-down list on the left of the editor to access it).
Documentation of your application's API is auto-generated and can also be accessed in the authentication module.
Note: there are no other tools available yet (apart from the export features) to retrieve data.
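As a rough illustration only of the calling pattern from an external tool (shown in Python, though the questioner's site is PHP): the authentication route, entity paths, and field names below are all assumptions, and the real ones are listed in your application's auto-generated API documentation mentioned above.

```python
# Placeholder sketch of calling the generated REST API. Every route, URL and
# field name is an assumption; check the app's auto-generated API documentation.
import requests

BASE = "https://my-app.newmips.example.com"

# 1. Authenticate with the API client account created in the authentication module.
auth = requests.post(f"{BASE}/api/getToken",           # hypothetical route
                     json={"client_key": "MY_KEY", "client_secret": "MY_SECRET"})
auth.raise_for_status()
token = auth.json()["token"]                            # field name assumed

# 2. Read an entity (a simple "read" of the CRUD the question asks about).
rows = requests.get(f"{BASE}/api/my_entity",            # hypothetical route
                    headers={"Authorization": f"Bearer {token}"})
rows.raise_for_status()
print(rows.json())
```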
I'm building a web application that will have access to PeopleSoft's database via jdbc.
Is it possible to use PeopleSoft's id/password for my custom application, so users accessing my website will not need another username/password?
PeopleSoft stores user details in the table PSOPRDEFN.
You will be able to verify the username against: PSOPRDEFN.OPRID.
The password field is: OPERPSWD.
Unfortunately, the hashing function used for this field, hash(), is available only from within PeopleCode.
If you want single sign-on, you should be able to achieve it by customizing the USERMAINT.gbl component, perhaps in the SavePreChange PeopleCode, to save the password in a second field of your choice using a hashing algorithm that you can also implement from JDBC.
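As a rough illustration of that second-field approach: the extra table/column names and the SHA-256 choice below are assumptions, and the question's web app would do the same over JDBC in Java; Python is used here only for the sketch.

```python
# Sketch: verify a login against a hypothetical extra column (MY_PSWD_SHA256)
# populated by the USERMAINT SavePreChange customization described above.
# Table/column names other than OPRID are assumptions.
import hashlib
import sqlite3   # stand-in for whatever database driver/connection you actually use

def verify_login(conn, oprid: str, password: str) -> bool:
    cur = conn.execute(
        "SELECT MY_PSWD_SHA256 FROM PS_MY_OPR_EXT WHERE OPRID = ?", (oprid,))
    row = cur.fetchone()
    if row is None:
        return False
    # The PeopleCode customization must hash with exactly the same algorithm.
    return hashlib.sha256(password.encode("utf-8")).hexdigest() == row[0]
```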
If you want to reuse PeopleSoft security, you'd need to connect at a higher level than JDBC straight into the database. You could look at a component interface (codeable in Java) or send a SOAP message into PeopleSoft's Integration Gateway; both methods would authenticate you against PeopleSoft using its own security mechanisms.
The old way was to customize psuser.c to your needs, recompile it as a new DLL, and use it in your program, assuming you're on a Microsoft platform. As mentioned above, you could have a PeopleSoft developer create a component interface (or use the one that is delivered). You can export wrapper Java or C/C++ code, a template, from a CI. This code can then be used in an external program to call the CI. One way or another, you have to interface with PeopleTools to call their password decryption.
Depending on how dynamic your business is (whether you add lots of employees each day), you could export PSOPRDEFN to another database using Application Messaging. On the send, you could encrypt the passwords however you like. But as you can surmise, this would not be real-time.
One thing I remember doing long ago was having a PeopleSoft tech person develop a page whose sole functionality was to call my Java class and which obtained user/passwords as needed. Once I had them, I was good to go.
You can use psjoa.jar; that way you can sign on via the app server using the same users and passwords as in the PSOPRDEFN table.
PeopleSoft has an LDAP integration ability but it has to be configured. If you are accessing via a Java wrapper around a component interface, a special account can be set up in PeopleSoft with access only to the underlying component, but the login/password would have to be passed into the component interface. This can be encrypted or sent over https.
PeopleSoft also has what it calls "row level" security - the ability to partition data sets so that for example your code could only access employee data within a specific business unit or accounting info for a particular line of business. This is all controlled within the PeopleSoft online security application.