Transfer client configuration between environments - Keycloak

For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in local development. Now we need to transfer the configuration to the other environments: the develop, preproduction, and production stages.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have major drawbacks: either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import only the basic client configuration, which is missing all the roles.
And as soon as we add more roles later on, we would need to re-configure all stages manually.
Is there some good practice for dealing with this? Does Keycloak offer some kind of "sync" between stages?

This is a hard question to answer. It comes down to comparing API calls against UI configuration.
Disadvantages of API calls: it takes time to figure out the API functions; the call order matters; some properties missing on the parent have to be set in detail on the child; the API URL paths have a complicated structure (for example id/property/id/property); and it requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning; fast; easy to organize from top to bottom (for example, configure the client, then the auth resources, auth scopes, policies, and permissions in the other environment); and you can transfer 100% of the configuration.
Disadvantages of UI configuration: not flexible; mismatched IDs cause errors; you can't update or add partial data (for example, a client's resource fetched without its scopes has to be completed by a separate API call); you can't move 100% of the configuration from the source to the target environment; and it invites human error.
Advantages of UI configuration: easy and quick, even though manual.
My preference is API calls: Postman at the local and develop stages (single API calls, or a collection run for a sequence of calls; there it's simple to unit-test and check HTTP statuses), and curl from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (for example, skip a configuration that is already in place). A rough curl sketch follows below.
One more tip: if you open the debug panel (F12) in Chrome or Firefox, you can watch the Admin Console's API calls in the Network tab. That saves time figuring out the API methods and the payload/response JSON.
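As a rough sketch of that flow with curl and jq (assuming the legacy /auth path prefix, which newer Quarkus-based Keycloak distributions drop, and an admin user on the built-in admin-cli client; hosts, realm, and client names are placeholders):

# Get an admin token from the source Keycloak
TOKEN=$(curl -s "https://keycloak-src/auth/realms/master/protocol/openid-connect/token" \
  -d "grant_type=password" -d "client_id=admin-cli" \
  -d "username=admin" -d "password=$ADMIN_PW" | jq -r .access_token)

# Look up the client by clientId (returns a one-element array)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://keycloak-src/auth/admin/realms/my-realm/clients?clientId=my-frontend" > lookup.json
ID=$(jq -r '.[0].id' lookup.json)

# The client representation, minus the environment-specific id, is the import payload
jq '.[0] | del(.id)' lookup.json > client.json

# Client roles are not part of the client representation; export them separately
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://keycloak-src/auth/admin/realms/my-realm/clients/$ID/roles" > roles.json

# On the target (with a token obtained the same way): create the client, then POST each
# role from roles.json; composite roles additionally need a POST to
# .../clients/{id}/roles/{role-name}/composites
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $TARGET_TOKEN" \
  -d @client.json "https://keycloak-dst/auth/admin/realms/my-realm/clients"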

REST on non-CRUD operations

I have a resource called “subscriptions”
I need to update a subscription’s send date. When a request is sent to my endpoint, my server will call a third-party system to update the passed subscription.
“subscriptions” have other types of updates. For instance, you can change a subscription’s frequency. This operation also involves calling a third-party system from my server.
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
PATCH subscriptions/:id
I can hypothetically use my controller behind the endpoint to fire different functions depending on the query string... But what if I need to add a third or fourth “update” type action? Should they ALL run through this single PATCH route?
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
No - but you will often want to.
Consider how you would support this on the web: you might have a number of different HTML forms, each accepting a slightly different set of inputs from the user. When the form is submitted, the browser will use the input controls and form metadata to construct an HTTP (POST) request. The target URI of the request is copied from the form action.
So your question is analogous to: should we use the same action for all of our different forms?
And the answer is yes, if you want the general purpose HTTP application to understand which resource is expected to change in response to the message. One reason that you might want that is cache invalidation; using the right target URI allows all of the caches to understand which previously cached responses should not be reused.
Is that choice free? No: it adds some ambiguity to your access logs, and routing the request to the appropriate handler in your code takes a bit more work.
Trying to use PATCH with a different target URI is a little bit weird, and suggests that you may be stretching PATCH beyond its standard constraints.
PATCH (and PUT) have remote authoring semantics; what they mean is "make your copy of the target resource look like my copy". These are methods we would use if we were trying to fix a spelling error on a web page.
Trying to change the representation of one resource by sending a remote authoring request to a different resource makes it harder for the general purpose HTTP application components to add value. You are coloring outside of the lines, and that means accepting the liability if anything goes wrong, because you are using standardized messages in a non-standard way.
That said, it is reasonable to have many different resources that present representations of the same domain entity. Instead of putting everything you know about a user into one web page, you can spread it out among several that are linked together.
You might have, for example, a web page for an invoice, and then another web page for shipping information, and another web page for billing information. You now have a resource model with clearer separation of concerns, and can combine the standardized meanings of PUT/PATCH with this resource model to further your business goals.
We can create as many resources as we need (in the web level; at the REST level) to get a job done. -- Webber, 2011
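Applied to the subscriptions example, each kind of update then becomes an ordinary remote-authoring request against its own resource (the resource names here are hypothetical; the media type is JSON merge patch, RFC 7396):

PATCH /subscriptions/12345/schedule
Content-Type: application/merge-patch+json

{ "sendDate": "2024-07-01" }

PATCH /subscriptions/12345/frequency
Content-Type: application/merge-patch+json

{ "frequency": "weekly" }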
So, in your example, would I do one endpoint like this: user/:id/invoice/:id, and then another like this: user/:id/billing/:id?
Resources, not endpoints.
GET /invoice/12345
GET /invoice/12345/shipping-address
GET /invoice/12345/billing-address
Or
GET /invoice/12345
GET /shipping-address/12345
GET /billing-address/12345
The spelling conventions that you use for resource identifiers don't actually matter very much.
So if it makes life easier for you to stick all of these into a hierarchy that includes both users and invoices, that's also fine.

How do you save API keys without exposing them in the first place?

If I save API keys to flutter_secure_storage, they must be exposed in the first place. How could they be pre-encrypted, or saved to secure storage, without being exposed initially?
I want to add a slight layer of security where keys are stored securely and only exposed when making an API call. But if the keys are hardcoded, then they are exposed, even if only at the initial app run. How do you get around this?
To avoid exposing the API key, you can store keys in a .env file and use the flutter_dotenv package to access them when making API calls. This still won't hide the key once the API call is made, though. If you really want to stop exposing keys, move the API calls to a backend, so those network calls cannot be seen by the client.
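A minimal sketch of the flutter_dotenv approach (assuming the .env file is declared as an asset in pubspec.yaml; the key name is a placeholder):

# .env (keep it out of version control)
API_KEY=your-key-here

import 'package:flutter/widgets.dart';
import 'package:flutter_dotenv/flutter_dotenv.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized(); // required before loading assets
  await dotenv.load(fileName: ".env");       // load the bundled .env once at startup
  final apiKey = dotenv.env['API_KEY'];      // read the key only where the call is made
  // attach apiKey to the request headers here, then run the app
}

Note that the .env file still ships inside the app bundle, so this keeps the key out of your source code rather than out of an attacker's reach; that is why the backend route is the real fix.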
If this is a web project, you could use something like Base64 on both ends, then decode and save it, like this:
Server (PHP):
$apiKeyEncoded = base64_encode(apiKeyGenerator()); // apiKeyGenerator() is your own key source
Client (Dart):
import 'dart:convert';
apiKeyEncoded = await getApiKey();
apiKeyDecoded = utf8.decode(base64Decode(apiKeyEncoded)); // this is the usable one; save it
Now, if the project is focused on mobile use, I don't think you actually need to implement this, though the code would be the same.
I will add some input to this. I am using Parse Back4App, which exposes app API keys in the same way Firebase does. I have discovered a few very important security designs which may help with this.
Client side
Don't worry about app API keys being abused; Firebase and Back4App both have security features in place for this, including DoS & DDoS protections.
Move ALL actual API calls to the server and call them from the client via Cloud Code (a sketch follows the server-side points below). If you want to go to the extreme, create a user-device hash code for custom client rate limiting.
Server side
LOCK DOWN ALL CLPs and ALL ACLs; basically, lock ALL PERMISSIONS, and ONLY give Cloud Code calls with heavy security checks authorized access to anything server-side, including outside API calls.
Make API calls from your server only. Better yet, move your API calls out of cloud calls and create "cloud jobs"; these run on a schedule with Back4App, and you can periodically call whatever API you need from the server. Example: a cryptocurrency app might update prices once per second or once per minute; the server gets these updates and pushes them to clients. No risk of someone getting your crypto API keys and running up the limits.
Put in a custom rate-limiting design, and design around it so that your rate limits would never trip under normal circumstances. If they do trip in excess, ban the user & drop their requests.
Also put API keys in a .env file on the server. Go a step further & use a key-encryption hardware service.
With this structure, abused API keys would be a tell-tale sign that your server is compromised.
Want further DoS & DDoS protection? Mirror your server a few times and create a structure whereby client requests can be redirected during attack windows, or where clients that are not attacking receive new app API keys.
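As a minimal sketch of the Cloud Code pattern above (the function name, URL, and environment variable are hypothetical):

// Cloud Code (server side): the third-party key never leaves the server
Parse.Cloud.define("getPrices", async (request) => {
  const response = await Parse.Cloud.httpRequest({
    url: "https://api.example-exchange.com/v1/prices",
    headers: { "X-API-Key": process.env.CRYPTO_API_KEY }, // key from the server's .env
  });
  return response.data; // clients get the data, never the key
});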
... I could go on and on about security & what I've learned but I'll leave it at that.

How to automate bots to monitor for successful queues on Orchestrator?

I have a project that deals with queue items being loaded successfully and unsuccessfully. At the moment I monitor this manually, which is tedious; there are also false positives, meaning Orchestrator can state that new queue items have been added, but when I access the actual job (process), nothing has been added.
I would like to know: is there a way to monitor queue success and failure rates on Orchestrator, instead of monitoring them manually?
You can access pretty much any information via the Orchestrator API.
You can find the "Orchestrator HTTP Request" activity, which will allow you to access any relevant endpoint.
Note that the provisioned Robot in Orchestrator needs to have the right access permissions, so please have a look at which roles are associated with the Robot user.
The API reference can be found here:
https://docs.uipath.com/orchestrator/reference
You will see it mentions Swagger, which in turn will give you all the information you need to access the relevant APIs.
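For example, queue outcomes can be read straight from the QueueItems OData endpoint ('Successful' and 'Failed' are standard QueueItem status values; adjust the filter to your needs):

GET /odata/QueueItems?$filter=Status eq 'Successful'
GET /odata/QueueItems?$filter=Status eq 'Failed'

Comparing those counts against what the process actually produced would also surface the false positives you describe.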

Testing service session management via REST

I need to write a test for a JAX-RS web service asserting that a certain value is cached in the session from disk on the first request of the session.
The testing process does not have access to the tested process. The use case involves using the REST API to invoke services.
I can think of several options to proceed with:
Create a REST endpoint just for testing, and query the needed session value there (see the sketch below).
Write and then read a log message.
I am aware that I am trying to test an implementation detail via an external API which does not provide contract for this detail, but currently I'm a bit constrained about which processes may be run by the testing infrastructure.
Are there any additional seams to exploit for testing, and what general good practice exists for this scenario?
I just came up with the idea of changing the cached resource on disk and using the resulting change (or lack of change) in behavior to tell whether the value was served from the session.
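For the first option, a minimal sketch of a test-only JAX-RS endpoint (the path and session attribute name are hypothetical):

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;

// Deployed only in test builds: exposes the session-cached value for assertions
@Path("/test")
public class SessionProbeResource {
    @GET
    @Path("/session-value")
    public String sessionValue(@Context HttpServletRequest request) {
        Object cached = request.getSession().getAttribute("cachedValue");
        return cached == null ? "" : cached.toString();
    }
}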

Test a Symfony RESTful API with PHPUnit and Doctrine

I'm trying to pull off some tests for my RESTful API functions.
For this I did the following:
Installed PHPUnit.
Created a new database for testing.
Created a new environment (test) and changed the Doctrine config for it.
Created a test.
My problem is this:
When performing a request (somedomain.com/api/somemethod), the requested page doesn't know I'm performing a test on it, so the data it uses comes from the production/development database and not the test DB I created for the tests.
(The test script uses the test DB; the requested page uses the normal configuration.)
Is there a way to solve this without touching or modifying the API code/behavior?
Thanks.
Since you said you're requesting somedomain.com, I can only suspect you're firing requests over HTTP.
Symfony is made to be easily testable, and you can perform functional tests without ever making a real HTTP request. Instead, it builds a Request object and tells its kernel to handle it as if it were coming from a real client.
There is a chapter in the Symfony book on this: Functional tests
If you use the method described there (the Symfony BrowserKit client with paths instead of complete URLs), Symfony will boot its kernel in the test environment and handle the request that way.
If, however, for any reason you are unable to or don't want to do it that way, and want to fire real HTTP requests, I suggest you make a file in the web directory called app_test.php. In that file, boot the kernel in the test environment and make sure your tests actually hit that file (instead of app.php or app_dev.php). Bear in mind, however, that this file will be publicly available, and as such it creates a security hole, so make sure to guard it somehow (check app_dev.php for hints). As an idea, you could require a specific key in a request header to allow requests through. Or, if it will be tested from a single machine, you could guard it by IP, or whatever works for your case.
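A minimal sketch of the BrowserKit-style functional test described above (reusing the asker's hypothetical /api/somemethod path):

// Functional test: no real HTTP request is made; the kernel boots in the test environment
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class ApiTest extends WebTestCase
{
    public function testSomeMethod()
    {
        $client = static::createClient();           // boots the kernel in the "test" environment
        $client->request('GET', '/api/somemethod'); // a path, not a complete URL
        $this->assertSame(200, $client->getResponse()->getStatusCode());
    }
}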