I am integrating Apache Superset into my application, and the most important thing for me is to not allow exporting of data outside of the application (sensitive data)
I have created a custom role, started with no permissions, and added only the necessary ones.
There are a few options I have not been able to block:
I have removed all permissions related to exporting, but when viewing Charts I can no longer export as CSV, yet I can still export as JSON (which exposes effectively the same data).
Also, I have been able to remove the options to share as email and get a shareable link for Dashboards, but not for Charts.
I have also tried to block these endpoints at the infrastructure level (Superset is running on K8S behind Nginx), but blocking superset-api/v1/api/*/export does not help at all, because the export through the UI goes through the endpoint superset-api/v1/api/*/data (which cannot be blocked, since it is also called just to view the data).
For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in the local development setup. Now we need to transfer this configuration to the other environments like develop/preproduction/production stage.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks. Either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import only the basic client configuration, which is missing all the roles.
And as soon as we, for example, add more roles later on, then we would need to re-configure all stages manually.
Is there some "good practice" for how to deal with that? Does Keycloak offer some kind of "sync" between stages?
This is a hard question to answer; it essentially comes down to comparing API calls against UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the right API functions, the call order matters, some properties missing on the parent have to be set in detail on the child, the API URL paths have a complicated structure (for example id/property/id/property), and it requires deeper knowledge of Keycloak.
Advantages of API calls: fine tuning is fast, it is easy to organize from top to bottom (for example configuring the client, auth resources, auth scopes, policies and permissions into another environment), and you can transfer 100% of the configuration.
Disadvantages of UI configuration: it is not flexible, mismatched IDs cause errors, you can't update or add partial data (for example a client's resource that is missing its scopes has to be set by a separate API call), you can't move 100% of the configuration from the source to the target environment, and it is prone to human error.
Advantages of UI configuration: it is easy and quick, even though it is manual.
My preference is API calls: using Postman (single API calls, or running a collection for a sequence of API calls, at the local and develop stages, where you can easily unit test and check HTTP statuses), and curl calls from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (for example, skipping a configuration that is already set).
One more tip: if you open the debug tools with F12 in Chrome or Firefox, you can see the API calls in the Network tab. It saves time when figuring out the API methods and the payload/response JSON data.
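As a rough illustration of the API-call approach, here is a minimal Python sketch (using the requests library) that copies a client definition and its client roles from a source realm to a target Keycloak instance via the Admin REST API. The host names, realm name, client id and credentials are placeholders, and error handling, idempotency checks and composite-role handling are omitted; older Keycloak versions need an /auth prefix on these paths.

```python
import requests

SRC = "https://keycloak-dev.example.com"    # placeholder source instance
DST = "https://keycloak-prod.example.com"   # placeholder target instance
REALM = "myrealm"
CLIENT_ID = "frontend-app"                  # the clientId to transfer

def admin_token(base, user, password):
    """Obtain an admin access token via the built-in admin-cli client."""
    r = requests.post(
        f"{base}/realms/master/protocol/openid-connect/token",
        data={"grant_type": "password", "client_id": "admin-cli",
              "username": user, "password": password},
    )
    r.raise_for_status()
    return r.json()["access_token"]

src_hdr = {"Authorization": f"Bearer {admin_token(SRC, 'admin', 'admin-pw')}"}
dst_hdr = {"Authorization": f"Bearer {admin_token(DST, 'admin', 'admin-pw')}"}

# 1. Read the client representation from the source realm.
clients = requests.get(f"{SRC}/admin/realms/{REALM}/clients",
                       params={"clientId": CLIENT_ID}, headers=src_hdr).json()
client = clients[0]
src_uuid = client.pop("id")   # internal ids must not be reused in the target

# 2. Create the client in the target realm.
requests.post(f"{DST}/admin/realms/{REALM}/clients",
              json=client, headers=dst_hdr).raise_for_status()

# 3. Copy the client roles (composite role members would need an extra pass).
roles = requests.get(f"{SRC}/admin/realms/{REALM}/clients/{src_uuid}/roles",
                     headers=src_hdr).json()
dst_uuid = requests.get(f"{DST}/admin/realms/{REALM}/clients",
                        params={"clientId": CLIENT_ID},
                        headers=dst_hdr).json()[0]["id"]
for role in roles:
    requests.post(f"{DST}/admin/realms/{REALM}/clients/{dst_uuid}/roles",
                  json={"name": role["name"],
                        "description": role.get("description")},
                  headers=dst_hdr)
```

The same pattern extends to protocol mappers, scopes, policies and permissions; checking whether the client already exists before the POST makes the transfer idempotent, which matches the "skip what is already set" scenario described above.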
I am designing REST APIs for some resources in my system.
System allows users to upload files.
There are 3 kinds of resources:
POST/GET a file (a data file).
GET config files (metadata describing the file formats; the formats are system specific, e.g. how a CSV file or a JSON file should look in my system).
GET config files for server resources and permissions.
I am thinking of URLs of the form:
host/api/v1/files.
host/api/v1/config/files.
host/api/v1/config/server.
Does host/api/v1/config/files, host/api/v1/config/server
make more sense, or does host/api/v1/files/config, host/api/v1/server/config
make more sense?
Also, when the version of the data-file config changes to v2, does it make sense to bump the server config to v2 as well, despite the fact that they are unrelated and don't change together?
Or can I broadly classify files and the config of files under the same category, as
/host/api/files/v1/data
/host/api/files/v1/config
and server config in another category as
/host/api/server/v1/config
Then both can change independently, with no need to migrate server/v1/config to v2.
Does host/api/v1/config/files, host/api/v1/config/server make more sense, or does host/api/v1/files/config, host/api/v1/server/config make more sense?
The latter. Think about expanding the service in the future: what if, e.g., you want to add a method for getting/adding server users? Its URL would naturally sit under host/api/v1/server/, next to host/api/v1/server/config.
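To make the recommended layout concrete, here is a minimal Flask sketch of the routing; the handlers, payloads and the example /api/v1/server/users extension are placeholders, not part of the original question.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Data files live under /api/v1/files ...
@app.route("/api/v1/files", methods=["GET", "POST"])
def files():
    if request.method == "POST":
        # accept an uploaded file (storage details omitted)
        return jsonify({"status": "uploaded"}), 201
    return jsonify({"files": []})

# ... and their format metadata under /api/v1/files/config,
@app.route("/api/v1/files/config", methods=["GET"])
def files_config():
    return jsonify({"csv": {"delimiter": ","}, "json": {"schema": "..."}})

# while server-related resources group under /api/v1/server/*,
# leaving room for future additions such as /api/v1/server/users.
@app.route("/api/v1/server/config", methods=["GET"])
def server_config():
    return jsonify({"resources": {}, "permissions": {}})

if __name__ == "__main__":
    app.run()
```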
Regarding the versioning, the service should be viewed as a whole: if you update one part to a new version, the whole service moves to a new version. If some parts are completely independent, consider making them two separate services.
IBM states the following:
It can be argued that, when the interface to a service changes in a
non-backwards-compatible way, in reality an entirely new service has
been created. In such a case, unless implementations of the first
interface continue to exist, the preexisting service is, in effect,
discontinued. From the client's perspective, a service is no more than
an interface and some non-functional qualities (such as trust and QoS
attributes) that it may claim to exhibit; thus, if the interface to a
service changes in a non-backwards-compatible way, it no longer
represents an instance of the original service, but is rather a
completely new service.
Load testing of Elasticsearch API queries through CSV using JMeter
I want to perform load testing with JMeter of Elasticsearch API queries, which I will pass in through a CSV file.
Please give me suggestions: what things should I consider before doing that, what kinds of graphs should I look into, and what plugins should be installed in JMeter?
1. Get familiar with the concepts of web application performance testing, load patterns, performance metrics, etc. See Performance Testing Guidance for Web Applications as example reference material.
2. Build your test plan "skeleton". Implement requests to the web service endpoints using HTTP Request samplers. You may also need to add an HTTP Header Manager to send at least the Content-Type header. See the Testing SOAP/REST Web Services Using JMeter article for details.
3. Once done, validate your script by running it with 1 virtual user and the View Results Tree listener enabled. Check request and response details to see whether your test is doing what it is supposed to be doing.
4. If your test works fine, add a CSV Data Set Config to your Test Plan and replace the values you would like to parameterize with the JMeter variables originating from the CSV file (see the sketch after this list).
5. Repeat step 3 with 1-2 users to see whether your parameterization works as expected.
6. Now it's time to configure your load pattern (number of virtual users, ramp-up, test duration, etc.) and run your test.
7. Analyze the results using JMeter Listeners and/or the HTML Reporting Dashboard.
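As a small illustration of the CSV-driven parameterization from step 4, here is a hypothetical Python helper that writes Elasticsearch query bodies into a CSV file; the index names, queries and file name are placeholders. Each column can then be referenced in the HTTP Request sampler as a JMeter variable such as ${index} or ${query}.

```python
import csv

# Hypothetical Elasticsearch queries to feed into JMeter's CSV Data Set Config.
# With a header row, the column names become the JMeter variable names.
rows = [
    {"index": "logs-2023", "query": '{"query": {"match_all": {}}}'},
    {"index": "logs-2023", "query": '{"query": {"term": {"status": "error"}}}'},
    {"index": "metrics",   "query": '{"query": {"range": {"value": {"gte": 100}}}}'},
]

with open("es_queries.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["index", "query"])
    writer.writeheader()
    writer.writerows(rows)

# In the HTTP Request sampler the path could then be /${index}/_search
# and the request body simply ${query}.
```

Since the query bodies contain commas, the fields are written quoted, so "Allow quoted data?" should be enabled in the CSV Data Set Config.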
We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the other stuff, but are concerned that not all of it will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple Identity Server installations serving the same requests (e.g. load balanced) you won't be able to use the various in-memory token stores, otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the other, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary to have tokens work across all instances.
There's an entity framework package available for the token stores. You'll need the operational data.
There's also a very useful guide to going live here.
Is there a way to configure a container so that for a certain user it allows creation of new objects, but denies deletion and modification of existing objects?
My case is that I provide a web service which receives and serves files using remote openstack swift storage and I want that in case of a credential compromise at the web service level, the person who gains access to those credentials would not be able to alter existing files.
To the best of my knowledge, it is not possible to deny a user from deleting or updating existing objects in a container while still allowing that user to upload objects with the same credentials.
But you can write a Java API and expose it to the user for uploading files, and internally upload the file using your set of credentials. Do not expose the operations that the user is not supposed to perform (delete/update etc.). You can keep all your credentials in the code (better if encrypted). This way you may achieve what you want, but it is a workaround.
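The answer suggests a Java API; as a rough sketch of the same idea in Python (assuming python-swiftclient and Flask, with placeholder credentials and container name), a thin upload-only service could look like this. It exposes only an upload operation and refuses to overwrite existing objects, so the web-service layer never offers delete or modify.

```python
from flask import Flask, request, abort
from swiftclient.client import Connection
from swiftclient.exceptions import ClientException

app = Flask(__name__)
CONTAINER = "user-uploads"   # placeholder container name

def swift_conn():
    # Credentials live only inside this service (ideally in protected config),
    # never in the client-facing application.
    return Connection(authurl="https://swift.example.com/auth/v1.0",
                      user="svc-user", key="svc-secret")

@app.route("/upload/<name>", methods=["PUT"])
def upload(name):
    conn = swift_conn()
    try:
        conn.head_object(CONTAINER, name)
        abort(409)                      # object already exists: refuse to overwrite
    except ClientException as exc:
        if exc.http_status != 404:
            raise
    conn.put_object(CONTAINER, name, contents=request.get_data())
    return {"stored": name}, 201

# Deliberately no DELETE or overwrite endpoints are exposed.

if __name__ == "__main__":
    app.run()
```

This is only a sketch of the workaround described above; note that the existence check and the upload are not atomic, and the Swift credentials themselves still retain full rights on the container.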