Is there a way to configure a container so that for a certain user it allows creation of new objects, but denies deletion and modification of existing objects?
My case is that I provide a web service which receives and serves files using remote OpenStack Swift storage, and I want to ensure that, if the credentials are compromised at the web-service level, the person who gains access to them cannot alter existing files.
To the best of my knowledge, it is not possible to deny a user the ability to delete or update existing objects in a container while still allowing that user to upload objects with the same credentials.
But you can write a Java API and expose it to the user for uploading files, and internally upload the file using the real set of credentials. Do not expose the operations the user is not supposed to perform (delete, update, etc.). Keep all your credentials inside that service (preferably encrypted). This way you may achieve what you want, but it is a workaround. A rough sketch of the idea is shown below.
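A minimal sketch of that workaround, written in Python with python-swiftclient rather than Java (the idea is the same in any language). Only an "upload new object" operation is exposed to the web tier; the Swift credentials and every other operation stay inside this module. The auth URL, account, and container names are placeholders.

from swiftclient.client import Connection
from swiftclient.exceptions import ClientException

def _connection() -> Connection:
    # The real credentials live only here (better: read from a vault or environment variables).
    return Connection(authurl="https://swift.example.com/auth/v1.0",
                      user="account:uploader",
                      key="SECRET_KEY")

def upload_new_object(name: str, data: bytes) -> None:
    """Create a new object; refuse to overwrite an existing one."""
    conn = _connection()
    try:
        conn.head_object("incoming", name)      # does the object already exist?
    except ClientException:
        conn.put_object("incoming", name, contents=data)
        return
    raise ValueError(f"object {name!r} already exists")

# Deliberately no delete/replace wrappers are exposed at all.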
If I save API keys to flutter_secure_storage, they must be exposed in the first place. How could they be pre-encrypted or saved to secure storage without exposing them initially?
I want to add a slight layer of security where keys are stored securely and only exposed when making an API call. But if I have the keys hardcoded, then they are exposed, even if only at the initial app run. How do you get around this?
To avoid exposing the API key in source code, you can store keys in a '.env' file and use the flutter_dotenv package to access them when making API calls. This still does not protect the key at the moment the call is made, though. If you really want to keep the keys from being exposed, move the API calls to a backend so those network calls (and the key) cannot be seen by the client. A rough sketch of that idea follows.
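A minimal sketch of the "move the call to the backend" idea, assuming a hypothetical third-party endpoint. The app calls your own server; the real API key is read from the server's environment and never ships inside the app.

import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["THIRD_PARTY_API_KEY"]   # loaded from the server's .env / environment

@app.route("/weather")
def weather():
    city = request.args.get("city", "London")
    resp = requests.get(
        "https://api.example.com/v1/weather",  # placeholder third-party URL
        params={"q": city, "key": API_KEY},
        timeout=10,
    )
    # Return only what the client needs; the key never appears in the response.
    return jsonify(resp.json()), resp.status_code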
If this is a web project, you could use something like base64 on both ends, then decode and save like this:
SERVER (PHP):
$apiKeyEncoded = base64_encode(apiKeyGenerator());
CLIENT:
apiKeyEncoded = await getApiKey();
apiKeyDecoded = utf8.decode(base64Decode(apiKeyEncoded)); // base64Decode returns bytes (dart:convert); this is the usable one, save it.
Now, if the project is focused on mobile use, I don't think you actually need to implement this, though the code would be the same.
I will add some input to this. I am using Parse/Back4App, which exposes app API keys in the same way that Firebase does. I have discovered a few very important security designs which may help with this.
Client side
Don't worry about app API keys being abused. Firebase/Back4App both have security features in place for this, including DoS and DDoS protection.
Move ALL actual API calls to the server and call them from the client via cloud code. If you want to go to the extreme, create a user-device hash code for custom client rate limiting.
Server side
LOCK DOWN ALL CLPs and ALL ACLs, basically lock ALL permissions, and ONLY allow cloud calls, guarded by heavy security checks, access to anything server side, including outside API calls.
Make API calls from your server only. Better yet, move your API calls out of cloud calls and create "cloud jobs"; these run on a schedule with Back4App, so you can periodically call whatever API you need from the server. Example: a cryptocurrency app might update prices once per second or once per minute; the server gets these updates and pushes them to clients. There is no risk of someone getting your crypto API keys and running up against the limits.
Put in a custom rate-limiting design and build around it so your rate limits never trip under normal circumstances. If they do trip in excess, ban the user and drop their requests (a sketch follows at the end of this answer).
Also put your API keys in a .env file on the server. Go a step further and use a hardware key-management service.
With this structure, your API keys being abused would be a tell-tale sign that your server is compromised.
Want further DoS and DDoS protection? Mirror your server a few times and create a structure whereby client requests can be redirected during attacks, or non-attacking clients receive new app API keys.
... I could go on and on about security & what I've learned but I'll leave it at that.
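The rate-limiting sketch mentioned above. Back4App cloud code is JavaScript, but the idea is the same in any language; this Python version uses placeholder thresholds and an in-memory ban list purely for illustration.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120      # generous enough that normal use never trips it

_requests = defaultdict(deque)     # device hash -> timestamps of recent requests
_banned = set()

def allow_request(device_hash: str) -> bool:
    """Return True if this client may proceed, False if it is rate limited or banned."""
    if device_hash in _banned:
        return False
    now = time.monotonic()
    window = _requests[device_hash]
    # Drop timestamps that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        _banned.add(device_hash)   # tripped in excess: ban and drop further requests
        return False
    window.append(now)
    return True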
I need to use Google Cloud Storage to store some files that can contain sensitive information. File names will be generated using a cryptographic function, and will thus be unguessable. The files will be made public.
Is it safe to assume that the file list will not be available to the public? I.e. a file can only be accessed by someone who knows the file name.
I have of course tried accessing the parent directory and the bucket, and I do get rejected with an unauthenticated error. I am wondering whether there is, or will ever be, any other way to list the files.
Yes, that is a valid approach to security through obscurity. As long as the ACL to list the objects in a bucket is locked down, your object names should be unguessable.
However, you might consider using Signed URLs instead. They can have an expiration time set so it provides extra security in case your URLs are leaked.
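A short sketch of the signed-URL approach using the google-cloud-storage Python client library; the bucket and object names are placeholders.

from datetime import timedelta
from google.cloud import storage

client = storage.Client()                       # uses application default credentials
blob = client.bucket("my-private-bucket").blob("f3a9c2e7.pdf")

# The URL works for anyone who has it, but only until it expires,
# so a leaked link eventually stops working.
url = blob.generate_signed_url(version="v4",
                               expiration=timedelta(minutes=15),
                               method="GET")
print(url)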
Yes, but keep in mind that the ability to list objects in a bucket is allowed for anyone with read permission or better on the bucket itself. If your object names are secret, make sure to keep the bucket's read permissions locked down as much as possible.
jterrace's suggestion about preferring signed URLs is a good one. The major downside to obscure object names is that it's very difficult to remove access to a particular entity later without deleting the resource entirely.
We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the rest, but I am concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple Identity Server installations serving the same requests (e.g. load balanced) you won't be able to use the various in-memory token stores, otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the other, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary to have tokens work across all instances.
There's an Entity Framework package available for the token stores; you'll need the operational data services.
There's also a very useful guide to going live here.
This is related to "Is there ReadOnly REST API key to a MongoLab database, or is it always ReadWrite" and "How does Mongolab REST API authenticate".
I want to make it possible for unauthenticated users of my web app to create resources and share them. The created resource is an array of links ['link1', 'link2', 'link3'].
I'm looking at using MongoLab directly from the client for this, which is possible through their REST API.
The problem, though, is that as far as I can see, if I do that, it would be impossible to prevent vandals from clearing out the entire collection rather easily.
Is this correct, and if so, is there a simple solution (without running a custom backend) to do something like this?
First off, you could create a "history", so if something goes wrong you can call on an easy command to restore records.
Secondly, you might screen connected clients for abusive behaviour; e.g. measure the number of delete or update commands in a certain time window. If this gets triggered, you can call on your restoration process.
Note: I have no experience with MongoLab whatsoever, but this, to me, would be a suitable safeguard when creating a public API.
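An illustration of the "history" idea above: periodically snapshot the collection so it can be restored after vandalism. This assumes the MongoLab data API endpoint shape (.../databases/<db>/collections/<coll>?apiKey=...); the database, collection, and key names are placeholders.

import json
import time
import requests

BASE = "https://api.mongolab.com/api/1/databases/mydb/collections/sharedlinks"
API_KEY = "MY_API_KEY"

def snapshot_collection(path: str) -> None:
    # Fetch every document in the collection and dump it to a local file.
    resp = requests.get(BASE, params={"apiKey": API_KEY}, timeout=30)
    resp.raise_for_status()
    with open(path, "w") as f:
        json.dump(resp.json(), f)

# e.g. run from cron or a scheduled task:
snapshot_collection(f"backup-{int(time.time())}.json")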
I'm writing a mock of a third-party web service to allow us to develop and test our application.
I have a requirement to emulate functionality that allows the user to submit data, and then at some point in the future retrieve the results of processing on the service. What I need to do is persist the submitted data somewhere, and retrieve it later (not in the same session). What I'd like to do is persist the data to a database (simplest solution), but the environment that will host the mock service doesn't allow for that.
I tried using IsolatedStorage (application-scoped), but this doesn't seem to work in my instance. I'm using the following to get the store:
IsolatedStorageFile.GetStore(IsolatedStorageScope.Application |
IsolatedStorageScope.Assembly, null, null);
I guess my question is (bearing in mind the fact that I understand the limitations of IsolatedStorage) how would I go about getting this to work? If there is no consistent way to do it, I guess I'll have to fall back to persisting to a specific file location on the filesystem, and all the pain of permission setting that entails in our environment.
Self-answer.
For the purposes of dev and test, I realised it would be easiest to limit the lifetime of the persisted objects, and use
HttpRuntime.Cache
to store the objects. This has just enough flexibility to cope with my situation.