IceFaces and FileResource - delete after session expires

I'm using IceFaces. I generate reports for users and serve them using the FileResource class. I do nothing with the file afterwards, since it should stay available for download whenever the user clicks the link to the resource.
However, after the user logs out, the report is no longer needed and should be deleted. Is there a built-in option in IceFaces to bind a FileResource to the session and automatically delete the file when the session expires?

A Resource binds to a file that already exists at the time you bind it. The expectation is that the file will be needed again, so there is probably no point in providing a method to delete it.
The IceFaces tutorial on FileUpload talks about manually deleting files after session expiry using a session listener, so I gather IceFaces does not provide any method to delete files, whether or not they are bound to a Resource.
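Along those lines, the cleanup can be done with a plain servlet HttpSessionListener. A minimal sketch, assuming the generated reports live in a per-session directory whose path is stored in a session attribute (the attribute name "reportDir" is hypothetical):

    import java.io.File;
    import javax.servlet.http.HttpSessionEvent;
    import javax.servlet.http.HttpSessionListener;

    // Deletes generated report files when the session ends (timeout or logout).
    // Registered with a <listener> entry in web.xml.
    public class ReportCleanupListener implements HttpSessionListener {

        @Override
        public void sessionCreated(HttpSessionEvent se) {
            // nothing to do when a session starts
        }

        @Override
        public void sessionDestroyed(HttpSessionEvent se) {
            // "reportDir" is an assumed attribute, set wherever the report is
            // generated, holding the directory for that user's report files.
            String path = (String) se.getSession().getAttribute("reportDir");
            if (path == null) {
                return;
            }
            File dir = new File(path);
            File[] files = dir.listFiles();
            if (files != null) {
                for (File f : files) {
                    f.delete(); // best-effort cleanup; log failures in real code
                }
            }
            dir.delete();
        }
    }

Since the listener fires on session expiry as well as on an explicit invalidate() during logout, both cases are covered by the same code.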

Related

Keycloak URL fragments do not disappear when logged in

Keycloak inserts session_state, state, and code as URL fragment params. Sometimes, after a successful login, these remain in the URL.
Also, when alternate routes are clicked in the app, they appear again.
This unnecessarily exposes the internals of Keycloak's params to users.
Is there some way to prevent them from appearing, or to delete them?
e.g. http://localhost:3000/home/#state=e625140e-c4f9-4500-858e-32c80e89f8a9&session_state=445229c3-d7eb-46e9-bfba-3339253dd17e&code=af0abde4-a60d-4f34-a101-8db5c76546b9.445229c3-d7eb-46e9-bfba-3339253dd17e.59915134-a59b-4ffb-878a-d02e7e84f2dd
Update:
With more tests, I narrowed the issue down: it occurs when anything on the keycloak instance is touched, e.g. keycloak.token, or when any keycloak function is invoked, e.g. await keycloak.updateToken(). After that, the params get added and removed on every URL route thereafter.
Keycloak server and JS lib version: 9.0.2
It is not a Keycloak issue; that is how the login flow you are using works (maybe you need a different flow that is more suitable for your use case). Your app code (the OIDC/OAuth library you use) should "clean" the URL fragments. Cleaning can mean: exchanging the code for a token (in this particular case), removing the URL fragments, cleaning the browser history, etc.

HttpContext.User is null after successful login from SAML Identity Provider?

Trying to retrofit an old WebForms application.
I got my configuration working so that it prompts for login and successfully redirects back to the application. The folks that manage the IdP can see that the response is generated.
However, in the callback to my application, the User is null. I'm told that if it's configured correctly, it should be populated.
We have a custom IHttpModule, and that is where I can see the call to /Saml2/Acs getting hit with the User not populated. This may be expected, as the handler for that call is supposed to populate the User. However, the following call (the returnUrl configured in sustainsys.Saml2) still has no User, and I don't see any sort of error.
Anyone with experience have an idea how to debug this?
The call to /Saml2/Acs should be taken care of by the Sustainsys.Saml2.HttpModule. It will process the response and then call the SessionAuthenticationModule to set a cookie that preserves the User across calls.
To get more information about what's happening in the library, you can assign an implementation of ILoggerAdapter to Sustainsys.Saml2.Configuration.Options.FromConfiguration.SPOptions.Logger to get some logging output from the library.
My issue turned out to be that I had another authentication module loaded before SessionAuthenticationModule and Saml2AuthenticationModule in the web.config.
The comment in the example was:
Add these modules below any existing. The SessionAuthenticationModule must be loaded before the Saml2AuthenticationModule.
However, in my case, I had another authentication module involved that needed to go last.

Can't create working Shared Access Signature for Azure Files

I need to create a SAS so I can create an Azure SQL Extended Event session. The event session needs a file data storage target via SAS and I can't create one that works. Here's what I've tried:
Identified a storage account that's not blob-only, just general-purpose. I'm pretty sure I need general-purpose so I can create files directly.
Created a file share therein.
Using Azure Storage Explorer, right-clicked on that file share and selected "Get Shared Access Signature."
Checked Read, Write, and List, and created it.
This gives me the URL https://mystorageacct.file.core.windows.net/xevents?st=2018-12-25T16%3A29%3A51Z&se=2018-12-29T16%3A29%3A00Z&sp=rwl&sv=2018-03-28&sr=s&sig=mysig
If I just try to follow this URL or create a CloudFile object with it in code, I get the oft-seen error: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. Signature did not match. String to sign used was rwl 2018-12-25T16:29:51Z 2018-12-29T16:29:00Z /file/cs7f0fbc5104d4ax435dx883/$root 2018-03-28"
Tried adding in comp=list&restype=container as suggested here. No joy.
Ensured I have no access policy in use.
Went to the Azure portal and created a different SAS at the storage account level (I couldn't see a way to create it on the file share). That gave me this "File service SAS URL": https://mystorageacct.file.core.windows.net/?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2018-12-30T01:25:16Z&st=2018-12-26T17:25:16Z&spr=https&sig=mysig
If I try that URL I get "Value for one of the query parameters specified in the request URI is invalid." I don't know which parameter is in question; they look fine to me. Based on this doc, srt is resource type, but I don't know what the value sco indicates.
Very confused, looking for suggestions.
For any future readers, extended event sessions confusingly (because they write a file) require blob containers, not general/file/queue containers. At least I could only get them to work that way.
You are probably confused by how the SAS URLs are presented. In fact, the SAS URLs you got just provide examples of how to use the SAS token; they can't be used directly. Hence the errors you saw.
Service-level SAS URL, i.e. the one you got from Storage Explorer:
It's in the format fileEndPoint/fileShareName?SASToken. The SASToken gives us permission to operate on all files inside the specified file share. To leverage the token, we need to add the file name to the URL, i.e. fileEndPoint/fileShareName/fileName?SASToken (see the sketch below).
comp=list&restype=container is for listing blobs in a Blob Container, not for a File Share.
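To make that concrete, here is a minimal Java sketch of the composition, simply issuing an HTTP GET against the completed URL; the account, share, file name, and token below are placeholders:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SasFileDownload {
        public static void main(String[] args) throws Exception {
            String fileEndPoint = "https://mystorageacct.file.core.windows.net";
            String fileShareName = "xevents";
            String fileName = "report.xel"; // hypothetical file in the share
            String sasToken = "st=...&se=...&sp=rwl&sv=2018-03-28&sr=s&sig=...";

            // The token alone is not a usable URL; the file path must be included.
            URL url = new URL(fileEndPoint + "/" + fileShareName + "/" + fileName
                    + "?" + sasToken);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            System.out.println("HTTP " + conn.getResponseCode()); // 200 when the SAS and path are valid
            try (InputStream in = conn.getInputStream()) {
                // read the file contents from the stream here
            }
        }
    }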
Account-level SAS URL, the one you got from the Azure portal:
It's in the format fileEndPoint/?SASToken. Likewise, we need to complete the URL to make it valid, i.e. fileEndPoint/fileShareName/fileName?SASToken. Note that this SASToken has all permissions on all Storage resources because all choices were checked.
sco means we have permission to operate on service, container, and object; it indicates the scope of the permission. Check the doc for details.
I am not familiar with Azure SQL Extended Event sessions, but if you only need to work with files inside one file share, the first option is enough.

How to change multiple URLs at once in Postman?

I have 10+ requests in my Postman collection.
Each time I switch between testing on my local server and the test server, I have to change the URLs from 'localhost:8000' to 'test.mysite' manually.
Is there any way to change all of them at once?
You can do this using the 'Manage Environments' feature, which can be accessed via the icon under the Send/Save button.
Add a new environment file and set the key to 'url' and the value to 'localhost:8000'; make sure you give the file a name before saving. Go back to your request and select the new file in the drop-down menu, which will currently say 'No Environment'. Once you've selected your file, replace the URL string with {{url}}/your-route and hit Send. This should now send the local request.
Repeat again, but this time set the 'url' key to the test server value. Once that's in place, all you need to do is switch between the two environment files when making requests.
More information about this can be found here https://github.com/DannyDainton/All-Things-Postman/blob/master/Examples/02_createEnvironmentFile.md
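For illustration, an exported environment file is plain JSON; a rough sketch with placeholder values (real exports also carry a few extra metadata fields, such as an id):

    {
      "name": "local",
      "values": [
        { "key": "url", "value": "localhost:8000", "enabled": true }
      ]
    }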

Import API not working in Sisense

I was trying to use the dashboard import API from v1.0, which can be found in the REST API reference. I logged in to http://localhost:8083/dev/api/docs/#/, gave the correct authorization token, a .dash file in the body, and a 24-character importFolder, and hit the Run button to fire the API. It returns 201 as the HTTP response, which means the request was successful. However, when I go back to the homepage, I don't see any new dashboard imported into the said folder. I have tried both cases: where the importFolder exists (already created manually by me), and where it does not exist, in which case I expect the API to create it for me. Neither of these, however, creates/imports the dashboard.
A few comments that should help you resolve this:
When running the command from the interactive API reference (swagger), you don't need the authentication token, because you're already logged in with an active session.
Make sure the JSON of your dashboard is valid by saving it as a .dash file and importing it via the UI.
The folder field is optional - if you leave the field blank, the dashboard is imported to the root of your navigation/folders panel.
If you'd like to import to a specific folder, you'll need to provide the folder ID, not its name. The ID can be found in several ways, such as by using the /api/v1/folders endpoint: provide a name filter and use the oid property of the returned object as the value for the folder field in the import endpoint (see the sketch after this list).
If you still can't get this to work, use Chrome's developer tools to look at the outgoing request when you import from the UI and compare it (headers, body, and path) to what you're doing via swagger in order to find the issue.
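As a concrete illustration of that folder lookup, a minimal Java sketch; the host, token, and folder name are placeholders, and the response JSON is printed raw so you can read the oid field out of it:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FolderLookup {
        public static void main(String[] args) throws Exception {
            String host = "http://localhost:8083"; // placeholder Sisense host
            String token = "YOUR_API_TOKEN";       // placeholder bearer token

            // Filter folders by name; the matching folder's oid is in the response.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(host + "/api/v1/folders?name=MyFolder"))
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // Copy the "oid" value from this output into the import endpoint's
            // folder field.
            System.out.println(response.body());
        }
    }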