Apiko - trying to update an existing file

Using the Quasar Framework, I have created a form to add and update files on the Apiko server.
With this code I am trying to update a file that already exists on the Apiko server - https://gist.github.com/gearmobile/cc7de273d9c526b589e198eb35ff89d6
However, updating the file on the Apiko server fails, and there are no errors in the browser console.
The Apiko console gives this log:
...
[APIKO LOG 11:34:25.093] Checking if a different session token is specified...
[APIKO LOG 11:34:25.093] No session token is specified, going with luYzmm9Dgzte2OKcLvIyN2b4K5c8Ijeu
[APIKO LOG 11:34:25.094] Checking restrictions...
[APIKO LOG 11:34:25.094] This request has passed the restrictions check.
[APIKO LOG 11:34:25.094] Checking parameters...
[APIKO LOG 11:34:25.094] This request has passed the params check.
Can you help me understand what is causing the error?

You are using the PUT /files/:id endpoint, which is not implemented in Apiko by default. The reason is that, to update a file, you would normally delete the existing one (DELETE /files/:id) and upload a new file (POST /files) instead.
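For illustration, a minimal sketch of that delete-and-re-upload workaround with curl - the host, file id and upload field name are placeholders, and you would attach your session token however your Apiko setup expects it:
# remove the old file, then upload the replacement
curl -X DELETE "https://your-apiko-server/files/123"
curl -X POST "https://your-apiko-server/files" -F "file=@updated-file.png"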
Alternatively, you can create a custom endpoint that would override PUT /files/:id to handle file updates for you.
If you would like to see this feature in Apiko's core, please go ahead and vote here.

Related

How do I resolve RESTEASY002186 so my Wildfly 26 web application can use SSE over https?

I have a web application running on Wildfly 26 that uses SSE broadcasting and works correctly with http. However, when I switch to using an https endpoint, I get Wildfly log entries of:
WARN [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1)
RESTEASY002186: Failed to set servlet request into asynchronous mode,
server sent events may not work
This happens with each registration attempt of the https endpoint but I never see this when registering with the http endpoint.
Testing with curl against the http endpoint results in curl waiting for events to show up (and keeps printing them out as it receives them) until I quit. Using curl to test the https endpoint, I will see the same headers I got from the http endpoint, namely:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Type: text/event-stream
But after printing out my registration successful event, curl seems to believe the stream is closed and exits -- giving me my command prompt back.
My @GET MediaType.SERVER_SENT_EVENTS registration endpoint will create an OutboundSseEvent and send it to the SseEventSink to acknowledge successful registration to my SseBroadcaster instance (this is the event curl sees and prints before exiting). I then log a registration successful message before exiting the method. All of this appears to work correctly for both http and https but the stream doesn't stay open once the request endpoint completes because of the failure to run asynchronously as outlined above.
I have not found information on the causes and/or workaround solutions for my RESTEASY002186 problem. I posted a question on this issue last week on the Wildfly Google Group (https://groups.google.com/g/wildfly/c/SO2eHdvMEko) but thought I would try a wider audience since this doesn't seem to be a commonly experienced condition. I don't see any indications during initialization that WildFly will be unable to use asynchronous mode; it just complains when it tries and fails... Any help would be greatly appreciated!
Edit 6/6/2022
The code is running on an isolated network so I can't just cut/paste the code here, but I gutted the resources file to a bare minimum -- just leaving enough for the client to be able to register. The problem remains unchanged. The code is now essentially:
#Path("sse")
public class SseResources {
#GET
#Produces(MediaType.SERVER_SENT_EVENTS)
public void listen(#Context Sse sse, #Context SseEventSink sseEventSink) {
SseRegComplete regComplete = new SseRegComplete("sse-server");
OutboundSseEvent event = sse.newEventBuilder().
name(regComplete.getType().toString()).
id(regComplete.getEventId()).
mediaType(MediaType.APPLICATION_JSON_TYPE).
data(SseRegComplete.class, regComplete).
comment("Event Stream Registration Completed Successfully").
build();
sseEventSink.send(event);
}
}
Before the above simplified code, I had declared the resource as @ApplicationScoped, had Sse injected into it, and kept a reference to the SseBroadcaster so I could use it whenever an event would come in. I was catching the events to broadcast by using an @Observes method (which I also got rid of). I was calling register(sseEventSink) on the SseBroadcaster in the listen method so I could later call broadcast(outboundEvent) whenever I had updates to publish. I got rid of all that just to see if I could get the stream to stay open but to no avail. I still get the RESTEASY002186 message and curl still exits after printing out the regComplete event sent to it in the code above.
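For reference, a rough sketch of that broadcaster-based variant using the plain JAX-RS SSE API - the class name, event names and the observed CDI event type are illustrative, not the original code:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseBroadcaster;
import javax.ws.rs.sse.SseEventSink;

@ApplicationScoped
@Path("sse")
public class SseBroadcastResource {

    @Context
    private Sse sse;

    private volatile SseBroadcaster broadcaster;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public void listen(@Context SseEventSink sseEventSink) {
        if (broadcaster == null) {
            broadcaster = sse.newBroadcaster();
        }
        // Keep the sink registered so later broadcasts reach this client
        broadcaster.register(sseEventSink);
        sseEventSink.send(sse.newEvent("registration", "registration complete"));
    }

    // Hypothetical payload type for the CDI event this resource observes
    public static class StatusUpdate {
        public String message;
    }

    public void onUpdate(@Observes StatusUpdate update) {
        if (broadcaster == null) {
            return; // no clients have registered yet
        }
        // Push the update to every registered sink
        broadcaster.broadcast(sse.newEventBuilder()
                .name("update")
                .mediaType(MediaType.APPLICATION_JSON_TYPE)
                .data(StatusUpdate.class, update)
                .build());
    }
}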
Edit 6/7/2022
Yesterday I was able to get my code working in a new vanilla Wildfly 26 install using an https endpoint URL by following these configuration instructions. Something I hadn't mentioned in the original post is that I am trying to add SSE functionality to an already existing app. It is several years old and we actually moved to Wildfly 26 about 6 months ago because of the log4j vulnerability in the earlier version of Wildfly we were using. I suspect that the problem is related to either our Wildfly configuration (perhaps because old settings were brought over that shouldn't have been) or some 3rd party dependency that is preventing Wildfly from using asynchronous mode.
We are using Shiro for authentication and authorization against an LDAP server -- perhaps Shiro has some hooks into the Wildfly runtime that are causing issues? After initial login, we use a session cookie in all subsequent calls. That is a difference from my test server, but I don't think it is relevant because the call definitely passed authentication before executing the registration code. The only other thing that comes to mind right now is that our web app ships with Logback and tells Wildfly not to use the default logging framework.
I plan to start today by comparing the two standalone.xml files to see if anything jumps out at me as being fundamentally different. Is there anything else I should be checking for differences (I think there is a domain.xml file somewhere...)?
Edit 6/14/2022
This definitely has something to do with Shiro being in the loop. When I edit the web.xml file so that Shiro's filter-mapping url-pattern does not include the SSE endpoint, everything works as expected.
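For anyone hitting the same thing, this is roughly the kind of web.xml change meant here - a sketch assuming the Shiro filter was originally mapped to /* and the SSE resource lives under /sse; the filter name and paths are illustrative:
<!-- Map the Shiro filter only to the paths that need it, leaving /sse/* outside the filter -->
<filter-mapping>
    <filter-name>ShiroFilter</filter-name>
    <url-pattern>/app/*</url-pattern>
</filter-mapping>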

Identify reasons for 500 errors with Google auth

We have an API deployed on Azure that uses Google authentication. Over the weekend, the API started to throw 500 errors that were resolved after restarting the API. Is there a way to identify what the underlying cause of these errors might be?
Check whether the custom error mode in the web.config file is set to "On" or "RemoteOnly". If yes, then turn it off by adding the following line to the system.web element in web.config.
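That line is the standard ASP.NET customErrors element; inside the existing system.web element it would look like this:
<system.web>
    <customErrors mode="Off" />
</system.web>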
Enable custom logging/instrumentation in the code, which can give you more information.
ASP.NET applications can use the System.Diagnostics.Trace class to log information to
the application diagnostics log. For example
System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened");
Enable Detailed Error Messages - detailed versions of the HTML pages produced when your website responds with an error message. This is good to enable for debugging some error responses on your website. They are stored in the website's file system.
Web Server Logging - Also known as HTTP logs or IIS logs, this will log all requests
to your website in W3C Extended Log File Format.
Failed Request Tracing - Also known as FREB, here you can get lots of information
from IIS through its different stacks for each failing request.

Update action package with gactions always returns request timeout

I created a project in the Actions console and made a test action package for a smart home app. I want to try uploading the action package I have using gactions. However, every time I execute this command
./gactions --verbose update --action_package action.json --project my_project_id
the result is always like this:
Unable to update: Patch https://actions.googleapis.com/v2/agents/my_project_id?updateMask=agent.draftActionPackage.actions%2Cagent.draftActionPackage.conversations&validateOnly=false: Post https://accounts.google.com/o/oauth2/token: dial tcp 216.58.200.45:443: i/o timeout
I checked the verbose log and I noticed that it is reading some data from creds.data
Reading credentials from: creds.data
Then I noticed that the contents of creds.data contain the access token and the expiry time. But the expiry time is July 18, which is many days from now. I am not sure if this is what causes the timeout error. And I also don't know how to update creds.data to get a new access token.
Alright. I noticed that part of this error was my network problem. But I was able to open Yahoo and other sites while the update just didn't work. Never mind, I just switched to a different Wi-Fi network.
Then I deleted creds.data and executed the update command again, and this came out:
Gactions needs access to your Google account. Please copy & paste the URL below into a web browser and follow the instructions there. Then copy and paste the authorization code from the browser back here.
Visit this URL:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=237807841406-o6vu1tjkq8oqjub8jilj6vuc396e2d0c.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fassistant+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Factions.builder&state=state
Enter authorization code:
Then I followed the instructions above, got the authorization code, copied and pasted it into the console, and everything works fine now.
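In short, the recovery that worked was deleting the cached credentials and re-running the same command so gactions asks for a fresh authorization code (file and project names as in the command above):
rm creds.data
./gactions --verbose update --action_package action.json --project my_project_id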

How to prevent access to hidden directories like Scripts, Contents, aspnet_client from the browser in an ASP.NET MVC application?

I want to prevent the user from accessing hidden directories like Scripts, Contents and aspnet_client directly from the browser in an ASP.NET MVC 2 application. Currently, whenever I try to access the above-mentioned hidden directories, it returns the following error message:
403 - Forbidden: Access is denied.
You do not have permission to view this directory or page using the credentials that you supplied.
I want to show a "404 Not Found" error page whenever someone tries to access the above-mentioned hidden directories.
Can anyone help me resolve this issue?
When an attacker tries to access a file with some random name, if the given file name does not exist the server returns an error like "404 File not exists". If the file name exists but the attacker does not have access to it, the server returns a "403 Forbidden" error instead, so the attacker gets an idea of which files and directories exist.
So the application should be capable of handling this issue.
The solution is to return the response in a different way; the recommendation is to show it as a 404 error.
To do this in IIS we can add a customErrors configuration in the web.config file.
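A minimal sketch of such a customErrors entry, assuming a ~/Error/NotFound page (or any 404 page of your own) to redirect to:
<system.web>
    <customErrors mode="On" defaultRedirect="~/Error/NotFound">
        <error statusCode="403" redirect="~/Error/NotFound" />
    </customErrors>
</system.web>
Note that customErrors covers errors raised by ASP.NET; if the 403 comes from IIS itself (for example for static content), you may need the equivalent httpErrors section under system.webServer instead.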
Please check the below article for the details of the issue and solution.
https://www.c-sharpcorner.com/UploadFile/092589/custom-error-page-in-Asp-Net/
You may want to add a custom handler - something like this: http://forums.asp.net/post/4152906.aspx

Protecting click once web deployed installations

I have a link on my website to the standard publish page generated by Visual Studio. My concern is that if anybody finds out the URL of that page, they can download my software. Sure, I could password-protect the page with the link, but it still would not protect the download URL. Are there any ways to secure the ClickOnce deployment? I have looked around, and it seems like I am stuck in this sense.
A public URL is a security issue in ClickOnce deployment. However, there is a solution to your problem if your web server has Windows and .NET installed. Tell me if you have one? I will have to come up with another workaround for a Linux web server in case you have that.
Brief
Firstly, a bit of information about ClickOnce deployment. When you deploy the application, the GET requests made on the server are (assuming WebDir is the publish directory on the server):
G-1. GET /WebDir/setup.exe (Initial download)
G-2. GET /WebDir/MyApp.Application (setup.exe -url request)
G-3. GET /WebDir/MyApp.Application (.application deployment provider URL request)
G-4. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest (Application manifest request)
G-5. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... (Application file requests)
Implementation
Now, the solution is to intercept these file requests on the server. On IIS, you can attach a custom HTTP handler and handle the request. On Apache, you can redirect requests to PHP code using .htaccess files. Apart from this, you will have to generate a unique identifier uid for client instances downloaded from the server (it can be your license key) and put that in the deployment provider URL query parameters.
Directory Structure
Create an "Application" folder inside your WebDir and restrict access to /WebDir/Application/. Rest everything can be there inside /WebDir/
File Requests
So here's what you do on an Apache web server hosted on a Windows machine:
Create a custom download page or use the one created from publishing the application using Visual Studio (but you will have to edit it manually!). Let's assume that page is /WebDir/Download.php
After authenticating the user from Download.php, you have to send setup.exe from your code (you can do it with readfile() in PHP) to the user. However, the catch is that the bootstrapper (setup.exe), after installing, will make a GET request [G-2]. Don't forget that you have to validate this file request. So basically you change the "setup.exe -url" property to include uid before returning the file. For eg: change it to /WebDir/uid/MyApp.Application [G-2]. You can use MsiStuff.exe to change the URL property for the bootstrapper.
Using a .htaccess file, rewrite [G-2] to /WebDir/Handler.php?user=uid. From Handler.php, you can check if it is a valid uid. If it is valid, you will have to include the uid in the deployment provider URL and "Dependent Assemblies Path" in deployment manifest so that if an upgrade request comes (It essentially requests the deployment manifest), you can validate the user there too. Add uid to query string parameters. For eg: change it to /WebDir/MyApp.application?user=uid [G-3]. Don't forget that you will have to resign the manifests once you modify them. Use Mage or write your own code to do that.
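A rough sketch of the kind of .htaccess rewrite rule meant here - the paths and application name follow the example above and are purely illustrative:
RewriteEngine On
# Send deployment manifest requests that carry a uid to the PHP handler for validation
RewriteRule ^Application/([A-Za-z0-9]+)/MyApp\.Application$ Handler.php?user=$1 [L,QSA]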
So finally, the GET requests on the server will be (assuming uid=1f3rd):
G-1. GET /WebDir/Download.php - Action: return setup.exe with the -url changed
G-2. GET /WebDir/Application/setup.exe/1f3rd/MyApp.Application - Action: redirect, validate user, change URL, re-sign and return file
G-3. GET /WebDir/Application/setup.exe/MyApp.Application?user=1f3rd - Action: redirect, validate user and return file
G-4. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest - Action: redirect, validate user and return file
G-5. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... - Action: redirect, validate user and return file
Pros
The application is successfully deployed and upgraded only if all the requests have a valid uid present in the URL.
You can now identify different instances of the application on client systems. You can track the update history, do a selective version upgrade/downgrade, and much more!
Cons
You will need a Windows server to implement the above, since you need mage.exe (or your own .NET code-signing application) and MsiStuff.exe.
You may have minor performance issues since you are performing validation on every file request. You can choose to skip validation on .manifest and .deploy file requests.
You will have to ensure proper security for the company's certificate, which will be present on the web server for signing. (You can store it on the server's local file system if you have the full server to yourself; in that case, it is fine unless somebody breaks into the machine itself!)
If you want me to make something clear or explain in detail, feel free to ask. In case you have suggestions for modification to the above, post that too.
I will write a detailed CodeProject article if I have spare time someday.