This problem seems to exist on a specific server. All other servers are working ok.
Background: the website mostly uses Forms Authentication, but there is an .asmx service that manually requires Basic Auth.
I have two C# clients.
When invoking using SOAP (asmx client proxy) with basic auth credentials - all is well.
When invoking using WebClient or WebRequest with the same basic auth credentials, I get 401.5.
The folders have "Everyone" permissions set on them.
When setting up IIS tracing, I see very weird behavior. The request arrives with the correct Basic auth header, but further down the trace I see the following:
GENERAL_REQUEST_HEADERS
Headers="Connection: Keep-Alive
Content-Length: 68
Content-Type: application/json
Authorization: Kerberos
Expect: 100-continue
Host: 1.2.3.4
The Kerberos value seems very weird. It is as if the request headers changed during processing, and perhaps that explains the 401.5.
Again, I would like to stress that on the other servers there is no problem with either client. The only difference I can think of is that the problematic server is a DC (domain controller). But if that is the problem, then why does the SOAP client work fine?
Any ideas?
Progress!
After some debugging I noticed that Application_AuthenticateRequest was fired twice for every request: the first time with Basic auth, as I expected, and the second time with Kerberos!
After googling I found this:
http://forums.asp.net/t/1868629.aspx?HttpModule+triggered+two+times+for+request+to+URL+without+default+document
It seems that for extensionless URLs those events can fire multiple times, depending on how the extensionless URL handlers are configured.
Going back to the trace, I noticed that on the non-working server the trace shows usage of ExtensionlessUrl-ISAPI-4.0_64bit, while on the working servers no such entry exists. After comparing the two IIS configurations I noticed that the non-working IIS had ExtensionlessUrl-ISAPI-4.0_64bit configured, whereas the working IIS had ExtensionlessUrlHandler-ISAPI-4.0_64bit (note the "Handler"). I compared the DLLs involved and the working server had a newer aspnet_isapi.dll, so I assume this is an updated extensionless URL handler. I suppose an upgrade to IIS or .NET might install a later version, but for now I tried to remove the ExtensionlessUrl-ISAPI-4.0_xxbit handlers like so:
<remove name="ExtensionlessUrl-ISAPI-4.0_32bit" />
<remove name="ExtensionlessUrl-ISAPI-4.0_64bit" />
And it worked! Now there is only a single Application_AuthenticateRequest.
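For reference, a minimal sketch of where those <remove> elements go, assuming they are added to the application's web.config (the surrounding structure below is the standard one, not copied from the actual server):
<configuration>
  <system.webServer>
    <handlers>
      <remove name="ExtensionlessUrl-ISAPI-4.0_32bit" />
      <remove name="ExtensionlessUrl-ISAPI-4.0_64bit" />
    </handlers>
  </system.webServer>
</configuration>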
The non-working version had this in the trace:
OldHandlerName="", NewHandlerName="ExtensionlessUrl-ISAPI-4.0_64bit", NewHandlerModules="IsapiModule", NewHandlerScriptProcessor="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll", NewHandlerType=""
Now changed to:
OldHandlerName="", NewHandlerName="WebServiceHandlerFactory-ISAPI-4.0_64bit", NewHandlerModules="IsapiModule", NewHandlerScriptProcessor="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll", NewHandlerType=""
Hopefully that's the end of it; additional testing is still required.
I would appreciate it if someone could explain how to upgrade these IIS DLLs to a later version. Is this done by upgrading .NET, or is there a specific KB update delivered through Windows Update?
I have a web application running on Wildfly 26 that uses SSE broadcasting and works correctly over HTTP. However, when I switch to an HTTPS endpoint, I get Wildfly log entries of:
WARN [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1)
RESTEASY002186: Failed to set servlet request into asynchronous mode,
server sent events may not work
This happens with each registration attempt against the HTTPS endpoint, but I never see it when registering with the HTTP endpoint.
Testing with curl against the HTTP endpoint results in curl waiting for events to show up (and printing them out as it receives them) until I quit. Using curl to test the HTTPS endpoint, I see the same headers I got from the HTTP endpoint, namely:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Type: text/event-stream
But after printing out my registration successful event, curl seems to believe the stream is closed and exits -- giving me my command prompt back.
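For completeness, the curl invocation I use for these tests is along the following lines (the URL is illustrative, not the real endpoint; -N disables output buffering so events print as they arrive, and -k tolerates the test server's self-signed certificate):
curl -k -N -H "Accept: text/event-stream" https://testserver:8443/myapp/api/sse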
My @GET MediaType.SERVER_SENT_EVENTS registration endpoint will create an OutboundSseEvent and send it to the SseEventSink to acknowledge successful registration to my SseBroadcaster instance (this is the event curl sees and prints before exiting). I then log a registration successful message before exiting the method. All of this appears to work correctly for both HTTP and HTTPS, but the stream doesn't stay open once the request endpoint completes because of the failure to run asynchronously as outlined above.
I have not found information on the causes of or workarounds for my RESTEASY002186 problem. I posted a question on this issue last week on the Wildfly Google Group (https://groups.google.com/g/wildfly/c/SO2eHdvMEko) but thought I would try a wider audience, since this doesn't seem to be a commonly experienced condition. I don't see any indication during initialization that WildFly will be unable to use asynchronous mode; it just complains when it tries and fails... Any help would be greatly appreciated!
Edit 6/6/2022
The code is running on an isolated network so I can't just cut/paste the code here, but I gutted the resources file to a bare minimum -- just leaving enough for the client to be able to register. The problem remains unchanged. The code is now essentially:
#Path("sse")
public class SseResources {
#GET
#Produces(MediaType.SERVER_SENT_EVENTS)
public void listen(#Context Sse sse, #Context SseEventSink sseEventSink) {
SseRegComplete regComplete = new SseRegComplete("sse-server");
OutboundSseEvent event = sse.newEventBuilder().
name(regComplete.getType().toString()).
id(regComplete.getEventId()).
mediaType(MediaType.APPLICATION_JSON_TYPE).
data(SseRegComplete.class, regComplete).
comment("Event Stream Registration Completed Successfully").
build();
sseEventSink.send(event);
}
}
Before the above simplified code, I had declared the resource as @ApplicationScoped, had Sse injected into it, and kept a reference to the SseBroadcaster so I could use it whenever an event came in. I was catching the events to broadcast using an @Observes method (which I also got rid of). I was calling register(sseEventSink) on the SseBroadcaster in the listen method so I could later call broadcast(outboundEvent) whenever I had updates to publish. I got rid of all that just to see if I could get the stream to stay open, but to no avail. I still get the RESTEASY002186 message and curl still exits after printing out the regComplete event sent to it in the code above.
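For context, a rough sketch of what that broadcaster-based version looked like; this is reconstructed from memory rather than copied, and SseUpdateEvent stands in for the application's own event type:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseBroadcaster;
import javax.ws.rs.sse.SseEventSink;

@ApplicationScoped
@Path("sse")
public class SseResources {

    @Context
    private Sse sse;

    private SseBroadcaster broadcaster;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public synchronized void listen(@Context SseEventSink sseEventSink) {
        if (broadcaster == null) {
            broadcaster = sse.newBroadcaster();
        }
        // Keep the client's sink so later broadcasts reach it, then
        // acknowledge the registration with an initial event.
        broadcaster.register(sseEventSink);
        sseEventSink.send(sse.newEvent("registration complete"));
    }

    // CDI observer: called whenever the application fires an update event
    // (SseUpdateEvent is a placeholder name, not the real class).
    public synchronized void onUpdate(@Observes SseUpdateEvent update) {
        if (broadcaster != null) {
            broadcaster.broadcast(sse.newEventBuilder()
                    .name("update")
                    .mediaType(MediaType.APPLICATION_JSON_TYPE)
                    .data(SseUpdateEvent.class, update)
                    .build());
        }
    }
}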
Edit 6/7/2022
Yesterday I was able to get my code working in a new vanilla Wildfly 26 install using an https endpoint URL by following these configuration instructions. Something I hadn't mentioned in the original post is that I am trying to add SSE functionality to an already existing app. It is several years old and we actually moved to Wildfly 26 about 6 months ago because of the log4j vulnerability in the earlier version of Wildfly we were using. I suspect that the problem is related to either our Wildfly configuration (perhaps because old settings were brought over that shouldn't have been) or some 3rd party dependency that is preventing Wildfly from using asynchronous mode.
We are using Shiro for authentication and authorization against an LDAP server -- perhaps Shiro has some hooks into the Wildfly runtime that are causing issues? After the initial login, we use a session cookie in all subsequent calls. That is a difference from my test server, but I don't think it is relevant because the call definitely passed authentication before executing the registration code. The only other thing that comes to mind right now is that our web app ships with Logback and tells Wildfly not to use the default logging framework.
I plan to start today by comparing the two standalone.xml files to see if anything jumps out at me as being fundamentally different. Is there anything else I should be checking for differences (I think there is a domain.xml file somewhere...)?
Edit 6/14/2022
This definitely has something to do with Shiro being in the loop. When I edit web.xml so that Shiro's filter-mapping url-pattern no longer includes the SSE endpoint, everything works as expected.
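In case it helps anyone, the change was along these lines in web.xml; the filter name and patterns here are illustrative, not copied from the real descriptor:

<!-- Before: Shiro filtered every request, including the SSE endpoint -->
<filter-mapping>
    <filter-name>ShiroFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<!-- After: map only the non-SSE paths so /sse requests bypass Shiro -->
<filter-mapping>
    <filter-name>ShiroFilter</filter-name>
    <url-pattern>/api/*</url-pattern>
</filter-mapping>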
We have an ASP.NET/Angular/Web API application on IIS 7.5 that is being called with bad encoding, and we're trying to get the request body logged so we can show the problem to the caller.
We googled around and found ModSecurity, so we installed it and are giving it a try - but only for the audit logging portion. Unfortunately, neither part C nor part I seems to be logged, no matter what we do. I've seen some other Stack Overflow posts that I take to mean ModSecurity only logs those parts for certain kinds of incoming requests (e.g. .html requests log part C but nothing else does). Web API and Angular might be confusing it, but I'm not sure. Nothing I've tried seems to work.
Here's our configuration:
# -- Audit log configuration -------------------------------------------------
# Log the transactions that are marked by a rule, as well as those that
# trigger a server error (determined by a 5xx or 4xx, excluding 404,
# level response status codes).
#
SecAuditEngine On
SecRequestBodyAccess On
# Log everything we know about a transaction.
SecAuditLogParts ABCIJDEFHZ
# Use a single file for logging. This is much easier to look at, but
# assumes that you will use the audit log only occasionally.
#
SecAuditLogType Serial
SecAuditLog E:\ModLogs\modsec_audit4.log
Is there something else I've got to do to get ModSecurity to do the C/I logging?
ModSecurity 2.9.1 (downloaded today), IIS 7.5, Web API, Angular, ASP.NET
Thanks
This looks to be a known bug with ModSecurity and IIS: https://github.com/SpiderLabs/ModSecurity/issues/538
Near the bottom of that issue there are comments saying this can be resolved by setting the following in your config:
SecStreamInBodyInspection On
There are also some posts suggesting that disabling dynamic compression in IIS is necessary.
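If it helps, the directive sits alongside the existing request-body settings, something like this (a sketch, not a verified config):
SecRequestBodyAccess On
SecStreamInBodyInspection On
Dynamic compression, if it turns out to be interfering, can be switched off per site in IIS Manager under the Compression feature, or with appcmd (roughly: appcmd set config /section:urlCompression /doDynamicCompression:False).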
We are using WSO2 Carbon 4.2.0 through the WSO2 Application Server (AS) package. In replacing an older, highly customized Carbon installation (provided by a company that no longer supports the product, has abandoned it, refuses to work on it, and left us no details on how or what they modified in Carbon), we have deployed a couple of web applications in the webapps container, as they were deployed in the older instance. We have changed our WebContextRoot in carbon.xml from the default "/" to a sub-URL, e.g. "/stuff", as is also detailed in the self-answered SO question here. However, the answer given there does not detail what the OP actually encountered when he modified his WSO2 instance.
In testing the above configuration we noticed that if a user were to go to a non-existent web address on the server, depending on the format of the URL they are either:
redirected to a blank page;
receive a "500 Internal server error" (I suspect this is the embedded Tomcat?);
get sent to the Carbon login page (which we definitely do not want to happen for security reasons); or
get an XML document stating:
<faultString> The service cannot be found for the endpoint reference (EPR) /stuff/services/nonexistantservicename </faultString>
At least in the case of missing content, we wish the user to be sent to a standardized 404 error page, or at the very least to be sent an HTTP 404 error by the server. For services the XML error is palatable; we can deal with that.
The only option for us right now to circumvent this issue is to place a proxy in front of the WSO2 instance, which would be another layer to manage and tune, and could degrade performance. Please know that I am not a programmer, just an admin with DevOps experience; I would not know how to handle this with, e.g., a Java solution or by re-coding parts of WSO2. Customizing the core product would also hamper future upgrades of WSO2, a scenario we are trying to dig ourselves out of now, as detailed above. Is there no internal WSO2 mechanism to handle non-existent content? Can we not redirect any errors to a standard canned response page?
After nearly drowning in tears of frustration I have to ask you a question.
My Play (2.0.3, Scala) application is consuming a WSDL, which works perfectly fine if I run the dev version of my web service on localhost, making the WSDL URL something like http://localhost:8080/Service/Service?wsdl.
When I try to consume the WSDL from the remote test server, with a URL like http://testserver.company.net:8084/Service/Service?wsdl, I get:
[WebServiceException: Failed to access the WSDL at: http://testserver.company.net:8084/Service/Service?wsdl. It failed with: Got Server returned HTTP response code: 502 for URL: http://testserver.company.net:8084/Service/Service?wsdl while opening stream from http://testserver.company.net:8084/Service/Service?wsdl.]
My company uses an HTTP proxy for internet access, which is the reason for the 502 error. So I want Play to stop using the proxy.
So far I have tried (all together):
deleted the proxy from Internet Explorer
set _JAVA_OPTIONS=-Dhttp.noProxyHosts="testserver.company.net"
set JAVA_OPTIONS=-Dhttp.noProxyHosts="testserver.company.net"
play run -Dhttp.noProxyHosts="testserver.company.net"
None of this worked. Any ideas? How can I stop Play from using the HTTP proxy?
EDIT:
I found out it has something to do with the Java web services API / JAX-WS libraries.
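As far as I can tell, the failing call is the WSDL download that happens when the JAX-WS Service object is constructed; that download goes through java.net.URLConnection and therefore honours the JVM's proxy system properties. Roughly like this (the QName values are placeholders, not my actual generated client):

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class WsdlFetchExample {
    public static void main(String[] args) throws Exception {
        URL wsdlUrl = new URL("http://testserver.company.net:8084/Service/Service?wsdl");
        // Namespace and service name are placeholders; the real values come from the WSDL.
        QName serviceName = new QName("http://service.company.net/", "Service");
        // The WSDL is downloaded here, via java.net; this is where the 502 from
        // the corporate proxy surfaces.
        Service service = Service.create(wsdlUrl, serviceName);
        System.out.println("Created service: " + service.getServiceName());
    }
}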
Any ideas?
EDIT 2012-10-17:
It seems to depend on the system proxy settings. I still don't know why it didn't work that day, although I deleted the whole proxy configuration from IE and restarted everything. Is there any way to make my Play app independent of system settings?
Try:
play -Dhttp.noProxyHosts="testserver.company.net" run
I noticed a typo in your property: the correct property is http.nonProxyHosts, so add an extra n after no.
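Putting both fixes together (the argument before run, plus the extra n), the command becomes:
play -Dhttp.nonProxyHosts="testserver.company.net" run
Regarding the edit about not depending on system settings: in principle the same property can also be set programmatically at application startup, before the first JAX-WS call is made (a sketch, not verified against Play 2.0.3):
System.setProperty("http.nonProxyHosts", "testserver.company.net");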
I'm trying to generate a Perl library to connect to a web service. This web service is on an HTTPS server and my user has access to it.
I've executed wsdl2perl.pl several times, with different options, and it always fails with the message: Unauthorized at /usr/lib/perl5/site_perl/5.8.8/SOAP/WSDL/Expat/Base.pm line 73.
The thing is, when I don't give my user/pass as arguments, it doesn't even ask for them.
I've read SOAP::WSDL::Manual::Cookbook (http://search.cpan.org/~mkutter/SOAP-WSDL-2.00.10/lib/SOAP/WSDL/Manual/Cookbook.pod) and done what it says about HTTPS: Crypt::SSLeay is installed, and both SOAP::WSDL::Transport::HTTP and SOAP::Transport::HTTP are modified.
Can you give any hint about what may be going wrong?
Can you freely access the WSDL file from your web browser?
Can someone else in your network access it without any problems?
Maybe the web server hosting the WSDL file requires Basic or some other kind of Authentication...
If it is not necessary, I don't recommend using Perl as a web service client. As you know, Perl is an open-source language; although it does support the SOAP protocol, its support does not seem very standard. First, its documentation is not very clear. Also, its support is sometimes limited. Finally, bugs turn up here and there.
So, if you have to use wsdl2perl, you can use Komodo to step into the code and find out what happened; this is what I used to do when using Perl as a web service client. Behind HTTPS is SSL, so if your SSL setup is certificate-based, you have to set up your certificate path and your list of trusted server certificates. It is worth testing with Firefox on Linux first: you can configure Firefox's certificate path and trusted certificate list, and if Firefox can communicate with your web service server successfully, then it's time to debug your Perl client.
To debug situations with Perl and SOAP, interpose a web proxy so you can see exactly what data is being passed and what response comes back from the server. You were getting a 401 Unauthorized, I expect, but there may be more detail in the server response.
Both Fiddler (http://docs.telerik.com/fiddler) and Charles Proxy (https://www.charlesproxy.com/) can do this.
The error message you quote seems to be from this line:
die $response->message() if $response->code() ne '200';
and in the HTTP world, Unauthorized is clearly status code 401, which most probably means the website asks for a username and password (some websites may "hijack" this status code to cater for other conditions, like a filter on the source IP).
Do you have them?
If so, you can
after wsdl2perl has run, find in the generated files where set_proxy() is called and change the URL there to include the username and password, like this: ...->set_proxy('http://USERNAME:PASSWORD@www.example.com/...')
or, in your code, after instantiating the SOAP::WSDL object, call service(SERVICENAME) on it (for each service defined in your WSDL file), which gives you a new object; on that object call transport() to access the underlying transport object, on which you can call proxy() with the URL formatted as above (yes, it is proxy() here and set_proxy() above); or call credentials() instead of proxy() and pass it 4 strings:
'HOSTNAME:PORT'
the realm, as given by the web server (though I think you can put anything)
the username
the password