Error while using authorization in SoapUI - soap

I am trying to test my SOAP webservice running on WebLogic 12c (which is not installed locally) using SoapUI as a client. Without authorization everything works fine, but when I implement a very simple UsernameToken on the server and follow every step described here:
https://www.soapui.org/soap-and-wsdl/authenticating-soap-requests.html
I get the following error:
<faultcode>wsse:InvalidSecurityToken</faultcode>
<faultstring>Security token failed to validate. weblogic.xml.crypto.wss.SecurityTokenValidateResult#211ca945[status: false][msg UNT Error:Message Created time past the current time even accounting for set clock skew]</faultstring>
I also looked at SoapUI's HTTP log and noticed something strange there:
Tue Mar 20 09:15:52 CET 2018:DEBUG:>>
<soapenv:Header>
<wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="..." xmlns:wsu="...">
<wsse:UsernameToken wsu:Id="UsernameToken-374B6AAD9B07D377D515215337525091">
<wsse:Username>user</wsse:Username>
<wsse:Password Type="...#PasswordText">password</wsse:Password>
<wsse:Nonce EncodingType="...#Base64Binary">urRvoAYbjjovfD0OQqvJ6g==</wsse:Nonce>
<wsu:Created>2018-03-20T08:15:52.464Z</wsu:Created>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
The timestamp of the log entry (which is the current time) differs from the one in the <wsu:Created> tag. I don't know whether it matters, but I am in the UTC+01:00 time zone.
EDIT
I gave up on SoapUI and implemented my own Java client with authorization using a CredentialProvider:
((BindingProvider) port).getRequestContext().put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credentialProviders);
And the result was the same! So it probably isn't a SoapUI problem but something else.
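For reference, this is roughly how such a credential provider list is usually assembled with WebLogic's client-side classes. The class and package names (ClientUNTCredentialProvider, WSSecurityContext) and the wrapper class UntClientConfig are my assumptions based on the standard WebLogic 12c client API, not code from the original post, so verify them against your WebLogic client JARs:

import java.util.ArrayList;
import java.util.List;
import javax.xml.ws.BindingProvider;
import weblogic.wsee.security.unt.ClientUNTCredentialProvider;
import weblogic.xml.crypto.wss.WSSecurityContext;
import weblogic.xml.crypto.wss.provider.CredentialProvider;

public final class UntClientConfig {
    // Attaches a WS-Security UsernameToken to every request made through the given port.
    public static void addUsernameToken(BindingProvider port, String user, String password) {
        List<CredentialProvider> credentialProviders = new ArrayList<CredentialProvider>();
        // ClientUNTCredentialProvider generates the <wsse:UsernameToken> header seen in the log above
        credentialProviders.add(new ClientUNTCredentialProvider(user.getBytes(), password.getBytes()));
        port.getRequestContext().put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credentialProviders);
    }
}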
Exception in thread "main" com.sun.xml.ws.fault.ServerSOAPFaultException:
Client received SOAP Fault from server: Security token failed to validate. weblogic.xml.crypto.wss.SecurityTokenValidateResult#37b70a09[status: false]
[msg UNT Error:Message Created time past the current time even accounting for set clock skew] Please see the server log to find more detail regarding exact cause of the failure.
at com.sun.xml.ws.fault.SOAP11Fault.getProtocolException(SOAP11Fault.java:193)
at com.sun.xml.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:131)
at com.sun.xml.ws.client.sei.StubHandler.readResponse(StubHandler.java:253)
at com.sun.xml.ws.db.DatabindingImpl.deserializeResponse(DatabindingImpl.java:203)
at com.sun.xml.ws.db.DatabindingImpl.deserializeResponse(DatabindingImpl.java:290)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:119)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:92)
...
When I inspected the message with Wireshark, the time in the header was also one hour behind. That seemed really strange and I had no idea what was going on. How am I supposed to test my webservice with authorization? Locally I am using Windows 7, while WebLogic runs on CentOS 7; as I said, both are set to UTC+01:00.

OK, I finally managed to solve this issue. It turns out the server clock was about two minutes behind the local machine, while the webservice requires the client/server clock difference to be no more than a few seconds, so the token's Created time appeared to lie in the future from the server's point of view. Once I adjusted the time, everything worked properly.
Additionally, the current time in the log (Tue Mar 20 09:15:52 CET 2018) and the time in the message (2018-03-20T08:15:52.464Z) denote the same instant, just in different formats: wsu:Created is expressed in UTC.
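A small sketch (mine, not from the original posts) illustrating both points: the wsu:Created value is UTC and maps onto the CET log time, and what triggers the fault is the token's Created instant lying ahead of the validating server's clock by more than the allowed skew:

import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;

public class ClockSkewCheck {
    public static void main(String[] args) {
        // wsu:Created is always expressed in UTC; SoapUI's log prints local (CET) wall-clock time.
        Instant created = Instant.parse("2018-03-20T08:15:52.464Z");
        System.out.println(created.atZone(ZoneId.of("CET"))); // 2018-03-20T09:15:52.464+01:00[CET]

        // The fault fires when Created is in the future relative to the server clock by more than
        // the configured skew, i.e. when the client clock runs ahead of the server clock.
        long offsetSeconds = Duration.between(created, Instant.now()).getSeconds();
        System.out.println("Offset between Created and this machine's clock: " + offsetSeconds + "s");
    }
}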

Related

SoapUI endpoint error randomly

I don't understand something about SoapUI and its mock service's behaviour.
I'm using SoapUI's client (a test case) and a Java EE application with JAX-RPC.
My problem is: when I call any webservice, from my Java client or from the SoapUI test case, the mock service returns a correct message on the first call and the error below on the second call, whether it's the same call or not.
But if I wait a while, it works...
So I enabled the SoapUI option "close HTTP connection after each SOAP request" and it works every time...
So my question is: is this normal behaviour of the mock service, and how can I implement the same thing in my Java client?
Thank you all.
<soapenv:Fault>
<faultcode>Server</faultcode>
<faultstring>Missing operation for soapAction [] and body element [null] with SOAP Version [SOAP 1.1]</faultstring>
</soapenv:Fault>
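The answers below address the mock-service side; for the Java-client half of the question, here is a minimal sketch of disabling keep-alive per request. It assumes a JAX-WS port (a JAX-RPC stub, as used in the question, would need its implementation-specific equivalent), and the class name KeepAliveOff is just an example:

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.handler.MessageContext;

public final class KeepAliveOff {
    // Mirrors SoapUI's "close HTTP connection after each SOAP request" option for a JAX-WS port.
    public static void disable(BindingProvider port) {
        Map<String, List<String>> headers = new HashMap<String, List<String>>();
        headers.put("Connection", Collections.singletonList("close"));
        port.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);
        // Alternatively, for HttpURLConnection-based transports, disable keep-alive JVM-wide:
        // System.setProperty("http.keepAlive", "false");
    }
}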
OK, I found a solution on the SoapUI forum: http://www.soapui.org/forum/viewtopic.php?t=5648
It happens when the settings flag "HTTP Settings / Logs wire content of all mock requests" is set to true.
=> Uncheck the flag and it works fine!
Thanks a lot!
I had the same problem, using SoapUI 5.1.2 Pro.
After receiving the first asynchronous response, the MockService stopped and could not receive any more responses for the requests I sent.
The error message was:
Thu Jul 02 12:59:44 CEST 2015:ERROR:An error occurred [Missing operation for soapAction [XXXX] and body element [null] with SOAP Version [SOAP 1.1]], see error log for details
In the SoapUI settings (File -> Preferences -> HTTP Settings), under "Enable Mock HTTP log", uncheck the box "Logs wire content of all mock requests".
Now I receive several asynchronous responses in a row and can reply to each of them.
The same problem may happen when two mock services run with the same endpoint address (including port and path) on SoapUI.

Express Checkout Digital Goods: Proxy Error on sandbox.paypal.com/incontext

I have a Flash website. When I want to use PayPal Express Checkout with Digital Goods, I call this JavaScript code:
dg = new PAYPAL.apps.DGFlow();
dg.startFlow("http://mydomain.com/setup.php");
setup.php calls the SetExpressCheckoutPayment function and redirects to https://www.sandbox.paypal.com/incontext?token=...&useraction=commit
With Firebug I can see this address returns a 302, and redirects to https://www.sandbox.paypal.com/webapps/checkout/webflow/sparta/expresscheckoutvalidatedataflow?exp_type=&cookiesBlocked=&token=...&useraction=commit
This address also returns a 302 and redirects to https://www.sandbox.paypal.com/webapps/checkout/webflow/sparta/expresscheckoutvalidatedataflow?execution=e1s1
Here it hangs for several minutes and ends with this error message:
Proxy Error
The proxy server could not handle the request GET /webapps/checkout/webflow/sparta/expresscheckoutvalidatedataflow.
Reason: Error during SSL Handshake with remote server
I started to get this error intermittently last week, and today I get it every time.
It happens in my MAMP environment and on my website.
I don't have an SSL certificate, but I didn't have one last week either and it was not a problem.
Do you know anything about this error message?
Edit
I tried with Opera; the proxy error comes at a different step: https://www.sandbox.paypal.com/webapps/checkout/webflow/sparta/expresscheckoutvalidatedataflow?execution=e1s4
And once this morning on Firefox I had another Proxy Error after the first redirection:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /webapps/checkout/webflow/sparta/expresscheckoutvalidatedataflow.
Reason: Error reading from remote server
Since yesterday I no longer get the Proxy Error. I didn't change anything, so it seems the PayPal sandbox servers are unstable...
I've been having the same issue since Sunday evening (sorry that I can't post this as a comment, I don't have enough reputation yet).
I'm on LiquidWeb shared hosting, using the Merchant SDK ( https://github.com/paypal/merchant-sdk-php ). I was on merchant-sdk-php-2.1.96 when the errors began and tried upgrading to merchant-sdk-php-2.2.98, but now it is worse (it won't even do the first redirect, which is confusing). My code is server-side, but I get the timeout and proxy error at the same URLs:
$setECResponse = $PayPal_service->SetExpressCheckout($setECReq);
if ($setECResponse->Ack == 'Success') {
    $token = $setECResponse->Token;
    $payPalURL = 'https://www.sandbox.paypal.com/incontext?token=' . $token;
    $this->Redirect($payPalURL);
}

How to make browser stop caching GWT nocache.js

I'm developing a web app using GWT and am seeing a crazy problem with caching of the app.nocache.js file in the browser even though the web server sent a new copy of the file!
I am using Eclipse to compile the app, which works in dev mode. To test production mode, I have a virtual machine (Oracle VirtualBox) with an Ubuntu guest OS running on my host machine (Windows 7). I'm running the lighttpd web server in the VM. The VM shares my project's war directory, and the web server serves this directory.
I'm using Chrome as the browser, but the same thing happens in Firefox.
Here's the scenario:
The web page for the app is blank. According to Chrome's "Inspect Element" tool, that's because it is trying to fetch 6E89D5C912DD8F3F806083C8AA626B83.cache.html, which doesn't exist (404 Not Found).
I check the war directory, and sure enough, that file doesn't exist.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
However, if I open the app.nocache.js on the browser, the javascript is referring to 6E89D5C912DD8F3F806083C8AA626B83.cache.html!!! That is, even though the web server sent a new app.nocache.js, the browser seems to have ignored that and kept using its cached copy!
Go to Google -> GWT Compile in Eclipse and recompile the whole thing.
Verify in the war directory that the app.nocache.js was overwritten and has a new timestamp.
Reload the page from Chrome and verify once again that the server sent a 200 OK response to the app.nocache.js.
The browser once again tries to load 6E89D5C912DD8F3F806083C8AA626B83.cache.html and fails. The browser is still using the old cached copy of app.nocache.js.
I made absolutely certain that nothing in the war directory refers to 6E89D5C912DD8F3F806083C8AA626B83.cache.html (via find and grep).
What is going wrong? Why is the browser caching this nocache.js file even when the server is sending it a new copy?
Here is a copy of the HTTP request/response headers when clicking reload in the browser. In this trace, the server content hasn't been recompiled since the last GET (but note that the cached version of nocache.js is still wrong!):
Request URL:http://192.168.2.4/xbts_ui/xbts_ui.nocache.js
Request Method:GET
Status Code:304 Not Modified
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:192.168.2.4
If-Modified-Since:Thu, 25 Oct 2012 17:55:26 GMT
If-None-Match:"2881105249"
Referer:http://192.168.2.4/XBTS_ui.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Response Headers
Accept-Ranges:bytes
Content-Type:text/javascript
Date:Thu, 25 Oct 2012 20:27:55 GMT
ETag:"2881105249"
Last-Modified:Thu, 25 Oct 2012 17:55:26 GMT
Server:lighttpd/1.4.31
The best way to avoid browser caching is to set the expiration time to now and add the max-age=0 and must-revalidate Cache-Control directives.
This is the configuration I use with Apache httpd:
ExpiresActive on
<LocationMatch "nocache">
    ExpiresDefault "now"
    Header set Cache-Control "public, max-age=0, must-revalidate"
</LocationMatch>
<LocationMatch "\.cache\.">
    ExpiresDefault "now plus 1 year"
</LocationMatch>
Your configuration for lighttpd should be:
server.modules = (
    "mod_expire",
    "mod_setenv",
)
...
$HTTP["url"] =~ "\.nocache\." {
    setenv.add-response-header = ( "Cache-Control" => "public, max-age=0, must-revalidate" )
    expire.url = ( "" => "access plus 0 days" )
}
$HTTP["url"] =~ "\.cache\." {
    expire.url = ( "" => "access plus 1 years" )
}
We had a similar issue. We found that the timestamp of nocache.js was not updated by the GWT compile, so we had to touch the file during the build. We then also applied the fix from Manolo Carrasco Moñino's answer. I wrote a blog post about this issue: http://programtalk.com/java/gwt-nocachejs-cached-by-browser/
We are using GWT 2.7, as the comment also points out.
There are two straightforward solutions (the second is a modified version of the first):
1) Rename your *.html file that references *.nocache.js, e.g. MyProject.html to MyProject.jsp.
Now find the location of your *.nocache.js script tag in the renamed file:
<script language="javascript" src="MyProject/MyProject.nocache.js"></script>
Add a dynamic value as a query parameter on the JS file; this makes sure the actual contents are returned from the server every time. For example:
<script language="javascript" src="MyProject/MyProject.nocache.jsp?dummyParam=<%= "" + new java.util.Date().getTime() %>"></script>
Explanation: dummyParam itself is unused, but it gives us the intended result, i.e. the server returns a 200 instead of a 304.
Note: if you use this technique, make sure you point to the right JSP file when loading your application (before this change you were loading the app via the HTML file).
2) If you don't want the JSP solution and want to stick with your HTML file, you will need JavaScript to dynamically append a unique parameter value on the client side when loading the nocache file. I assume that shouldn't be a big deal for you given the solution above.
I have used the first technique successfully; I hope this helps.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
I wouldn't rely on this. I've seen a bit of strange behaviour in Chrome's dev tools with the network tab in combination with caching (at least, it's not 100% transparent for me). In case of doubt, I usually still consult Firebug.
So probably Chrome still uses the old version. It may have decided long ago that it will never have to reload the resource again. Clearing the cache should resolve this. Then make sure to set the correct caching headers before reloading the page; see e.g. Ideal HTTP cache control headers for different types of resources.
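Not part of the answers above, but if you serve the war from a servlet container rather than lighttpd, the same caching headers can be applied with a small filter mapped to *.nocache.js; the class name GwtNoCacheFilter is just an example of mine:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Map this filter to *.nocache.js in web.xml so the bootstrap script is always revalidated,
// while the content-hashed *.cache.* artifacts stay cacheable for a long time.
public class GwtNoCacheFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "public, max-age=0, must-revalidate");
        response.setDateHeader("Expires", 0L);
        chain.doFilter(req, res);
    }
}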
Open the page in incognito mode just to get rid of the cache issue and unblock yourself.
You still need to configure the cache headers as mentioned in the other answers.
After unsuccessfully trying to prevent caching via Apache, I created a bash script that root runs every minute as a cron job on my Linux Tomcat server.
#!/bin/bash
#
# Touches GWT nocache.js files in the Tomcat web app directory to prevent caching.
# Execute this script every minute in a root cron job.
#
cd /var/lib/tomcat7/webapps
find . -name '*nocache.js' | while read file; do
    logger "Touching file '$file'"
    touch "$file"
done

Problems invoking external web methods with BPEL + Apache ODE (calling a .NET asmx service)

Background:
I'm learning orchestration of web methods. I've successfully completed lessons on assigns, invoking web methods, and some parallel processing in BPEL. I'm using Eclipse Indigo 3.7.1 with the BPEL plugins and a Tomcat 7 server with Apache ODE as the orchestration engine. On the other side I need to learn how to call secured web methods written on the Mono .NET platform.
Current state:
Now I'm having problems calling ANY web methods. I have:
1) A web method running on Mono .NET - it works and can be tested in a browser (http://localhost:8081/hwws.asmx) and with Eclipse's "Web Services Explorer" tool; it works fine.
2) My BPEL process, which only invokes this .NET web method through a SOAP port.
3) On another workstation I've built a .NET service with Visual Studio. It gives errors as well; if needed I'll post its output later.
Problem: I'm getting errors when invoking.
Screenshots:
1) Browser test of the .NET HelloWorld WS: http://photo-hosting.winsoftmagic.com/1/s4nbwdsqib.jpg
2) Eclipse test of the .NET HelloWorld WS: http://photo-hosting.winsoftmagic.com/1/zywnl2wtgu.jpg
3) The error I get: http://photo-hosting.winsoftmagic.com/1/ltbexoxcdl.jpg
Error listing:
18:15:25,294 WARN ExternalService Fault response: faultType=(unkown)
soap:ClientCould not deserialize Soap message
18:15:25,376 ERROR INVOKE Failure during invoke:
18:15:25,382 INFO BpelRuntimeContextImpl ActivityRecovery: Registering activity 11, failure reason: on channel 21
And it gives a timeout error later. I've spent a week on this problem already and searched every way I could think of.
EDIT 12.03.2012:
Now the test against the Mono WS works, for some reason.
I tried calling a WS on the internet and it gave the same error I had at work:
14:25:16,177 ERROR [INVOKE] Failure during invoke: Error sending message (mex={PartnerRoleMex#hqejbhcnphr747jefui9ic [PID {http://wsaspx.tns/}inetWS-24] calling org.apache.ode.bpel.epr.WSAEndpoint#1e3a4c7.checkText(...) Status ASYNC}): The input stream for an incoming message is null.
14:25:16,178 INFO [BpelRuntimeContextImpl] ActivityRecovery: Registering activity 11, failure reason: Error sending message (mex={PartnerRoleMex#hqejbhcnphr747jefui9ic [PID {http://wsaspx.tns/}inetWS-24] calling org.apache.ode.bpel.epr.WSAEndpoint#1e3a4c7.checkText(...) Status ASYNC}): The input stream for an incoming message is null. on channel 21
At the same time, this service works from all test forms.
Edit 16.03.2012:
My Mono method stopped working just as inexplicably as it had started. TcpMon-1.1.jar shows this kind of message again:
POST /hwws.asmx HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: "http://hwws.tps/HelloWorld"
User-Agent: Axis2
Host: localhost:8092
Transfer-Encoding: chunked <--- EDIT: this turned out to be the reason it did not work
31c
<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header>
<addr:To xmlns:addr="http://www.w3.org/2005/08/addressing">http://localhost:8092/hwws.asmx</addr:To>
<addr:Action xmlns:addr="http://www.w3.org/2005/08/addressing">http://hwws.tps/HelloWorld</addr:Action>
<addr:ReplyTo xmlns:addr="http://www.w3.org/2005/08/addressing"><addr:Address>http://www.w3.org/2005/08/addressing/anonymous</addr:Address></addr:ReplyTo>
<addr:MessageID xmlns:addr="http://www.w3.org/2005/08/addressing">uuid:hqejbhcnphr74k7fapcntd</addr:MessageID>
</soapenv:Header>
<soapenv:Body><HelloWorld xmlns="http://hwws.tps/">
<s0:st xmlns:s0="http://hwws.tps/">My test message</s0:st>
</HelloWorld></soapenv:Body></soapenv:Envelope>
0
HTTP/1.0 500 Internal Server Error
Date: Fri, 16 Mar 2012 08:01:50 GMT
Server: Mono.WebServer2/0.4.0.0 Unix
Connection: close
X-AspNet-Version: 4.0.30319
Content-Length: 366
Cache-Control: private
Content-Type: text/xml; charset=utf-8
<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body><soap:Fault><faultcode>soap:Client</faultcode>
<faultstring>Could not deserialize Soap message</faultstring>
</soap:Fault></soap:Body></soap:Envelope>
Actually I get one of three errors: "Could not deserialize Soap message", "The input stream for an incoming message is null", or even an HTTP 411 error yesterday. :) P.S. I also had a fourth error where no socket connection was made, but they have all since vanished.
My main goal is SSL + authorization against .NET services - I would be grateful if you have examples.
Thanks a lot to everyone! It's a real pleasure to see your help. :)
Thanks to all. Testing the SOAP body showed it was fine; the problem was in the headers, which contained a strange "chunked" marker, a number before the XML (the length of the XML text) and a 0 after the XML end. I set http.request.chunk=false and now it works in all my tests so far. To do that, download sample.endpoint from http://ode.apache.org/endpoint-configuration.html and rename it to match the BPEL name (MonoCaller.bpel => MonoCaller.endpoint); it already contains the chunked setting as a commented-out line. For authorization I also added something like http.default-headers.authorization=Basic <Base64 of "login:password" produced by any encoder>, and that works now too! :-]
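For illustration, the resulting MonoCaller.endpoint file might look roughly like this (property names as documented on the ODE endpoint-configuration page linked above; the Base64 value is just the placeholder string "login:password" encoded, not real credentials):

# MonoCaller.endpoint - placed next to MonoCaller.bpel in the deployed process
# Disable chunked transfer encoding, which the Mono ASMX service could not parse
http.request.chunk=false
# Send HTTP Basic credentials; the value is Base64 of the placeholder "login:password"
http.default-headers.authorization=Basic bG9naW46cGFzc3dvcmQ=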
The same error occurred for me; the problem was with the web service itself. I had an empty constructor in addition to the methods, which caused the problem; the solution was to delete the constructor.

SOAP "error fetching http headers": how do I do suspected solution of disabling keep-alive?

I'm troubleshooting an existing webservice. It previously worked just fine, but now SOAP-based requests to the PostgreSQL database result in an "unknown error: Error Fetching http headers" error.
While looking up this problem, I came across the following tip:
When you get errors like: "Fatal error: Uncaught SoapFault exception:
[HTTP] Error Fetching http headers in" after a few (time intensive)
SOAP-Calls, check your webserver-config.
Sometimes the webservers "KeepAlive"-Setting tends to result in this
error. For SOAP-Environments I recommend you to disable KeepAlive.
Hint: It might be tricky to create a dedicated vhost for your
SOAP-Gateways and disable keepalive just for this vhost because for
normal webpages Keepalive is a nice speed-boost.
I haven't been able to figure out exactly how to disable KeepAlive or where this parameter would be set. I've tried grep -i "keepalive" /usr/share/tomcat5/conf/*, with no results.
Perhaps due to the variability of server environments this is a question for my sysadmin, but I do have root privileges.
Thanks for your help, stack!
In Tomcat's server.xml file, set the maxKeepAliveRequests attribute to 1 on your HTTP connector(s) to effectively disable keep-alive (see the example connector below).
For more information:
http://tomcat.apache.org/tomcat-5.5-doc/config/http.html#Standard_Implementation
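For example, a hypothetical connector in conf/server.xml with keep-alive effectively disabled; adapt the attributes of the connector you already have rather than copying this verbatim:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxKeepAliveRequests="1"
           redirectPort="8443" />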