I am modifying a legacy project that uses SOAP web services. I noticed that some of the URLs it points to for some of the namespaces are not working anymore (500). Any idea what the consequences would be?
Both the client and the server still seem to be working fine, but I need to make a new client that consumes the WS.
Namespaces may be in the form of a URL, but they do not represent a resource on the network; in many cases, there never was any resource at that location. The fact that those URLs now return 500 makes no difference at all.
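To illustrate why, here is a minimal sketch using a hypothetical namespace URI: SOAP stacks treat namespace URIs as opaque strings to compare, and never dereference them.

    import javax.xml.namespace.QName;

    public class NamespaceDemo {
        public static void main(String[] args) {
            // The namespace URI is an opaque identifier; the SOAP stack
            // never fetches it, so the host behind it can return 500
            // (or vanish entirely) without affecting the service.
            QName q1 = new QName("http://example.com/legacy/ns", "Order");
            QName q2 = new QName("http://example.com/legacy/ns", "Order");
            System.out.println(q1.equals(q2)); // true - plain string comparison
        }
    }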
I've got a public Java web app created with Spring Boot. I'm ready to release it to production and wanted to get some tips on making sure that it is as secure as possible.
It is basically a search engine that uses Apache Lucene in the backend and a lot of JavaScript in the frontend. I am planning on deploying it to Amazon Web Services, while using my local machine as a backup/test environment.
Users can search and browse data. The frontend calls REST endpoints using JavaScript's XMLHttpRequest to query the backend for content and then displays it to the user.
The app is completely public and there is no user authentication as of yet.
The app also persists user requests to a database for tracking purposes.
Here's what I've done so far to secure it:
Make sure that the REST endpoints fully verify that the parameters given to them in the requests are valid (see the sketch below).
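For illustration, a minimal sketch of that kind of parameter checking with Spring's bean validation; the endpoint path, parameter name, and constraints are all assumptions:

    import javax.validation.constraints.Pattern;
    import javax.validation.constraints.Size;
    import org.springframework.validation.annotation.Validated;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical search endpoint; the annotations reject oversized or
    // malformed query parameters before the handler method runs.
    @Validated
    @RestController
    public class SearchController {

        @GetMapping("/api/search")
        public String search(@RequestParam("q") @Size(max = 200)
                             @Pattern(regexp = "[\\p{L}\\p{N}\\s\\-]*") String query) {
            // ... run the Lucene query and serialize the results (omitted)
            return "[]";
        }
    }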
What I plan on doing:
Using HTTPS.
Verifying that any PUT request or URL request has a reasonable size.
What I am considering adding:
Limiting the number of requests a user can make in a given time period (not sure whether Spring Boot already has a facility to do this, or whether I should implement it myself; see the sketch after this list).
Using some kind of API key scheme to make sure that my endpoints are only accessed by my frontend (not sure whether this is effective, and I don't yet know how to do it).
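On the rate-limiting point: plain Spring Boot has no built-in request throttling, so one option is a servlet filter. A naive in-memory sketch, keyed by client IP, with an assumed limit of 60 requests per minute; behind a load balancer or across multiple instances you would want a shared store such as Redis instead:

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.stereotype.Component;

    // Spring Boot registers any Filter bean automatically.
    @Component
    public class RateLimitFilter implements Filter {

        private static final int MAX_REQUESTS_PER_MINUTE = 60; // assumed limit
        private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
        private volatile long windowStart = System.currentTimeMillis();

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long now = System.currentTimeMillis();
            if (now - windowStart > 60_000) {   // start a fresh one-minute window
                counts.clear();
                windowStart = now;
            }
            AtomicInteger count = counts.computeIfAbsent(
                    req.getRemoteAddr(), k -> new AtomicInteger());
            if (count.incrementAndGet() > MAX_REQUESTS_PER_MINUTE) {
                ((HttpServletResponse) res).sendError(429, "Too many requests");
                return;
            }
            chain.doFilter(req, res);
        }
    }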
Is there anything else that I should consider doing? Do the things I listed above make sense?
Would greatly appreciate any tips on this.
Is it possible to get the Portal base URL (like http://www.thisismyportal.com) from a Portlet using Portlet 2.0 API?
Right now I'm planning to build it manually by concatenating PortletRequest.getServerName(), PortletRequest.getServerPort() and PortletRequest.getContextPath(); but it seems kind of clumsy (and there's no PortletRequest.getProtocol()).
While it is clumsy, it is the safest way to construct the URL; and while there is no PortletRequest.getProtocol() method, you can infer the protocol using the PortletRequest.isSecure() method.
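Something along these lines, as a sketch; the handling of default ports is an assumption about how you want the URL to read:

    import javax.portlet.PortletRequest;

    public final class PortalUrls {

        // Rebuilds the base URL from the current request; omits the port
        // when it is the default for the inferred protocol.
        public static String baseUrl(PortletRequest request) {
            String scheme = request.isSecure() ? "https" : "http";
            int port = request.getServerPort();
            boolean defaultPort = ("http".equals(scheme) && port == 80)
                    || ("https".equals(scheme) && port == 443);
            return scheme + "://" + request.getServerName()
                    + (defaultPort ? "" : ":" + port)
                    + request.getContextPath();
        }
    }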
I would advise against using an external configuration for the base URL, for a couple of reasons.
First, it would be yet another configuration item for you to maintain across environments (test, integration, production and so forth). There's very little justification to hold, in configuration, something that is fully reproducible using the current request.
Second, under certain circumstances, it might be impossible to designate a particular URL as a "base URL" for the portal. An example would be the case in which the portal server is associated with multiple hosts, or multiple host aliases.
We had those configuration properties in a Resource Environment Provider for the purpose of generating external URLs to send in emails. It was a specific solution, and it wasn't a problem for us, as we had other properties stored there as well, so we knew it would be available at runtime. I don't know if that suits your needs; it depends on your scenario.
Also, we used HTTPS only during login, so we always generated HTTP URLs.
Hope this helps.
I developed a web application (it works properly) which registers a user to the system and allows the user to upload a file to the system via HTTPS. The client-side code is developed entirely with GWT 2.4 and the backend is several servlets. Except for the upload code, all client-server communication is done via the ServiceAsync interface, as is common practice in GWT. The upload code is based on a form which communicates with the upload servlet directly.
This project was developed as coursework, and my Professor is keen on knowing the underlying architecture of Google Web Toolkit, specifically focused on the client-server communication. His question was,
"How the client code knows the server's url so that all the communication is done?"
His question is legitimate for the ServiceAsync interface: I am calling a function on the server side, which seems interesting to him, and he wants to know the underlying process behind it.
For uploading, I just defined uploadForm.setAction(GWT.getModuleBaseURL()+"upload"); where upload is the name of the upload servlet in web.xml.
I told him that the compiler generates JavaScript code which contains all of the web application code (the whole system generated dynamically), and that the URL to the servlet is placed in that script file; however, the answer did not satisfy him. Please let me know the inner facts of client-server communication with GWT.
Please give me some answers that can help my Professor to understand GWT's asynchronous client-to-server RPC communication.
The underlying technology is shown here as a diagram. Google says "GWT provides an RPC mechanism based on Java Servlets to provide access to server side resources. This mechanism includes generation of efficient client and server side code to serialize objects across the network using deferred binding."
The client knows the URL to query because you would have annotated your service interface with a @RemoteServiceRelativePath annotation. This associates the service with a default path relative to the module base URL. That URL is where the JavaScript sends your request.
There's a lot more to learn about GWT's RPC if you care to, and you can start picking it apart here and here.
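For concreteness, a minimal sketch of such an annotated service pair; the interface names and the "greet" path are hypothetical:

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // "greet" is resolved against GWT.getModuleBaseURL(), so the compiled
    // JavaScript posts to <module-base-url>/greet without the absolute URL
    // ever appearing in the source code.
    @RemoteServiceRelativePath("greet")
    public interface GreetingService extends RemoteService {
        String greet(String name);
    }

    // The matching async interface that client code actually calls.
    interface GreetingServiceAsync {
        void greet(String name, AsyncCallback<String> callback);
    }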
There are multiple ways in GWT to bind an RPC service to a specific URL.
The first is the @RemoteServiceRelativePath annotation, which is placed on the synchronous interface. Using a deferred binding rule, GWT will discover this URL and set it on the service instance automatically.
The second is casting an instance of the GWT-RPC async service to ServiceDefTarget and setting the URL manually, as in the sketch below.
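A minimal sketch of the second approach, assuming a hypothetical GreetingService/GreetingServiceAsync pair like the one in the previous answer:

    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.rpc.ServiceDefTarget;

    // GWT.create() returns the generated async proxy, which always
    // implements ServiceDefTarget; the cast exposes the URL setter.
    GreetingServiceAsync service = GWT.create(GreetingService.class);
    ((ServiceDefTarget) service).setServiceEntryPoint(
            GWT.getModuleBaseURL() + "customGreet");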
But this answer alone will not satisfy your professor, because most likely he will want to know some other details, so I recommend you learn exactly how GWT-RPC works.
Regarding the basic question of how the client knows the URL of the server, it sounds as though the professor may be asking about the full URL of the site (the domain name), and not just the sub-directories for the services as they are defined in @RemoteServiceRelativePath and the web.xml.
For this more fundamental aspect of the question, I think the "Same Origin Policy" (SOP) that browsers have for JavaScript security could be an important part of the answer. This is explained in one of the GWT FAQs. The first thing the browser does on the client side (after the HTTPS connection is established, which I think could be another important part of the answer) is read the host HTML file, where the bootstrap nocache.js file is referenced. Once this file is loaded, the SOP guarantees that all of the subsequent JS application files come from the same server as the bootstrap and host HTML files. Once the application files are loaded, everything happens within that context, with specific internal URL paths being defined for RPCs as already mentioned.
Is there a good reason to deploy or consume a SOAP service without using a WSDL "file"?
Explanation:
I'm in a situation where a 3rd-party has created a SOAP service that does not follow the very WSDL file they have also created. I think I am forced to ignore the WSDL file in order to consume this service. Therefore I'm researching how to do this.
What I am really wondering is why it is even possible to do this? What is the intention?
Is it designed so that we can use poor services made by poor programmers? Surely there must be a better reason. I almost wish it wasn't possible. Then I could demand they write it properly.
The WSDL is supposed to be a public document that describes the SOAP service, i.e., the signatures of all the methods available in the service.
Of course, there may be service providers who want to expose a service to certain consumers but don't want to make the signature of the service public, if only to make it a little harder for people they don't want using the service to find it or attempt to use it. The signature of the service might expose some private information about the schema of their data, for example.
But I don't see any excuse for writing a WSDL that doesn't match the service. I would worry that if they can't get the WSDL right, what is the quality of the service going to be like?
To answer the other question: yes, you can consume the service without the WSDL. If you are using Visual Studio, for example, you could have VS build a proxy for you based on the incorrect WSDL and then tweak it to match the correct service method signatures. You just need to make sure the data contracts and method contracts in your proxy match the actual service's data contracts and method contracts.
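On the Java side, a comparable WSDL-free route is the JAX-WS Dispatch API, which lets you construct a client purely in code. A minimal sketch; the service name, port name, endpoint URL, and payload are all hypothetical:

    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class NoWsdlClient {
        public static void main(String[] args) {
            // Everything here is declared in code; nothing is read from a WSDL.
            QName serviceName = new QName("http://example.com/ns", "SomeService");
            QName portName = new QName("http://example.com/ns", "SomePort");

            Service service = Service.create(serviceName);
            service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                    "http://example.com/endpoint");

            // Mode.PAYLOAD means we send and receive the SOAP body only.
            Dispatch<Source> dispatch = service.createDispatch(
                    portName, Source.class, Service.Mode.PAYLOAD);

            String body = "<ns:ping xmlns:ns=\"http://example.com/ns\"/>";
            Source response = dispatch.invoke(new StreamSource(new StringReader(body)));
            // ... process the response Source as needed
        }
    }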
Is there any reason why a route would be properly mapped in one environment and not another? I am deploying the exact same routing information from my local development server to a production server, and the routes are not being evaluated the same.
I have downloaded Phil Haack's Routing Debugger, and it is confirming that the routes are matching locally, but not in production.
Has anyone ever experienced this?
UPDATE: I didn't include many details above. The production server is IIS 6 on Windows Server 2003. All my routes were working except for one that I was using as a custom image handler. The route I specified mapped to a URL ending in ".png".
I found that this was the problem with IIS 6, since it was not handing the ".png" request off to ASP.NET. I added a wildcard mapping to the site and that fixed the problem.
I apologize for not having put more details before. Hope this helps someone else.
There's a couple of things at work. As you mentioned, IIS 6 doesn't hand off requests that aren't mapped to the ASPNET_ISAPI.dll. A wildcard mapping fixes that problem.
The other potential issue is that, by default, routing doesn't route files that exist on disk (this is controlled by RouteCollection.RouteExistingFiles, which defaults to false). So if you make a request for a physical .png file, it won't get routed.