Quickly Update Static Content on WildFly Server

I have a web application that is deployed to a local WildFly server. I am currently working on the front-end and am having trouble testing the data calls. All of my data gathering is done via AJAX and thus requires that I be on an actual server to avoid any cross-site issues. This isn't a problem, but it is very time-consuming to re-deploy the entire application just to tweak a line of JavaScript.
I have successfully been able to deploy an exploded WAR. However, this forces a re-deploy whenever I save a file. Is there a way to have it update static content quickly and automatically without re-deploying?

You can also use a CORSFilter like the one below; I think I found this one here, and there are plenty around.
It adds Access-Control-Allow-* headers to responses, which in turn stops the browser from rejecting the reply for security reasons, as it would otherwise do.
I have it included in my web app (for now), running on http://localhost:8080, and I can then run my grunt/gulp/node server on http://localhost:8000 on the same machine with livereload "as usual", making requests between the two servers.
I am a back-end architect and I haven't done much front-end work, so there might be better solutions around; this is how I solved it today.
The CORSFilter:
import java.io.IOException;

import javax.inject.Singleton;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

@Singleton
@Provider
public class CORSFilter implements ContainerResponseFilter {

    @Override
    public void filter(final ContainerRequestContext requestContext, final ContainerResponseContext cres)
            throws IOException {
        // Allow any origin plus the usual headers and methods so the browser
        // accepts cross-origin replies from this server.
        cres.getHeaders().add("Access-Control-Allow-Origin", "*");
        cres.getHeaders().add("Access-Control-Allow-Headers", "origin, content-type, accept, authorization");
        cres.getHeaders().add("Access-Control-Allow-Credentials", "true");
        cres.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS, HEAD");
        cres.getHeaders().add("Access-Control-Max-Age", "1209600");
    }
}
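How the filter gets picked up depends on your JAX-RS setup; with classpath scanning the @Provider annotation is usually enough. If you register classes explicitly in your Application subclass instead, a minimal sketch (the class name and application path here are assumptions, not part of the original answer) could look like this:

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Hypothetical Application class; note that once getClasses() returns a
// non-empty set, resource classes must be listed here explicitly as well.
@ApplicationPath("/rest")
public class RestApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<>();
        classes.add(CORSFilter.class);
        // add your resource classes here too
        return classes;
    }
}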

Probably not the answer you hoped for, but another way is to use the right front-end development tools, for example Webpack (http://webpack.github.io/), to build and test your front-end while mocking your REST backend. That way you can also easily unit test your JavaScript front-end.

Related

How do I resolve RESTEASY002186 so my Wildfly 26 web application can use SSE over https?

I have a web application running on Wildfly 26 that uses SSE broadcasting and works correctly with http. However, when I switch to using an https endpoint, I get Wildfly log entries of:
WARN [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1)
RESTEASY002186: Failed to set servlet request into asynchronous mode,
server sent events may not work
This happens with each registration attempt of the https endpoint but I never see this when registering with the http endpoint.
Testing with curl against the http endpoint results in curl waiting for events to show up (and printing them out as it receives them) until I quit. Using curl to test the https endpoint, I see the same headers I got from the http endpoint, namely:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Type: text/event-stream
But after printing out my registration-successful event, curl seems to believe the stream is closed and exits, giving me my command prompt back.
My @GET MediaType.SERVER_SENT_EVENTS registration endpoint creates an OutboundSseEvent and sends it to the SseEventSink to acknowledge successful registration with my SseBroadcaster instance (this is the event curl sees and prints before exiting). I then log a registration-successful message before exiting the method. All of this appears to work correctly for both http and https, but the stream doesn't stay open once the request endpoint completes, because of the failure to run asynchronously outlined above.
I have not found information on the causes of or workarounds for my RESTEASY002186 problem. I posted a question on this issue last week in the Wildfly Google Group (https://groups.google.com/g/wildfly/c/SO2eHdvMEko) but thought I would try a wider audience, since this doesn't seem to be a commonly experienced condition. I don't see any indication during initialization that WildFly will be unable to use asynchronous mode; it just complains when it tries and fails... Any help would be greatly appreciated!
Edit 6/6/2022
The code is running on an isolated network, so I can't just cut and paste it here, but I gutted the resources file to a bare minimum, leaving just enough for the client to be able to register. The problem remains unchanged. The code is now essentially:
#Path("sse")
public class SseResources {
#GET
#Produces(MediaType.SERVER_SENT_EVENTS)
public void listen(#Context Sse sse, #Context SseEventSink sseEventSink) {
SseRegComplete regComplete = new SseRegComplete("sse-server");
OutboundSseEvent event = sse.newEventBuilder().
name(regComplete.getType().toString()).
id(regComplete.getEventId()).
mediaType(MediaType.APPLICATION_JSON_TYPE).
data(SseRegComplete.class, regComplete).
comment("Event Stream Registration Completed Successfully").
build();
sseEventSink.send(event);
}
}
Before the above simplified code, I had declared the resource as @ApplicationScoped, had Sse injected into it, and kept a reference to the SseBroadcaster so I could use it whenever an event came in. I was catching the events to broadcast by using an @Observes method (which I also got rid of). I was calling register(sseEventSink) on the SseBroadcaster in the listen method so I could later call broadcast(outboundEvent) whenever I had updates to publish. I got rid of all that just to see if I could get the stream to stay open, but to no avail. I still get the RESTEASY002186 message and curl still exits after printing out the regComplete event sent to it in the code above.
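For reference, a rough sketch of the broadcaster-based variant described above (this is a reconstruction, not the poster's actual code; the class name and the AppEvent type are placeholders):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.OutboundSseEvent;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseBroadcaster;
import javax.ws.rs.sse.SseEventSink;

// Sketch only: application-scoped resource that keeps one SseBroadcaster and
// pushes CDI events to every registered client. AppEvent is a placeholder type.
@ApplicationScoped
@Path("sse")
public class SseBroadcastResource {

    @Context
    private Sse sse;

    private SseBroadcaster broadcaster;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public synchronized void listen(@Context SseEventSink sseEventSink) {
        if (broadcaster == null) {
            broadcaster = sse.newBroadcaster();
        }
        broadcaster.register(sseEventSink);
        // acknowledge registration to this client only
        sseEventSink.send(sse.newEvent("registration-complete"));
    }

    // CDI observer used to publish updates to all registered clients
    public synchronized void onAppEvent(@Observes AppEvent appEvent) {
        if (broadcaster != null) {
            OutboundSseEvent event = sse.newEventBuilder()
                    .name("update")
                    .mediaType(MediaType.APPLICATION_JSON_TYPE)
                    .data(AppEvent.class, appEvent)
                    .build();
            broadcaster.broadcast(event);
        }
    }
}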
Edit 6/7/2022
Yesterday I was able to get my code working in a new vanilla Wildfly 26 install, using an https endpoint URL, by following these configuration instructions. Something I hadn't mentioned in the original post is that I am trying to add SSE functionality to an already existing app. It is several years old, and we actually moved to Wildfly 26 about 6 months ago because of the log4j vulnerability in the earlier version of Wildfly we were using. I suspect the problem is related either to our Wildfly configuration (perhaps because old settings were carried over that shouldn't have been) or to some 3rd-party dependency that is preventing Wildfly from using asynchronous mode.
We are using Shiro for authentication and authorization against an LDAP server; perhaps Shiro has some hooks into the Wildfly runtime that are causing issues? After the initial login, we use a session cookie in all subsequent calls. That is a difference from my test server, but I don't think it is relevant, because the call definitely passed authentication before executing the registration code. The only other thing that comes to mind right now is that our web app ships with Logback and tells Wildfly not to use the default logging framework.
I plan to start today by comparing the two standalone.xml files to see if anything jumps out at me as being fundamentally different. Is there anything else I should be checking for differences (I think there is a domain.xml file somewhere...)?
Edit 6/14/2022
This definitely has something to do with Shiro being in the loop. When I edit the web.xml file so that Shiro's filter-mapping url-pattern does not include the SSE endpoint, everything works as expected.

Camel route for OPC-UA Milo project

I am working on creating a Camel/Spring Boot application that implements an OPC-UA connection. So far, I have successfully been able to run the examples obtained from the Eclipse Milo GitHub repository.
Now, my task is to create a Camel route that connects to the OPC-UA server running on a different machine, reads data from it and stores that data in a JMS queue.
So far, I am able to run the BrowseNodeExample and ReadNodeExample, where I connect to a server simulator (Top Server V6). In the example code, when connecting to the server, the endpoint of the server is given as "opc.tcp://127.0.0.1:49384/SWToolbox.TOPServer.V6".
Now, in the Camel routing piece of code, in the configure() part, what should I write in the .from() part? The code is as follows:
@Override
public void configure() throws Exception {
    from("opc.tcp://127.0.0.1:49384/SWToolbox.TOPServer.V6")
            .process(opcConnection)
            .split(body().tokenize(";"))
            .to(opcBean.getKarafQueue());
}
While searching for a solution I came across one option: milo-server:tcp://127.0.0.1:49384/SWToolbox.TOPServer.V6/nodeId=2&namespaceUri=http://examples.freeopcua.github.io. I tried that, but it didn't work. In both cases I get the error below:
ResolveEndpointFailedException: Failed to resolve endpoint: (endpoint
given) due to: No component found with scheme: milo-server (or
opc.tcp)
You might want to add the camel-opc component to your project.
I've found one on GitHub
and also a Milo version on Maven Central for the OPC-UA connection.
Hope that helps :-)
The ResolveEndpointFailedException is quite clear: Camel cannot find the component. That means that the auto-discovery failed to load the component definition from the META-INF directory.
Have you checked that the camel-milo jar is contained in your fat-jar/war?
As a workaround you can add the component manually via
CamelContext context = new DefaultCamelContext();
context.addComponent("foo", new FooComponent(context));
http://camel.apache.org/how-do-i-add-a-component.html
or in your case
@Override
public void configure() throws Exception {
    getContext().addComponent("milo-server", new org.apache.camel.component.milo.server.MiloServerComponent());
    from("milo-server:tcp://127.0.0.1:49384/SWToolbox.TOPServer.V6/nodeId=2&namespaceUri=http://examples.freeopcua.github.io")
    ...
}
Furthermore, be aware that milo-server starts an OPC UA server. As I understood your question, you want to connect to an OPC UA server, so you need the milo-client component instead.
camel-milo client at github
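To sketch what that could look like for your route (hedged: the node id, the JMS queue name and the exact endpoint URI syntax are assumptions and depend on your camel-milo version, so check the component documentation), a client route feeding a JMS queue might be along these lines:

@Override
public void configure() throws Exception {
    // Register the *client* component, analogous to the milo-server example above
    getContext().addComponent("milo-client",
            new org.apache.camel.component.milo.client.MiloClientComponent());

    // Subscribe to a node on the remote OPC UA server and forward each value to JMS;
    // "ns=2;s=SomeTag" and "opcData" are placeholders for your node id and queue name.
    from("milo-client:opc.tcp://127.0.0.1:49384/SWToolbox.TOPServer.V6?node=RAW(ns=2;s=SomeTag)")
            .convertBodyTo(String.class)
            .to("jms:queue:opcData");
}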

I can't make REST calls with React

I am learning to use React at the moment, and I have a problem: I want React to work with my Java Spring Boot backend (Tomcat). React is running on port 3000 (the default) and my Tomcat server on port 8080 (the default). Now, when I make a REST call, I get the following error:
script.js:13 GET http://localhost:8080/api/path?p=test net::ERR_CONNECTION_REFUSED
My REST call looks like:
fetch('http://localhost:8080/api/path?p=test')
.then(res => {
console.log(res);
});
What am I doing wrong? I don't really have any idea.
A net::ERR_CONNECTION_REFUSED error is typically thrown when your frontend (React application) cannot connect to your backend (your Spring Boot application). There are many possible reasons for this, such as:
Your back-end application is not running
The path to your back-end application is not correct
There is some kind of firewall between your front- and back-end application that is stopping the traffic
...
In your case it appeared to be the back-end application that was not running properly.
Your second problem is related to CORS, a mechanism that prevents JavaScript from connecting to APIs/websites that are not on the same (sub)domain and port. In your case, your frontend runs on a different port than your backend, so you have to deal with CORS.
To handle CORS, you have to add a header to your backend responses (namely the Access-Control-Allow-Origin header the error mentions). With Spring, you can do that by adding the following annotation to your controller:
@CrossOrigin(origins = "http://localhost:3000")
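For example, applied to a hypothetical controller serving the /api/path call from the question (the class name and return value are placeholders, not your actual code), it could look roughly like this:

import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller matching fetch('http://localhost:8080/api/path?p=test')
@CrossOrigin(origins = "http://localhost:3000")
@RestController
@RequestMapping("/api")
public class ApiController {

    @GetMapping("/path")
    public String path(@RequestParam("p") String p) {
        return "Hello " + p;
    }
}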
Or you can configure CORS globally with a filter or using WebMvcConfigurer:
@Bean
public WebMvcConfigurer corsConfigurer() {
    return new WebMvcConfigurerAdapter() {
        @Override
        public void addCorsMappings(CorsRegistry registry) {
            registry.addMapping("/api/**").allowedOrigins("http://localhost:3000");
        }
    };
}
As mentioned in the comments by @Lutz Horn, this is also described in a Spring guide.

JRuby, Sinatra, Warbler app - HTTP PATCH request fails with 501 error

I've created a simple web service using JRuby 1.7.4, Sinatra 1.4.4 and Rack 1.5.2. This web service responds to GET, POST, PATCH and DELETE requests with a simple message "Hello world using [request-type]".
I started Rack and tested all the request types. All four worked.
I used Warbler to create a war file of the application and deployed it to Tomcat 7.0.47. When I tested with Tomcat, PATCH failed with "HTTP Status 501 - Method PATCH is not is not implemented by this servlet for this URI". (Yes, 'is not' is repeated, but this is the response I get from Tomcat.) GET, POST and DELETE worked fine.
I then tried using Jetty 9.1.0. Same result. GET, POST and DELETE work but PATCH fails.
Why are PATCH requests failing and how do I get them to work with this set up?
According to the Tomcat documentation, HttpServlet can handle only GET, POST, PUT and DELETE requests:
public abstract class HttpServlet extends GenericServlet
Provides an abstract class to be subclassed to create an HTTP servlet suitable for a Web site. A subclass of HttpServlet must override at least one method, usually one of these:
doGet, if the servlet supports HTTP GET requests
doPost, for HTTP POST requests
doPut, for HTTP PUT requests
doDelete, for HTTP DELETE requests
But you may find this useful:
If you use an HTTP library that doesn't allow overriding or setting an arbitrary HTTP method name, you can send a POST request and provide an override to the HTTP method via the query string parameter _HttpMethod.
For example, to update an Account, this will work with an actual POST request:
.../services/data/v23.0/sobjects/Account/0016000000eEhmxAAC?_HttpMethod=PATCH
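For completeness, the 501 itself comes from HttpServlet.service(), which only dispatches the verbs listed above and answers 501 for anything else. A servlet you control can accept PATCH by intercepting it before handing off to super.service(); this is only a plain-servlet sketch to illustrate the limitation, not the servlet that Warbler generates for you:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: HttpServlet.service() returns 501 for verbs it does not know,
// so PATCH has to be handled before delegating to the superclass.
public class PatchAwareServlet extends HttpServlet {

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        if ("PATCH".equalsIgnoreCase(req.getMethod())) {
            doPatch(req, resp);
        } else {
            super.service(req, resp);
        }
    }

    protected void doPatch(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello world using PATCH");
    }
}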

Calling GWT RPC service

I have been going through the Google tutorial (which I find very good) at
https://developers.google.com/web-toolkit/doc/latest/tutorial/RPC
I have the service up and running on my local server and my JavaScript client can call it fine. OK so far. Now, what I want to do is deploy the service on a remote server, JoeSoapHost:8080.
How do I now tell my client where to send its requests? I can't see any server/URL being set in my RPC call. It just works by magic, but now I want to get under the bonnet and start breaking it.
[Edit]
This is the interface my client uses to know which service on the server is to be called. I know that my web.xml deployment descriptor must have a URL that matches this, and it does, because my server is invoked OK. The problem is, if I now decide to deploy my server elsewhere, how do I tell my client what server/domain name to use?
#RemoteServiceRelativePath("stockPrices")
public interface StockPriceService extends RemoteService
{
StockPrice[] getPrices(String[] symbols);
}
What I want to achieve first is have a simple GWT client calling into an RPC service. I have this working but only when the server is localhost.
Next step, I deploy my app to Google App Engine. What must I change now? My RPC service is not being called from my JavaScript when I deploy my app to
http://stockwatcherjf.appspot.com/StockWatcher.html
1) Brian Slesinsky's excellent document on RPC: https://docs.google.com/document/d/1eG0YocsYYbNAtivkLtcaiEE5IOF5u4LUol8-LL0TIKU/edit#heading=h.amx1ddpv5q4m
2) @RemoteServiceRelativePath("stockPrices") lets GWT resolve the service URL relative to your host/server/domain, e.g. http://mydomain.com/gwtapp/stockPrices
3) You can search the Google I/O sessions from 2009-2012 for more in-depth material on GWT RPC usage.
@RemoteServiceRelativePath gives the path of the servlet relative to GWT.getModuleBaseURL() (which is more or less the URL of the *.nocache.js script); it doesn't "just work by magic".
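If you do need to point the client at an explicit URL, the generated async proxy can be cast to ServiceDefTarget; a short sketch, where the host name and module path are assumptions and StockPriceServiceAsync is the usual async counterpart of your service interface:

// Assumed imports: com.google.gwt.core.client.GWT,
// com.google.gwt.user.client.rpc.ServiceDefTarget
StockPriceServiceAsync stockPriceSvc = GWT.create(StockPriceService.class);

// Override the default GWT.getModuleBaseURL() + "stockPrices" target;
// the host and module path below are placeholders.
((ServiceDefTarget) stockPriceSvc).setServiceEntryPoint(
        "http://JoeSoapHost:8080/stockwatcher/stockPrices");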
Note, though, that if you deploy your services on a different server than the one serving your client code, you'll likely hit the Same Origin Policy. CORS can help here, but you'll lose compatibility with IE (up to and including IE9). You'd better stick to serving everything from the same origin.