How to call Hawkular from Vert.x

I want to get metrics from Vert.x into Hawkular, but I have a problem.
I followed this tutorial: http://vertx.io/docs/vertx-hawkular-metrics/java/
Then I changed the code from the Vert.x tutorial
http://vertx.io/blog/my-first-vert-x-3-application/
from this:
@Before
public void setUp(TestContext context) {
    vertx = Vertx.vertx();
    vertx.deployVerticle(MyFirstVerticle.class.getName(),
        context.asyncAssertSuccess());
}
to this:
VertxOptions vertxOptions = new VertxOptions()
    .setMetricsOptions(new VertxHawkularOptions()
        .setHost("localhost")
        .setPort(8080)
        .setTenant("com.acme")
        .setAuthenticationOptions(
            new AuthenticationOptions()
                .setEnabled(true)
                .setId("jdoe")
                .setSecret("password"))
        .setEnabled(true));
vertx = Vertx.vertx(vertxOptions);

JsonObject message = new JsonObject()
    .put("id", "myapp.files.opened")
    .put("value", 7);
vertx.eventBus().publish("metrics", message);
But I don't see any changes in Hawkular.
First of all, I checked with Wireshark, and there appears to be no outgoing HTTP request from this application.
I want to know: if I execute this code, should I see some change in Hawkular Metrics?
I already checked that the program executes these lines; even when I change the host and port to wrong values, there is no exception.

I think the test process finishes before the metrics have had time to be reported. I tried your example (which looks correct apart from this timing issue), and had to put a Thread.sleep of one second after publishing on the event bus in order to see something in Hawkular.
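For example, a minimal sketch of the workaround applied to the question's code (the sleep is a diagnostic, not something to keep in a real test):

vertx.eventBus().publish("metrics", message);
// give the Hawkular reporter time to send its batch before the process exits
Thread.sleep(1000);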
curl -u jdoe:password -H "Hawkular-Tenant: com.acme" http://localhost:8080/hawkular/metrics/counters
now gives
[{"id":"vertx.eventbus.publishedRemoteMessages","dataRetention":14,"type":"counter","tenantId":"com.acme"},{"id":"vertx.pool.worker.vert.x-internal-blocking.queuedCount","dataRetention":14,"type":"counter","tenantId":"com.acme"},{"id":"vertx.eventbus.receivedMessages","dataRetention":14,"type":"counter","tenantId":"com.acme"}, etc.

Related

What is best way to run periodic consumer for redis or kafka in Ktor backend?

I'm trying to use Ktor for my new backend, and I have a question about how I can launch a periodic consumer for Kafka or Redis inside a Ktor backend application.
Put very simply, I thought I would make a single backend application configurable to enable routing and consuming, so that some instances support both and some support only routing, to ensure availability.
But I'm not sure how I can trigger the consumer from Ktor. This is the code I tested:
fun main() {
    embeddedServer(Netty, port = 8080, host = "127.0.0.1") {
        configureDependencyInjection()
        configureRouting()
        configureSecurity()
        configureSerialization()
        configureExceptionHandling()
        configureMonitoring()
        consumerStarting() // this starts the consumer
    }.start(wait = true)
}

@OptIn(ExperimentalTime::class)
fun Application.consumerStarting() {
    CoroutineScope(Dispatchers.IO).launch {
        // todo: make this into a rule activator initializer
        println("TEST")
        delay(3.seconds)
        consumerStarting()
    }
}
When I tested this, I could see that "TEST" is printed every 3 seconds.
In a similar way, I thought I could start the monitoring from Application.consumerStarting() (e.g. periodically fetch a stream from Redis, consume it with a coroutine, etc.).
But I'm not sure it is the right way, because I cannot find any references for this situation with Ktor.
Any comment is welcome.
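One possible refinement (a sketch, not an established Ktor pattern; the polling step is a hypothetical placeholder and the import paths assume Ktor 2.x) is to launch the loop in the Application's own coroutine scope rather than a detached CoroutineScope, so it is cancelled when the server shuts down:

import io.ktor.server.application.Application
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch
import kotlin.time.Duration.Companion.seconds

fun Application.consumerStarting() {
    // Application is itself a CoroutineScope, so this coroutine dies with the server
    launch(Dispatchers.IO) {
        while (isActive) {
            // hypothetical consume step: poll Kafka or fetch a batch from Redis here
            println("TEST")
            delay(3.seconds)
        }
    }
}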

Project Reactor and Server Side Events

I'm looking for a solution where the backend publishes an event to the frontend as soon as a modification is done on the server side. To be more precise, I want to emit a new List of objects as soon as one item is modified.
I've tried implementing this on a Spring Boot project that uses Reactive Web and MongoDB, whose @Tailable cursor publishes an event as soon as the capped collection is modified. The problem is that a capped collection has some limitations and is not really compatible with what I want to do: I cannot update an existing element if the new one has a different size (as I understand it, this is illegal because you cannot make a rollback).
I honestly don't even know if it's doable, but maybe I'm lucky and I'll run into a rocket scientist right here who will prove otherwise.
Thanks in advance!!
*** EDIT:
Sorry for the vague question. Yes, I'm more focused on the HOW, using the Spring Reactive framework.
When I had a similar need - to inform the frontend that something is done on the backend side - I used a message queue.
I published a message to the queue from the backend, and the frontend consumed the message.
But I am not sure if that is what you're looking for.
If you are using WebFlux with Spring Reactor, I think you can simply have the client send a request with the content type 'text/event-stream' or 'application/stream+json', and have an API that can produce those content types. This gives you the SSE model without too much effort:

@GetMapping(value = "/stream", produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE, MediaType.APPLICATION_JSON_UTF8_VALUE})
public Flux<Message> get(ServerHttpRequest request) { // WebFlux: use ServerHttpRequest, not the servlet API
    return messageSink.asFlux(); // e.g. a Sinks.Many<Message>, as in the sink example below
}
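The stream can then be consumed with a plain HTTP client, e.g. (hypothetical host and path):

curl -N -H "Accept: text/event-stream" http://localhost:8080/stream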
Just as an idea - maybe you need to use web-socket technology here:
The frontend (I assume it's a client-side application that runs in a browser, written in React, Angular or something like that) can establish web-socket communication with the backend server.
When the process on the backend finishes, a message can be sent from the backend to the frontend.
You can also emit changes by hand. For example, the endpoint:
public final Sinks.Many<SimpleInfoEvent> infoEventSink = Sinks.many().multicast().onBackpressureBuffer();
private final AtomicLong counter = new AtomicLong(); // id sequence used below

@RequestMapping(path = "/sseApproach", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<SimpleInfoEvent>> sse() {
    return infoEventSink.asFlux()
        .map(e -> ServerSentEvent.builder(e)
            .id(counter.incrementAndGet() + "")
            .event(e.getClass().getName())
            .build());
}
Then, anywhere in the code, emit data:

infoEventSink.tryEmitNext(new SimpleInfoEvent("any custom event"));

Watch out for threads and things like "subscribeOn" and "publishOn", but basically (when not using any third-party code) this should work well enough.

Milo: get IP of client

Is there a way to get a client's IP in the context of a write?
I want to get the IP of a client that writes to my Milo OPC UA server, so I can handle these writes differently based on the client's IP (local clients should be able to write directly to the server, while other writes should get forwarded to another server).
Okay, this is not part of any official API right now, so it almost certainly will break in the future, but:
With the OperationContext you get when implementing AttributeManager#write(WriteContext, List<WriteValue>):
context.getSession().ifPresent(session -> {
    UaStackServer stackServer = context.getServer().getServer();
    if (stackServer instanceof UaTcpStackServer) {
        ServerSecureChannel secureChannel = ((UaTcpStackServer) stackServer)
            .getSecureChannel(session.getSecureChannelId());
        Channel channel = secureChannel.attr(UaTcpStackServer.BoundChannelKey).get();
        SocketAddress remoteAddress = channel.remoteAddress();
    }
});
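From there, a hypothetical branch on the address could implement the local-vs-remote split from the question (a sketch; the cast and the two handling comments are assumptions, not Milo API):

if (remoteAddress instanceof InetSocketAddress) {
    InetAddress addr = ((InetSocketAddress) remoteAddress).getAddress();
    if (addr.isLoopbackAddress() || addr.isSiteLocalAddress()) {
        // local client: apply the write directly on this server
    } else {
        // remote client: forward the write to the other server
    }
}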
I'll have to add some official API to do this, probably something hanging off the Session object.

HttpListener prevent Timeout

I implemented an HttpListener to process SOAP requests. This works fine, but I can't find a solution for the problem that some SOAP requests take too much time, resulting in timeouts on the client side.
How do I let the requesting client know that his request has not timed out?
I thought about sending "dummy" information while the request gets processed, but the HttpListener only seems to send the data when you close the response object, and that can be done only once, so I suppose this is not the right thing to do.
Solution:
Thread aliveWorker = new Thread(() =>
{
    try
    {
        // keep the connection alive by pushing a byte every 5 seconds
        while (context.Response.OutputStream.CanWrite)
        {
            context.Response.OutputStream.WriteByte((byte) ' ');
            context.Response.OutputStream.Flush();
            Thread.Sleep(5000);
        }
    }
    finally
    {
    }
});
aliveWorker.Start();
doWork();
aliveWorker.Interrupt(); // wakes the Sleep to stop the keep-alive thread
createTheRealResponse();
Sending dummy information is not a bad idea.
I think you need to call the Flush() method on the HttpListenerResponse's OutputStream property after writing the dummy data. You must also enable the SendChunked property. Try sending a dummy space at a regular interval:

response.SendChunked = true;
response.OutputStream.WriteByte((byte)' ');
response.OutputStream.Flush();
I see two options - increase the timeouts on the client side, or extend the protocol with operation-status requests from the client for long-running operations.
If you are using .NET 4.5, take a look at the HttpListenerTimeoutManager class; you can use this class as a base to implement custom timeout behaviour.

Netty concurrency and "Connection reset by peer"

I've built the following simple server, and I'm stress testing it using ab.
If I run ab making 3000 total requests (300 concurrent), it works. If I run it again, it shows me:
apr_socket_connect(): Connection reset by peer (54)
And if, after this error, I try to make a single request with curl without restarting the server, it works. If I run ab again, it shows the same error.
It seems that it can't handle too many concurrent connections. Below is the code:
public static void main(String[] args) throws Exception {
    ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(new StringEncoder(), new MyServerHandler());
        }
    });
    bootstrap.bind(new InetSocketAddress(9090));
    System.out.println("Running");
}
Here is the handler:
public class MyServerHandler extends SimpleChannelUpstreamHandler {

    private static AtomicLong request = new AtomicLong();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e)
            throws Exception {
        ChannelFuture channelFuture = e.getChannel().write("This is request #" + request.incrementAndGet() + "\n");
        channelFuture.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
            throws Exception {
        System.out.println(e.getCause());
        e.getChannel().close();
    }
}
As you see it's very simple, it just shows the total number of requests handled.
Any tips?
Thanks
'Connection reset by peer' usually means you have written to a connection that has already been closed by the other end. In other words, an application protocol error. You get the error itself on a subsequent read or write.
I don't immediately see anything wrong, but you could try the following to get more information:
- Override channelClosed and output something, so that you're 100% sure that Netty is at least trying to close the channel (see the sketch below).
- Use jvisualvm to have a look at the JVM running your server; you should be able to see the threads and whether they're active or not.
- Write something to System.out server-side in channelConnected, so you know that your connections have made it that far (especially for the 2nd run).
- When you run ab the second time, is there an error for every connection attempt, or just for some?
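A minimal sketch of that channelClosed override, using the Netty 3.x API from the question:

@Override
public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e)
        throws Exception {
    // confirm that Netty actually closes each accepted channel
    System.out.println("Channel closed: " + e.getChannel().getRemoteAddress());
    super.channelClosed(ctx, e);
}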
What I find odd is that it seems to work the first time, but not thereafter. Keep in mind that this may not be a Netty - or even a JVM - issue, but rather the OS somehow limiting the connection attempts.
I have done some tests with my own Netty test server, and found that starting large batches of concurrent connections produces an unpredictable outcome (most will connect, some will fail, but always in a different ratio). So far I haven't figured out why, but I suspect it is my OS refusing the connections rather than Netty.