Understanding the EsperIO HTTP example - complex-event-processing

What is the "TriggerEvent" here?
How do I plug this into the Esper engine to receive events?
What URI should be passed, and what should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
I changed the stream from TriggerEvent to HttpEvents and now get the exception below:
ConfigurationException: Event type by name 'HttpEvents' not found

The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO http transport. Its a name for looking up what engines exists and finding the engine by name. So any text can be used here and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via HTTP. Sample code is at http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java

You need to declare your event type(s) either in Java or through Esper's EPL statements.
The reason you are getting the exception is that your type is not defined.
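As a sketch, assuming a payload with a subset of the fields used in the Python example below (the property names are illustrative), the type could be registered with the engine configuration before starting the adapter:
// Map-based event type definition; the field names here are assumptions.
Map<String, Object> props = new HashMap<String, Object>();
props.put("date", String.class);
props.put("src", String.class);
props.put("dst", String.class);

Configuration config = new Configuration();
config.addEventType("HttpEvents", props); // fixes "Event type by name 'HttpEvents' not found"
EPServiceProvider engine = EPServiceProviderManager.getProvider("default", config);
The EPL alternative would be a statement such as: create schema HttpEvents(date string, src string, dst string)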
Then you can start sending events, specifying the type you are sending in the HTTP request. For example, here is a bit of code in Python:
import urllib
import datetime

cepurl = "http://localhost:8084"
param = urllib.urlencode({'stream': 'DataEvent',
                          'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                          'src': data["ipsrc"],
                          'dst': data["ipdst"],
                          'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably look something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");
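SupportHTTPClient is a helper from Esper's own test code; if it is not on your classpath, the plain JDK (java.net.URL, java.net.HttpURLConnection, java.net.URLEncoder) does the same job. A rough equivalent, with the URL and parameter values as placeholders:
String query = "stream=" + URLEncoder.encode("DataEvent", "UTF-8")
        + "&date=" + URLEncoder.encode("mydate", "UTF-8");
URL url = new URL("http://localhost:8084/sendevent?" + query);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// A 200 response means the adapter accepted the event.
int status = conn.getResponseCode();
conn.disconnect();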

Interpreting server response for events correctly

I would like to store the values of event properties received from the server in a database. My problems are that, in the event consumer:
I can't figure out which event type my client received.
I don't know how to map variant indexes to properties without knowing the EventType.
Events come with the property "EventType", which would solve my first problem. But since I am receiving many different event types, I do not know at which variant index it is located. Should I always place "EventType" at index 0 in the select clause whenever creating a new EventFilter?
For the second problem, item.getMonitoringFilter().decode(client.getSerializationContext()) offers a view of the property structure, but I am not sure how to use it to map variants to properties. Does anybody know how to solve these problems?
Here is the event consumer code that I use. It is taken from the Milo client examples.
for (UaMonitoredItem monitoredItem : mItems) {
    monitoredItem.setEventConsumer((item, vs) -> {
        LOGGER.info("Event Received from: {}", item.getReadValueId().getNodeId());
        LOGGER.info("getMonitoredItemId: {}", item.getMonitoredItemId());
        LOGGER.info("getMonitoringFilter: {}",
            item.getMonitoringFilter().decode(client.getSerializationContext()));
        for (int i = 0; i < vs.length; i++) {
            LOGGER.info("variant[{}]: datatype={}, value={}", i, vs[i].getDataType(), vs[i].getValue());
        }
    });
}
Thank you in advance.
Update:
It seems I have figured it out, by typecasting to EventFilter. Further information, such as the QualifiedName of event properties or event type node IDs, can then be derived:
ExtensionObject eObject = item.getMonitoringFilter();
EventFilter eFilter = (EventFilter) eObject.decode(client.getSerializationContext());
QualifiedName qName = eFilter.getSelectClauses()[0].getBrowsePath()[0];
LiteralOperand literalOperand = (LiteralOperand) eFilter.getWhereClause().getElements()[0]
        .getFilterOperands()[1].decode(client.getSerializationContext());
NodeId eventTypeNodeId = (NodeId) literalOperand.getValue().getValue();
Didn't you supply the filter when you created the MonitoredItem? Why do you need to "reverse engineer" the filter result to get back to what you built in the first place?
The properties you receive in the event data and the order they come in are defined by the select clause you used when creating the MonitoredItem. If you choose to select the EventId field then it will always be at the same corresponding index.
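In other words, when you construct the EventFilter yourself, the variant indexes are fully under your control. A sketch using Milo's types (the field list is illustrative; the classes come from org.eclipse.milo.opcua.stack.core and its generated structured types):
// Select EventId, EventType and Message; the event consumer will then
// receive them at variant indexes 0, 1 and 2, in exactly this order.
String[] fields = {"EventId", "EventType", "Message"};
SimpleAttributeOperand[] selectClauses = new SimpleAttributeOperand[fields.length];
for (int i = 0; i < fields.length; i++) {
    selectClauses[i] = new SimpleAttributeOperand(
        Identifiers.BaseEventType,
        new QualifiedName[]{new QualifiedName(0, fields[i])},
        AttributeId.Value.uid(),
        null);
}
EventFilter eventFilter = new EventFilter(selectClauses, new ContentFilter(null));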

Enqueue liquidsoap request from script instead of command

I'm trying to write my very first liquidsoap program. It goes something like this:
sounds_path = "../var/sounds"

# Log file
set("log.file.path", "var/log/liquidsoap.log")

set("harbor.bind_addr", "127.0.0.1")
set("harbor.timeout", 5)
set("harbor.verbose", true)
set("harbor.reverse_dns", false)

silence = blank()
queue = request.queue()

def play(~protocol, ~data, ~headers, uri) =
  request.push("#{sounds_path}#{uri}")
  http_response(protocol=protocol, code=200)
end

harbor.http.register(port=8080, method="POST", "^/(?!\0)+", play)

stream = fallback(track_sensitive=false, [queue, silence])

...output.whatever...
I was wondering if there is any way to push to the queue from the harbor callback.
Otherwise, how should I go about making requests originate from HTTP calls? I really want to avoid telnet. My final objective is to have an endpoint I can call that makes my stream play a file on demand and stay silent the rest of the time.
Give this a go. It's Liquidsoap, so it's tricky to understand, but it should do the trick:
########### functions ##############
def playnow(source, ~action="override", ~protocol, ~data, ~headers, uri) =
  queue_count = list.length(server.execute("playnow.primary_queue"))
  arr = of_json(default=[("key","value")], data)
  track = arr["track"]
  log("adding playnow track '#{track}'")
  if queue_count != 0 and action == "override" then
    server.execute("playnow.insert 0 #{track}")
    source.skip(source)
    print("skipping playnow queue")
  else
    server.execute("playnow.push #{track}")
    print("no skip required")
  end
  http_response(
    protocol=protocol,
    code=200,
    headers=[("Content-Type","application/json; charset=utf-8")],
    data='{"status":"success", "track": "#{track}", "action": "#{action}"}'
  )
end

######## live stuff below #######
playlist = playlist(reload=1, reload_mode="watch", "/etc/liquidsoap/playlist.xspf")
requested = crossfade(request.equeue(id="playnow"))
live = fallback(track_sensitive=false, transitions=[crossfade, crossfade], [requested, playlist])

output.harbor(%mp3, id="live", mount="live_radio", live)
harbor.http.register(port=MY_HARBOR_PORT, method="POST", "/playnow", playnow(live))
To use the above, you need to send a POST request with JSON data like so:
{"track":"http://mydomain/mysong.mp3"}
This also assumes you have the harbor running, which you should be able to verify using the Liquidsoap docs.
There are multiple methods of sending into the queue: telnet, an HTTP input, or a metadata request to playnow via the harbor. Let me know which one you opt for and I can provide a code example.
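For completeness, sending that POST from Java could look like this (a sketch using the JDK 11 java.net.http client; the host, port, and track URL are placeholders, with MY_HARBOR_PORT assumed to be 8000):
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8000/playnow"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("{\"track\":\"http://mydomain/mysong.mp3\"}"))
        .build();
// The response body echoes the status JSON produced by http_response above.
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body()); // {"status":"success", ...}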

How to log incoming request and response?

I am using Akka HTTP and would like to log every incoming request and outgoing result. I know that a logRequestResult directive exists, but how do I use it? And is it the right one for my purpose?
Yes, this is the directive you are looking for, and I agree - the official documentation is a bit hard to grasp.
Here is how an endpoint with logRequestResult looks:
val requestHandler: Route = logRequestResult("req/resp", Logging.InfoLevel) {
  handleExceptions(errorHandler) {
    endpointRoutes
  }
}

def start()(implicit actorSystem: ActorSystem,
            actorMaterializer: ActorMaterializer): Future[Http.ServerBinding] =
  Http().bindAndHandle(
    handler = requestHandler,
    interface = host,
    port = port)
Notice you can choose a generic prefix for each request-response entry (here req/resp), as well as the logging level at which the entries are emitted (here Logging.InfoLevel).
The above example produces log lines similar to the one below:
[your-actor-system-akka.actor.default-dispatcher-19] INFO akka.actor.ActorSystemImpl - req/resp: Response for
Request : HttpRequest(HttpMethod(GET),http://<host>/<path>,List(Host: <host>, Connection: close: <function1>),HttpEntity.Strict(none/none,ByteString()),HttpProtocol(HTTP/1.1))
Response: Complete(HttpResponse(200 OK,List(),HttpEntity.Strict(text/plain; charset=UTF-8,OK),HttpProtocol(HTTP/1.1)))
Happy hakking :)

ActiveMQ-Artemis 2.6 Set queue dead letter address using JMS management API

I'm trying to set the dead letter address for a queue via the JMS management API. From reading the latest Artemis docs it appears that I should be able to do this using the QueueControl.setDeadLetterAddress(...) method. See https://activemq.apache.org/artemis/docs/latest/management.html and search for "setDeadLetterAddress".
It is my understanding that the parameters of these methods should be found in the Artemis QueueControl javadocs here:
https://activemq.apache.org/artemis/docs/javadocs/javadoc-latest/org/apache/activemq/artemis/api/core/management/QueueControl.html
However, that documentation does not have any mention of a setDeadLetterAddress method or what parameters it might accept.
Does the QueueControl.setDeadLetterAddress method still exist and can it be called from the JMSManagementHelper.putOperationInvocation(...) method?
Many thanks!
Looking at the QueueControlImpl class code, it is clear that the setDeadLetterAddress operation is no longer present. An operation in the ActiveMQServerControlImpl class named addAddressSettings does provide the capability to set the DLA for a queue (as well as plenty of other settings).
For example:
Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management");
Queue replyQueue = ActiveMQJMSClient.createQueue("management.reply");
JMSContext context = connectionFactory.createContext();
JMSConsumer consumer = context.createConsumer(replyQueue);
JMSProducer producer = context.createProducer();
producer.setJMSReplyTo(replyQueue);
// Using AddressSettings isn't required, but is provided
// for clarity.
AddressSettings settings = new AddressSettings()
.setDeadLetterAddress(new SimpleString("my.messages.dla"))
.setMaxDeliveryAttempts(5)
.setExpiryAddress(new SimpleString("ExpiryAddress"))
.setExpiryDelay(-1L) // No expiry
.setLastValueQueue(false)
.setMaxSizeBytes(-1) // No max
.setPageSizeBytes(10485760)
.setPageCacheMaxSize(5)
.setRedeliveryDelay(500)
.setRedeliveryMultiplier(1.5)
.setMaxRedeliveryDelay(2000)
.setRedistributionDelay(1000)
.setSendToDLAOnNoRoute(true)
.setAddressFullMessagePolicy(AddressFullMessagePolicy.PAGE)
.setSlowConsumerThreshold(-1) // No slow consumer checking
.setSlowConsumerCheckPeriod(1000)
.setSlowConsumerPolicy(SlowConsumerPolicy.NOTIFY)
.setAutoCreateJmsQueues(true)
.setAutoDeleteJmsQueues(false)
.setAutoCreateJmsTopics(true)
.setAutoDeleteJmsTopics(false)
.setAutoCreateQueues(true)
.setAutoDeleteQueues(false)
.setAutoCreateAddresses(true)
.setAutoDeleteAddresses(false);
Message m = context.createMessage();
JMSManagementHelper.putOperationInvocation(m, ResourceNames.BROKER, "addAddressSettings",
"my.messages",
settings.getDeadLetterAddress().toString(),
settings.getExpiryAddress().toString(),
settings.getExpiryDelay(),
settings.isLastValueQueue(),
settings.getMaxDeliveryAttempts(),
settings.getMaxSizeBytes(),
settings.getPageSizeBytes(),
settings.getPageCacheMaxSize(),
settings.getRedeliveryDelay(),
settings.getRedeliveryMultiplier(),
settings.getMaxRedeliveryDelay(),
settings.getRedistributionDelay(),
settings.isSendToDLAOnNoRoute(),
settings.getAddressFullMessagePolicy().toString(),
settings.getSlowConsumerThreshold(),
settings.getSlowConsumerCheckPeriod(),
settings.getSlowConsumerPolicy().toString(),
settings.isAutoCreateJmsQueues(),
settings.isAutoDeleteJmsQueues(),
settings.isAutoCreateJmsTopics(),
settings.isAutoDeleteJmsTopics(),
settings.isAutoCreateQueues(),
settings.isAutoDeleteQueues(),
settings.isAutoCreateAddresses(),
settings.isAutoDeleteAddresses());
producer.send(managementQueue, m);
Message response = consumer.receive();
// addAddressSettings returns void but this will also return errors if the
// method or parameters are wrong.
log.info("addAddressSettings Reply: {}", JMSManagementHelper.getResult(response));

Set charset when processing xml using Dispatch Databinder 0.10

I'm wrapping an upstream API with a Scalatra application and using Dispatch to make async requests. However, I'm having trouble turning the upstream XML into xml.Elems using Dispatch's built-in XML processing support.
I'm trying to do something fairly similar to what's in the Dispatch docs, namely retrieve the upstream XML and do some reprocessing. The functions in question look something like:
def facilitiesSvc = {
  val myhost = host("upstream.api.co.uk") / "organisations" / "foo" / "123" / "bar" / "core.xml"
  myhost.addQueryParameter("apikey", "123456")
  myhost
}

def facilitiesXml: Future[Either[String, xml.Elem]] = {
  val res: Future[Either[Throwable, xml.Elem]] = Http((facilitiesSvc) OK as.xml.Elem).either
  for (exc <- res.left)
    yield "Can't connect to facilities service: \n" + exc.getMessage
}
This results in:
Left(Can't connect to facilities service: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.)
The upstream API isn't sending back a charset, and when retrieving it, Dispatch is showing it with a Byte Order Mark before the XML begins: <?xml version="1.0" encoding="utf-8"?>.
I can see that earlier versions of Dispatch solved this problem in the following way:
new Http apply(url(uri.toString).copy(defaultCharset = "iso-8859-1") as_str)
However, I can't currently see a way to make this work with Dispatch 0.10. Does anybody have any tips for setting the charset on this response so that I can parse what's returned?