Enqueue liquidsoap request from script instead of command

I'm trying to write my very first liquidsoap program. It goes something like this:
sounds_path = "../var/sounds"
# Log file
set("log.file.path","var/log/liquidsoap.log")
set("harbor.bind_addr", "127.0.0.1")
set("harbor.timeout", 5)
set("harbor.verbose", true)
set("harbor.reverse_dns", false)
silence = blank()
queue = request.queue()
def play(~protocol, ~data, ~headers, uri) =
  # This is what I'd like to happen: enqueue the request from inside the callback
  request.push("#{sounds_path}#{uri}")
  http_response(protocol=protocol, code=200)
end
harbor.http.register(port=8080, method="POST", "^/(?!\0)+", play)
stream = fallback(track_sensitive=false, [queue, silence])
...output.whatever...
And I was wondering: is there any way to push to the queue from the harbor callback?
Otherwise, how should I go about making requests originate from HTTP calls? I really want to avoid telnet. My final objective is to have an endpoint I can call to make my stream play a file on demand and stay silent the rest of the time.

Give this a go. It's Liquidsoap, so it's tricky to understand, but it should do the trick.
########### functions ##############
def playnow(source, ~action="override", ~protocol, ~data, ~headers, uri) =
  # How many requests are already waiting in the "playnow" queue?
  queue_count = list.length(server.execute("playnow.primary_queue"))
  # Parse the JSON body, e.g. {"track": "http://..."}
  arr = of_json(default=[("key","value")], data)
  track = arr["track"]
  log("adding playnow track '#{track}'")
  if queue_count != 0 and action == "override" then
    # Jump the queue: insert at position 0 and skip whatever is playing
    server.execute("playnow.insert 0 #{track}")
    source.skip(source)
    print("skipping playnow queue")
  else
    server.execute("playnow.push #{track}")
    print("no skip required")
  end
  http_response(
    protocol=protocol,
    code=200,
    headers=[("Content-Type","application/json; charset=utf-8")],
    data='{"status":"success", "track": "#{track}", "action": "#{action}"}'
  )
end
######## live stuff below #######
playlist = playlist(reload=1, reload_mode="watch", "/etc/liquidsoap/playlist.xspf")
requested = crossfade(request.equeue(id="playnow"))
live = fallback(track_sensitive=false, transitions=[crossfade, crossfade], [requested, playlist])
output.harbor(%mp3, id="live", mount="live_radio", live)
harbor.http.register(port=MY_HARBOR_PORT, method="POST", "/playnow", playnow(live))
To use the above, send a POST request with JSON data like so:
{"track":"http://mydomain/mysong.mp3"}
This also assumes you have the harbor running, which you should be able to set up with the help of the Liquidsoap docs.
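For example, here is one quick way to fire that request from the JVM (in Scala). This is only a sketch: the host, port and path are assumptions standing in for MY_HARBOR_PORT and the register call above.

import java.net.{HttpURLConnection, URL}

// Hypothetical endpoint; substitute your harbor host and MY_HARBOR_PORT.
val conn = new URL("http://localhost:8000/playnow")
  .openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestMethod("POST")
conn.setDoOutput(true)
conn.setRequestProperty("Content-Type", "application/json")
// The JSON body that the playnow handler parses with of_json.
val out = conn.getOutputStream
out.write("""{"track":"http://mydomain/mysong.mp3"}""".getBytes("UTF-8"))
out.close()
println(conn.getResponseCode) // expect 200 on success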

There are multiple methods of sending something into the queue: telnet, an HTTP input, or a metadata request to playnow via the harbor. Let me know which one you opt for and I can provide you with a code example.

Related

How to use Flink streaming to process Data stream of Complex Protocols

I'm using Flink Streaming to handle data traffic logs in a 3G network (GPRS Tunnelling Protocol), and I'm having trouble synthesizing the information in a user session.
For example: how do I match the start and end of one session? I don't know whether Flink streaming is suited to handling a complex protocol like that.
P.S.:
We capture the data exchanged between the SGSN and GGSN in the 3G network (the GTP protocol with GTP-C/U messages). A session is started when the SGSN sends a CreateReq (TEID, Seq, IMSI, TEID_dl, TEID_data_dl) message and the GGSN responds with a CreateRsp (TEID_dl, Seq, TEID_ul, TEID_data_ul) message.
After the session is established, other GTP-C messages (e.g. UpdateReq, DeleteReq) sent from the SGSN to the GGSN use TEID_ul and the response messages use TEID_dl; GTP-U messages use TEID_data_ul (SGSN -> GGSN) and TEID_data_dl (GGSN -> SGSN). GTP-U messages contain information such as AppID (facebook, twitter, web), url, ...
Finally, I want to process the continuous log data stream and match the GTP-C and GTP-U messages of the same user (IMSI) to make a report.
I've tried this:
val sessions = createReqs.connect(createRsps).flatMap(new CoFlatMapFunction[CreateReq, CreateRsp, Session] {
  // holds CreateReqs indexed by (teid_dl, seq)
  private val createReqs = mutable.HashMap.empty[(String, String), CreateReq]
  // holds CreateRsps indexed by (teid, seq)
  private val createRsps = mutable.HashMap.empty[(String, String), CreateRsp]

  override def flatMap1(req: CreateReq, out: Collector[Session]): Unit = {
    val key = (req.teid_dl, req.header.seqNum)
    val oRsp = createRsps.get(key)
    if (!oRsp.isEmpty) {
      val rsp = oRsp.get
      println("OK")
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createRsps.remove(key)
    } else {
      createReqs.put(key, req)
    }
  }

  override def flatMap2(rsp: CreateRsp, out: Collector[Session]): Unit = {
    val key = (rsp.header.teid, rsp.header.seqNum)
    val oReq = createReqs.get(key)
    if (!oReq.isEmpty) {
      val req = oReq.get
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createReqs.remove(key)
    } else {
      createRsps.put(key, rsp)
    }
  }
}).print()
This code always returns an empty result, even though the input stream definitely contains CreateRsp and CreateReq messages of the same session and they appear very close together (within 1 second). When I debug, oReq.isEmpty == true every time.
What am I doing wrong?
To be honest it is a bit difficult to see through the telco specifics here, but if I understand correctly you have at least 3 streams, the first two being the CreateReq and the CreateRsp streams.
To detect the establishment of a session I would use the ConnectedDataStream abstraction to share state between the two aforementioned streams. Check out this example for usage or the related Flink docs.
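As a minimal sketch of that idea (the event classes below are hypothetical stand-ins for yours, and the names differ slightly across Flink versions: ConnectedDataStream vs ConnectedStreams, groupBy vs keyBy): keying both streams by the same (TEID, seq) pair before connecting them routes a request and its response to the same parallel task, so the two HashMaps actually see both sides of the handshake.

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction
import org.apache.flink.util.Collector
import scala.collection.mutable

// Hypothetical minimal event shapes, just enough for the sketch.
case class CreateReq(teid_dl: String, seqNum: String, imsi: String)
case class CreateRsp(teid: String, seqNum: String, teid_ul: String)
case class Session(imsi: String, teid_ul: String)

def matchSessions(createReqs: DataStream[CreateReq],
                  createRsps: DataStream[CreateRsp]): DataStream[Session] =
  createReqs
    .keyBy(req => (req.teid_dl, req.seqNum))                  // key requests by (TEID_dl, seq)
    .connect(createRsps.keyBy(rsp => (rsp.teid, rsp.seqNum))) // same key on responses
    .flatMap(new CoFlatMapFunction[CreateReq, CreateRsp, Session] {
      private val pendingReqs = mutable.HashMap.empty[(String, String), CreateReq]
      private val pendingRsps = mutable.HashMap.empty[(String, String), CreateRsp]

      override def flatMap1(req: CreateReq, out: Collector[Session]): Unit =
        pendingRsps.remove((req.teid_dl, req.seqNum)) match {
          case Some(rsp) => out.collect(Session(req.imsi, rsp.teid_ul))    // matched: emit session
          case None      => pendingReqs.put((req.teid_dl, req.seqNum), req) // wait for the response
        }

      override def flatMap2(rsp: CreateRsp, out: Collector[Session]): Unit =
        pendingReqs.remove((rsp.teid, rsp.seqNum)) match {
          case Some(req) => out.collect(Session(req.imsi, rsp.teid_ul))
          case None      => pendingRsps.put((rsp.teid, rsp.seqNum), rsp)
        }
    })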
Is this what you are trying to achieve?

How to properly use spray.io LruCache

I am a fairly inexperienced spray/scala developer trying to use spray.io's LruCache properly. What I want is very simple: I have a Kafka consumer, and when it reads something from its topic I want it to put that value into the cache.
Then in one of the routes I want to read this value. The value is of type String; what I have at the moment looks as follows:
object MyCache {
  val cache: Cache[String] = LruCache(
    maxCapacity = 10000,
    initialCapacity = 100,
    timeToLive = Duration.Inf,
    timeToIdle = Duration(24, TimeUnit.HOURS)
  )
}
To put something into the cache I use the following code:
def message() = Future { new String(singleMessage.message()) }
MyCache.cache(key, message)
Then in one of the routes I try to get something from the cache:
val res = MyCache.cache.get(keyHash)
The problem is that the type of res is Option[Future[String]]; it is quite hard and ugly to access the real value in this case. Could someone please tell me how I can simplify my code to make it better and more readable?
Thanks in advance.
Don't try to get the value out of the Future. Instead call map on the Future to arrange for work to be done on the value when the Future is completed, and then complete the request with that result (which is itself a Future). It should look something like this:
path("foo") {
complete(MyCache.cache.get(keyHash) map (optMsg => ...))
}
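If you want to handle a cache miss explicitly, here is a sketch of the same idea (the route name and the miss message are assumptions, and it presumes scala.concurrent.Future and an implicit ExecutionContext are in scope):

path("foo") {
  complete {
    MyCache.cache.get(keyHash) match {
      case Some(futureMsg) => futureMsg.map(msg => s"cached: $msg") // hit: transform when ready
      case None            => Future.successful("no such key")     // miss: already-completed Future
    }
  }
}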
Also, if singleMessage.message does not do I/O or otherwise block, then rather than creating the Future like you are
Future { new String(singleMessage.message) }
it would be more efficient to do it like so:
Future.successful(new String(singleMessage.message))
The latter just creates an already completed Future, bypassing the use of an ExecutionContext to evaluate the function.
If singleMessage.message does do I/O, then ideally you would do that I/O with some library (like Spray client, if it's an HTTP request) that returns a Future (rather than using Future { ... } to create another thread which will block).

Understanding Esper IO Http example

What is the trigger event here?
How do I plug this into the Esper engine to receive events?
What URI should be passed? What should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
When I change the stream from TriggerEvent to HttpEvents, I get the exception below:
ConfigurationException: Event type by name 'HttpEvents' not found
The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO http transport. Its a name for looking up what engines exists and finding the engine by name. So any text can be used here and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via HTTP. Sample code is at http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java
You need to declare your event type(s) either in Java or through Esper's EPL statements.
The reason you are getting the exception is that your type is not defined.
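For instance, here is a minimal sketch of defining the missing type up front, written in Scala against Esper's Java Configuration API (the property names are assumptions; use whatever your events actually carry):

import java.util.Properties
import com.espertech.esper.client.{Configuration, EPServiceProviderManager}

// Register a map-based event type named "HttpEvents" before starting the adapter.
val config = new Configuration()
val props = new Properties()
props.put("date", "string") // assumed properties; adjust to your events
props.put("src", "string")
props.put("dst", "string")
props.put("type", "string")
config.addEventType("HttpEvents", props)

// "engineURI" is just the engine's lookup name, as explained above.
val engine = EPServiceProviderManager.getProvider("engineURI", config)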
Then you can start sending events, specifying the type you are sending in the HTTP request. For example, here is a bit of code in Python:
import datetime
import urllib

cepurl = "http://localhost:8084"
param = urllib.urlencode({'stream': 'DataEvent',
                          'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                          'src': data["ipsrc"],
                          'dst': data["ipdst"],
                          'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably be something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");

Using Streams in Gatling repeat blocks

I've come across the following code in a Gatling scenario (modified for brevity/privacy):
val scn = scenario("X")
  .repeat(numberOfLoops, "loopName") {
    exec((session: Session) => {
      val loopCounter = session.getTypedAttribute[Int]("loopName")
      session.setAttribute("xmlInput", createXml(loopCounter))
    })
    .exec(
      http("X")
        .post("/rest/url")
        .headers(headers)
        .body("${xmlInput}"))
  }
It names the loop in the repeat block, gets the counter out of the session, and uses it to create a unique input XML. It then sticks that XML back into the session and extracts it again when posting it.
I would like to do away with the need to name the loop iterator and to access the session.
Ideally I'd like to use a Stream to generate the XML.
But Gatling controls the looping and I can't recurse. Do I need to compromise, or can I use Gatling in a functional way (without vars or accessing the session)?
As I see it, neither numberOfLoops nor createXml seem to depend on anything user related that would have been stored in the session, so the loop could be resolved at build time, not at runtime.
import com.excilys.ebi.gatling.core.structure.ChainBuilder

def addXmlPost(chain: ChainBuilder, i: Int) =
  chain.exec(
    http("X")
      .post("/rest/url")
      .headers(headers)
      .body(createXml(i))
  )

def addXmlPostLoop(chain: ChainBuilder): ChainBuilder =
  (0 until numberOfLoops).foldLeft(chain)(addXmlPost)
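The same build-time unrolling idea, stripped of the Gatling specifics (a self-contained toy, with a hypothetical Chain type standing in for ChainBuilder): foldLeft threads the builder through 0 until n, appending one step per iteration.

// Toy stand-in for ChainBuilder, just to show the fold.
case class Chain(steps: List[String]) {
  def exec(step: String): Chain = Chain(steps :+ step)
}

def createXml(i: Int): String = s"<input id='$i'/>"

def addStep(chain: Chain, i: Int): Chain =
  chain.exec(s"POST ${createXml(i)}")

val unrolled = (0 until 3).foldLeft(Chain(Nil))(addStep)
// unrolled.steps == List("POST <input id='0'/>", "POST <input id='1'/>", "POST <input id='2'/>")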
Cheers,
Stéphane
PS: The preferred way to ask something about Gatling is our Google Group: https://groups.google.com/forum/#!forum/gatling

Send commands over socket, but wait every time for response (Node.js)

I need to send several commands over telnet to a server. If I try to send them without a time delay between every command, the server freaks out:
var net = require('net');
var conn = net.createConnection(8888, 'localhost');

conn.on('connect', function() {
  conn.write(command_1);
  conn.write(command_2);
  conn.write(command_3);
  //...
  conn.write(command_n);
});
I guess the server needs some time to respond to command n before I send it command n+1. One way is to write something to the log and fake a "wait":
var net = require('net');
var conn = net.createConnection(8888, 'localhost');

conn.on('connect', function() {
  console.log('connected to server');
  console.log("I'm about to send command #1");
  conn.write(command_1);
  console.log("I'm about to send command #2");
  conn.write(command_2);
  console.log("I'm about to send command #3");
  conn.write(command_3);
  //...
  console.log("I'm about to send command #n");
  conn.write(command_n);
});
It might also be that conn.write() is asynchronous, so issuing one command after another doesn't guarantee the correct order?
Anyway, what is the correct pattern to ensure the correct order, with enough time between two consecutive commands for the server to respond?
First things first: if this is truly a telnet server, then you should do something about the telnet handshaking (where terminal options are negotiated between the peers; this is the binary data you can see when opening the socket).
If you don't want to get into that (it will depend on your needs), you can skip the negotiation and go straight to business, but you will have to read this data and ignore it yourself.
Now, in your code you're sending the data as soon as the server accepts the connection. This may be the cause of your troubles. You're not supposed to "wait" for the response; the response will get to you asynchronously thanks to nodejs :) So you just need to send each command as soon as you get the "right" response from the server (this is actually useful, because you can see if there were any errors, etc).
I've tried this code (based on yours) against a device I've got at hand that has a telnet server. It does a login and then a logout. See how the events are dispatched according to the server's response:
var net = require('net');
var conn = net.createConnection(23, '1.1.1.1');
var commands = [ "logout\n" ];
var i = 0;

conn.setEncoding('ascii');

conn.on('connect', function() {
  conn.on('login', function() {
    conn.write('myUsername\n');
  });
  conn.on('password', function() {
    conn.write('myPassword\n');
  });
  conn.on('prompt', function() {
    conn.write(commands[i]);
    i++;
  });
  conn.on('data', function(data) {
    console.log("got: " + data + "\n");
    if (data.indexOf("login") != -1) {
      conn.emit('login');
    }
    if (data.indexOf("password") != -1) {
      conn.emit('password');
    }
    if (data.indexOf(">#") != -1) {
      conn.emit('prompt');
    }
  });
});
See how the commands are in an array, so you can send them iteratively (each prompt event triggers the next command). So the right response from the server is the next prompt: when the server sends (in this case) the string >#, another command is sent.
Hope it helps :)
The order of writes is guaranteed. However:
1. You must subscribe to the data event. conn.on('data', function(data) {}) will do.
2. You must check the return value of each write: if a write fails, you must wait for the 'drain' event. So check whether any write really fails; if one does, fix the problem, and if none do, you can leave the current dirty solution as is.
3. You must check whether your server supports request pipelining (sending multiple requests without waiting for responses). If it doesn't, you must not send the next request before receiving a data event for the previous one.
4. You must ensure that the commands you send are real telnet commands: telnet expects a \0 byte after \r\n (see the RFC), so certain servers may freak out if the \0 is not present.
So:
var net = require('net');
var conn = net.createConnection(8888, 'localhost');

conn.on('connect', function() {
  console.log(conn.write(command_1) &&
              conn.write(command_2) &&
              conn.write(command_3) &&
              //...
              conn.write(command_n));
});

conn.on('data', function() {});
If it logs false, then you must wait for 'drain' (point 2 above). If it logs true, then you must implement waiting for responses (point 3 above). I discourage the event-based solution and suggest looking at the async or Step NPM modules instead.