How do I get a list of users connected in chat in an mIRC Remote script - mirc

I feel like this has got to be obvious, but I've been searching and just can't find the answer.
I'm programming a 'bot' which connects to my twitch channel's chat. I'd like to track the number of consecutive streams watched by users. I have a command that I type at the start of each stream to signify that a new stream has started, and so, users who join should have their number of consecutive watched streams increased.
I currently use the JOIN event to increase a user's consecutive-streams count, but if someone is sitting in chat before the start of the stream, they don't get credit, because their JOIN event fired before the new-stream flag was set.
Is there any way to get a list of the nicks currently in the chat? If so, I could hook that into the command I run when I start the stream and update the users who are already in chat.

You can use $nick(#,N) to retrieve users in a channel, where # is the name of your channel and N is a number.
First use $nick(#mychannel,0) to get the total number of users in your channel, then loop over the user list with $nick(#,N).
For example, if you run //echo $nick(#mychannel,0) and there are 10 users, it will print 10. $nick(#mychannel,1) then returns the first user in the user list.
Simple code example:
alias getusers {
  var %users = $nick($1,0), %n = 1
  while (%n <= %users) {
    ; print all users in the channel
    echo -ag $nick($1,%n)
    ; you can put your code here
    inc %n
  }
}
Type /getusers #channelname in a channel to get a list of all users.
Tell me if you need more help.

Related

Flutter Read Realtime Database

I'm trying to count the number of messages in my Firebase Realtime Database but I can't read further than the first 'messages' branch.
I use this command to display the exact number of conversations I've created.
FirebaseDatabase.instance.reference().child('messages').once().then((DataSnapshot snapshot) {
  print(snapshot.value.length);
});
It correctly prints 14, but I need a count of the messages created in each branch (chat room).
To understand the tree structure:
messages
----------id chat room
-------------id message
One way to do this is to use the onChildAdded stream. This stream is called with each child node of the location on which you listen, so one level lower in the JSON tree.
It would look something like:
FirebaseDatabase.instance.reference().child('messages').onChildAdded.listen((Event event) {
  print(event.snapshot.value.length);
});
When you first call onChildAdded.listen, the length for each existing child node will be printed. Then afterwards, if you add a new child node (directly) under messages, the length of that will be printed too.
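Either way, the counting itself is just summing the sizes of the per-room maps. A minimal plain-JavaScript sketch of that aggregation, assuming the messages node has been read into an ordinary object shaped like the tree above (the sample data is hypothetical):

```javascript
// Count every message across all chat rooms, given the value of the
// "messages" node as a plain object: { roomId: { messageId: {...} } }.
function countMessages(messagesValue) {
  let total = 0;
  for (const roomId of Object.keys(messagesValue || {})) {
    // each room maps message ids to message objects
    total += Object.keys(messagesValue[roomId]).length;
  }
  return total;
}

// Hypothetical sample mirroring the messages/<room-id>/<message-id> tree
const sample = {
  room1: { m1: { text: 'hi' }, m2: { text: 'hello' } },
  room2: { m3: { text: 'hey' } }
};
console.log(countMessages(sample)); // 3
```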

How to get most recent message in a messaging DB? (RethinkDB)

Hi I'm building a chat messaging system and am reading/writing to the DB for my first time. I'm creating the method calls to retrieve relevant data I need for this chat. Like with most chat systems, I want to have a general list of message with the name, most recent message, and time/date associated with the most recent message. Once I click on that thread, the corresponding messages will show up. I'm trying to call the DB but am having trouble using the correct command to get the most recent message.
This is what one message contains:
{
  "_id": "134a8cba-2195-4ada-bae2-bc1b47d9925a",
  "clinic_id": 1,
  "created": 1531157560,
  "direction": "o",
  "number": "14383411234",
  "read": true,
  "text": "hey hows it going"
}
Every single message that is sent and received gets POSTed like this. I'm having trouble coming up with the correct commands to get the most recent message for each distinct "number", so that for number x I get its most recent message, and for number y I get its most recent message. "created" is the time when the message was created, in UNIX time.
This is what I have started with:
Retrieve all thread numbers:
r.db('d2').table('sms_msg').between([1, r.minval], [1, r.maxval], {index:'clinic_number'}).pluck(['number']).distinct()
Retrieve all messages in specific thread:
r.db('d2').table('sms_msg').getAll([1, "14383411234"], {index:'clinic_number'})
Retrieve recent message for all distinct threads:
r.db('d2').table('sms_msg').filter()....???
Some help would really be appreciated!
That's a very tricky query for any database, and usually involves a multitude of sub-queries. You might want to consider denormalizing: for each number, keep a reference to the latest entry in another table.
But basically, with your current approach, this might work (untested) but might be highly inefficient:
r.table('sms_msg').orderBy(r.desc('created')).group('number').nth(0)
Getting just the extreme value of a single property is usually fast, but retrieving the whole document at the top of each sorted group like this is, in my experience, very inefficient.
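To make the intent of that query concrete, here is a hedged plain-JavaScript model of what orderBy(desc('created')).group('number').nth(0) computes, run against in-memory data (the sample messages are made up):

```javascript
// For each distinct "number", keep the message with the largest "created"
// timestamp - i.e. the most recent message per thread.
function latestPerNumber(messages) {
  const latest = {};
  for (const msg of messages) {
    const cur = latest[msg.number];
    if (!cur || msg.created > cur.created) {
      latest[msg.number] = msg;
    }
  }
  return latest;
}

// Hypothetical sample data in the shape shown above
const msgs = [
  { number: '14383411234', created: 1531157560, text: 'hey hows it going' },
  { number: '14383411234', created: 1531157999, text: 'pretty good' },
  { number: '15550001111', created: 1531150000, text: 'hello' }
];
console.log(latestPerNumber(msgs)['14383411234'].text); // 'pretty good'
```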

Perl code to retrieve currently logged user list of windows server

Clients are connecting to a windows server with different user names. For example:
client1 connects to server with user1
client2 connects to server with user2
client3 connects to server with user3
Now there are 3 currently logged users at server: user1, user2, user3.
Is it possible to retrieve the logged-on users and their client names? I can see this in Task Manager on the Users tab, as in the picture below:
I don't use Windows, but I can Google enough to guess at a solution.
This page suggests that you can use query user to get a list of logged in users.
You can run that command in Perl and capture the output using qx[].
# All output in a scalar
my $users = qx[query user];
# One line of output per element in an array
my @users = qx[query user];
You now have the information that you want in a Perl variable. The next step is to parse that data to extract the specific fields that you need. As I don't currently have access to a machine running Windows, I can't see what format this command returns, so I can't help you with this second part of the process.
If you have trouble parsing the data, then post a sample of it in a new question here and we'll be happy to help you further.

Retrieving a list of records in deepstream.io

I'm currently implementing a simple chat in order to learn how to use deepstream.io. Is there an easy way to get an interval from, let's say, a list of records? Imagine the scenario where a user wants to see old chat messages by scrolling back in the history. I could not find anything about this in the documentation, and I have read through the source with no luck.
Is my best bet to work against a database (e.g. RethinkDb) directly or is there an easy way to do it through deepstream?
First: The bad news:
deepstream.io is purely a messaging server - it doesn't look into the data that passes through it. This means that any kind of querying functionality would need to be provided by another system, e.g. a client connected to RethinkDB.
Having said that: There's good news:
We're also looking into adding chat functionality (including extensive history keeping and searching) into our application.
Since chat messages are immutable (they won't change once they are sent), we will use deepstream events rather than records. To keep the chat history, we'll create a "chat history provider": a node process that sits between deepstream and our database and listens for any event whose name starts with 'chat-' (assuming your chat events are named chat-<chat-name>/<message-id>, e.g. chat-idle-banter/254kbsdf-5mb2soodnv).
On a very high level our chat-history-provider will look like this:
ds.event.listen( /chat-*/, function( chatName, messageData ) {
  //Add the timestamp on the server-side, otherwise people
  //can change the order of messages by changing their system clock
  messageData.timestamp = Date.now();
  rethinkdbConnector.set( chatName, messageData );
});

ds.rpc.provide( 'get-chat-history', function( data, response ) {
  //Query your database here
});
Currently deepstream only supports "listening" for records, but the upcoming version will offer the same kind of functionality for events and rpcs.

How to optimize collection subscription in Meteor?

I'm working on a filtered live search module with Meteor.js.
Usecase & problem:
A user wants to search through all users to find friends, but I cannot afford to send each user the complete users collection. The user filters the search using checkboxes. I'd like to subscribe only to the matched users. What is the best way to do it?
I guess it would be better to create the query client-side, then send it to the method to get back the desired set of users. But I wonder: when the filtering criteria change, does the new subscription erase the old one? If a first search returns [usr1, usr3, usr5] and a second returns [usr2, usr4], the best would be to keep the first set and simply add the new one to the client-side subscribed collection.
And if a third search should then return [usr1, usr3, usr2, usr4], the autorun subscription would not need to send me anything, as I would already have the whole result set in my collection.
The goal is to spare processing and data transfer from the server.
I have some ideas, but I haven't coded enough of them yet to share in an easily comprehensible way.
How would you advise doing this to save as much time and processing as possible?
Thank you all.
David
It depends on your application, but you'll probably send a non-empty string to a publisher which uses that string to search the users collection for matching names. For example:
Meteor.publish('usersByName', function(search) {
  check(search, String);
  // make sure the user is logged in and that search is sufficiently long
  if (!(this.userId && search.length > 2))
    return [];
  // search by case insensitive regular expression
  var selector = {username: new RegExp(search, 'i')};
  // only publish the necessary fields
  var options = {fields: {username: 1}};
  return Meteor.users.find(selector, options);
});
Also see common mistakes for why we limit the fields.
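One caveat with the publisher above: search goes straight into new RegExp, so input containing characters like ( or * can throw or match unintended names. A hedged helper to escape user input first (this escaping idiom is general JavaScript, not part of Meteor itself):

```javascript
// Escape regex metacharacters so user input is matched literally
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Build the same case-insensitive selector, but safely
const search = 'a+b'; // hypothetical user input
const selector = { username: new RegExp(escapeRegExp(search), 'i') };
console.log(selector.username.test('xa+by')); // true  (contains literal "a+b")
console.log(selector.username.test('aab'));   // false (no literal "a+b")
```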
performance
Meteor is clever enough to keep track of the current document set that each client has for each publisher. When the publisher reruns, it knows to only send the difference between the sets. So the situation you described above is already taken care of for you.
If you were subscribed for users: 1,2,3
Then you restarted the subscription for users 2,3,4
The server would send a removed message for 1 and an added message for 4.
Note this will not happen if you stopped the subscription prior to rerunning it.
To my knowledge, there isn't a way to avoid removed messages when modifying the parameters for a single subscription. I can think of two possible (but tricky) alternatives:
Accumulate the intersection of all prior search queries and use that when subscribing. For example, if a user searched for {height: 5} and then searched for {eyes: 'blue'} you could subscribe with {height: 5, eyes: 'blue'}. This may be hard to implement on the client, but it should accomplish what you want with the minimum network traffic.
Accumulate active subscriptions. Rather than modifying the existing subscription each time the user modifies the search, start a new subscription for the new set of documents, and push the subscription handle to an array. When the template is destroyed, you'll need to iterate through all of the handles and call stop() on them. This should work, but it will consume more resources (both network and server memory + CPU).
Before attempting either of these solutions, I'd recommend benchmarking the worst case scenario without using them. My main concern is that without fairly tight controls, you could end up publishing the entire users collection after successive searches.
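The first alternative amounts to merging the filter objects as the user refines the search. A minimal sketch of that accumulation in plain JavaScript (the field names are hypothetical, matching the example above):

```javascript
// Accumulate successive search filters into one combined selector.
// Each new search narrows the previous one (logical AND).
function accumulateFilters(history) {
  return history.reduce(
    (combined, filter) => Object.assign(combined, filter),
    {}
  );
}

// User first searches by height, then refines by eye color
const searches = [{ height: 5 }, { eyes: 'blue' }];
console.log(accumulateFilters(searches)); // { height: 5, eyes: 'blue' }
```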
If you want to go easy on your server, you'll want to send as little data to the client as possible. That means every document you send to the client that is NOT a friend is waste. So let's eliminate all that waste.
Collect your filters (e.g. filters = {sex: 'Male', state: 'Oregon'}). Then call a method to search based on your filters (e.g. Users.find(filters)). Additionally, you can run your own proprietary ranking algorithm to estimate the probability that a person is a friend; maybe base it on distance from IP address (or from phone GPS history), mutual friends, etc. This will pay efficiency dividends in a bit. Index things like GPS coordinates or other highly selective attributes, and maybe try out composite indexes. But remember: more indexes mean slower writes.
Now you've got a cursor with all possible friends, ranked from most likely to least likely.
Next, change your subscription to match those friends, but put a limit:20 on there. Also, only send over the fields you need. That way, if a user wants to skip this step, you only wasted sending 20 partial docs over the wire. Then, have an infinite scroll or 'load more' button the user can click. When they load more, it's an additive subscription, so it's not resending duplicate info. Discover Meteor describes this pattern in great detail, so I won't.
After a few clicks/scrolls, the user won't find any more friends (because you were smart & sorted them) so they will stop trying & move on to the next step. If you returned 200 possible friends & they stop trying after 60, you just saved 140 docs from going through the pipeline. There's your efficiency.