So basically I need a system like this:
We've got users
Users have friends
When users come to the website they can post an "activity", just like Twitter: they type in what they are doing, and all their friends get a realtime update.
I have looked at Lift for a week or three, and I dug into the chat server example. However, as I said, how can I make a comet actor for "activities" that were posted by friends?
I see two general approaches:
Do it the same way as the chat example: use one "Chat" server that holds all activities and that every user is registered with. If a new activity is posted, every user is informed and has to check whether the activity was posted by one of its friends (probably via match/PartialFunction). If yes, it displays it; otherwise it discards it.
Use one "Chat" server per user and register only the user's friends with this server. Note: I don't know whether you need one comet listener per server for each of the users you are following, or whether a single comet listener can listen to several servers. If you need one listener per server, you will have to combine all the activities you're listening to before displaying them.
Approach 1 is closer to the ChatServer example, but I would suggest following approach 2, since it involves less communication and should therefore scale better. Using different servers should also improve scalability, since you can partition based on them. Of course, you will have to do some more management than with a single singleton "Chat" server.
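This is not Lift code, but the fan-out idea behind approach 2 can be sketched in a few lines of Go to make the structure concrete (ActivityServer, Subscribe and Post are made-up names, and a real version would need locking and a per-user server lookup):

package main

import "fmt"

// Activity is a hypothetical posted status update.
type Activity struct {
    Author, Text string
}

// ActivityServer plays the role of the per-user "Chat" server in approach 2.
type ActivityServer struct {
    subscribers []chan Activity
}

// Subscribe returns a channel on which one friend receives this user's activities.
func (s *ActivityServer) Subscribe() chan Activity {
    ch := make(chan Activity, 16)
    s.subscribers = append(s.subscribers, ch)
    return ch
}

// Post fans an activity out to every subscribed friend.
func (s *ActivityServer) Post(a Activity) {
    for _, ch := range s.subscribers {
        ch <- a
    }
}

func main() {
    alice := &ActivityServer{}
    bobListens := alice.Subscribe() // Bob follows Alice, so he subscribes to her server

    alice.Post(Activity{Author: "alice", Text: "having coffee"})
    fmt.Println(<-bobListens) // Bob's comet-like listener would render this update
}

Approach 1 would instead be a single such server that every user subscribes to, with each listener filtering incoming activities against its own friend list.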
I apologize in advance for my bad English.
I've created a simple training service in Go which supports a login and registration system backed by MongoDB. The service lets you scrape rooms for rent in London in a specified location if you are logged in. Now I want to implement notifications for logged-in users about new rooms in the location they have marked. My first idea was to make some background process which scrapes rooms every 30 seconds, saves the results (in Mongo, in cookies, or somewhere else; please advise), matches the new scrape results against the previous ones, and saves the differences (new rooms) in the DB for later delivery to the user in some form (email or a list on an HTML page).
1) Is my idea about notifications generally correct? If not, please describe a better way to do this or point me to some related examples.
2) What is the best way to build that background process in Go?
3) It would be great if you could point me to some examples related to this case.
The demo of the service on Heroku
Github repo
I appreciate any help.
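For question 2, one common shape for such a background process in Go is a goroutine driven by a time.Ticker. The sketch below is only illustrative: scrapeRooms and notify are hypothetical stand-ins for the real scraper and the Mongo/email code, and the seen map would really be persisted in MongoDB so restarts don't re-notify.

package main

import (
    "log"
    "time"
)

// Room is a hypothetical scrape result.
type Room struct {
    ID, Location string
}

// scrapeRooms stands in for the real scraper.
func scrapeRooms(location string) ([]Room, error) {
    return nil, nil
}

// notify stands in for emailing the user or appending to their feed in Mongo.
func notify(userID string, rooms []Room) {
    log.Printf("user %s: %d new rooms", userID, len(rooms))
}

// pollRooms scrapes every 30 seconds and reports only rooms not seen before.
func pollRooms(userID, location string) {
    seen := make(map[string]bool) // in a real service, persist this in Mongo instead
    ticker := time.NewTicker(30 * time.Second)
    defer ticker.Stop()

    for range ticker.C {
        rooms, err := scrapeRooms(location)
        if err != nil {
            log.Println("scrape failed:", err)
            continue
        }
        var fresh []Room
        for _, r := range rooms {
            if !seen[r.ID] {
                seen[r.ID] = true
                fresh = append(fresh, r)
            }
        }
        if len(fresh) > 0 {
            notify(userID, fresh)
        }
    }
}

func main() {
    go pollRooms("user42", "Camden") // background process
    select {}                        // block forever; the HTTP server would live here
}

Storing the already-seen room IDs in Mongo rather than in cookies keeps the diffing server-side and independent of any one browser session.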
Actually I'm more interested in the techniques used to achieve this than in really building a chat system (which is an excellent concrete example). I see two parts:
The client needs to get registered somewhere, and we then need a unique ID per client.
The server should be able to send something to one specific client, on behalf of another client.
For the first part, I do not know how to get this unique ID. Possibly by using the new Meteor auth kit?
For the second part, I thought about building a per-client collection that one and only one client will have access to, but it sounds heavy and, in my opinion, not really Meteor best practice. I then thought of adding a "from" and a "to" field to a Message (see the regular chat example). This would do it, but I'm wondering about the lack of privacy on them. Would a custom publish returning a filtered find do it, or is it risky too, i.e. would other clients get the items as well? Something like:
Meteor.publish("message", function (clientID) {
  return Messages.find({ dest: clientID });
});
The latest Meteor todos example uses the new auth system to identify private todo entries. I would imagine that you could use the same mechanism to identify the originator and recipient of a private message in a chat-like system.
Of course, the filtering of which messages someone sees would need to happen on the server side to maintain privacy.
I need to include chat in my application. People sign in to the chat, create their user, and chat with other users. However, it needs to be like Facebook chat or Pingchat, where you add the friends you want to talk to.
Can anyone give me pointers on what I need to do? I've heard about XMPP servers but I'm not sure if that is the right thing for my app. Any help would be much appreciated.
Thanks
Is your app going to create new users and add them to the chat list, or is it going to use existing users (like Gtalk, Y! Messenger, etc.) over existing protocols (like IRC, XMPP, etc.)?
If you are going to implement your own chat system, where your users register on your website, then you are going to do these things:
Set up your website
Create a protocol (that is, how you pass messages)
Write and implement an API (in PHP, ASP, etc.)
Connect that API to your iPhone app.
How does it work?
You keep a table of chat messages. The table includes:
Chat_From
Chat_To
Chat_Message
Timestamp
All you do is this: when a chat session from Alice to Bob starts, you enter the message in the table. Next, you fetch the row from the web server to your app by calling your PHP file (say, http://mychatserver.com/getChat.php) with a query like SELECT CHAT_MESSAGE FROM CHAT_TABLE WHERE CHAT_FROM='ALICE' AND CHAT_TO='BOB';. This message is then displayed in your app.
This process should be performed repeatedly, at an interval of, say, one second.
I hope you get the idea.
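The same polling idea, sketched in Go instead of PHP (purely illustrative: the table and column names come from the description above, while the driver and DSN are assumptions):

package main

import (
    "database/sql"
    "fmt"
    "log"
    "time"

    _ "github.com/go-sql-driver/mysql" // assumed MySQL driver
)

// fetchChat returns all messages sent from one user to another.
func fetchChat(db *sql.DB, from, to string) ([]string, error) {
    rows, err := db.Query(
        "SELECT CHAT_MESSAGE FROM CHAT_TABLE WHERE CHAT_FROM = ? AND CHAT_TO = ?",
        from, to)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var messages []string
    for rows.Next() {
        var m string
        if err := rows.Scan(&m); err != nil {
            return nil, err
        }
        messages = append(messages, m)
    }
    return messages, rows.Err()
}

func main() {
    db, err := sql.Open("mysql", "user:password@/chatdb") // assumed DSN
    if err != nil {
        log.Fatal(err)
    }
    // Poll roughly once per second, as described above.
    for range time.Tick(time.Second) {
        msgs, err := fetchChat(db, "ALICE", "BOB")
        if err != nil {
            log.Println("fetch failed:", err)
            continue
        }
        fmt.Println(msgs)
    }
}

Using placeholders instead of concatenating the usernames into the SQL string also avoids SQL injection.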
Does Facebook provide access to any real-time APIs so that you can respond to events as soon as they happen? If not, what alternatives are there and what are their limitations? For example, if I use polling instead, will they limit my API calls? And if I try using RSS feeds, roughly how much delay can I expect? Or maybe it would be possible to receive and process email notifications (if I could convince a user to forward mail to another email address), as they seem to be dispatched pretty promptly.
I've never tried polling user data, but I think it will work without issues. As far as I know there are no restrictions on the number of API calls you can make to Facebook.
As far as queries are concerned, from what I have seen this is how they implement it: if your query asks for too much data (which I think they measure as taking too much time to process), the query will just fail.
E.g.: I had an app that would pull all the status messages of all the friends of a user and display them in one place. I first queried for all the friends of the user, which worked okay. But if at the same time I ran a loop to get all the status messages for each friend, it would just fail.
I think you can make individual queries without issues; just be careful to query only the data you need, because if the queries are too big or too many they will just fail. The best way to find out is to run tests yourself.
The Facebook Graph API will allow you to subscribe to real-time changes. You can currently only subscribe to users, permissions and errors, but they promise to allow subscribing to more objects in the future.
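Those subscriptions work by Facebook calling back a URL you host: it first verifies the URL with a GET carrying hub.challenge and the verify token you chose, then POSTs change notifications to it. A rough sketch of such a callback handler in Go (the token, path and port are placeholders):

package main

import (
    "io"
    "log"
    "net/http"
)

const verifyToken = "my-secret-token" // the token you register with the subscription

func callback(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    case http.MethodGet:
        // Subscription verification: echo hub.challenge if the token matches.
        if r.FormValue("hub.verify_token") == verifyToken {
            io.WriteString(w, r.FormValue("hub.challenge"))
            return
        }
        http.Error(w, "bad verify token", http.StatusForbidden)
    case http.MethodPost:
        // Change notification: Facebook POSTs a JSON body describing what changed.
        body, _ := io.ReadAll(r.Body)
        log.Printf("realtime update: %s", body)
    }
}

func main() {
    http.HandleFunc("/fb/callback", callback)
    log.Fatal(http.ListenAndServe(":8080", nil))
}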
I have created Twitter bots for many geographic locations. I want to allow users to @-reply to a Twitter bot with commands and then have the bot respond with the results. I would like the bot to reply to the user as quickly as possible (realtime).
Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies but it was shut down.
As I see it my options are to use one of the following:
REST API
This would involve polling every X minutes for each bot. The problem with this is that it is not realtime, and each Twitter account would have to be polled separately.
Search API
The Search API does allow specifying a "-to" parameter in the search, so replies to all bots could be aggregated in a search such as "-to bot1 OR -to bot2...". However, if you have hundreds of bots, the search string would get very long and probably exceed the maximum length of a GET request.
Streaming API
The Streaming API looks very promising, as it provides realtime results. The API lets you specify follow and track parameters. follow is not useful, as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could work by creating a daemon process that connects to the Streaming API and tracks all references to the bots' names. Once again, since there are lots of bots to track, the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand; a user could then send a command using the syntax "@bot1 weather #botcommand". Using the Streaming API to track all references to #botcommand would give you a realtime stream of all the commands, and further parsing could then determine which bot to send each command to; a rough parsing sketch follows after this question. This blog post has more details on the Streaming API.
Third-party service
Are there any third-party companies that have access to the Twitter firehose and offer realtime data?
I haven't investigated these, but here are a few that I have found:
Gnip
Tweet.IM
excla.im
TwitterSpy - seems to use polling, not realtime
tweethook
I'm leaning towards using the Streaming API. Is there a better way to get near-realtime @-replies for many (hundreds of) Twitter accounts?
UPDATE: Twitter just announced that in the future they will have User Streams which expands upon the Streaming API. User Streams Preview
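The "further parsing" step mentioned above for the Streaming API option, i.e. working out which bot a tracked "@bot1 weather #botcommand" tweet is addressed to, could look roughly like this (parseCommand is a made-up helper, and a real bot would preferably work from the tweet's entities rather than raw text):

package main

import (
    "fmt"
    "strings"
)

// parseCommand extracts the target bot and the command text from a tweet
// written as "@bot1 weather #botcommand".
func parseCommand(tweet string) (bot, command string, ok bool) {
    if !strings.Contains(strings.ToLower(tweet), "#botcommand") {
        return "", "", false // not a command tweet
    }
    fields := strings.Fields(tweet)
    if len(fields) < 2 || !strings.HasPrefix(fields[0], "@") {
        return "", "", false
    }
    bot = strings.TrimPrefix(fields[0], "@")
    // Everything between the mention and the hashtag is the command.
    var words []string
    for _, f := range fields[1:] {
        if strings.EqualFold(f, "#botcommand") {
            break
        }
        words = append(words, f)
    }
    return bot, strings.Join(words, " "), true
}

func main() {
    bot, cmd, ok := parseCommand("@bot1 weather #botcommand")
    fmt.Println(bot, cmd, ok) // prints: bot1 weather true
}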
Either track or follow will work for the cases you describe. See http://apiwiki.twitter.com/Streaming-API-Documentation#track for details on what track actually does. The doc on follow is on the same page.
There are rate limits of sorts on the Streaming API, but they have to do with how big a slice of the total tweet stream you're consuming. For a bot like this you won't hit those limits without a pretty big user base, and by the time you have that user base you can apply for elevated access levels that increase the rate limits.
There's the Twitter firehose, but you're probably best off using the Streaming API. The firehose is open to Google (try googling your Twitter name) and, as the link says, they're opening it up to everyone soon enough.
You'll want to get your IP whitelisted too.
If you're not on it already, you'll want to check out the Google Group for Twitter devs.
The track predicate for the Streaming API would actually be useful, because if you follow your bots' user IDs you'll get all the messages made by your bots and all the other messages that mention your bots' @usernames (including @replies). It really does track everything public on Twitter relating to the user IDs you follow with it; give it a shot.
REST API:
The most comprehensive results with the fewest false positives. It will include protected statuses if the bot is following the protected account. If you poll every thirty seconds it is pretty close to realtime, and you will stay well under your rate limit (350/hour) if you are using api.twitter.com/1 with OAuth.
Search API:
You will want to avoid the Search API. It is trending more and more towards popular results rather than complete results.
Streaming API:
The fastest, but also likely to miss some statuses as well as include some false positives. Protected statuses, for example, are not included. Tracking a screen_name will return statuses containing that screen_name, but will also include tweets that merely contain the screen_name as a plain string without the @, so be sure to filter on your side.
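That client-side filter can be as simple as requiring the @ in front of the matched screen_name (a rough sketch; real code would preferably check the user_mentions entities in the tweet payload instead of the raw text):

package main

import (
    "fmt"
    "strings"
)

// mentionsBot reports whether the tweet text actually @-mentions the bot,
// rather than merely containing its screen_name as a plain string.
func mentionsBot(text, screenName string) bool {
    return strings.Contains(strings.ToLower(text), "@"+strings.ToLower(screenName))
}

func main() {
    fmt.Println(mentionsBot("hey @WeatherBot what's up", "weatherbot")) // true
    fmt.Println(mentionsBot("weatherbot is down again", "weatherbot"))  // false
}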