Having integrated our billing system, Meveo, with RestComm to provision accounts and phone numbers, I would like to know where I can get CDR files and what format they are in.
Thank you
One straightforward way is to fetch CDRs via the REST API. You can fetch records within a time frame (via list filters) and paginate to avoid overloading the Restcomm server:
Calls logs API:
http://docs.telestax.com/restcomm-api-calls/
SMS message logs API:
http://docs.telestax.com/restcomm-api-sms-messages/
Notice that some records returned by this API may be for in-progress calls. To fetch only calls that have completed or failed, use the list filter parameters.
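For example, a paging loop over the Calls API might look roughly like the sketch below; the host, credentials, and the exact parameter and field names are placeholders to verify against the documentation linked above.

```python
# Rough sketch: pull completed (or failed) call CDRs page by page via the Calls API.
# Host, account SID, auth token and the exact parameter/field names are
# assumptions -- check them against the Calls API documentation linked above.
import requests

BASE = "https://your-restcomm-host/restcomm/2012-04-24"
ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder
AUTH_TOKEN = "your-auth-token"                        # placeholder

def fetch_cdrs(status, start_time, end_time, page_size=100):
    """Yield call records with the given status between start_time and end_time."""
    page = 0
    while True:
        resp = requests.get(
            f"{BASE}/Accounts/{ACCOUNT_SID}/Calls.json",
            auth=(ACCOUNT_SID, AUTH_TOKEN),
            params={
                "Status": status,            # e.g. "completed" or "failed"
                "StartTime": start_time,     # e.g. "2016-01-01"
                "EndTime": end_time,
                "Page": page,
                "PageSize": page_size,       # keep pages small to avoid overloading the server
            },
            timeout=30,
        )
        resp.raise_for_status()
        calls = resp.json().get("calls", [])
        if not calls:
            break
        yield from calls
        page += 1

for cdr in fetch_cdrs("completed", "2016-01-01", "2016-01-31"):
    print(cdr.get("sid"), cdr.get("duration"))
```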
For very high-load systems, it is better to read the rolling CDR log files from a shared file system. We can discuss how this can be done in a separate thread.
Hi, I am currently building a REST API and I am wondering about an idea I have for secure data posting.
I have data that I have to delegate to two different endpoints in the same POST, and I am wondering if there is any way to "test-post" to the endpoints before posting for real, to handle the problematic scenario in which one of the systems I'm sending to is down for a moment.
That is, if one of the systems were down, I could return a failed-request message instead.
To explain my thinking in more detail:
A user posts a data object to my API via an endpoint.
I process the data and then try to send the processed data to two different systems via their endpoints in the same process/method.
If both systems are OK, the data is sent and I return an OK.
If one of the systems is not OK, I roll back (that is, I do not send the processed data to either of them) and return, for example, a 500 Internal Server Error.
It is critical that when I send the data, it is sent to both systems in the same process, or to neither of them if one system is down.
I hope that makes clear what I am after.
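For illustration, the flow described above might look roughly like the sketch below. The endpoint URLs, health-check calls, and the compensating delete are hypothetical; a plain availability check cannot guarantee both writes will succeed, so the sketch also tries to undo the first write if the second one fails.

```python
# Rough sketch of the flow described above. The endpoint URLs, health-check
# requests and rollback call are hypothetical -- adjust to the real systems.
import requests

SYSTEM_A = "https://system-a.example.com/api/items"
SYSTEM_B = "https://system-b.example.com/api/items"

def both_systems_up():
    """'Test-post' substitute: a cheap availability check on both endpoints."""
    for url in (SYSTEM_A, SYSTEM_B):
        try:
            requests.head(url, timeout=2).raise_for_status()
        except requests.RequestException:
            return False
    return True

def forward(processed_data):
    """Return an HTTP status code to send back to the original caller."""
    if not both_systems_up():
        return 503  # one system is down: send to neither

    resp_a = requests.post(SYSTEM_A, json=processed_data, timeout=5)
    if resp_a.status_code >= 400:
        return 500

    resp_b = requests.post(SYSTEM_B, json=processed_data, timeout=5)
    if resp_b.status_code >= 400:
        # compensate: try to undo the first write so neither system keeps the data
        item_id = resp_a.json().get("id")            # assumes system A returns an id
        requests.delete(f"{SYSTEM_A}/{item_id}", timeout=5)
        return 500

    return 200
```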
I have a scenario in my app which is similar to sending a friend request on Facebook.
When user A sends a friend request to user B, a new friend-request document is created internally. Later, when user B also wants to send a friend request to A, the system finds that a friend-request document already exists, so they should become friends of each other and no new friend-request document is created.
I'm trying to figure out the case where user A and user B both send friend requests to each other simultaneously, which would create two friend-request documents and lead to undetermined behaviour...
Thanks for your suggestions, really appreciated!
Edit:
A few had suggested using a request queue to solve this; however,
I'm confused about using a queue because I thought it would make my REST API endpoint process requests sequentially. Wouldn't I lose all the benefit of multi-threading by using a queue? I can't help but imagine how bad it would be if my service had millions of requests queued, waiting to be executed one by one, just because of this issue. Has anyone seen similar problems in production?
I had a similar situation with a client that had concurrent writes to the database; what I implemented is a queue service.
Create a request in the queue rather than writing to the database directly. A separate reader will read one message from the queue at a time and check whether it is valid to write it to the database, writing only if there is no previous request.
You can implement your own queue, or you can use a service like AWS SQS, RabbitMQ, MSMQ, etc.
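A minimal sketch of that pattern, using Python's in-process queue and a plain dict as stand-ins for the real queue service and database:

```python
# Minimal sketch of the idea: endpoints only enqueue, a single reader writes.
# queue.Queue stands in for SQS/RabbitMQ/MSMQ and a dict stands in for the database.
import queue
import threading

friend_requests = queue.Queue()   # API endpoints enqueue here instead of writing
db = {}                           # key: frozenset({user_a, user_b}) -> document

def enqueue_friend_request(sender, receiver):
    friend_requests.put((sender, receiver))

def writer_loop():
    """Single reader: processes one message at a time, so the check cannot race."""
    while True:
        sender, receiver = friend_requests.get()
        key = frozenset((sender, receiver))
        if key in db:
            # a previous request already exists -> they become friends instead
            db[key]["status"] = "friends"
        else:
            db[key] = {"from": sender, "to": receiver, "status": "pending"}
        friend_requests.task_done()

threading.Thread(target=writer_loop, daemon=True).start()
enqueue_friend_request("A", "B")
enqueue_friend_request("B", "A")   # handled after the first one, no duplicate document
friend_requests.join()
print(db)
```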
// Specific to your case
In MongoDB, write operations on a single document are atomic.
MongoDB also supports unique indexes.
Hence, if you derive the _id (or any other uniquely indexed field) from the two person names A and B, for example "A_B" by sorting the names lexicographically before insertion, you will inherently be able to insert only one instance of that document.
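For illustration, a sketch with pymongo (the collection and field names are just examples):

```python
# Sketch with pymongo: derive the _id from the sorted pair of names so MongoDB's
# built-in unique index on _id allows only one friend-request document per pair.
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

requests_col = MongoClient()["mydb"]["friend_requests"]

def send_friend_request(sender, receiver):
    pair_id = "_".join(sorted([sender, receiver]))   # "A_B" whichever user sends first
    try:
        requests_col.insert_one({"_id": pair_id, "from": sender, "to": receiver})
        return "request created"
    except DuplicateKeyError:
        # the other user (or a concurrent call) already created it
        return "request already exists - mark the pair as friends"
```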
// General
What we would essentially like to have are transactions, but MongoDB doesn't support them as of now. There are a few tricks to achieve something similar:
Two-phase commits:
https://docs.mongodb.org/v3.0/tutorial/perform-two-phase-commits/
Using an external store to maintain a flag, for example memcached, which supports insertion in a transactional manner (atomic add / compare-and-swap).
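As a rough sketch of the external-flag idea, assuming memcached via pymemcache and an illustrative key scheme:

```python
# Sketch of the external-flag trick with memcached: "add" is atomic, so only one
# of two concurrent requests wins the flag. Key naming and TTL are illustrative.
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def try_claim(user_a, user_b, ttl=30):
    flag_key = "friendreq:" + "_".join(sorted([user_a, user_b]))
    # noreply=False so we get back whether the key was actually stored
    won = mc.add(flag_key, b"1", expire=ttl, noreply=False)
    return won   # True: this request may create the document; False: someone beat us to it

if try_claim("A", "B"):
    print("create the friend-request document")
else:
    print("a concurrent/previous request exists - handle as already-friends")
```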
If you handle this with a push to the frontend, you should fire a notification from the database to the frontend when a user sends a request: within a second of one user sending a request, the database notifies the frontend and your code immediately corrects the button text, for example from
"Add a friend" to "Incoming request"
or similar.
If you are only setting up the database, then just emit a notification to the UI when the friend request arrives or, as you say, when the document is created; the further processing will be handled by the UI developer.
Thank you.
If you don't like the answer then I apologize, but please don't downvote me, because I am new to the Stack Overflow community.
I am connecting to O365 Outlook Mail Get Messages REST API, e.g.
GET https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id
and I am trying to retrieve just the IDs so I can determine whether messages have been deleted from my inbox (e.g. by diffing against a previous ID list). I'm checking @odata.nextLink to perform a synchronous series of REST calls until complete.
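A rough sketch of that paging loop (auth reduced to a bearer-token placeholder):

```python
# Sketch of the ID-only paging loop described above, following @odata.nextLink
# until it disappears. The OAuth2 access token is a placeholder.
import requests

TOKEN = "your-access-token"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

def fetch_all_ids():
    ids = []
    url = "https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        ids.extend(msg["Id"] for msg in body.get("value", []))
        url = body.get("@odata.nextLink")   # absent on the last page
    return ids

current_ids = set(fetch_all_ids())
# deleted = previous_ids - current_ids   (diff against the ID list saved last run)
```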
I'm finding that this call has roughly the same performance as downloading the full messages (i.e. without the $select clause), at ~50 IDs/second. I'd like to know if there is a more efficient/quicker way of retrieving just the list of IDs of all messages in the Inbox. A call to retrieve a list of deleted/moved IDs from a point in time (e.g. tombstones) would also work, something like:
GET https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id&$filter=DateTimeTombstone gt 2014-09-01T00:00:00Z
Thanks!
No, currently there isn't. Sync is on our radar to add though, which sounds like it might help your scenario.
Don't know about the REST API, but EWS lets you sync any Exchange folder - this way you will know which items were created/modified/deleted without loading all items in the folder - see https://msdn.microsoft.com/en-us/library/office/Ee693003(v=EXCHG.80).aspx
I am using the GA Data Export API to interact with Google Analytics, and I'm making a lot of progress. I am initially using this URL endpoint to pull all the profiles under an account:
https://www.google.com/analytics/feeds/accounts/default
This URL retrieves each GA ID (profile) and each UA. One thing I've realized is that one account can contain multiple UAs, and when this happens, this request pulls all profiles. We have a client with about 115 profiles under roughly 10 different UAs, and the initial request takes about 30 seconds (I believe it is then cached, because it speeds up considerably afterwards, but the next day the same thing occurs).
Is there a way to get a list of UAs without pulling the profiles? That way I could query each UA specifically for its profiles instead of pulling all of them.
Any advice on this would be really helpful!
Thanks
UPDATE: Here's some documentation on the specific call I am using right now:
http://code.google.com/apis/analytics/docs/gdata/gdataReferenceAccountFeed.html
UPDATE 1: I have found some interesting information in the docs:
Once your application has verified that the user has Analytics access, its next step is to find out which Analytics accounts the user has access to. Remember, users can have access to many different accounts, and within them, many different profiles. For this reason, your application cannot access any report information without first requesting the list of accounts available to the user. The resulting accounts feed returns that list, but most importantly, the list also contains the account profiles that the user can view.
So this means that you have to use the default accounts call to get these back? Surely, somebody has had this issue before?
So apparently you can query the account if you know the UA ID; however, there is no way to get back a list of only the UA IDs.
One way you can do it is to have the user enter their own UA ID instead of having them choose one; not as user-friendly as it could be, but better than making the user wait 30 seconds!
I have created Twitter bots for many geographic locations. I want to allow users to @-reply to the Twitter bot with commands and then have the bot respond with the results. I would like the bot to reply to the user as quickly as possible (realtime).
Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies but it was shut down.
As I see it my options are to use one of the following:
REST API
This would involve polling every X minutes for each bot. The problem with this is that it is not realtime and each Twitter account would have to be polled.
Search API
The search API does allow specifying a "-to" parameter in the search and replies to all bots could be aggregated in a search such as "-to bot1 OR -to bot2...". Though if you have hundreds of bots then the search string would get very long and probably exceed the maximum length of a GET request.
Streaming API
The Streaming API looks very promising as it provides realtime results. The API allows you to specify follow and track parameters. follow is not useful, as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could possibly work by creating a daemon process that connects to the Streaming API and tracks all references to the bots' names. Once again, since there are lots of bots to track, the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand, so a user could send a command using the syntax "@bot1 weather #botcommand". Then using the Streaming API to track all references to #botcommand would give you a realtime stream of all the commands, and further parsing could determine which bot each command should go to. This blog post has more details on the Streaming API.
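For illustration, a track-based consumer might look roughly like the sketch below; it assumes the statuses/filter endpoint with OAuth 1.0a credentials, and the exact endpoint, version, and access rules may have changed since this was written.

```python
# Rough sketch of a track-based streaming consumer. Endpoint, API version and
# credentials are assumptions -- adjust to whatever Twitter currently exposes.
import json

import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

resp = requests.post(
    "https://stream.twitter.com/1.1/statuses/filter.json",
    data={"track": "#botcommand"},     # one shared hashtag instead of hundreds of bot names
    auth=auth,
    stream=True,
    timeout=90,
)

for line in resp.iter_lines():
    if not line:
        continue                        # keep-alive newlines
    status = json.loads(line)
    text = status.get("text", "")
    # dispatch: the first @mention in the tweet picks which bot handles the command
    mentions = [m["screen_name"] for m in status.get("entities", {}).get("user_mentions", [])]
    if mentions:
        print(f"command for @{mentions[0]}: {text}")
```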
Third-party service
Are there any third-party companies that have access to the Twitter firehose and offer realtime data?
I haven't investigated these, but here are a few that I have found:
Gnip
Tweet.IM
excla.im
TwitterSpy - seems to use polling, not realtime
tweethook
I'm leaning towards using the Streaming API. Is there a better way to get near-realtime @-replies for many (hundreds of) Twitter accounts?
UPDATE: Twitter just announced that in the future they will have User Streams which expands upon the Streaming API. User Streams Preview
Either track or follow will work for the cases you describe. See http://apiwiki.twitter.com/Streaming-API-Documentation#track for details on what track actually does. The doc on follow is on the same page.
There are rate limits of sorts on the Streaming API, but they have to do with how big a slice of the total tweet stream you're consuming. For a bot like this, you won't hit these limits without a pretty big user base, and when you get that user base you can apply for elevated access levels that increase the rate limits.
There's the Twitter firehose, but you're probably best off using the Streaming API. The firehose is open to Google (try googling your Twitter name) and, as the link says, they're opening it up to everyone soon enough.
You'll want to get your IP whitelisted too.
If you're not already on it, you'll want to check out the Google Group for Twitter devs.
The follow predicate for the Streaming API would actually be useful: if you follow your bots' user IDs, you'll get all the messages made by your bots and all the other messages that mention your bots' @usernames (including @replies). It really does track everything public on Twitter relating to the user IDs you follow with it, so give it a shot.
REST API:
The most comprehensive results with the fewest false positives. It will include protected statuses if the bot is following the protected account. If you poll every thirty seconds, it is pretty close to realtime, and you will be well under your rate limit (350/hour) if you are using api.twitter.com/1 with OAuth.
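For illustration, such a 30-second polling loop with since_id might look roughly like this; the endpoint name is the API 1.1 mentions timeline and the credentials are placeholders.

```python
# Sketch of polling the bot's mentions every 30 seconds with since_id, so each
# call only returns new @replies. Endpoint/version per the era of this answer.
import time

import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "BOT_ACCESS_TOKEN", "BOT_ACCESS_SECRET")
MENTIONS_URL = "https://api.twitter.com/1.1/statuses/mentions_timeline.json"

since_id = None
while True:
    params = {"count": 200}
    if since_id:
        params["since_id"] = since_id      # only tweets newer than the last one seen
    resp = requests.get(MENTIONS_URL, params=params, auth=auth, timeout=30)
    resp.raise_for_status()
    tweets = resp.json()                   # newest first
    for status in tweets:
        print(status["user"]["screen_name"], status["text"])
    if tweets:
        since_id = tweets[0]["id"]
    time.sleep(30)                         # ~120 calls/hour, well under the limit
```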
Search API:
You will want to avoid the Search API. It is trending more and more towards popular results and not complete results.
Streaming API:
The fastest, but also likely to miss some statuses as well as include false positives. Protected statuses, for example, are not included. track for a screen_name will return statuses containing that screen_name, but it will also include tweets that just have the screen_name as a plain string without the @, so be sure to filter on your side.
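A small sketch of that client-side filtering, keeping a tracked status only if it genuinely @mentions one of the bots:

```python
# Sketch of the client-side filter suggested above: keep a tracked status only
# if it actually @mentions one of the bots, not merely contains the name as text.
BOT_NAMES = {"bot1", "bot2"}   # lowercase screen_names of your bots

def mentioned_bots(status):
    """Return the set of bot screen_names actually @mentioned in this status."""
    mentions = {
        m["screen_name"].lower()
        for m in status.get("entities", {}).get("user_mentions", [])
    }
    return mentions & BOT_NAMES

# usage inside the stream loop:
# bots = mentioned_bots(status)
# if not bots:
#     continue          # "bot1" appeared only as plain text -> false positive
```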