Good morning, I need to make attended transfers with SIP.js. Has anyone succeeded in this task?
I can only make blind transfers right now. I found an article that reports that version 0.7.x supports attended transfer through the Replaces mechanism.
https://www.onsip.com/blog/sipjs-070-adds-fixes-and-support-for-attended-transfer-recommended-upgrade
Maybe too late, but I'll write an answer for the future. I did it in these steps:
Save the current session in another variable, for example var holded_session = session;
Put the current session on hold: session.hold()
Make a new call: ua.invite()
Make the transfer: session.refer(holded_session)
The functions hold() and unhold() are not covered in the documentation, but if you log the session to the console, you will see they are there.
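Putting those steps together, a minimal sketch, assuming an established session on a connected ua (target_uri is a placeholder):
var holded_session = session;             // 1. keep a reference to the current call
holded_session.hold();                    // 2. put it on hold
var new_session = ua.invite(target_uri);  // 3. dial the transfer target
// 4. once the new call is accepted, complete the attended transfer
new_session.on('accepted', function () {
  holded_session.refer(new_session);
});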
I solved this for audio in the following manner:
sessionOne.hold(); // this session already exists from earlier
var uri = phone + '@' + sip_server;
var options = {
media: {
constraints: {
audio: true,
video: false
},
render: {
remote: document.getElementById('audio-output')
},
stream: mediaStream
},
extraHeaders: [...],
inviteWithoutSdp: true,
rel100: SIP.C.supported.SUPPORTED,
activeAfterTransfer: false //die when the transfer is completed
};
sessionTwo = userAgent.invite(uri, options);
// if you want to handle the transfer at the application level, implement the event handlers, e.g. on('refer', function(request) {}), on the session object
...
sessionOne.refer(sessionTwo); // sessionTwo must already be accepted (by the SIP server)
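For example, a sketch of handling the transfer at the application level; the handler body is illustrative only:
// react when the remote side sends a REFER on the session
sessionOne.on('refer', function (request) {
  // inspect the incoming REFER and decide how to proceed
  console.log('REFER received:', request);
});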
Below is a screenshot of me accessing https://www.ted.com, inspecting the "Network" tab in Google Chrome DevTools, and viewing the timing data for the root and child requests.
How can I get all of the above timing data programmatically using Puppeteer? Ideally, it would look like this JSON structure:
[
  {
    name: "www.ted.com",
    queueTime: 0,
    startedTime: 1.93,
    stalledTime: 4.59,
    dnsLookupTime: 10.67,
    initialConnectionTime: <the number of milliseconds>,
    ...
  },
  {
    name: <the next child request>,
    ...
  }
]
You want to check out HAR (HTTP Archive) files, which is what you would create by saving this data via Chrome.
A quote on what a HAR file is (source):
The HAR file format is an evolving standard and the information contained within it is both flexible and extensible. You can expect a HAR file to include a breakdown of timings including:
How long it takes to fetch DNS information
How long each object takes to be requested
How long it takes to connect to the server
How long it takes to transfer assets from the server to the browser of each object
The data is stored as a JSON document and extracting meaning from the low-level data is not always easy. But with practice a HAR file can quickly help you identify the key performance problems with a web page, letting you efficiently target your development efforts at areas of your site that will deliver the greatest results.
There are libraries like puppeteer-har and chrome-har which can create HAR files using Puppeteer.
Code sample (for puppeteer-har, adapted from the project page):
const PuppeteerHar = require('puppeteer-har');
// `page` is an existing Puppeteer Page instance
const har = new PuppeteerHar(page);
await har.start({ path: 'results.har' });
await page.goto('http://example.com');
await har.stop();
HAR files are a good option, but if you want something a bit more custom you can use Puppeteer to record request timing data by navigating to the page you wish to analyze and tapping into the Chrome DevTools Protocol.
const pptr = require('puppeteer');

(async function() {
// launch in headless mode & create a new page
const browser = await pptr.launch({
headless: true,
});
const page = await browser.newPage();
// attach cdp session to page
const client = await page.target().createCDPSession();
await client.send('Debugger.enable');
await client.send('Debugger.setAsyncCallStackDepth', { maxDepth: 32 });
// enable network
await client.send('Network.enable');
// attach callback to network response event
client.on('Network.responseReceived', (params) => {
const { response: { timing } } = params;
/*
* See: https://chromedevtools.github.io/devtools-protocol
* /tot/Network/#type-ResourceTiming for complete list of
* timing data available under 'timing'
*/
});
await page.goto('https://www.ted.com/', {
waitUntil: 'networkidle2',
});
// cleanup
await browser.close();
})();
For your case, you can listen for the Network.responseReceived event and parse out the timing data, nested within the response property of the object provided to the event-listener callback. Their documentation on the interfaces is quite good. I'll list them below:
Chrome DevTools Protocol Docs
Data you can expect to receive from every Network.responseReceived event callback: Network.responseReceived.
More specific response-related data, in the response property: Network.Response.
And, finally, the nested request timing data you are looking for, under timing: Network.ResourceTiming.
You may also want to check out the Network.requestWillBeSent interface. You will be able to match up requests and responses by requestId.
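For example, here is a minimal sketch of correlating the two events by requestId, reusing the client CDP session from the snippet above:
// match responses back to their requests via requestId
const inflight = new Map();
client.on('Network.requestWillBeSent', (params) => {
  inflight.set(params.requestId, params.request.url);
});
client.on('Network.responseReceived', (params) => {
  const url = inflight.get(params.requestId);
  const timing = params.response.timing; // may be absent for cached resources
  if (timing) {
    console.log(url, 'DNS lookup (ms):', timing.dnsEnd - timing.dnsStart);
  }
});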
From here, you can capture more data than you could ever need about the page you're visiting, and you can format it however you wish.
Currently, you can also get this information without the HAR file.
Using performance.getEntriesByType("resource")
// Obtain PerformanceEntry objects for resources
const performanceTiming = JSON.parse(
await page.evaluate(() =>
JSON.stringify(performance.getEntriesByType("resource"))
)
);
// Optionally filter the resource results to find specifics, e.g. by URL
const imageRequests = performanceTiming.filter((e) =>
  e.name.endsWith("/images")
);
console.log("Image requests:", imageRequests);
I am using mail-listener2 to monitor an email account, to check that an email has been received as part of a test.
I have used the same implementation defined here: Fetching values from email in protractor test case
However, getLastEmail() returns an old email, rather than an email received after mail-listener2 has started. It returns the first UNSEEN email.
I've looked at whether I can use different mail-listener2 configurations to solve this, but I haven't found anything. I've also tried to use a .last() on the mail returned, but this hasn't worked either.
Does anyone have a configuration solution, or a custom solution, that would help solve this problem?
I think this may help you. I implemented mail-listener2 using the same post you followed and it works great for me. I just added a few extra parameters:
Under my config's onPrepare, I create a date:
var emailDate = new Date().getTime();
Then under my mailListener initialization:
var mailListener = new MailListener({
username: ...
password: ...
...
searchFilter: ["NEW", "UNSEEN", ["SINCE", emailDate]]
});
This should configure the mailListener to only look for emails delivered after emailDate, which is created when your test starts. You can also specify an exact date, e.g. ['SINCE', 'May 20, 2010'].
More info on the node-imap docs (which mailListener2 utilizes)
I have this setting in my conf file, in the mailListener configuration:
markSeen: true,
Every time you read an email, it is marked as read and will not be fetched next time. This means that you will always read new emails.
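In context, that looks like this (the other connection options are elided):
var mailListener = new MailListener({
  // ...username, password, host, etc....
  markSeen: true // fetched messages are flagged as read, so they
                 // will not match an UNSEEN search again
});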
In my case the previous answer was not enough, due to differences between the local server time and the Gmail server time. A complete approach could be:
var currentDate = new Date().toUTCString()
var mailListener = new MailListener({
...
markSeen: true,
port: 993,
tls: true,
searchFilter: ["UNSEEN", ["SINCE", currentDate]]
});
then your getLastEmail helper could be:
getLastEmail = function () {
  var deferred = protractor.promise.defer();
  console.log("Waiting for an email...");
  var startTime = Date.now();
  mailListener.on("mail", function (mail) {
    // only accept mail whose Date header is at most a minute older
    // than the moment we started waiting
    var mailTime = new Date(mail.headers.date).getTime();
    if (startTime - mailTime <= 60 * 1000) {
      deferred.fulfill(mail);
    }
  });
  return deferred.promise;
};
This worked for me.
I am currently trying to log user page views in a Meteor app by storing the userId, Meteor.Router.page(), and a timestamp when a user clicks through to other pages.
//userlog.js
Meteor.methods({
createLog: function(page){
var timeStamp = Meteor.user().lastActionTimestamp;
//Set variable to store validation if user is logging in
var hasLoggedIn = false;
//Checks if lastActionTimestamp of user is more than an hour ago
if(moment(new Date().getTime()).diff(moment(timeStamp), 'hours') >= 1){
hasLoggedIn = true;
}
console.log("this ran");
var log = {
submitted: new Date().getTime(),
userId: Meteor.userId(),
page: page,
login: hasLoggedIn
}
var logId = Userlogs.insert(log);
Meteor.users.update(Meteor.userId(), {$set: {lastActionTimestamp: log.submitted}});
return logId;
}
});
// router.js - this method runs as a filter on every page
'checkLoginStatus': function(page) {
if(Meteor.userId()){
//Logs the page that the user has switched to
Meteor.call('createLog', page);
return page;
}else if(Meteor.loggingIn()) {
return 'loading';
}else {
return 'loginPage';
}
}
However, this does not work, and it ends up recursively creating userlogs. I believe this is because I did a Collection.find in a router filter method. Does anyone have a workaround for this issue?
When you update Meteor.users and set lastActionTimestamp, Meteor.user() will send an invalidation signal to all reactive contexts which depend on it. If Meteor.user() is used in a filter, then that filter and all subsequent ones, including checkLoginStatus, will rerun, causing a loop.
Best practices that I've found:
Avoid using reactive data sources as much as possible within filters.
Use Meteor.userId() where possible instead of Meteor.user()._id, because the former will not trigger an invalidation when an attribute of the user object changes (see the sketch after this list).
Order your filters so that they run with the most frequently updated reactive data source first. For example, if you have a trackPage filter that requires a user, let it run after another filter called requireUser so that you are certain you have a user before you track. Otherwise, if you tracked first and checked the user second, then when Meteor.loggingIn changes from false to true, you'd track the page again.
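A minimal sketch of the second point (the filter name is hypothetical):
// Inside Meteor.Router.filters({ ... }) - two versions of the same filter:

// Avoid: reruns whenever ANY attribute of the user document changes
'trackPage': function (page) {
  if (Meteor.user() && Meteor.user()._id) Meteor.call('createLog', page);
  return page;
}

// Prefer: only invalidates when the logged-in user id itself changes
'trackPage': function (page) {
  if (Meteor.userId()) Meteor.call('createLog', page);
  return page;
}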
This is the main reason we switched to meteor-mini-pages instead of Meteor-Router: it handles reactive data sources much more gracefully. A filter can redirect, it can stop() the router from running, etc.
Lastly, cmather and others are working on a new router which is a merger of mini-pages and Meteor.Router. It will be called Iron Router and I recommend using it once it's out!
I have a Kendo UI autocomplete bound to a remote transport, and I need to tweak how it works, but I'm coming up blank.
Currently, I perform a bunch of searches on the server and integrate the results into a JSON response and then return this to the datasource for the autocomplete. The problem is that this can take a long time and our application is time sensitive.
We have identified which searches are most important and found that one search accounts for 95% of the chosen results. However, I still need to provide the data from the other searches. I was thinking of kicking off separate requests on the server and adding their results to the autocomplete as they return. Our main search returns extremely fast and would supply the first items in the list. Then, as the other searches return, I would like their results to be added to the list dynamically.
Our application uses knockout.js and I thought about making the datasource part of our view model, but from looking around, Kendo doesn't update based on changes to your observables.
I am currently stumped and any advice would be welcomed.
Edit:
I have been experimenting and have had some success simulating what I want with the following datasource:
var dataSource = new kendo.data.DataSource({
transport: {
read: {
url: window.performLookupUrl,
data: function () {
return {
param1: $("#Input").val()
};
}
},
parameterMap: function (options) {
return {
param1: options.param1
};
}
},
serverFiltering: true,
serverPaging: true,
requestEnd: function (e) {
if (e.type == "read") {
window.setTimeout(function() {
dataSource.add({ Name: "testin1234", Id: "X1234" })
}, 2000);
}
}
});
If the first search returns results, then after 2 seconds a new item pops into the list. However, if the first search fails, then nothing happens. Is it proper to use (abuse??) requestEnd like this? My eventual goal is to kick off the rest of the searches from this function.
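Roughly what I have in mind, where the secondary endpoint window.performSecondaryLookupUrl is a placeholder:
requestEnd: function (e) {
  if (e.type === "read") {
    // kick off the slower secondary search and merge its results
    // into the already-displayed list as they arrive
    $.getJSON(window.performSecondaryLookupUrl, { param1: $("#Input").val() })
      .done(function (results) {
        results.forEach(function (item) {
          dataSource.add(item);
        });
      });
  }
}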
I contacted Telerik and they gave me the following jsbin that I was able to modify to suit my needs.
http://jsbin.com/ezucuk/5/edit
I am a newbie to XMPP. I plan to build a 'chat' web application. On the client side I plan to use Strophe, but I found that Strophe does not support registration out of the box.
Someone said I can use 'XEP-0077: In-Band Registration'. Can you tell me how to do that?
Thanks
XEP-0077 is the way to go here. Make sure you've read it thoroughly. Next, look at the strophejs-plugins project to get some examples of how to write a strophe plugin. Then, you'll want to create a plugin that implements XEP-0077, starting with something like:
Strophe.addConnectionPlugin('register', {
    _connection: null,

    init: function(conn) {
        this._connection = conn;
        // register the jabber:iq:register namespace as Strophe.NS.REGISTER
        Strophe.addNamespace('REGISTER', 'jabber:iq:register');
    },

    // ask the server which registration fields it requires
    get: function(callback) {
        var stanza = $iq({type: "get"}).c("query",
            {xmlns: Strophe.NS.REGISTER});
        return this._connection.sendIQ(stanza.tree(), callback, function(){});
    }
});
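A hypothetical usage sketch, assuming connection is a connected Strophe.Connection with the plugin above attached:
// fetch the registration form; iq is the server's result stanza
connection.register.get(function (iq) {
  var query = iq.getElementsByTagName('query')[0];
  console.log('Registration fields:', query);
});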
Make sure to contribute your patch to strophejs-plugins back on github.