$.ajax({
    type: 'POST',
    url: 'ajaxClients.php',
    data: '&m=removeAlert&id=' + alertId,
    success: function (resultData) {
        if ($('#noteRow_' + alertId).length) {
            alert('ROW FOUND - CONTENT IS: ' + $('#noteRow_' + alertId).html() + ' -- REPLACING CONTENT NOW');
        }
        $('#noteRow_' + alertId).html('<font color="red">- Note has been removed</font>');
    }
});
So it is simple enough. On success I do get the alert, and it shows the row's content, so the element is clearly found.
But right after that, when I try to set the HTML to something else, nothing happens. I have tried .empty() and .remove() as well, and there are no console errors. Any idea what I am missing?
EDIT - the HTML:
<div id="noteRow_127"><img onclick="removeAlert('127')" style="cursor:pointer;" alt="Remove Message" title="Remove Message" src="images/notificationRemove.png" border="0"> [04/04/2013 06:26 PM] <b>Austin</b>: afvazf</div>
removeAlert() is what fires the AJAX call.
EDIT 2...
It turns out this row is somehow being put on the page twice, although the PHP file only has one instance of the function that builds the rows, so I just need to figure out what is going on there. For anyone else with this issue: inspect the element (with Chrome or similar) and Ctrl+F for its id to see whether it is on the page more than once!
FINAL EDIT:
Since I could not figure out how the row was ending up on the page twice, an attribute selector, which matches every element carrying that id:
$('[id="noteRow_' + alertId + '"]').html('<font color="red">- Note has been removed</font>');
took care of it!
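For context: $('#someId') maps to document.getElementById(), which returns only the first element with a given id, while the attribute selector scans the whole document and matches every duplicate. A minimal sketch of the difference, assuming the duplicated row from the question:

// Assume the page accidentally contains the same row twice:
// <div id="noteRow_127">...</div> ... <div id="noteRow_127">...</div>

$('#noteRow_127').length;         // 1 - the id selector stops at the first match
$('[id="noteRow_127"]').length;   // 2 - the attribute selector finds both

// So this updates both duplicates, not just the first:
$('[id="noteRow_127"]').html('<font color="red">- Note has been removed</font>');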
http://jsbin.com/ujimiw/2/edit
It seems to work fine from my perspective. I just bound the click handler to the button and it works.
I am running an E2E test for an Angular 7.x app. The test runs fine on my local machine, but when I push it to the repo (GitLab), the pipeline fails and throws the following error:
USER PROFILE - Check and change PROFILE
- Failed: element not interactable
(Session info: chrome=71.0.3578.80)
(Driver info: chromedriver=2.45.615279 (12b89733300bd268cff3b78fc76cb8f3a7cc44e5),platform=Linux 4.14.74-coreos x86_64)
Test Case:
it('USER PROFILE - Check and change PROFILE', () => {
    page.navigateTo();
    browser.sleep(1000);
    expect(page.getProfileEditTagName()).toMatch('app-edit-profile');
    expect(element(by.className('logged-as')).getText()).toBe(testEmail);
    browser.actions().mouseMove(element.all(by.id('editIcon-salutation')).get(0)).click().perform().then(function () {
        browser.sleep(4000);
        element(by.className('mat-select-arrow')).click().then(function () {
            browser.actions().mouseMove(element.all(by.className('option-value mat-option')).get(0)).click().perform();
            browser.sleep(1000);
            browser.actions().mouseMove(element.all(by.id('saveButton-salutation')).get(0)).click().perform();
            browser.sleep(1000);
        });
    });
});
navigateTo() is just a method in profile.po.ts:
navigateTo() {
    browser.get('/profileComponentUrl');
}
What confuses me, and why I can't even localize the bug or tell what's wrong, is that it works fine locally but fails at exactly that test case once I push to the repo. Any hint, please?
The reason for "element not interactable" could be performing an action on a hidden or obscured element.
You can try the following (see the sketch after this list):
1. Add a wait after element(by.className('mat-select-arrow')).click(), as I can see you have not added any waits there.
2. Check whether your local machine and the CI machine run the test at the same screen resolution (this could be the reason for an obscured element).
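A minimal sketch of replacing the fixed sleeps with explicit waits, which tend to be more reliable on slower CI runners (the locators are from the question; the 10-second timeout is an arbitrary choice):

var EC = protractor.ExpectedConditions;

// Wait until the dropdown arrow is actually clickable instead of sleeping
// for a fixed amount of time; fails with a clear message after 10 s.
var arrow = element(by.className('mat-select-arrow'));
browser.wait(EC.elementToBeClickable(arrow), 10000, 'mat-select-arrow not clickable');
arrow.click();

// Same idea for the option that appears once the dropdown is open:
var option = element.all(by.className('option-value mat-option')).get(0);
browser.wait(EC.visibilityOf(option), 10000, 'option not visible');
option.click();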
I'd recommend the following:
Enable the stack trace in the Protractor config: new SpecReporter({ spec: { displayStacktrace: true } }), so you can see exactly which element is throwing the error. This won't solve it, but it should at least point you in the right direction.
Then, if you're using tabs, buttons or other elements that hide/show/disable/enable or change the DOM view, add a browser.sleep(100) after calling .click().
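A sketch of where that reporter setting lives, assuming the jasmine-spec-reporter package that Angular CLI's Protractor setup uses by default:

// protractor.conf.js (excerpt)
const { SpecReporter } = require('jasmine-spec-reporter');

exports.config = {
    // ... suites, capabilities, etc.
    onPrepare() {
        // displayStacktrace makes failures print the full stack,
        // including the locator of the element that was not interactable.
        jasmine.getEnv().addReporter(
            new SpecReporter({ spec: { displayStacktrace: true } })
        );
    }
};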
I had the same kind of problem and I found this.
I copy-pasted that (plus some other minor tweaks, for example force-clicking on the previous page in a for-loop) and it worked. I believe that browser.driver.manage().window().maximize(); was part of the solution.
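A sketch of where that call typically goes (putting it in onPrepare is an assumption; the explicit setSize alternative is also just a suggestion):

// protractor.conf.js (excerpt)
exports.config = {
    // ...
    onPrepare() {
        // A small CI window can push elements out of the viewport and make
        // them "not interactable"; maximize, or set an explicit size,
        // which tends to be more reliable on headless CI browsers.
        browser.driver.manage().window().maximize();
        // browser.driver.manage().window().setSize(1920, 1080);
    }
};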
One reason I figured out is a scrolling issue. You need to check whether the element is displayed properly; it may be hidden or scrolled out of view, so use a scrollToTop/scrollToElement/scrollToElementView helper or similar. You can write different scroll methods to suit different conditions (see the sketch below).
Another reason is the locator. Try changing the locator, and do not trim it down too much: try a full body CSS locator first, and if that works, trim it carefully. Sometimes a locator works in the Chrome console but not in the test case.
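A sketch of such a scroll helper (the helper name is the answer's own suggestion; the implementation via executeScript is an assumption):

// Hypothetical helper: scrolls the given ElementFinder into view and
// resolves once the browser has executed the script.
function scrollToElementView(el) {
    return browser.executeScript(
        'arguments[0].scrollIntoView({block: "center"});',
        el.getWebElement()
    );
}

// Usage: make sure the save button is on screen before clicking it.
var saveBtn = element(by.id('saveButton-salutation'));
scrollToElementView(saveBtn).then(function () {
    saveBtn.click();
});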
Using TinyMCE 4 and the paste plugin with a custom image upload handler (based on the documentation sample), I've got the upload working just fine. However, when the upload fails, the image is still added to the content. The sample indicates calling the failure method on errors, but that doesn't remove the image.
I've tried adding a paste_postprocess callback to filter the content, but at that point there is no difference in the content between a successfully uploaded image and a failed one. They both appear in the content like:
<div style="display:none"><img src="data:image/jpeg;base64, datahere" /></div>
The end result in the content is actually different. A successful upload has an image source like:
<img src="http://website/uploads/mceclip11.jpg" />
Whereas a failed upload looks like:
<img src="blob:http://website/dd3bdcda-b7b1-40fe-9aeb-4214a86a92a9">
To try this out, I created a TinyMCE Fiddle here.
Any ideas on how to remove the failed upload image from the content before it's displayed to the user?
For anyone who might try something similar, I figured out a way to deal with this.
After calling the failure method as shown in the examples, I now call a method to remove the failed image before it shows up in the editor.
The function looks something like this:
function removeFailedUpload() {
    var editor = tinymce.EditorManager.get('editorId');
    var $html = $('<div />', { html: editor.getContent() });
    $html.find('img[src^="data:"]').remove();
    editor.setContent($html.html());
}
The simplest solution is to make use of the undo manager:
tinymce.activeEditor.undoManager.undo();
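A sketch of how this could fit into the TinyMCE 4 upload handler's failure path (uploadToServer is a hypothetical upload call, and wiring the undo into the handler is an assumption; the original answer only shows the undo line):

tinymce.init({
    selector: 'textarea',
    paste_data_images: true,
    images_upload_handler: function (blobInfo, success, failure) {
        // uploadToServer is a hypothetical function returning a promise
        // that resolves with the final image URL.
        uploadToServer(blobInfo.blob()).then(function (url) {
            success(url);
        }).catch(function () {
            failure('Upload failed');
            // Revert the paste/insert that added the blob image.
            tinymce.activeEditor.undoManager.undo();
        });
    }
});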
In version 5 you can pass an optional object to remove the image in case of failure.
function images_upload_handler(blobInfo, success, failure) {
    failure('Fail fail fail', { remove: true });
    alert('failed upload');
    return;
}
See the docs.
I suggest two methods for a more compact removal that keeps the cursor position.
Method 1:
var editor = tinymce.activeEditor;
editor.selection.collapse();
$(editor.dom.doc).find('img[src^="blob:"]').remove();
Method 2:
var editor = tinymce.activeEditor;
var img = $(editor.dom.doc).find('img[src^="blob:"]').get(0);
if (img) {
    editor.execCommand('mceRemoveNode', false, img);
}
I am new to writing test cases using Protractor for a non-Angular application, and I wrote a sample test case. The browser closes automatically after running the test case. How can I prevent this? Here is my code:
var submitBtnElm = $('input[data-behavior=saveContribution]');

it('Should Search', function () {
    browser.driver.get('http://localhost/enrollments/osda1.html');
    browser.driver.findElement(by.id('contributePercentValue')).sendKeys(50);
    submitBtnElm.click().then(function () {
    });
});
I was also struggling with a similar issue: I had a test flow that interacted with multiple applications, and with Protractor the browser closed after executing each conf.js file. The previous responses amount to adding a delay, which depends on how quickly your next action is performed and is hit-or-miss. Thinking from a debugging perspective, most users run tests overnight and want the browser to stay active for a couple of hours before they analyze the issue. So I looked into the Protractor source code and came across a generic solution that can circumvent this issue, independent of the browser. Currently the solution is specific to the requirement that the browser should not close after one conf.js file is executed; it could be improved if someone added a config parameter asking the user whether to close the browser after the run.
The browser can then be reused for a future conf.js run by passing the --seleniumSessionId flag on the command line.
Solution:
1. Go to ..\AppData\Roaming\npm\node_modules\protractor\built, where your Protractor is installed.
2. Open the driverProvider.js file and go to the function quitDriver.
3. Replace return driver.quit() with return 0.
As far as my current usage goes, there seems to be no side effect of this change; I will update this answer if I come across any issue caused by it. A sketch of the code snippet is below.
Thanks, Gleeson
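Since the original snapshot image is not available, here is a simplified sketch of what the edited function looks like (based on built/driverProviders/driverProvider.js in Protractor 5.x; the exact surrounding code may differ between versions):

// built/driverProviders/driverProvider.js (excerpt, simplified)
quitDriver(driver) {
    // ... bookkeeping that removes the driver from the internal list ...

    // Original behavior: shut the browser down at the end of the run.
    // return driver.quit();

    // Edited behavior: skip the quit so the browser session stays open
    // and can be reattached to with --seleniumSessionId.
    return 0;
}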
Add browser.pause() at the end of your it function, within the function itself.
I found that Gleeson's solution works, and it really helped me. The solution was:
1. Go to %APPDATA%\npm\node_modules\protractor\built\driverProviders\
2. Find driverProviders.js
3. Open it in Notepad or any other text editor
4. Find return driver.quit() and replace it with return 0
5. Save the file
Restart your tests after that.
I am using
node v8.12.0
npm v6.4.1
protractor v5.4.1
This solution will work only if you installed Protractor globally; if you installed npm packages or Protractor locally (in your project folder), you have to go to your local protractor folder and make the same change.
I suggest using browser.driver.sleep(500); before your click operation.
See this.
browser.driver.sleep(500);
element(by.css('your button')).click();
browser.driver.sleep(500);
Add a callback argument in the it block; the browser window won't close until you call it.
So perform the actions you need and invoke the callback at your convenience:
var submitBtnElm = $('input[data-behavior=saveContribution]');

it('Should Search', function (callback) {
    browser.driver.get('http://localhost/enrollments/osda1.html');
    browser.driver.findElement(by.id('contributePercentValue')).sendKeys(50);
    submitBtnElm.click().then(function () {
        // Have all the logic you need
        // Then invoke the callback
        callback();
    });
});
The best way to keep the browser open for some time is to use browser.wait(). Inside the wait, check either visibilityOf() or invisibilityOf() of an element that is not visible, or that will take time to become invisible, in the UI. wait() then keeps re-checking that condition until it is met or the timeout is reached; you can increase the timeout if you want the browser visible for longer.
var EC = protractor.ExpectedConditions;
var submitBtnElm = $('input[data-behavior=saveContribution]');

it('Should Search', function () {
    browser.driver.get('http://localhost/enrollments/osda1.html');
    browser.driver.findElement(by.id('contributePercentValue')).sendKeys(50);
    submitBtnElm.click().then(function () {
        // Pass the condition directly to browser.wait(); it polls until
        // the button becomes invisible or the 20 s timeout is reached.
        browser.wait(EC.invisibilityOf(submitBtnElm), 20000, 'error message');
    });
});
I'm sure there is a change triggered on your page by the button click. It might be something as subtle as a class change on an element or as obvious as a <p></p> element with the text "Saved" displayed. What I would do is, after the test, explicitly wait for this change.
[...]
return protractor.browser.wait(function () {
    return element(by.cssContainingText('p', 'Saved')).isPresent();
}, 10000);
You could add such a wait mechanism to the afterEach() method of your spec file, so that your tests are separated even without the Protractor Angular implicit waits.
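A sketch of what that afterEach() could look like, reusing the "Saved" indicator from above (the indicator itself is whatever change your page actually makes):

afterEach(function () {
    // Keep the spec alive until the page reflects the saved state, so the
    // next test (or the browser teardown) doesn't start mid-update.
    return browser.wait(function () {
        return element(by.cssContainingText('p', 'Saved')).isPresent();
    }, 10000, 'page never showed the "Saved" confirmation');
});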
var submitBtnElm = $('input[data-behavior=saveContribution]');

it('Should Search', function () {
    browser.driver.get('http://localhost/enrollments/osda1.html');
    browser.driver.findElement(by.id('contributePercentValue')).sendKeys(50);
    submitBtnElm.click().then(function () {
    });
    browser.pause(); // it should leave the browser alive after the test
});
browser.pause() should leave the browser alive until you let it go.
Edit: another approach is to set browser.ignoreSynchronization = true before browser.get(...). Protractor then won't wait for Angular to load, and you can use the usual element(...) syntax.
Protractor will close browsers that it created, so an approach that I am using is to start the browser via the webdriver-reuse-session npm package.
DISCLAIMER: I am the author of this package
It is a new package, so let me know if it solves your problem. I am using it with great success.
I have a Protractor test that navigates to another URL which cannot be found/resolved in my test environment, so I check that the title is not the previous title.
The test is as follows:
it('should navigate to another site in case of click on cancel link', function () {
    page.navigate();
    page.submit();
    protractor.getInstance().ignoreSynchronization = true;
    browser.wait(function () {
        return element(by.id('submit')).isPresent();
    });
    page.closePage();
    // the title of a 404, a DNS issue etc. is at least different from the previous site:
    expect(browser.getTitle()).not.toEqual('MyDummyTitle');
    protractor.getInstance().ignoreSynchronization = false;
});
This works in most browsers, but in Internet Explorer I find that navigation to the non-existing page often has not finished when the expect fires.
Can I somehow wait for the 'submit' element to be gone, similar to what I do before firing closePage()?
What I do in these cases is an active wait for the element to disappear, using a custom waitAbsent() helper function that waits for an element to disappear either by becoming invisible or by no longer being present.
That helper waits up to specTimeoutMs, ignoring unhelpful WebDriver errors like StaleElementError.
Usage: add require('./waitAbsent.js'); to your onPrepare block or file.
Example to wait for #submit to be gone:
expect(element(by.id('submit')).waitAbsent()).toBeTruthy();
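The helper itself was linked externally in the original answer; here is a minimal sketch of the same idea, assuming an ElementFinder prototype extension and a hard-coded stand-in for specTimeoutMs:

// waitAbsent.js - minimal sketch of the idea
protractor.ElementFinder.prototype.waitAbsent = function () {
    var el = this;
    return browser.wait(function () {
        return el.isPresent().then(function (present) {
            // Gone from the DOM entirely.
            if (!present) return true;
            // Still in the DOM: absent only if no longer displayed.
            return el.isDisplayed().then(function (displayed) {
                return !displayed;
            });
        }).then(null, function () {
            // Treat StaleElementError and similar races during page
            // unload as "element is gone".
            return true;
        });
    }, 10000); // stand-in for specTimeoutMs
};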
When uploading a file using filepicker.io, the filepicker.pick success callback is getting called before the file is actually available. Here's the code:
filepicker.pick({
    mimetypes: ['image/*'],
    container: 'modal',
    services: ['COMPUTER', 'FACEBOOK', 'INSTAGRAM', 'WEBCAM']
},
function (inkBlob) {
    $('img.foo').attr('src', inkBlob.url);
},
function (FPError) {
    console.log(FPError.toString());
});
I get a url in the inkBlob that comes into the callback, but sometimes when I insert that url into the DOM (as above) I get a 404; other times it works. I'm looking for a reliable way to know when I can use the file returned by filepicker. I figured the success callback was it, but there seems to be this race condition.
I realize I could wrap the success callback in a setTimeout, but that seems messy, and I'd like to not keep the user waiting if the file is actually available.
You can also use an event listener.
I have an ajax call that downloads an image after it's cropped by Ink. This call was failing sporadically. I fixed it by doing roughly the following:
filepicker.convert(myBlob,
    {
        crop: cropDimensions
    },
    function (croppedBlob) {
        function downloadImage() {
            ...
        }
        var imageObj = new Image();
        imageObj.onload = downloadImage; // only download when the image is there
        imageObj.src = croppedBlob.url;
    }
);
I have the same issue as you. My workaround was to attach an onerror handler to the image and have it retry on a 404 (you can set a limit on the number of retries to avoid an infinite loop), but it's quite ugly and messy, so it would be great if someone came up with a better solution.
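A sketch of that retry workaround (the retry count, delay, and cache-busting query parameter are all arbitrary choices; inkBlob.url comes from the pick callback above):

function loadWithRetry(url, imgEl, retriesLeft) {
    imgEl.onerror = function () {
        // The file may not be available yet; retry a limited number of
        // times so a genuinely missing file doesn't loop forever.
        if (retriesLeft > 0) {
            setTimeout(function () {
                loadWithRetry(url, imgEl, retriesLeft - 1);
            }, 1000);
        }
    };
    // Vary the URL per attempt so the browser actually re-requests it.
    imgEl.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'retry=' + retriesLeft;
}

// Usage inside the pick success callback:
// loadWithRetry(inkBlob.url, $('img.foo').get(0), 5);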