Protractor file upload with Saucelabs (Chrome 90) redirected to "data:text/html,<html></html>" after upload, works locally but not on Saucelabs - protractor

I am using Protractor for a Node.js application with Chrome. In one test case I have to upload an image and then click an add/update button. The test case works perfectly on my machine, but on Sauce Labs it can't interact with the button after the image is uploaded. If I don't upload an image it is able to find the button and click it, but with the image upload it doesn't.
I have printed the current URL after the image upload and it shows
"data:text/html,", whereas in the Sauce Labs video I can see the previous page.
Approaches I have tried:
Printed all available handles; it prints only one handle, i.e.
"handles are [chrome 90.0 Windows 10 #01-1]
CDwindow-B29A329D03022E36745F1A844177FAA8"
Printed the current URL: "data:text/html,"
Code
const remote = require('selenium-webdriver/remote');
browser.driver.setFileDetector(new remote.FileDetector());

// absolute path resolution
const path = require('path');
const fileToUpload = './Test_Images/' + imageName;
const absolutePath = path.resolve(__dirname, fileToUpload);
console.log('path is ', absolutePath);

// find the file input element
const fileElem = element(by.css('input[type="file"]'));

// upload the image
await browser.sleep(2000);
await fileElem.sendKeys(absolutePath);
console.log('\n Image uploaded for report');

// print the current URL after the image upload
const currentUrl = (await browser.getCurrentUrl()).toString();
console.log('\n Current URL', currentUrl);
await browser.sleep(2000);

// print all window handles
const handles = await browser.getAllWindowHandles();
for (const handle of handles) {
  console.log('\n\nhandles are\n', handle.toString());
}

// click the submit button
await adminReportsPage.createReportButton.click();
console.log('Report Submit button clicked');

The reason this fails is that a remote machine in the cloud, as with Sauce Labs, does not have access to your local file system the way your local machine does.
I'm not 100% sure whether Protractor has a workaround for this, but with a framework like WebdriverIO you have the option to use a method like uploadFile. That method only works on Chrome; it translates the file to a base64 string so it can be uploaded from your local machine to the remote machine.
A working example can be found here.
Last but not least, keep in the back of your mind that Protractor is deprecated, and with the upcoming release of Selenium 4 you won't be able to use it anymore because it doesn't support W3C. Also check the Protractor repository for more information.

#wswebcreation I have found a solution; it worked after changing
const fileElem = element(by.css('input[type="file"]'));
to
const fileElem = browser.driver.findElement(by.css('input[type="file"]'));
Presumably this works because the file detector is registered on browser.driver, so keys sent through a driver-level element go through the detector, while the element() wrapper bypassed it.

Related

Puppeteer: Launch Chromium with "Preserve log" enabled

There is this handy feature in DevTools that lets you preserve the log (so it does not clear the content of the console or the network tab, etc., on page reloads/navigation).
At the moment my hand needs to be as fast as lightning to tick the checkbox during debugging if I don't want to miss a thing. I've already looked for a corresponding Chrome launch flag on peter.sh, without luck.
Is there a way to launch chromium with this feature enabled? Can it be applied with puppeteer?
My set up is so far:
const browser = await puppeteer.launch({ headless: false, devtools: true })
Edit
Thanks to the comment of #wOxxOm I was able to enable it, but the solution requires three additional dependencies on the project: puppeteer-extra, puppeteer-extra-plugin-user-preferences and puppeteer-extra-plugin-user-data-dir.
I would be interested in a solution without extra dependencies, exclusively in puppeteer.
user-preferences example:
const puppeteer = require('puppeteer-extra')
const ppUserPrefs = require('puppeteer-extra-plugin-user-preferences')

puppeteer.use(
  ppUserPrefs({
    userPrefs: {
      devtools: {
        preferences: {
          'network_log.preserve-log': '"true"'
        }
      }
    }
  })
)
I've had some success without any extra packages:
Launch and close a Browser instance for the sole purpose of generating a new user data directory. Ideally you have provided your own path to it.
Locate the Preferences file (it's a JSON file), read it and write to devtools.preferences.
Relaunch a Browser (using the user data directory created in step 1)
Here's some code to get you started:
I've used the official puppeteer-core package so that I can use my local installation of Chrome which is why I provided the executablePath option. You don't need this if you use the full puppeteer package.
const pp = require('puppeteer-core');
const fs = require('fs');

const run = async () => {
  const opts = {
    executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
    devtools: true,
    userDataDir: '/tmp/pp-udd',
    args: ['--auto-open-devtools-for-tabs']
  };

  // open & close to create the user data directory
  let browser = await pp.launch(opts);
  await browser.close();

  // read & write Preferences
  const prefs = JSON.parse(fs.readFileSync('/tmp/pp-udd/Default/Preferences'));
  prefs.devtools.preferences['network_log.preserve-log'] = '"true"';
  fs.writeFileSync('/tmp/pp-udd/Default/Preferences', JSON.stringify(prefs));

  // relaunch with our own Preferences from our own user data directory
  browser = await pp.launch(opts);
  const page = await browser.newPage();
  await page.goto('https://stackoverflow.com/q/63661366/1244884');
};

run();
And here's a screencast:
The first launch is the "launch & close" of step 1
Then there's the second launch that goes to this question ;) with the DevTools open and the "Preserve log" option checked right from the start.

How to get the original path of an image chosen with image picker plugin in Flutter instead of copying to cache?

I'm making an Android app, and when using the image picker plugin;
final pickedFile = await ImagePicker().getImage(source: ImageSource.gallery);
File image = File(pickedFile.path);
the 'image' is a copy of the original image in the app's cache; however, I would like to use the original image's path directly to save it in my app, because I don't want the app's cache size to grow. I saw that the deprecated method "pickImage" accomplished this (not copying to cache), but the new "getImage" seems to copy automatically, and 'image''s path is the path of the cached image.
How can I get just the original path of the selected image without it being cached? (I'm assuming that using the original file's path would still work to display it in the app with FileImage(File(originalPath)); is this a correct assumption?)
On iOS it was never possible to retrieve the original path and on Android it was only possible until SDK 30.
From FAQ of file_picker plugin,
Original paths were possible until file_picker 2.0.0 on Android, however, in iOS they were never possible at all since iOS wants you to make a cached copy and work on it. But, 2.0.0 introduced scoped storage support (Android 10) and with and per Android doc recommendations, files should be accessed in two ways:
Pick files for CRUD operations (read, delete, edit) through files URI and use it directly — this is what you actually want but unfortunately isn’t supported by Flutter as it needs an absolute path to open a File descriptor;
Cache the file temporarily for upload or similar, or just copy into your app’s persistent storage so you can later access it — this is what’s being done currently and even though you may have an additional step moving/copying the file after first picking, makes it safer and reliable to access any allowed file on any Android device.
I have an example for the file_picker package:
var filePath = '';
var fileExtension = '';
var fileName = '';

void buttonOnTap() async {
  try {
    filePath = await FilePicker.getFilePath(type: FileType.IMAGE);
    if (filePath != null) {
      fileName = filePath.split('/').last;
      fileExtension = fileName.split('.').last;
    } else {
      filePath = '';
      fileName = '';
      fileExtension = '';
    }
  } catch (e) {
    // ignore picker errors and fall through with an empty path
  }
  if (filePath != '') {
    Image.file(File(filePath), fit: BoxFit.cover);
  }
}

Is it possible to make the devtool status off when using Puppeteer?

Is it possible to make the devtools status read as off when using Puppeteer?
Some websites protect their pages from being inspected using devtools, so they cannot be accessed with Puppeteer.
I opened this url https://jsbin.com/cecuzeb/edit?js,output to check devtool status
const browser = await puppeteer.launch({
headless: false,
devtools: false,
});
const page = await browser.newPage();
await page.goto('https://jsbin.com/cecuzeb/edit?js,output');
Is there any way to make this status read as off?
The answer below is rather a theoretical way to prevent detection by common devtools-detection code (such as the /./.toString method) used on the jsbin link in the question.
Stop all intervals on the page. Since most devtools-detection libraries simply run their check on an interval, we can stop them:
(function (w) {
  w = w || window;
  // interval ids are increasing integers, so clearing every id up to the
  // newest one stops all previously registered intervals
  var i = w.setInterval(function () {}, 100000);
  while (i >= 0) { w.clearInterval(i--); }
})(/* window */);
Stop console.clear: if console.clear never runs, the console won't be wiped without your permission. Also stop printing "%c" or any other clearing characters:
console.clear = console.log = console.warn = console.error = () => {};
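A quick sanity check (plain Node, no browser needed) that the overrides really are inert no-ops; the originals are saved and restored so nothing else breaks:

```javascript
// keep references so we can restore afterwards
const original = {
  log: console.log, warn: console.warn,
  error: console.error, clear: console.clear
};

// the override from the answer: every method becomes a no-op
console.clear = console.log = console.warn = console.error = () => {};

// calls now do nothing and return undefined, so a page script relying on
// console.clear() wiping evidence (or on console.log timing) gets nothing
const results = [console.clear(), console.log('x'), console.warn('y'), console.error('z')];

// restore for any code that still needs a working console
Object.assign(console, original);
```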

Page Blob download with Azure java SDK does not download the complete Blob

I am trying to download a 2 GB page blob using the Java SDK, and it fails with a StorageException because the downloaded file size does not match the actual blob size.
On multiple tries the same result is seen, although the downloaded file size varies slightly. Setting the timeout to the maximum value does not help either.
Also, when I download the same VHD using the Azure portal, the download completes but only partially; the resulting size is usually comparable to the one downloaded with the SDK.
In the SDK code I can see HttpURLConnection being used. Could that be a problem? The same code on a Windows machine has similar results, only the downloaded file is a few MBs larger, but still not complete.
Any thoughts on how to get it working?
The code snippet used is
URI blobEndpoint = null;
String uriString = "http://" + "sorageaccount" + ".blob.core.windows.net";
blobEndpoint = new URI(uriString);

CloudBlobClient blobClient = new CloudBlobClient(blobEndpoint,
        new StorageCredentialsAccountAndKey("abcd", "passed"));
CloudBlobContainer container = blobClient.getContainerReference(Constants.STORAGE_CONTAINER_NAME);
CloudPageBlob pageBlob = container.getPageBlobReference("http://abcd.blob.core.windows.net/sc/someimg.vhd");
System.out.println("Page Blob Name: " + pageBlob.getName());

OutputStream outStream = new FileOutputStream(new File("/Users/myself/Downloads/TestDownload.vhd"));
System.out.println("Starting download now ... ");

BlobRequestOptions options = new BlobRequestOptions();
options.setUseTransactionalContentMD5(true);
options.setStoreBlobContentMD5(true); // set full blob level MD5
options.setTimeoutIntervalInMs(Integer.MAX_VALUE);
options.setRetryPolicyFactory(new RetryLinearRetry());

pageBlob.download(outStream, null, options, null);
outStream.close();

How do you make the Chrome Developer Tools only show your console.log?

It adds logs when plugins say something. It adds logs when it gets anything from the cache manifest. It logs HTTP information sometimes.
My 1 little log gets flooded by 10,000 logs I don't need or want.
Use only in development:
(function () {
  var originalConsole = window.console;
  window.console = {};
  window.console.log = window.console.debug = window.console.error = function () {};
  window.myLog = function () {
    originalConsole.log.apply(originalConsole, arguments);
  };
}());
This will save a local copy of the original window.console object.
It will change the original window.console object to use empty functions.
And finally it will define a global myLog function which will use the local copy of the original window.console to actually log stuff.
This way all the other code will use the useless console.log() and your code could use myLog().
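To see the split in action, here is a small standalone rehearsal of the same idea, using a captured sink in place of the real console so the effect is observable (the object names are just stand-ins for this sketch):

```javascript
// stand-in for the saved originalConsole: record what actually gets logged
const captured = [];
const originalConsole = { log: (...args) => captured.push(args.join(' ')) };

// the silenced console that all other code ends up using
const quietConsole = { log() {}, debug() {}, error() {} };

// myLog forwards to the preserved original, exactly like the snippet above
const myLog = (...args) => originalConsole.log(...args);

quietConsole.log('noisy plugin output');  // swallowed
myLog('my one little log');               // recorded
```

Only the message sent through myLog reaches the preserved logger; everything routed through the silenced console disappears.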
On the Console tab, select "No Info" in the filter on the left-hand side; I have attached a screenshot here.
You should update to the latest version of Google Chrome.