I am running a test in headless Chrome, and as part of it I want to prevent the browser from loading the images on the page. The page must be a data URL, not a normal page.
I am starting headless Chrome with the following command:
chrome --headless --remote-debugging-port=9222
I created the following test to demonstrate what I am trying to achieve, but nothing works...
const CDP = require('chrome-remote-interface');
const fs = require('fs');

CDP(async (client) => {
  const { Page, Network } = client;
  try {
    await Page.enable();
    await Network.enable();
    await Network.emulateNetworkConditions({
      offline: true,
      latency: 0,
      downloadThroughput: 0,
      uploadThroughput: 0
    });
    await Page.navigate({
      url: "data:text/html,<h1>The next image should not be loaded</h1><img src='http://via.placeholder.com/350x150'>"
    });
    await Page.loadEventFired();
    const { data } = await Page.captureScreenshot();
    fs.writeFileSync((+new Date()) + '.png', Buffer.from(data, 'base64'));
  } catch (err) {
    console.error(err);
  } finally {
    await client.close();
  }
}).on('error', (err) => {
  console.error(err);
});
You can block images using the following flag. It works on both Chrome Canary and stable:

chrome --headless --remote-debugging-port=9222 --blink-settings=imagesEnabled=false
With Puppeteer, you can pass the blink-settings argument via the args option:
const browser = await puppeteer.launch({
  args: [
    '--blink-settings=imagesEnabled=false'
  ]
});
If you are using Puppeteer, you can use the code below:
await page.setRequestInterception(true);
page.on('request', (request) => {
  if (request.resourceType() === 'image') request.abort();
  else request.continue();
});
From: https://github.com/puppeteer/puppeteer/blob/main/examples/block-images.js
I tried to have a one-time authentication using a session and reuse it for all the tests in the spec file.
While trying to run my test, I sometimes get the error below, which I am unable to understand or fix. Any help on this would be appreciated.
browser.newContext: Cookie should have a valid expires, only -1 or a positive number for the unix timestamp in seconds is allowed
at C:\Users\v.shivarama.krishnan\source\repos\PlaywrightDemo\node_modules\@playwright\test\lib\index.js:595:23
at Object.context [as fn] (C:\Users\v.shivarama.krishnan\source\repos\PlaywrightDemo\node_modules\@playwright\test\lib\index.js:642:15)
Spec.ts
import { chromium, test, expect } from "@playwright/test";

test.describe('Launch Browser', () => {
  // NOTE: the opening test(...) line was missing from the snippet; the test name here is assumed.
  test('Move to interfaces screen', async ({ page, context }) => {
    await context.storageState({ path: 'storage/admin.json' });
    await page.goto('abc.com');
    await expect(page.locator('#ebiLink')).toBeVisible();
    const texts = await page.locator('#ebiLink').textContent();
    console.log("text of ebi is " + texts);
    await page.goto('abc.com');
    await expect(page.locator('text= Detailed Interfaces ')).toBeVisible();
    await page.waitForSelector('#searchTab');
    await page.waitForSelector('#InterfaceCard');
    await page.locator('#searchTab').type('VISHW-I7939');
    await page.locator("button[type='button']").click();
    await page.locator('#InterfaceCard').first().click();
    await expect(page.locator('#ngb-nav-0')).toBeVisible();
    const interfaceID = await page.locator("//span[@class='value-text']").first().allInnerTexts();
    console.log('interface id is ' + interfaceID);
    const dp = await page.waitForSelector('role=listbox');
    await page.locator('role=listbox').click();
    const listcount = await page.locator('role=option').count();
    await page.locator('role=option').nth(1).click();
    await expect(page.locator('#ngb-nav-0')).toBeVisible();
  });

  test('Move to shells screen', async ({ page, context }) => {
    await context.storageState({ path: 'storage/admin.json' });
    await page.goto('abc.com');
    await expect(page.locator('#ListHeader')).toBeVisible();
    const shells = await page.locator('#ListHeader').textContent();
    console.log('Text of shells header is ' + shells);
  });
});
global-setup.ts (for one-time login and saving the session)
import { Browser, chromium, FullConfig } from '@playwright/test'

async function globalSetup(config: FullConfig) {
  const browser = await chromium.launch({
    headless: false
  });
  await saveStorage(browser, 'Admin', 'User', 'storage/admin.json')
  await browser.close()
}

async function saveStorage(browser: Browser, firstName: string, lastName: string, saveStoragePath: string) {
  const page = await browser.newPage()
  await page.goto('abc.com');
  await page.waitForSelector("//input[@type='email']", { state: 'visible' });
  await page.locator("//input[@type='email']").type('ABC@com');
  await page.locator("//input[@type='submit']").click();
  await page.locator("//input[@type='password']").type('(&^%');
  await page.locator("//input[@type='submit']").click();
  await page.locator('#idSIButton9').click();
  await page.context().storageState({ path: saveStoragePath })
}

export default globalSetup
Have you registered the global-setup.ts script in the Playwright configuration file, like below?
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  globalSetup: require.resolve('./global-setup'),
};

export default config;
Also, you don't have to write code to load the session storage at each test level; you can use the use attribute of the Playwright configuration, as below:
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  globalSetup: require.resolve('./global-setup'),
  use: {
    // Tell all tests to load the signed-in state from 'storageState.json'.
    storageState: 'storageState.json'
  }
};

export default config;
It seems like you are trying to use the same context in both tests; could that be the problem?
Can you please try with an isolated context and page for each test?
Also, please check whether it makes sense to use the session storage at the test level instead of the context:
test.use({ storageState: './storage/admin.json' })
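For example, a minimal sketch of a spec file that loads the saved session this way (the test name is hypothetical; the storage path is the one from the question):

const { test, expect } = require('@playwright/test');

// Load the signed-in state saved by global-setup.ts for every test in this file.
test.use({ storageState: './storage/admin.json' });

test('reuses the admin session', async ({ page }) => {
  await page.goto('abc.com'); // placeholder URL from the question
});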
Update about the tests: the general structure of the tests would be -
test.describe('New Todo', () => {
  test('Test 1', async ({ context, page }) => {});
  test('Test 2', async ({ context, page }) => {});
});
I looked into the source code of Playwright and found these two lines, which produce the error message you see:
assert(!(c.expires && c.expires < 0 && c.expires !== -1), 'Cookie should have a valid expires, only -1 or a positive number for the unix timestamp in seconds is allowed');
assert(!(c.expires && c.expires > 0 && c.expires > kMaxCookieExpiresDateInSeconds), 'Cookie should have a valid expires, only -1 or a positive number for the unix timestamp in seconds is allowed');
The kMaxCookieExpiresDateInSeconds is defined as 253402300799.
So basically, the cookie that you captured breached one of the above rules. In my case, the expiry of a cookie was greater than this figure :).
refer to source code - https://github.com/microsoft/playwright/blob/5fd6ce4de0ece202690875595aa8ea18e91d2326/packages/playwright-core/src/server/network.ts#L53
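If the saved state does contain such a cookie, one workaround is to clean it up after global setup. A minimal sketch, assuming the storage state file from the question (storage/admin.json) and Playwright's storage state JSON layout (a top-level cookies array with expires in seconds):

const fs = require('fs');

// Upper bound used by Playwright (see network.ts linked above).
const kMaxCookieExpiresDateInSeconds = 253402300799;

const state = JSON.parse(fs.readFileSync('storage/admin.json', 'utf8'));
for (const cookie of state.cookies) {
  // -1 marks a session cookie; anything else must be a positive
  // unix timestamp in seconds, no larger than the maximum above.
  if (cookie.expires < 0 && cookie.expires !== -1) cookie.expires = -1;
  if (cookie.expires > kMaxCookieExpiresDateInSeconds) {
    cookie.expires = kMaxCookieExpiresDateInSeconds;
  }
}
fs.writeFileSync('storage/admin.json', JSON.stringify(state, null, 2));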
I am using dio to make a network request. During testing, I was using localhost port 3000: I had a JavaScript file that I would run with node, it would fire up the port, and everything worked. But whenever I run the app on a real device it does not work, so I am assuming I need to change the URL to something else for release...? I am new to this, so bear with me. Any suggestion or guidance would be helpful, thank you.
const muxServerUrl = 'http://localhost:3000';

initializeDio() {
  BaseOptions options = BaseOptions(
    baseUrl: muxServerUrl,
    connectTimeout: 8000,
    receiveTimeout: 5000,
    headers: {
      "Content-Type": contentType, // application/json
    },
  );
  _dio = Dio(options);
}
Implementation
late Response response;
try {
  response = await _dio.post(
    "/assets",
    data: {
      "videoUrl": videoUrl,
    },
  );
} catch (e) {
  print('ran 2');
  throw Exception('Failed to store video on MUX');
}
if (response.statusCode == 200) {
  print('ran 4');
  VideoData videoData = VideoData.fromJson(response.data);
  String status = videoData.data!.status;
  while (status == 'preparing') {
    await Future.delayed(Duration(seconds: 1));
    videoData = (await checkPostStatus(videoId: videoData.data!.id))!;
    status = videoData.data!.status;
  }
  print('Video READY, id: ${videoData.data!.id}');
  return videoData;
}
The temporary Node.js server file:
require("dotenv").config();
const express = require("express");
const bodyParser = require("body-parser");
const Mux = require("#mux/mux-node");
const { Video } = new Mux(
process.env.MUX_TOKEN_ID,
process.env.MUX_TOKEN_SECRET
);
const app = express();
const port = 3000;
var jsonParser = bodyParser.json();
app.post("/assets", jsonParser, async (req, res) => {
console.log("BODY: " + req.body.videoUrl);
const asset = await Video.Assets.create({
input: req.body.videoUrl,
playback_policy: "public",
});
res.json({
data: {
id: asset.id,
status: asset.status,
playback_ids: asset.playback_ids,
created_at: asset.created_at,
},
});
});
app.get("/assets", async (req, res) => {
const assets = await Video.Assets.list();
res.json({
data: assets.map((asset) => ({
id: asset.id,
status: asset.status,
playback_ids: asset.playback_ids,
created_at: asset.created_at,
duration: asset.duration,
max_stored_resolution: asset.max_stored_resolution,
max_stored_frame_rate: asset.max_stored_frame_rate,
aspect_ratio: asset.aspect_ratio,
})),
});
});
app.get("/asset", async (req, res) => {
let videoId = req.query.videoId;
const asset = await Video.Assets.get(videoId);
console.log(asset);
res.json({
data: {
id: asset.id,
status: asset.status,
playback_ids: asset.playback_ids,
created_at: asset.created_at,
duration: asset.duration,
max_stored_resolution: asset.max_stored_resolution,
max_stored_frame_rate: asset.max_stored_frame_rate,
aspect_ratio: asset.aspect_ratio,
},
});
});
app.listen(port, () => {
console.log(`Mux API listening on port ${port}`);
});
localhost is your loopback address, and it only works because you are running the application on the same machine. When you release the app, you have to host your Node.js app on some server and use that server's address instead. Before you host the app, I encourage you to spend more time making sure it is secure.
If you just want to run the app on an Android emulator, you can use 10.0.2.2 to reach the host machine's loopback address.
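If the device and the development machine are on the same network, a common interim step during development (a sketch, not a production setup) is to bind the Express server to all interfaces and point the app at the machine's LAN IP instead of localhost:

// In the Node server: listen on all interfaces, not just loopback,
// so other devices on the LAN can reach it.
app.listen(port, '0.0.0.0', () => {
  console.log(`Mux API listening on port ${port}`);
});

// The Flutter app's base URL would then be something like
// 'http://192.168.1.20:3000' (your machine's LAN IP; hypothetical value).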
I am trying to change my request URL and see the new URL in the response.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on('request', interceptedRequest => {
    if (interceptedRequest.url().includes('some-string')) {
      interceptedRequest.respond({
        status: 302,
        headers: {
          url: 'www.new.url.com'
        },
      })
    }
    interceptedRequest.continue()
  });
  page.on('response', response => {
    console.log(response.url())
  })
  await page.goto('www.original.url.com')
  // some code omitted
})();
In the interceptedRequest.respond method I'm trying to update the value of the url. Originally I was trying:
interceptedRequest.continue({url: 'www.new.url.com'})
but that approach is no longer supported in the current version of Puppeteer.
I was expecting to get www.new.url.com in the response, but I actually get the original URL with www.new.url.com appended to the end.
Thanks in advance for any help.
This worked for me: you need to change the url header to location.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on('request', interceptedRequest => {
    if (interceptedRequest.url().includes('some-string')) {
      interceptedRequest.respond({
        status: 302,
        headers: {
          location: 'www.new.url.com'
        },
      })
    } else {
      // Every intercepted request must be responded to, continued, or aborted.
      interceptedRequest.continue()
    }
  });
  page.on('response', response => {
    console.log(response.url())
  })
  await page.goto('www.original.url.com')
  // some code omitted
})();
I can get access to the entire HTML for any URL by opening dev-tools and typing:
document.documentElement
I am trying to replicate the same behavior using Puppeteer; however, the snippet below returns {}:
const puppeteer = require('puppeteer'); // v 1.1.0
const iPhone = puppeteer.devices['Pixel 2 XL'];

async function start(canonical_url) {
  const browserURL = 'http://127.0.0.1:9222';
  const browser = await puppeteer.connect({browserURL});
  const page = await browser.newPage();
  await page.emulate(iPhone);
  await page.goto(canonical_url, {
    waitUntil: 'networkidle2',
  });
  const data = await page.evaluate(() => document.documentElement);
  console.log(data);
}
returns:
{}
Any idea on what I could be doing wrong here?
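For context, page.evaluate can only return serializable values, and a DOM element is not serializable, so it comes back as {}. A minimal sketch that returns the markup as a string instead:

// Return the serialized markup rather than the DOM node itself.
const html = await page.evaluate(() => document.documentElement.outerHTML);
console.log(html);

// Alternatively, let Puppeteer serialize the full page for you:
const content = await page.content();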
I am using Puppeteer to generate PDF files from HTML strings.
Reading the documentation, I found two ways of generating the PDF files:
First, passing a URL and calling the goto method, as follows:
page.goto('https://example.com');
page.pdf({format: 'A4'});
The second one, which is my case, is calling the setContent method, as follows:
page.setContent('<p>Hello, world!</p>');
page.pdf({format: 'A4'});
The thing is that I have 3 different HTML strings that are sent from the client, and I want to generate a single PDF file with 3 pages (one per HTML string).
Is there a way of doing this with Puppeteer? I am open to other suggestions, but I need to use headless Chrome.
I was able to do this by doing the following:
Generate 3 different PDFs with Puppeteer. You have the option of saving each file locally or keeping it in a variable.
I saved the files locally, because all the PDF merge plugins that I found only accept file paths, not buffers. After generating the PDFs locally one after another, I merged them using PDF Easy Merge.
The code is like this:
const path = require('path');
const puppeteer = require('puppeteer');
const pdfMerge = require('pdf-easy-merge'); // assuming this is the PDF Easy Merge package mentioned above

const page1 = '<h1>HTML from page1</h1>';
const page2 = '<h1>HTML from page2</h1>';
const page3 = '<h1>HTML from page3</h1>';

(async () => {
  const browser = await puppeteer.launch();
  const tab = await browser.newPage();
  await tab.setContent(page1);
  await tab.pdf({ path: './page1.pdf' });
  await tab.setContent(page2);
  await tab.pdf({ path: './page2.pdf' });
  await tab.setContent(page3);
  await tab.pdf({ path: './page3.pdf' });
  await browser.close();

  pdfMerge([
    './page1.pdf',
    './page2.pdf',
    './page3.pdf',
  ], path.join(__dirname, `./mergedFile.pdf`), async (err) => {
    if (err) return console.log(err);
    console.log('Successfully merged!');
  });
})();
I was able to generate multiple PDFs from multiple URLs with the code below:
package.json
{
  ............
  ............
  "dependencies": {
    "puppeteer": "^1.1.1",
    "easy-pdf-merge": "0.1.3"
  }
  ..............
  ..............
}
index.js
const puppeteer = require('puppeteer');
const merge = require('easy-pdf-merge');

var pdfUrls = ["http://www.google.com", "http://www.yahoo.com"];

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  var pdfFiles = [];

  for (var i = 0; i < pdfUrls.length; i++) {
    await page.goto(pdfUrls[i], {waitUntil: 'networkidle2'});
    var pdfFileName = 'sample' + (i + 1) + '.pdf';
    pdfFiles.push(pdfFileName);
    await page.pdf({path: pdfFileName, format: 'A4'});
  }

  await browser.close();
  await mergeMultiplePDF(pdfFiles);
})();

const mergeMultiplePDF = (pdfFiles) => {
  return new Promise((resolve, reject) => {
    merge(pdfFiles, 'samplefinal.pdf', function (err) {
      if (err) {
        console.log(err);
        reject(err);
      }
      console.log('Success');
      resolve();
    });
  });
};
RUN Command: node index.js
pdf-merger-js is another option. page.setContent should work just the same as a drop-in replacement for page.goto below:
const PDFMerger = require("pdf-merger-js"); // 3.4.0
const puppeteer = require("puppeteer"); // 14.1.1

const urls = [
  "https://news.ycombinator.com",
  "https://en.wikipedia.org",
  "https://www.example.com",
  // ...
];
const filename = "merged.pdf";

let browser;
(async () => {
  browser = await puppeteer.launch();
  const [page] = await browser.pages();
  const merger = new PDFMerger();

  for (const url of urls) {
    await page.goto(url);
    merger.add(await page.pdf());
  }

  await merger.save(filename);
})()
  .catch(err => console.error(err))
  .finally(() => browser?.close());
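With HTML strings instead of URLs, the loop would use setContent as the drop-in replacement (a sketch, assuming the same page and merger objects as above; the HTML strings are hypothetical stand-ins for the ones sent by the client):

const htmlPages = ['<h1>Page 1</h1>', '<h1>Page 2</h1>', '<h1>Page 3</h1>'];

for (const html of htmlPages) {
  await page.setContent(html); // render the HTML string instead of navigating
  merger.add(await page.pdf());
}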