Mochawesome with Cypress - how to get aggregated charts at higher level?

I've just started using mochawesome with Cypress (9.7). Our test structure is basically a number of spec files, each following something like the following format:
describe('(A): description of this spec', () => {
  describe('(B): description of test abc', () => {
    before(() => {
      // do specific set up bits for this test
    })
    it('(C): runs test abc', () => {
      // do actual test stuff
    })
  })
})
Within each spec file there is a single 'A' describe block, but there can be many 'B' level blocks (each with a single 'C'). It's done this way because the before block for each 'C' is always different, so I couldn't use a beforeEach.
When I run my various spec files, each structured like the above, the mochawesome output is mostly correct: I get a collapsible block for each spec file at level 'A', each with multiple collapsible blocks at level 'B', each with test info as expected at level 'C'.
But... The circular charts are only displayed at level B. What I was hoping was that it might be possible to have aggregated charts at level A, and a further aggregated chart for all the level A blocks.
Not sure I've explained this brilliantly(!), but hopefully someone understands, and can offer a suggestion?!

In cypress-mochawesome-reporter there's an alternative setup using on('after:run') which can perform the aggregation.
In Cypress v9.7.0
// cypress/plugins/index.js
const { beforeRunHook, afterRunHook } = require('cypress-mochawesome-reporter/lib');
const { aggregateResults } = require('./aggregate-mochawesome-report-chart');

module.exports = (on, config) => {
  on('before:run', async (details) => {
    await beforeRunHook(details);
  });
  on('after:run', async () => {
    aggregateResults(config)
    await afterRunHook();
  });
};
In Cypress v10+
// cypress.config.js
const { defineConfig } = require('cypress');
const { beforeRunHook, afterRunHook } = require('cypress-mochawesome-reporter/lib');
const { aggregateResults } = require('./aggregate-mochawesome-report-chart');

module.exports = defineConfig({
  reporter: 'cypress-mochawesome-reporter',
  video: false,
  retries: 1,
  reporterOptions: {
    reportDir: 'test-report',
    charts: true,
    reportPageTitle: 'custom-title',
    embeddedScreenshots: true,
    inlineAssets: false,
    saveAllAttempts: false,
    saveJson: true
  },
  e2e: {
    setupNodeEvents(on, config) {
      on('before:run', async (details) => {
        await beforeRunHook(details);
      });
      on('after:run', async () => {
        aggregateResults(config)
        await afterRunHook();
      });
    },
  },
});
The module to do the aggregation is
// aggregate-mochawesome-report-chart.js
const path = require('path');
const fs = require('fs-extra')

function aggregateResults(config) {
  const jsonPath = path.join(config.reporterOptions.reportDir, '.jsons', 'mochawesome.json');
  const report = fs.readJsonSync(jsonPath)
  const topSuite = report.results[0].suites[0]
  aggregate(topSuite)
  fs.writeJsonSync(jsonPath, report)
}

function aggregate(suite, level = 0) {
  // aggregate the children first, then fold their results into this suite
  const childSuites = suite.suites.map(child => aggregate(child, level + 1))
  suite.passes = suite.passes.concat(childSuites.map(child => child.passes)).flat()
  suite.failures = suite.failures.concat(childSuites.map(child => child.failures)).flat()
  suite.pending = suite.pending.concat(childSuites.map(child => child.pending)).flat()
  suite.skipped = suite.skipped.concat(childSuites.map(child => child.skipped)).flat()
  if (!suite.tests.length && suite.suites[0].tests.length) {
    // add a synthetic test to trigger the chart when the describe itself has no tests
    suite.tests = [
      {
        "title": "Aggregate of tests",
        "duration": 20,
        "pass": true,
        "context": null,
        "err": {},
        "uuid": "0",
        "parentUUID": suite.uuid,
      },
    ]
  }
  return suite
}

module.exports = {
  aggregateResults
}
The function aggregate() recursively loops down through child suites and adds the test results to the parent.
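For reference, each suite node that aggregate() walks in mochawesome.json looks roughly like this (a trimmed sketch, not the full schema; the real file has more fields):

// Trimmed sketch of one suite node in mochawesome.json
{
  "uuid": "…",
  "title": "(A): description of this spec",
  "tests": [ /* entries for the suite's own "it" blocks */ ],
  "passes": [ /* references to passing tests */ ],
  "failures": [],
  "pending": [],
  "skipped": [],
  "suites": [ /* nested describe blocks, same shape, recursed into by aggregate() */ ]
}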
JSON files
Note the JSON file is different at the point where afterRunHook runs and at the end of the test run.
If you have the option saveJson: true set, you will get a final JSON file in the report directory called index.json.
At the afterRunHook stage the file is mochawesome.json.
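If you want aggregateResults to cope with either stage, a small defensive tweak (a sketch only, assuming the reportDir layout described above) could pick whichever file is present:

// Sketch: resolve whichever report JSON exists at this point in the run
const candidates = [
  path.join(config.reporterOptions.reportDir, '.jsons', 'mochawesome.json'), // present during afterRunHook
  path.join(config.reporterOptions.reportDir, 'index.json'),                 // written at the end when saveJson: true
];
const jsonPath = candidates.find(p => fs.existsSync(p));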
Before aggregation
After aggregation

How to differentiate prefetch requests from normal fetch requests?

I am having trouble differentiating regular query requests from prefetch query requests in RTK Query. My goal is simple; I want a global loading spinner whenever a query fetches data. However, I also want to implement prefetch in a List so that paginating through different pages feels instantaneous for the end-user. What is happening now is that when I go to the next page in my List it is correctly prefetched so switching happens instantly. But then my global loading spinner is triggered for the prefetching of the page after that (which I obviously don't want happening). So I want to find out how to differentiate the prefetching requests from the regular fetching requests.
I have done extensive searching both on SO and the issue-tracker of redux-toolkit, but without success. Also, I have looked into the query requests that are made from prefetch requests and regular requests but those seem identical (which I would understand since the rtk-query team probably abstracted this).
Relevant code below:
LoadingWrapper.tsx
const LoadingWrapper = ({ children }) => {
  // HOW TO DIFFERENTIATE HERE BETWEEN QUERIES?
  const isSomeQueryPending = useSelector((state: RootState) =>
    Object.values(state.api.queries).some((query) => query.status === 'pending'));
  return (
    <>
      <LoadingScreen loading={isSomeQueryPending} />
      {children}
    </>
  );
};
EntityList.tsx
import React, {
  ReactElement, useCallback, useEffect, useState,
} from 'react';
import { useGetEntitiesQuery, usePrefetch } from '../../../Path/To/My/Api';

const DEFAULT_PAGE_SIZE = 50;

const EntityList = (): ReactElement => {
  const [filter, setFilter] = useState<IEntityFilter>({
    search: '',
    offset: 0,
    limit: DEFAULT_PAGE_SIZE,
  });
  const { data } = useGetEntitiesQuery(filter);
  const prefetchPage = usePrefetch('getEntities');

  const prefetchNext = useCallback(() => {
    const prefetchFilter = { ...filter, offset: filter.offset + filter.limit };
    prefetchPage(prefetchFilter);
  }, [prefetchPage, filter.offset]);

  useEffect(() => {
    if (!(filter.offset + filter.limit >= data?.numberOfEntities)) {
      prefetchNext();
    }
  }, [data, prefetchNext, filter.offset]);

  ... // Some data handling and showing of data in a list unrelated.
}
Api.ts
// This is my (injected) API endpoint
getEntities: builder.query<IEntities, IEntityFilter>({
  query: (filter) => ({ url: 'entities', params: filter }),
  transformResponse: (baseQueryReturnValue: IEntitiesResponse) => baseQueryReturnValue.body,
  providesTags: (result) => (result
    ? [
      ...result.entities.map(({ object_id }) => ({ type: 'Entity', id: object_id } as const)),
      { type: 'Entity', id: 'LIST' },
    ]
    : [{ type: 'Entity', id: 'LIST' }]
  ),
}),
So the question is as follows: how can I differentiate between the normal fetch query (useGetEntitiesQuery) and the prefetched version of it in my LoadingWrapper.tsx? And if this is not possible, what is the recommended way of achieving my goal?
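One possible workaround, in the absence of a built-in flag (a sketch only: markPrefetched/wasPrefetched are made-up helpers, and it assumes the cached query entries expose endpointName and originalArgs), is to record which argument combinations you prefetch yourself and skip them in the selector:

// Hypothetical helper module (not part of RTK Query): remember which
// endpoint/argument combinations were triggered by prefetch.
// The serialization is naive and purely for illustration.
const prefetchedKeys = new Set<string>();
const keyOf = (endpointName?: string, arg?: unknown) => `${endpointName}:${JSON.stringify(arg)}`;
export const markPrefetched = (endpointName: string, arg: unknown) => prefetchedKeys.add(keyOf(endpointName, arg));
export const wasPrefetched = (endpointName?: string, arg?: unknown) => prefetchedKeys.has(keyOf(endpointName, arg));

// In EntityList.tsx: record the filter before prefetching it
const prefetchNext = useCallback(() => {
  const prefetchFilter = { ...filter, offset: filter.offset + filter.limit };
  markPrefetched('getEntities', prefetchFilter);
  prefetchPage(prefetchFilter);
}, [prefetchPage, filter]);

// In LoadingWrapper.tsx: ignore pending queries whose args were prefetched
const isSomeQueryPending = useSelector((state: RootState) =>
  Object.values(state.api.queries).some(
    (query) => query?.status === 'pending' && !wasPrefetched(query?.endpointName, query?.originalArgs)
  )
);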

I am developing a VS Code extension and I need to capture the call stack records and log the result

I am writing a simple VS Code extension that is supposed to just log the call stack to the console at a specific point while debugging code.
I was able to write code to retrieve the current debugging session, the breakpoints and things like this, but I failed to find any property or method that allows me to retrieve the call stack records.
This is the code I wrote:
export function activate(context: vscode.ExtensionContext) {
  console.log('Congratulations, your extension "sampleextension1" is now active!');

  let disposable = vscode.commands.registerCommand('sampleextension1.hello', () => {
    vscode.window.showInformationMessage('Hello World from sampleextension1!');
    vscode.commands.executeCommand('editor.action.addCommentLine');

    vscode.debug.onDidStartDebugSession(x => {
    });

    vscode.debug.onDidChangeActiveDebugSession(c => {
      var b = vscode.debug.breakpoints[0];
    });
  });

  context.subscriptions.push(disposable);
}
As you can see in the code, there is an event handler for onDidChangeActiveDebugSession which lets me capture the debugging session, but I couldn't find how to capture the stack trace.
I went through the documentation but it wasn't helpful.

I was able to achieve what I want by sending a customRequest to the debugging session to retrieve the stack frames.
More information can be found on the DAP page here.
The code is as shown below:
x.customRequest('stackTrace', { threadId: 1 }).then(reply => {
  const frameId = reply.stackFrames[0].id;
}, error => {
  vscode.window.showInformationMessage(`error: ${error.message}`);
});
Or, more efficiently, register a tracker as shown below:
vscode.debug.registerDebugAdapterTrackerFactory('*', {
  createDebugAdapterTracker(session: vscode.DebugSession) {
    return {
      onWillReceiveMessage: m => console.log(`> ${JSON.stringify(m, undefined, 2)}`),
      onDidSendMessage: m => console.log(`< ${JSON.stringify(m, undefined, 2)}`)
    };
  }
});
The full example is shown here:
export function activate(context: vscode.ExtensionContext) {
  console.log('Congratulations, your extension "sampleextension1" is now active!');

  let disposable = vscode.commands.registerCommand('sampleextension1.hello', () => {
    vscode.window.showInformationMessage('Hello World from sampleextension1!');
    vscode.commands.executeCommand('editor.action.addCommentLine');

    vscode.debug.onDidStartDebugSession(x => {
      // x.customRequest("evaluate", {
      //   "expression": "Math.sqrt(10)"
      // }).then(reply => {
      //   vscode.window.showInformationMessage(`result: ${reply.result}`);
      // }, error => {
      //   vscode.window.showInformationMessage(`error: ${error.message}`);
      // });

      x.customRequest('stackTrace', { threadId: 1 }).then(reply => {
        const frameId = reply.stackFrames[0].id;
      }, error => {
        vscode.window.showInformationMessage(`error: ${error.message}`);
      });
    });

    vscode.debug.onDidChangeActiveDebugSession(c => {
      var b = vscode.debug.breakpoints[0];
    });

    vscode.debug.registerDebugAdapterTrackerFactory('*', {
      createDebugAdapterTracker(session: vscode.DebugSession) {
        return {
          onWillReceiveMessage: m => console.log(`> ${JSON.stringify(m, undefined, 2)}`),
          onDidSendMessage: m => console.log(`< ${JSON.stringify(m, undefined, 2)}`)
        };
      }
    });
  });

  context.subscriptions.push(disposable);
}
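If the tracker output is too noisy, one option (a sketch, not part of the original code) is to log only the DAP stack trace traffic by checking the message's command field; requests and responses carry a command field in DAP, events do not:

vscode.debug.registerDebugAdapterTrackerFactory('*', {
  createDebugAdapterTracker(session: vscode.DebugSession) {
    return {
      // `m: any` keeps the sketch short; the DAP message type is otherwise opaque
      onWillReceiveMessage: (m: any) => {
        if (m.command === 'stackTrace') {
          console.log(`> ${JSON.stringify(m, undefined, 2)}`);
        }
      },
      onDidSendMessage: (m: any) => {
        if (m.command === 'stackTrace') {
          console.log(`< ${JSON.stringify(m, undefined, 2)}`);
        }
      }
    };
  }
});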
Steps to run:
F5 to run the Extension Dev Environment.
Ctrl+Shift+P, then type your command; in my case it was Hello.
Then F5 to start debugging in the Dev Environment, and you will be able to see the result.
Hope it helps

Redux Toolkit Query: Reduce state from "mutation" response

Let's say I have a RESTish API to manage "posts".
GET /posts returns all posts
PATCH /posts/:id updates a post and responds with the new record data
I can implement this using RTK query via something like this:
const TAG_TYPE = 'POST';

// Define a service using a base URL and expected endpoints
export const postsApi = createApi({
  reducerPath: 'postsApi',
  tagTypes: [TAG_TYPE],
  baseQuery,
  endpoints: (builder) => ({
    getPosts: builder.query<Form[], string>({
      query: () => `/posts`,
      providesTags: (result) => [
        { type: TAG_TYPE, id: 'LIST' },
      ],
    }),
    updatePost: builder.mutation<any, { formId: string; formData: any }>({
      // note: an optional `queryFn` may be used in place of `query`
      query: (data) => ({
        url: `/post/${data.formId}`,
        method: 'PATCH',
        body: data.formData,
      }),
      // this causes a full re-query.
      // Would be more efficient to update state based on resp.body
      invalidatesTags: [{ type: TAG_TYPE, id: 'LIST' }],
    }),
  }),
});
When updatePost runs, it invalidates the LIST tag which causes getPosts to run again.
However, since the PATCH operation responds with the new data itself, I would like to avoid making an additional server request and instead just update my reducer state for that specific record with the content of response.body.
Seems like a common use case, but I'm struggling to find any documentation on doing something like this.
You can apply the mechanism described in optimistic updates, just a little bit later:
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query'
import { Post } from './types'

const api = createApi({
  // ...
  endpoints: (build) => ({
    // ...
    updatePost: build.mutation<void, Pick<Post, 'id'> & Partial<Post>>({
      query: ({ id, ...patch }) => ({
        // ...
      }),
      async onQueryStarted({ id, ...patch }, { dispatch, queryFulfilled }) {
        const { data } = await queryFulfilled
        dispatch(
          api.util.updateQueryData('getPost', id, (draft) => {
            Object.assign(draft, data)
          })
        )
      },
    }),
  }),
})
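Applied to the getPosts/updatePost endpoints from the question, that could look roughly like this (a sketch only: it assumes posts expose an id field and that the list was queried with an empty-string argument; adjust both to your real data and call sites):

updatePost: builder.mutation<any, { formId: string; formData: any }>({
  query: (data) => ({
    url: `/post/${data.formId}`,
    method: 'PATCH',
    body: data.formData,
  }),
  // Patch the cached list from the PATCH response instead of invalidating it
  async onQueryStarted({ formId }, { dispatch, queryFulfilled }) {
    const { data: updated } = await queryFulfilled;
    const listQueryArg = ''; // whatever argument getPosts was originally called with
    dispatch(
      postsApi.util.updateQueryData('getPosts', listQueryArg, (draft) => {
        const index = draft.findIndex((post: any) => post.id === formId);
        if (index !== -1) draft[index] = updated;
      })
    );
  },
}),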

Facebook photo upload date timestamp

I've downloaded all my Facebook data and wish to upload some of the images that I've sent via Messenger to Google Photos. I want them to have the correct metadata so they are uploaded under the correct day, not under today. Unfortunately, they currently have the date of download as the Date created.
I tried parsing the title, but it doesn't seem to be a timestamp.
My question is: is there a way to create a script that adds the correct metadata to a photo downloaded from Facebook (via Download your information archive)? An example title is: 142666616_209126620919024_535058535265435125_n.jpg. This photo should have the date Jan 27, 2021, 10:53 AM.
After some digging I found a solution.
The archive that Facebook gives you has folders for each friend with the following structure:
\friend_name_a1b2c3
  \photos
    12345678_123456788996_123124421.jpg
  \gifs
  \audio
  messages_1.json
messages_1.json has all your messages with that friend; here is an example of what a message looks like:
{
  "sender_name": "Your Name",
  "timestamp_ms": 1562647443588,
  "photos": [
    {
      "uri": "messages/inbox/friend_name_a1b2c3/photos/12345678_123456788996_123124421.jpg",
      "creation_timestamp": 1562647443
    }
  ],
  "type": "Generic",
  "is_unsent": false
},
So, using glob and utimes I came up with the following script:
var glob = require("glob")
var Promise = require('bluebird');
var fs = Promise.promisifyAll(require("fs"));
var { utimes } = require("utimes");

const readJSONFiles = async () => {
  const messagesFiles = glob.sync(`**/messages_*.json`)
  const promises = [];
  messagesFiles.forEach(mFile => {
    promises.push(fs.readFileAsync(mFile, 'utf8'));
  })
  return Promise.all(promises);
}

readJSONFiles().then(result => {
  const map = {};
  result.forEach(data => {
    const messagesContents = JSON.parse(data);
    messagesContents.messages
      .filter(m => m.photos)
      .forEach(m => {
        m.photos.forEach(p => {
          const splitted = p.uri.split("/")
          const messagePhotoFileName = splitted[splitted.length - 1];
          map[messagePhotoFileName] = m.timestamp_ms;
        })
      })
  })
  fs.writeFileSync("./map.json", JSON.stringify(map))
}).then(() => {
  fs.readFileAsync("./map.json", 'utf8').then(data => {
    const map = JSON.parse(data);
    glob("**/*.jpg", function (er, files) {
      files.forEach(file => {
        const [, , photo] = file.split("/");
        utimes(file, {
          btime: map[photo],
          atime: map[photo],
          mtime: map[photo]
        });
      })
    })
  })
});
It creates a map of file-name to date-taken, then loops over all .jpg files and changes their metadata. It's definitely a little rough around the edges, but it gets the job done.
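Note that the script keys off the message-level timestamp_ms, which is already in milliseconds (what the utimes package expects). If you preferred the per-photo creation_timestamp from the JSON above, it is in seconds, so the mapping loop would need a conversion, e.g. (a sketch of that variant):

// creation_timestamp is in seconds; utimes expects milliseconds
m.photos.forEach(p => {
  const splitted = p.uri.split("/");
  const messagePhotoFileName = splitted[splitted.length - 1];
  map[messagePhotoFileName] = p.creation_timestamp * 1000;
});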

Meteor Mongo Collections find forEach cursor iteration and saving to ElasticSearch Problem

I have a Meteor app which is connected to MongoDB.
In Mongo I have a collection with ~700k records.
I have a weekly cron job where I read all the records from the collection (using a Mongo cursor) and, in batches of 10k, insert them into Elasticsearch so they are indexed.
let articles = []
Collections.Articles.find({}).forEach(function(doc) {
  articles.push({
    index: {_index: 'main', _type: 'article', _id: doc.id }
  },
  doc);
  if (0 === articles.length % 10000) {
    client.bulk({ maxRetries: 5, index: 'main', type: 'article', body: articles })
    data = []
  }
})
Since forEach is synchronous and goes over each record before continuing, while client.bulk is async, this overloads the Elasticsearch server and it crashes with an Out of Memory exception.
Is there a way to pause the forEach while the insert is being done? I tried async/await but this does not seem to work either.
let articles = []
Collections.Articles.find({}).forEach(async function(doc) {
  articles.push({
    index: {_index: 'main', _type: 'article', _id: doc.id }
  },
  doc);
  if (0 === articles.length % 10000) {
    await client.bulk({ maxRetries: 5, index: 'main', type: 'article', body: articles })
    data = []
  }
})
Any way how to achieve this?
EDIT: I am trying to achieve something like this (if I use promises):
let articles = []
Collections.Articles.find({}).forEach(function(doc) {
  articles.push({
    index: {_index: 'main', _type: 'article', _id: doc.id }
  },
  doc);
  if (0 === articles.length % 10000) {
    // Pause FETCHING rows with forEach
    client.bulk({ maxRetries: 5, index: 'main', type: 'article', body: articles }).then(() => {
      console.log('inserted')
      // RESUME FETCHING rows with forEach
      console.log("RESUME READING");
    })
    data = []
  }
})
Managed to get this working with ES2018 async iteration.
Got the idea from: Using async/await with a forEach loop
Here is the code that works:
let articles = []
let cursor = Collections.Articles.find({})
for await (const doc of cursor) {
  articles.push({
    index: {_index: 'main', _type: 'article', _id: doc.id }
  },
  doc);
  if (articles.length === 10000) {
    await client.bulk({ maxRetries: 5, index: 'trusted', type: 'artikel', body: articles })
    articles = []
  }
}
This works correctly and manages to insert all the records into Elasticsearch without crashing.
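One small gap worth noting: the bulk call only fires when the batch reaches exactly 10,000 entries, so a final partial batch is never sent. A flush after the loop (same assumptions and names as the snippet above) covers it:

// After the for await loop: flush any remaining documents that did not fill a whole batch
if (articles.length) {
  await client.bulk({ maxRetries: 5, index: 'trusted', type: 'artikel', body: articles })
  articles = []
}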
If you are concerned about the unthrottled iteration, you may use the internal Meteor._sleepForMs method, which allows you to put an async timeout in your sync-styled code:
Collections.Articles.find().forEach((doc, index) => {
  console.log(index, doc._id)
  Meteor._sleepForMs(timeout)
})
Now this works fine within the Meteor environment (Meteor.startup, Meteor.methods, Meteor.publish).
Your cron is likely not within this environment (= Fiber), so you may write a wrapper that binds the environment:
const bound = fct => Meteor.bindEnvironment(fct)

const iterateSlow = bound(function (timeout) {
  Collections.Articles.find().forEach((doc, index) => {
    console.log(index, doc._id)
    Meteor._sleepForMs(timeout)
  })
  return true
})

iterateSlow(50) // iterates with 50ms timeout
Here is a complete minimal example, that you can reproduce with a fresh project:
// create a minimal collection
const MyDocs = new Mongo.Collection('myDocs')

// fill the collection
Meteor.startup(() => {
  for (let i = 0; i < 100; i++) {
    MyDocs.insert({})
  }
})

// bind helper
const bound = fct => Meteor.bindEnvironment(fct)

// iterate docs with interval between
const iterateSlow = bound(function (timeout) {
  MyDocs.find().forEach((doc, index) => {
    console.log(index, doc._id)
    Meteor._sleepForMs(timeout)
  })
  return true
})

// simulate external environment, like when cron runs
setTimeout(() => {
  iterateSlow(50)
}, 2000)