NestJS: e2e testing using an axios-based client is not working as expected

I have a NestJS application that exposes some endpoints, and I have written a client application that I plan on releasing as an NPM package to work with the nest server. I am attempting to write end-to-end tests that start the nest server, connect it to a test database in a local docker container, and test it using the client. Here are some snippets of what I'm attempting to do:
Controller:
@Controller('/api/v1/messages')
export class MessagesController {
  constructor(
    private messagesService: MessagesService
  ) {}

  @Get()
  @UsePipes(new ValidationPipe({ whitelist: true, forbidNonWhitelisted: true }))
  private findAll(
    @Query() searchCriteria: MessageSearchDto
  ): Promise<MessageDto[]> {
    if (Object.keys(searchCriteria).length > 0)
      return this.messagesService.search(searchCriteria);
    return this.messagesService.findAll();
  }
}
Client:
const http = require('axios');
const dotenv = require('dotenv');
dotenv.config();

export class MessageClient {
  public baseUri: string = process.env.MessageClientTarget ?? './';

  constructor() {}

  public async findAll() {
    return await http.get(this.baseTarget());
  }

  private baseTarget() {
    return `${this.baseUri}/api/v1/messages`;
  }
}
e2e Test:
describe('MessageController (e2e)', () => {
  let app: INestApplication;
  let client: MessageClient = new MessageClient();

  beforeAll(async () => {
    let moduleFixture: TestingModule = await Test.createTestingModule({
      imports: [AppModule, MessagesModule],
    }).compile();

    app = moduleFixture.createNestApplication();
    await app.init();
  });

  afterAll(async () => {
    await app.close();
  });

  it('/ (GET)', async done => {
    const { data: messages } = await client.findAll()
    expect(messages).toEqual([]);
    done();
  });
});
And .env:
MessageClientTarget=http://localhost:3000
When I attempt to run this, I get the following error: "connect ECONNREFUSED 127.0.0.1:3000"
From what I understand, this is because the createNestApplication method doesn't actually start a server but instead creates a mocked version of the application.
My question is how can I work with INestApplication or TestingModule in order to start the application or what other way do I have to programmatically start a NestJS application. It is important to me that I perform the e2e testing with the Axios based client rather than supertest as a way of testing both the client and the server.
I have verified that the server is supposed to start on port 3000 locally, and I have verified that the client has the correct baseUri set. The address used by the client during testing is http://localhost:3000/api/v1/messages, which I verified by logging the value to the console during the test. Also, only the database currently lives in a Docker container, and I have that running correctly. The whole application works perfectly when run locally; it is only in the test that it fails.

Please make sure that the test environment where you are running tests is set to node: add the testEnvironment: 'node' parameter to your Jest configuration file.
You can run e2e tests with any client of your choice regardless of whether or not you are running a nestjs testing module or the actual nestjs application instance.
I personally use the testing module, as it makes it very easy to mock any third-party dependencies of the application when testing it (that is the main purpose of the testing module: providing an elegant way to substitute or mock any component you may want to).
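For reference, here is a minimal sketch (my assumption about the setup, not something the original answer spells out) of how the compiled test module can be made reachable by the external axios-based client: INestApplication exposes listen(), so the beforeAll hook can bind the test app to the port the client's baseUri points at.

// Sketch only: reuses AppModule/MessagesModule from the question and assumes
// the client's MessageClientTarget is http://localhost:3000.
beforeAll(async () => {
  const moduleFixture: TestingModule = await Test.createTestingModule({
    imports: [AppModule, MessagesModule],
  }).compile();

  app = moduleFixture.createNestApplication();
  await app.init();
  await app.listen(3000); // actually bind an HTTP server so axios can connect
});

afterAll(async () => {
  await app.close(); // also stops the listener started by app.listen()
});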

Related

Hardhat: how to deploy using a custom signer

Normally, to deploy contracts to a network, we specify the private keys in the accounts section of the network config, like below, and these accounts get used to sign the transactions.
module.exports = {
  defaultNetwork: "rinkeby",
  networks: {
    hardhat: {
    },
    rinkeby: {
      url: "https://eth-rinkeby.alchemyapi.io/v2/1234",
      accounts: [privateKey1, privateKey2, ...]
    }
  },
But we need to use a custom signer that will sign the transactions instead. All transactions that are part of the deployment process need to be signed via this custom signer.
How do I do this using Hardhat/ethers.js?
You have to modify the deployment script to attach your custom signer to the contract factory (https://docs.ethers.io/v4/api-contract.html):
async function main() {
  // use `let` so the factory can be reassigned after attaching the signer
  let HelloBar = await ethers.getContractFactory("HelloBar");
  const signer = createYourCustomSigner();
  // attach the signer to the factory
  HelloBar = HelloBar.connect(signer);
  const hellobar = await HelloBar.deploy();
  await hellobar.deployed();
  console.log("Address:", hellobar.address);
}
main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
Then run the script with:
npx hardhat run --network localhost scripts/deploy.js
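The answer's createYourCustomSigner() is only a placeholder. As a hypothetical illustration (my example, not from the answer), any object implementing the ethers Signer interface can be attached this way, for instance a Wallet bound to a provider:

// Hypothetical custom signer (ethers v5 style): a Wallet attached to a
// JSON-RPC provider. The URL and the env variable name are made up.
function createYourCustomSigner() {
  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
  return new ethers.Wallet(process.env.DEPLOYER_PRIVATE_KEY, provider);
}

A hardware-wallet or KMS-backed signer would plug in the same way, as long as it implements the Signer interface.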

Can Bunjs be used as a backend server?

Now we can start a React app with Bun as a server.
Can we use Bun as a complete backend server?
For example, can Bun run this code?
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('hello world')
})

app.listen(3000)
I guess Bun does not YET implement all Node.js APIs. I tried http and it seems to be missing at the moment. And as far as I understand, it currently has its own built-in HTTP server.
Check the "Getting started" section at https://bun.sh/
A sample server:
export default {
  port: 3000,
  fetch(request) {
    return new Response("Welcome to Bun!");
  },
};
(This example reminds me of serverless functions.)
As this is the case, it seems you cannot rely on Node.js http, or most probably on any server framework like express.
At least for now, bun's roadmap (https://github.com/oven-sh/bun/issues/159) contains a line that I am not sure refers to node's http server or to something else about Bun's own server:
Once complete, the next step is integration with the HTTP server and
other Bun APIs
Bun's API is really different from Node.js's, so I created a library called bunrest, an express-like API, so new users do not need to learn much about Bun.
Here is how to use it
Install the package from npm
npm i bunrest
To create a server
const App = require('bunrest');
const server = new App.BunServer();
After that, you can call it like express:
server.get('/test', (req, res) => {
  res.status(200).json({ message: 'succeed' });
});

server.put('/test', (req, res) => {
  res.status(200).json({ message: 'succeed' });
});

server.post('/test', (req, res) => {
  res.status(200).json({ message: 'succeed' });
});
To start the server
server.listen(3000, () => {
  console.log('App is listening on port 3000');
});
Another way is to use Hono: https://honojs.dev/
There is a working demo: https://github.com/cachac/bun-api
Import the package and create a new instance:
import { Hono } from 'hono'
const app = new Hono()
Create routes:
Instead of Node.js's res.send(), use c.json({}):
app.get('/hello', c => c.json({ message: 'Hello World' }))
Export and run the API server:
export default {
  fetch: app.fetch,
  port: 3000
}

Network Error Error: connect ECONNREFUSED 127.0.0.1:5000 after a certain number of tests in react-testing library

I am testing an app with react-testing-library and I am using a mock service worker.
All my tests pass until the last one, which gives the error in the title.
When I test only the failing part in isolation (test.only), it doesn't throw the error.
The error points to localhost:5000, which is my data server (my app is running on 3000).
This is my last test, which only works when run alone:
import { findByRole, getByRole, render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import Layout from '../../layout'

describe('tests for headers and content of the table', () => {
  test('to check headers in inital state', async () => {
    render(<Layout />)
    const headerFirstCell = await screen.findByRole('columnheader', { name: /name/i })
    expect(headerFirstCell).toHaveTextContent('Name')
    const headerSecondCell = await screen.findByRole('columnheader', { name: /courses/i }) // second
    expect(headerSecondCell).toHaveTextContent('Courses')
  })
})
It is also strange that when I run only one part of that last test plus all the rest, for example:
const headerFirstCell = await screen.findByRole('columnheader', {name: /name/i, })
expect(headerFirstCell).toHaveTextContent('Name')
Or the other part + all the rest:
const headerSecondCell = await screen.findByRole('columnheader', {name: /courses/i,}) //second
expect(headerSecondCell).toHaveTextContent('Courses')
Then all tests pass. It seems like any line of code I add after that point triggers the error.
I know it might not be easy to see from this info...but I'm lost...Any clue??
Just in case this is my setupTests.js:
import '@testing-library/jest-dom'
import { server } from './mocks/server.js'
beforeAll(() => server.listen())
afterEach(() => server.resetHandlers())
afterAll(() => server.close())
It was just a typo in one of the server URL addresses in my mock service worker handlers.
So when a test exercised the part that sends requests to that URL, it sometimes threw an error; when I placed that code at the end, the tests finished before the GET request to the wrong URL was made. That's why it showed that strange behaviour.
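To illustrate the failure mode, here is a hypothetical handlers file (my sketch, not the asker's actual code): msw only intercepts requests whose URL matches a handler, so a mistyped host or port lets the request escape to the real network and fail with ECONNREFUSED whenever a test actually triggers it.

// mocks/handlers.js (hypothetical, msw v1-style rest handlers)
import { rest } from 'msw'

export const handlers = [
  // Matches the data server the app actually calls.
  rest.get('http://localhost:5000/students', (req, res, ctx) =>
    res(ctx.json([{ name: 'Alice', courses: 3 }]))
  ),
  // A typo such as 'http://localhots:5000/students' would never match,
  // so the real request would hit the network and be refused.
]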

Firestore emulator for testing security rules - running the tests

I have installed the emulator following the instructions in the documentation, and I can start it, so far so good.
After picking up some code here and there, I have written my first test; here it is:
import * as firebasetesting from '@firebase/testing';
import * as firebase from 'firebase';
import * as fs from 'fs';

const projectId = 'my-firebase-project';
const rules = fs.readFileSync('firestore.rules', 'utf8');

beforeAll(async () => {
  // Make your test app load your firestore rules
  await firebasetesting.loadFirestoreRules({ projectId, rules });
});

beforeEach(async () => {
  // Reset our data from our test database
  await firebasetesting.clearFirestoreData({ projectId });
});

afterAll(async () => {
  // Shut down all testing Firestore applications after testing is done.
  await Promise.all(firebasetesting.apps().map(app => app.delete()));
});

describe("TRACKERS AND ALLIES", () => {
  it('TRACKER UP', async () => {
    let user = { username: "Bob", uid: 'bobuid' };
    let target = { username: "Alice", uid: 'aliceuid' };
    const auth = { uid: user.uid, token: { isadmin: false } };
    const app = firebasetesting.initializeTestApp({ projectId, auth }).firestore();
    const ref = app.doc('users/' + user.uid + '/contact/' + target.uid);
    await firebasetesting.assertSucceeds(ref.update({ up: true, username: target.uid, timestamp: firebase.firestore.FieldValue.serverTimestamp() }));
  });
})
And my question is very simple: how do I run it?
EDIT: Let me just add that I am new to Firestore and JavaScript in general... The documentation simply states:
After running a suite of tests, you can access test coverage reports that show how each of your security rules was evaluated.
So I guess it must be simple, but I cannot find the "run" command anywhere...
If you have a Node.js script, run it with node your-script.js. You must have Node installed.
If you want to run the script along with the emulator, and shut the emulator down after the script finishes, the page you linked to says:
In many cases you want to start the emulator, run a test suite, and
then shut down the emulator after the tests run. You can do this
easily using the emulators:exec command:
firebase emulators:exec --only firestore "./my-test-script.sh"
If you found the documentation confusing or incomplete, you should use the "send feedback" button at the top right of the page.
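Since the test above uses describe/it/beforeAll, it needs a test runner such as Jest or Mocha rather than plain node. One way to wire that up (a sketch based on my assumptions about the project, not from the original answer) is an npm script that wraps the runner in emulators:exec:

{
  "scripts": {
    "test": "firebase emulators:exec --only firestore \"jest\""
  }
}

Running npm test then starts the Firestore emulator, runs the suite against it, and shuts the emulator down afterwards.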

Using Grunt to Mock Endpoints

I'm using Yeoman, Grunt, and Bower to construct a platform for building a frontend independently of a backend. The idea is that all of my (AngularJS) controllers, services, factories, etc. live in this project and get injected afterwards into my server-side codebase based on the result of grunt build.
My question is:
How can I mock endpoints so that the Grunt server responds to the same endpoints as my (Rails) App will?
At the moment I am using:
angular.module('myApp', ['ngResource'])
  .run(['$rootScope', function ($rootScope) {
    $rootScope.testState = 'test';
  }]);
And then in each of my individual services:
mockJSON = {'foo': 'myMockJSON'}
And on every method:
if ($rootScope.testState == 'test') {
  return mockJSON;
}
else {
  // real service logic with $q/$http goes here
}
Then after grunt build, testState = 'test' gets removed.
This is clearly a relatively janky architecture. How can I avoid it? How can I have Grunt respond to the same endpoints as my app (some of which have dynamic params), apply some logic (if necessary), and serve out a JSON file (possibly dependent on path params)?
I've fixed this issue by using express to write a server that responds with static json.
First I created a directory in my project called 'api'. Within that directory I have the following files:
package.json:
{
  "name": "mockAPI",
  "version": "0.0.0",
  "dependencies": {
    "express": "~3.3.4"
  }
}
Then I run npm install in this directory.
index.js:
module.exports = require('./lib/server');
lib/server.js:
var express = require('express');
var app = express();

app.get('/my/endpoint', function(req, res){
  res.json({'foo': 'myMockJSON'});
});

module.exports = app;
and finally in my global Gruntfile.js:
connect: {
  options: {
    port: 9000,
    hostname: 'localhost',
  },
  livereload: {
    options: {
      middleware: function (connect, options) {
        return [
          lrSnippet,
          mountFolder(connect, '.tmp'),
          mountFolder(connect, yeomanConfig.app),
          require('./api')
        ];
      }
    }
  },
Then the services make the requests, and the express server serves the correct JSON.
After grunt build, the express server is simply replaced by a rails server.
As of grunt-contrib-connect v0.7.0 you can also just add your custom middleware to the existing middleware stack without having to rebuild it manually.
livereload: {
  options: {
    open: true,
    base: [
      '.tmp',
      '<%= config.app %>'
    ],
    middleware: function(connect, options, middlewares) {
      // inject a custom middleware into the array of default middlewares
      middlewares.push(function(req, res, next) {
        if (req.url !== '/my/endpoint') {
          return next();
        }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end('{"foo": "myMockJSON"}');
      });
      return middlewares;
    }
  }
},
See https://github.com/gruntjs/grunt-contrib-connect#middleware for the official documentation.
Alternatively you can use grunt-connect-proxy to proxy everything that is missing in your test server to an actual backend.
It's quite easy to install; the one thing to remember when adding the proxy to your livereload connect middleware is to add it last, like this:
middleware: function (connect) {
  return [
    lrSnippet,
    mountFolder(connect, '.tmp'),
    mountFolder(connect, yeomanConfig.app),
    proxySnippet
  ];
}
grunt-connect-prism is similar to the Ruby project VCR. It provides an easy way for front end developers to record HTTP responses returned by their API (or some other remote source) and replay them later. It's basically an HTTP cache, but for developers working on a Single Page Application (SPA). You can also generate stubs for API calls that don't exist, and populate them the way you want.
It's useful for mocking complex & high latency API calls during development. It's also useful when writing e2e tests for your SPA only, removing the server from the equation. This results in much faster execution of your e2e test suite.
Prism works by adding a custom connect middleware to the connect server provided by the grunt-contrib-connect plugin. While in 'record' mode it will generate a file per response on the filesystem with content like the following:
{
  "requestUrl": "/api/ponies",
  "contentType": "application/json",
  "statusCode": 200,
  "data": {
    "text": "my little ponies"
  }
}
DISCLAIMER: I'm the author of this project.
You can use an Apache proxy to connect your REST server with gruntjs.
Apache would do this:
proxy / -> gruntjs
proxy /service -> REST server
You would point your application at Apache, and the angular.js application would think it is talking to itself, so there is no cross-domain problem.
Here is a great tutorial on how to set this up:
http://alfrescoblog.com/2014/06/14/angular-js-activiti-webapp-with-activiti-rest/
Here is my alternative approach, based on Abraham P's answer. It does not require installing express within the 'api' folder, and it lets me split the mock services into separate files. For example, my 'api' folder contains 3 files:
api\
  index.js   // assigns all the "modules" so we can simply require the folder
  user.js    // all mocking for user
  product.js // all mocking for product
file user.js
var user = function(req, res, next) {
  if (req.method === 'POST' && req.url.indexOf('/user') === 0) {
    res.end(
      JSON.stringify({
        'id' : '5463c277-87c4-4f1d-8f95-7d895304de12',
        'role' : 'admin'
      })
    );
  }
  else {
    next();
  }
}
module.exports = user;
file product.js
var product = function(req, res, next) {
  if (req.method === 'POST' && req.url.indexOf('/product') === 0) {
    res.end(
      JSON.stringify({
        'id' : '5463c277-87c4-4f1d-8f95-7d895304de12',
        'name' : 'test',
        'category': 'test'
      })
    );
  }
  else {
    next();
  }
}
module.exports = product;
index.js just assigns all the "modules" and we simply require that.
module.exports = {
  product: require('./product.js'),
  user: require('./user.js')
};
My Gruntfile.js file
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: 'localhost',
    livereload: 35729
  },
  livereload: {
    options: {
      open: true,
      middleware: function (connect) {
        return [
          connect.static('.tmp'),
          connect().use(
            '/bower_components',
            connect.static('./bower_components')
          ),
          connect.static(appConfig.app),
          require('./api').user,
          require('./api').product,
        ];
      }
    }
  }
}