I have a project on Sails.js 1.
I'm trying to use TDD during development, so I've added route testing. To prevent real API calls in helpers, I've mocked them via sinon.
I can successfully pass the tests with this code.
Action2 controller code
module.exports = {
friendlyName: 'Some action',
description: '',
inputs: {
requiredParam: {
example: 'some_value',
required: true,
type: 'string',
}
},
exits: {
someError: {
description: 'Handles some error happens in controller',
message: 'Some error handled in controller',
}
},
fn: async function (inputs, exits) {
const someProcesses = await sails.helpers.someHelper(inputs.requiredParam);
if (someProcesses === 'someError') {
exits.someError(someProcesses);
} else {
exits.success(someProcesses);
}
}
};
Helper code
module.exports = {
friendlyName: 'Some helper',
description: '',
inputs: {
requiredParam: {
example: 'some_value',
required: true,
type: 'string',
}
},
exits: {
someError: {
description: 'Just some error happens in workflow',
message: 'Some error happens!',
}
},
fn: async function (inputs, exits) {
// All done.
return inputs.requiredParam === 'isError' ? exits.success('someError') : exits.success('someProcessedValue');
}
};
Test code
const request = require('supertest');
const {createSandbox} = require('sinon');
describe('GET /some-page/some-action', () => {
before(() => {
this.sandbox = createSandbox();
this.sandbox.stub(sails.helpers, 'someHelper')
.withArgs('some_value').returns('someProcessedValue')
.withArgs('isError').returns('someError');
});
after(() => {
this.sandbox.restore();
});
it('should response OK', async () => {
await request(sails.hooks.http.app)
.get('/some-page/some-action')
.query({requiredParam: 'some_value'})
.expect(200);
});
it('should response with error', async () => {
await request(sails.hooks.http.app)
.get('/some-page/some-action')
.query({requiredParam: 'isError'})
.expect(500);
});
});
Then I read on the official helpers documentation page that we should use exits and .intercept() to correctly handle errors from helpers in Sails.js 1.
So I've rewritten my action2 and helper to use them and ended up with the following.
Action2 controller with action2 intercept handling
module.exports = {
friendlyName: 'Some action',
description: '',
inputs: {
requiredParam: {
example: 'some_value',
required: true,
type: 'string',
}
},
exits: {
someError: {
description: 'Handles some error happens in controller',
message: 'Some error handled in controller',
}
},
fn: async function (inputs, exits) {
const someProcesses = await sails.helpers.someHelper(inputs.requiredParam)
.intercept('someError', exits.someError);// Error happens here
return exits.success(someProcesses);
}
};
Helper with exit triggering
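A sketch of the rewritten helper's fn (assuming the error path now triggers the helper's custom someError exit instead of returning a sentinel string); the inputs and exits stay the same as in the helper above:
fn: async function (inputs, exits) {
  if (inputs.requiredParam === 'isError') {
    // Triggers the custom `someError` exit, which the action's
    // .intercept('someError', ...) is meant to catch.
    return exits.someError('Some error happens!');
  }
  return exits.success('someProcessedValue');
}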
I noticed that it is problematic to test action2 controllers while preventing real helper calls: when we stub the helper methods with sinon, the next chained call throws an error, but without those chained calls we can't use the recommended error handling.
As long as I don't use the chained machine methods (tolerate, intercept, etc.), all tests pass successfully, but as soon as I add them, I get an error.
Error
error: Sending 500 ("Server Error") response:
TypeError: sails.helpers.someHelper(...).intercept is not a function
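The cause is visible if you look at what the stubbed call returns (a hypothetical snippet, not from the repo): sinon's .returns() replaces the helper's chainable Deferred with a plain value, so there is nothing to call .intercept() on.
// With the stub in place, the "helper call" evaluates to a bare string,
// so the chained .intercept() lookup fails.
const stubbedResult = sails.helpers.someHelper('some_value'); // 'someProcessedValue'
typeof stubbedResult.intercept; // 'undefined' -> "intercept is not a function"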
I've prepared a repo on GitHub in case someone wants to try out an idea.
Repo tags:
correct-test-result-without-chained-helper-methods - passing tests without the chained helper methods
broken-tests-with-chained-methods - broken tests with the chained methods
Here is a related question, but it is for Sails.js 0.12.14 and below.
Any idea how to avoid this problem? I'd appreciate any answer.
I am trying to use Pulumi to create an AWS Lambda that manipulates a DynamoDB table and is triggered by an API Gateway HTTP request.
My configuration works perfectly when I run pulumi up, but when I run Vitest, my test passes, yet the process exits with a non-zero code and this message:
⎯⎯⎯ Unhandled Rejection ⎯⎯⎯
Error: Could not find property info for real property on object: sdk
I can see that the error comes from this code in Pulumi, but I can't figure out what causes it. Am I doing something wrong or is this a bug (in which case I can create an issue)?
Below is a summary that I think has all the relevant info, but there is a minimal repo demonstrating the problem here (GitHub actions fail with the problem I'm describing).
I have an index.ts file that creates a database, gateway, and lambda:
import * as aws from '@pulumi/aws'
import * as apigateway from '@pulumi/aws-apigateway'
import handler from './handler'
const table = new aws.dynamodb.Table('Table', {...})
const tableAccessPolicy = new aws.iam.Policy('DbAccessPolicy', {
// removed for brevity. Allows put, get, delete
})
const lambdaRole = new aws.iam.Role('lambdaRole', {...})
new aws.iam.RolePolicyAttachment('RolePolicyAttachment', {...})
const callbackFunction = new aws.lambda.CallbackFunction(
'callbackFunction',
{
role: lambdaRole,
callback: handler(table.name),
}
)
const api = new apigateway.RestAPI('api', {
routes: [{
method: 'GET',
path: '/',
eventHandler: callbackFunction,
}]
})
export const dbTable = table
export const url = api.url
The handler is imported from a separate file:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';
export default function (tableName: pulumi.Output<string>) {
return async function handleDocument(
event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> {
try {
const client = new aws.sdk.DynamoDB.DocumentClient();
await client
.put({
TableName: tableName.get(),
Item: { PK: 'hello', roomId: '12345' },
})
.promise();
const result = await client
.get({
TableName: tableName.get(),
Key: { PK: 'hello' },
})
.promise();
await client
.delete({
TableName: tableName.get(),
Key: { PK: 'hello' },
})
.promise();
return {
statusCode: 200,
body: JSON.stringify({
item: result.Item,
}),
};
} catch (err) {
return {
statusCode: 200,
body: JSON.stringify({
error: err,
}),
};
}
};
}
Finally, I have a simple test:
import * as pulumi from '@pulumi/pulumi';
import { describe, it, expect, beforeAll } from 'vitest';
pulumi.runtime.setMocks(
{
newResource: function (args: pulumi.runtime.MockResourceArgs): {
id: string;
state: Record<string, any>;
} {
return {
id: `${args.name}_id`,
state: args.inputs,
};
},
call: function (args: pulumi.runtime.MockCallArgs) {
return args.inputs;
},
},
'project',
'stack',
false
);
describe('infrastructure', () => {
let infra: typeof import('./index');
beforeAll(async function () {
// It's important to import the program _after_ the mocks are defined.
infra = await import('./index');
});
it('Creates a DynamoDB table', async () => {
const tableId = await new Promise((resolve) => {
infra?.dbTable?.id.apply((id) => resolve(id));
});
expect(tableId).toBe('Table_id');
});
});
Your function is importing the Pulumi SDK, and you're trying to pass the table name in as a pulumi.Output<string>.
Using the Pulumi SDK inside a lambda function isn't recommended or supported.
I would recommend removing the Pulumi dependency from your function:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import * as AWS from 'aws-sdk';
export default function (tableName: string) {
  return async function handleDocument(
    event: APIGatewayProxyEvent
  ): Promise<APIGatewayProxyResult> {
    try {
      // Use the AWS SDK directly instead of Pulumi's aws.sdk re-export,
      // and the plain tableName string instead of Output.get().
      const client = new AWS.DynamoDB.DocumentClient();
      await client
        .put({
          TableName: tableName,
          Item: { PK: 'hello', roomId: '12345' },
        })
        .promise();
      const result = await client
        .get({
          TableName: tableName,
          Key: { PK: 'hello' },
        })
        .promise();
      await client
        .delete({
          TableName: tableName,
          Key: { PK: 'hello' },
        })
        .promise();
      return {
        statusCode: 200,
        body: JSON.stringify({
          item: result.Item,
        }),
      };
    } catch (err) {
      return {
        statusCode: 200,
        body: JSON.stringify({
          error: err,
        }),
      };
    }
  };
}
The callback function should take non-'inputty' types (plain values rather than Pulumi Inputs/Outputs), which should then remove the need to call the Pulumi SDK during your test suite. You can see an example here:
https://github.com/pulumi/examples/blob/258d3bad0a00020704743e37911c51be63c06bb4/aws-ts-lambda-efs/index.ts#L32-L40
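As an illustration only, a minimal sketch (assuming table, lambdaRole, and handler are the ones defined in index.ts above, and that handler() is changed to read process.env.TABLE_NAME at runtime rather than taking a table-name argument): pass the table name through a Lambda environment variable so neither index.ts nor the handler has to thread a pulumi.Output through the callback.
import * as aws from '@pulumi/aws';

// Sketch: the table name reaches the function as a plain string env var.
const callbackFunction = new aws.lambda.CallbackFunction('callbackFunction', {
  role: lambdaRole,
  environment: { variables: { TABLE_NAME: table.name } },
  callback: handler(),
});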
This is my interceptor:
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response?.status === 403) {
      unstable_batchedUpdates(() => {
        // to force react state changes outside of React components
        useSnackBarStore.getState().show({
          message: `${i18n.t('forbidden')}: ${error.toJSON().config.url}`,
          severity: 'error',
        })
      })
    }
    return Promise.reject(error)
  }
)
I want this behavior all the time, except when I make this specific call, or at least for every HEAD call:
export const companiesQueries = {
headCompany: {
name: 'headCompany',
fn: async (companyId) => {
return await axios.head(`/companies/${companyId}`)
},
},
Fixed by applying these changes to the API call:
const uninterceptedAxiosInstance = axios.create()
headCompany: {
name: 'headCompany',
fn: async (companyId) => {
return await uninterceptedAxiosInstance.head(`/companies/${companyId}`)
},
}
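An alternative sketch (not the fix applied above; it assumes axios attaches the failed request's config to the error as error.config): keep a single axios instance and skip the snackbar for HEAD requests inside the interceptor itself.
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    // Skip the snackbar for HEAD requests (e.g. the headCompany existence check).
    const isHead = error.config?.method?.toLowerCase() === 'head'
    if (error.response?.status === 403 && !isHead) {
      unstable_batchedUpdates(() => {
        useSnackBarStore.getState().show({
          message: `${i18n.t('forbidden')}: ${error.toJSON().config.url}`,
          severity: 'error',
        })
      })
    }
    return Promise.reject(error)
  }
)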
I have a modalform component made using Alpine.js and axios for a POST request.
But I cannot understand a few things:
How to reset the form data after a successful POST request.
I see an error in the console: TypeError: this.resetFields is not a function.
How to get the errors and show them to the user if the POST request fails due to validation errors with a 422 status code.
I want to bind the error message to the Alpine.js variable errors and then show it on the webpage using <p x-text="errors" class="text-red-600"></p>, but this.errors = error.message; doesn't seem to work: in the Alpine.js devtools in Chrome, the errors variable doesn't change.
function modalform() {
return {
mailTooltip: false,
instagramTooltip: false,
openModal: false,
formData: {
name: '',
phone: '',
email: '',
address: '',
message: '',
_token: '{{ csrf_token() }}'
},
message: '',
errors: '',
loading: false,
sent: false,
buttonLabel: 'Send',
resetFields() {
this.formData.name = '',
this.formData.phone = '',
this.formData.email = '',
this.formData.address = '',
this.formData.message = ''
},
submitData() {
this.buttonLabel = 'Sending...';
this.loading = true;
this.message = '';
axios.post('/modalform', this.formData)
.then(function (response) {
console.log(response);
this.resetFields();
this.message = response.data.name;
})
.catch(function (error) {
console.log(error);
this.errors = error.message;
});
},
}
}
You have a scoping issue. If you use the old function (response) {...} style, then this inside the callback does not refer to your component (it is whatever the callback happens to be invoked with, not the Alpine.js data object). However, if you replace it with an arrow function, then this is taken from the enclosing non-arrow function, in this case the Alpine.js component:
axios.post('/modalform', this.formData)
.then((response) => {
console.log(response);
this.resetFields();
this.message = response.data.name;
})
.catch((error) => {
console.log(error);
this.errors = error.message;
});
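For the 422 case, a minimal sketch (assuming the server returns the validation message in the response body, e.g. a Laravel-style { message, errors } payload):
axios.post('/modalform', this.formData)
    .then((response) => {
        this.resetFields();
        this.message = response.data.name;
    })
    .catch((error) => {
        // `this` is still the Alpine.js component thanks to the arrow functions.
        if (error.response && error.response.status === 422) {
            // Assumed response shape: { message: '...', errors: { field: [...] } }
            this.errors = error.response.data.message;
        } else {
            this.errors = error.message;
        }
    });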
I am using Protractor and Jasmine to test my hybrid mobile app, which works fine. I'd like to create an incident on my Team Foundation Server (TFS) when a test fails. Therefore, I have to send a REST call to its API, which also works fine from my Angular app, but it does not work inside my test environment.
My Code:
var BrowsePage = require('./browse.page');
var tfsIncident = require('./tfsIncident_service');
var request = require('request');
describe('Testing the browse state', function () {
var browsePage = new BrowsePage();
var specsArray = [];
var reporterCurrentSpec = {
specDone: function (result) {
if (result.status === 'failed') {
var mappedResult = tfsIncident.create(result);
console.log(mappedResult); //This works so far, but then it crashes
var options = {
method: 'PATCH', // This method is required by the API
url: 'MY_COOL_API_ENDPOINT',
headers: {
'Authorization': 'Basic ' + btoa('USERNAME' + ':' + 'PASSWORD'),
'Content-Type': 'application/json-patch+json'
},
body: mappedResult
};
function callback(error, response, body) {
if (!error && response.statusCode == 200) {
var info = JSON.parse(body);
console.log(response);
console.log(info);
}
}
request(options, callback);
}
}
};
jasmine.getEnv().addReporter(reporterCurrentSpec);
//This test passes
it('should be able to take display the heading', function () {
expect(browsePage.headline.isPresent()).toBe(true);
});
// Test is supposed to fail
it('should be able to fail', function () {
expect(browsePage.headline).toBe(1);
});
// Test is supposed to fail as well
it('should be able to fail too', function () {
expect(browsePage.headline).toBe(2);
});
});
So the problem is that my only console output (after the console.log(mappedResult)) is: E/launcher - BUG: launcher exited with 1 tasks remaining
So I have no idea why this does not work.
Any help appreciated.
Edit
Protractor: 5.0.0
Appium Desktop Client: 1.4.16.1
Chromedriver: 2.27
Windows 10 64 Bit
Jasmine: 2.4.1
I finally got my problem solved. The problem was caused by Jasmine ignoring the promises. I had to add a controlFlow().wait() call on protractor.promise.
The following code works fine:
var BrowsePage = require('./browse.page');
describe('Testing the browse state', function () {
var browsePage = new BrowsePage();
var reporterCurrentSpec = {
specDone: function (result) {
if (result.status === 'failed') {
//Mapping of the result
var incident = [
{
op: 'add',
path: '/fields/System.Title',
value: 'Test: ' + result.fullName + ' failed'
},
{
op: 'add',
path: '/fields/System.Description',
value: result.failedExpectations[0].message
},
{
op: 'add',
path: '/fields/Microsoft.VSTS.Common.Priority',
value: '1'
},
{
op: 'add',
path: '/fields/System.AssignedTo',
value: 'Name Lastname <e#mail.com>'
}
];
protractor.promise.controlFlow().wait(create(incident)).then(function (done) { //The magic happens in this line
console.log("test done from specDone:" + done);
});
}
}
};
jasmine.getEnv().addReporter(reporterCurrentSpec); //Add new Jasmine-Reporter
function create(incident) {
var request = require('request');
var defer = protractor.promise.defer(); //new promise
request({
url: 'https://MY_COOL_ENDPOINT.COM',
method: "PATCH",
json: true, // <--Very important!!!
headers: {
'Authorization': 'Basic ' + new Buffer('USERNAME' + ':' + 'PASSWORD').toString('base64'),
'Content-Type': 'application/json-patch+json'
},
body: incident
}, function (error, response, body) {
console.log(error);
console.log(response.statusCode);
console.log(body.id); //Id of created incident on TFS
defer.fulfill({
statusCode: response.statusCode
}); //resolve the promise
});
return defer.promise; //return promise here
}
it('should be able to display the heading', function () {
expect(browsePage.headline.isPresent()).toBe(true);
});
it('should be able to fail', function () {
expect(browsePage.headline.isPresent()).toBe(false);
});
it('should be able to fail 2', function () {
expect(browsePage.headline.isPresent()).toBe(false);
});
});
Attention
When the test suite is done and the last promise has not resolved by that moment, the last incident is not created. I'll try to work around this by adding a browser.sleep(5000); to the last test, so that the create(incident) function gets more time to finish.
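A sketch of that kind of workaround, here as an afterAll variant (an assumption; it relies on jasminewd draining the WebDriver control flow before the runner exits):
afterAll(function () {
  // Crude: give the last create(incident) request a few seconds to complete
  // before Protractor tears everything down.
  browser.sleep(5000);
});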
Thanks to this StackOverflow answer for helping me.
Let's say I have the following action:
export function signIn(data) {
return {
type: USER_SIGN_IN,
promise: api.post('/sign_in', data)
}
}
and the following middleware:
export default function promiseMiddleware() {
return next => action => {
const { promise, type, ...rest } = action
if (!promise) {
return next(action)
}
const SUCCESS = `${type}_SUCCESS`
const REQUEST = `${type}_REQUEST`
const ERROR = `${type}_ERROR`
next({ type: REQUEST, ...rest })
return promise
.then(res => {
next({ response: res.data, type: SUCCESS, ...rest })
return true
})
.catch(error => {
...
})
}
}
This code is loosely based on https://github.com/reactGo/reactGo/
But what if, in the then callback after calling next, I want to redirect to another path?
I did the following. I passed the redirect URL through the action:
export function signIn(data) {
return {
type: USER_SIGN_IN,
promise: api.post('/sign_in', data),
redirect: '/'
}
}
and added another call to next with push from react-router-redux:
import { push } from 'react-router-redux'
export default function promiseMiddleware() {
return next => action => {
const { promise, type, redirect, ...rest } = action
...
return promise
.then(res => {
next({ response: res.data, type: SUCCESS, ...rest })
next(push(redirect))
return true
})
.catch(error => {
...
})
}
}
It seems to work, but I'm not sure whether this is a good idea, or whether there are pitfalls with multiple next calls that mean I shouldn't do it like this.
Maybe there are some better approaches for implementing such redirects?
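For comparison, a sketch of one alternative (it assumes the middleware is registered as promiseMiddleware() in applyMiddleware, so the returned function receives the store's { dispatch }): dispatch the redirect instead of passing it to next, so the push action travels through the entire middleware chain rather than only the middleware registered after this one.
import { push } from 'react-router-redux'

export default function promiseMiddleware() {
  return ({ dispatch }) => next => action => {
    const { promise, type, redirect, ...rest } = action
    if (!promise) {
      return next(action)
    }
    const SUCCESS = `${type}_SUCCESS`
    const REQUEST = `${type}_REQUEST`
    const ERROR = `${type}_ERROR`
    next({ type: REQUEST, ...rest })
    return promise
      .then(res => {
        next({ response: res.data, type: SUCCESS, ...rest })
        if (redirect) dispatch(push(redirect)) // re-enters the full middleware chain
        return true
      })
      .catch(error => {
        next({ type: ERROR, error, ...rest })
      })
  }
}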