ArangoDB Joi Foxx mapping to Swagger is incorrect

I defined a bean called TestBean in joi syntax, then defined another bean, BeanMethodDocument, which uses the TestBean schema. The generated Swagger model ignores this argument, yet an array defined with TestBean works?
The following joi syntax seems to lose the TestBean definition: "arg: joi.object().schema(TestBean).required(),"
'use strict';
var createRouter = require('@arangodb/foxx/router');
var joi = require('joi');
var router = createRouter();
module.context.use(router);

const TestBean = joi.object().required().keys({
  member1: joi.array().items(joi.string().required()),
  member2: joi.number().required()
});

const BeanMethodDocument = joi.object().required().keys({
  arg: joi.object().schema(TestBean).required(),
  argArray: joi.array().items(TestBean).required(),
  option: joi.string().valid('Empty', 'Full', 'HalfFull', 'HalfEmpty')
});

router.post('/beanMethod', function (req, res) {
  const arg = req.body.arg;
  const argArray = req.body.argArray;
  const option = req.body.option;
  res.send({ result: true });
})
  .body(BeanMethodDocument, 'beanMethod POST request')
  .response(joi.boolean().required(), 'beanMethod POST response')
  .summary('beanMethod summary')
  .description('beanMethod description');
The generated Swagger document shows the arg argument as an empty object:
"arg": {
"type": "object",
"properties": {},
"additionalProperties": false
},

According to the joi documentation (https://github.com/hapijs/joi/blob/v15.1.0/API.md#objectschema), object.schema() does not apply TestBean as the schema for the value; it requires the validated object to itself be a joi schema instance. Instead of wrapping it in joi.object(), you can simply use TestBean directly:
const BeanMethodDocument = joi.object().required().keys({
  arg: TestBean.required(),
  argArray: joi.array().items(TestBean).required(),
  option: joi.string().valid('Empty', 'Full', 'HalfFull', 'HalfEmpty')
});
In my local tests this works and you end up with:
{
  "arg": {
    "member1": [
      "string"
    ],
    "member2": 0
  },
  "argArray": [
    {
      "member1": [
        "string"
      ],
      "member2": 0
    }
  ],
  "option": "Empty"
}

Mongo Bulkwrite with $addToSet

I have been trying a bulkWrite, but TypeScript complains about the typing (I think it's about the syntax):
Type '{ roles: string; }' is not assignable to type 'SetFields<any>'.
Type '{ roles: string; }' is not assignable to type 'NotAcceptedFields<any, readonly any[]>'.
Property 'roles' is incompatible with index signature.
Type 'string' is not assignable to type 'never'.ts(2345)
I can't find any examples or docs about using $addToSet in a bulkWrite. Here it is (INTER_ADMINS is just an array of strings):
const bulkUpdates = INTER_ADMINS.map((ethAddress) => {
  return {
    updateOne: {
      filter: { ethAddress },
      update: {
        $addToSet: {
          roles: 'INTERNAL_ADMIN',
        },
      },
      upsert: true,
    },
  };
});
const res = await db.collection('users').bulkWrite(bulkUpdates);
users collection sample:
{
  ethAddress: 'something',
  roles: ['role1', 'role2']
}
Appreciate your help
The syntax is correct; this is just a TypeScript error. I recommend you just add a @ts-ignore and move on.
Here is the type definition:
export type UpdateQuery<TSchema> = {
  ....
  $addToSet?: SetFields<TSchema> | undefined;
  ....
};

export type SetFields<TSchema> = ({
  readonly [key in KeysOfAType<TSchema, ReadonlyArray<any> | undefined>]?:
    | UpdateOptionalId<Unpacked<TSchema[key]>>
    | AddToSetOperators<Array<UpdateOptionalId<Unpacked<TSchema[key]>>>>;
} &
  NotAcceptedFields<TSchema, ReadonlyArray<any> | undefined>) & {
  readonly [key: string]: AddToSetOperators<any> | any;
};
As you can see, because the schema is not provided, TypeScript doesn't know which are the valid "keys" of the schema, so the only type left to match in SetFields is the NotAcceptedFields type (which accepts null and undefined, not string).
If you provide a schema to the operations, I believe it should sort the issue:
const bulkUpdates: BulkWriteOperation<UserSchema>[] = ...
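For example, a minimal sketch of the typed version (UserSchema is a hypothetical interface matching the users collection sample above; BulkWriteOperation comes from the mongodb v3 type definitions quoted here):

import { BulkWriteOperation } from 'mongodb';

// Hypothetical schema for the users collection sample
interface UserSchema {
  ethAddress: string;
  roles: string[];
}

declare const INTER_ADMINS: string[]; // as in the question: an array of strings

// With roles declared as string[], SetFields<UserSchema> accepts
// { roles: string }, so the ts(2345) error should go away.
const bulkUpdates: BulkWriteOperation<UserSchema>[] = INTER_ADMINS.map((ethAddress) => ({
  updateOne: {
    filter: { ethAddress },
    update: { $addToSet: { roles: 'INTERNAL_ADMIN' } },
    upsert: true,
  },
}));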

Mochawesome with Cypress - how to get aggregated charts at higher level?

I've just started using mochawesome with Cypress (9.7). Our test structure is basically a number of spec files, each following something like the following format:
describe('(A): description of this spec', () => {
  describe('(B): description of test abc', () => {
    before(() => {
      // do specific set up bits for this test
    })
    it('(C): runs test abc', () => {
      // do actual test stuff
    })
  })
})
Within each spec file there is a single 'A' describe block, but there can be many 'B' level blocks (each with a single 'C') - done this way because the before block for each 'C' is always different, so I couldn't use a beforeEach.
When I run my various spec files, each structured similarly to the above, the mochawesome output is mostly correct - I get a collapsible block for each spec file at level 'A', each with multiple collapsible blocks at level B, each with test info as expected at level C.
But... The circular charts are only displayed at level B. What I was hoping, was that it might be possible to have aggregated charts at level A, and a further aggregated chart for all the level A blocks.
Not sure I've explained this brilliantly(!), but hopefully someone understands, and can offer a suggestion?!
In cypress-mochawesome-reporter there's an alternative setup using on('after:run') which can perform the aggregation.
In Cypress v9.7.0
// cypress/plugins/index.js
const { beforeRunHook, afterRunHook } = require('cypress-mochawesome-reporter/lib');
const { aggregateResults } = require('./aggregate-mochawesome-report-chart');

module.exports = (on, config) => {
  on('before:run', async (details) => {
    await beforeRunHook(details);
  });
  on('after:run', async () => {
    aggregateResults(config);
    await afterRunHook();
  });
};
In Cypress v10+
// cypress.config.js
const { defineConfig } = require('cypress');
const { beforeRunHook, afterRunHook } = require('cypress-mochawesome-reporter/lib');
const { aggregateResults } = require('./aggregate-mochawesome-report-chart');

module.exports = defineConfig({
  reporter: 'cypress-mochawesome-reporter',
  video: false,
  retries: 1,
  reporterOptions: {
    reportDir: 'test-report',
    charts: true,
    reportPageTitle: 'custom-title',
    embeddedScreenshots: true,
    inlineAssets: false,
    saveAllAttempts: false,
    saveJson: true
  },
  e2e: {
    setupNodeEvents(on, config) {
      on('before:run', async (details) => {
        await beforeRunHook(details);
      });
      on('after:run', async () => {
        aggregateResults(config);
        await afterRunHook();
      });
    },
  },
});
The module to do the aggregation is
// aggregate-mochawesome-report-chart.js
const path = require('path');
const fs = require('fs-extra');

function aggregateResults(config) {
  const jsonPath = path.join(config.reporterOptions.reportDir, '.jsons', 'mochawesome.json');
  const report = fs.readJsonSync(jsonPath);
  const topSuite = report.results[0].suites[0];
  aggregate(topSuite);
  fs.writeJsonSync(jsonPath, report);
}

function aggregate(suite, level = 0) {
  const childSuites = suite.suites.map(child => aggregate(child, level + 1));
  suite.passes = suite.passes.concat(childSuites.map(child => child.passes)).flat();
  suite.failures = suite.failures.concat(childSuites.map(child => child.failures)).flat();
  suite.pending = suite.pending.concat(childSuites.map(child => child.pending)).flat();
  suite.skipped = suite.skipped.concat(childSuites.map(child => child.skipped)).flat();
  if (!suite.tests.length && suite.suites[0].tests.length) {
    // add a synthetic passing test to trigger the chart when the describe has no direct tests
    suite.tests = [
      {
        "title": "Aggregate of tests",
        "duration": 20,
        "pass": true,
        "context": null,
        "err": {},
        "uuid": "0",
        "parentUUID": suite.uuid,
      },
    ];
  }
  return suite;
}

module.exports = {
  aggregateResults
};
The function aggregate() recursively loops down through child suites and adds the test results to the parent.
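For reference, a rough sketch of the suite shape in the mochawesome JSON that this module relies on (the field list is inferred from the code above, so treat it as an assumption rather than a spec):

// Shape of one suite entry in mochawesome.json (sketch)
interface MochawesomeSuite {
  uuid: string;
  title: string;
  tests: object[];            // tests directly inside this describe block
  suites: MochawesomeSuite[]; // nested describe blocks
  passes: string[];           // UUIDs of passing tests
  failures: string[];         // UUIDs of failing tests
  pending: string[];
  skipped: string[];
}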
JSON files
Note that the JSON file is different at the point where afterRunHook runs and at the end of the test run.
If you have the option saveJson: true set, you will get a final JSON file in the report directory called index.json.
At the afterRunHook stage the file is mochawesome.json.
(Screenshots in the original post show the report before and after aggregation.)

How to get data from Amazon RDS using a Lambda function in Node.js?

I have a database in Aurora PostgreSQL and I’m using an API Gateway to invoke the Lambda functions made in Node.js for the APIs. Here is my code for a simple GET request with no URL parameters:
var pg = require("pg");

exports.handler = function(event, context) {
  var conn = "//Connection string";
  var client = new pg.Client(conn);
  client.connect();
  //var id = event.id;
  console.log('Connected to PostgreSQL database');
  var query = client.query("SELECT * from USERS;");
  query.on("row", function (row, result) {
    result.addRow(row);
  });
  query.on("end", function (result) {
    var jsonString = JSON.stringify(result.rows);
    var jsonObj = JSON.parse(jsonString);
    console.log(jsonString);
    client.end();
    context.succeed(jsonObj);
  });
};
I am able to get all the records from this table successfully. What changes must I make to the code and to the API Gateway itself to make a GET request with a parameter for a WHERE clause to select a specific user from their username, and a POST request to insert new users into the table?
For proxy Lambda integration, all the GET and POST parameters submitted to the API Gateway are available in the event object, so you have to get the values for the WHERE clause and the INSERT from the event.
The event structure is shown in:
Input format of a Lambda function for proxy integration
You will also need to return data from the Lambda in the proper format:
Output format of a Lambda function for proxy integration
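Putting both together, a minimal sketch of a proxy-integration handler (not the original code; DATABASE_URL and the users/username column names are illustrative assumptions):

import { Client } from 'pg';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Assumption: the connection string is stored in an environment variable.
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    if (event.httpMethod === 'GET') {
      // GET /users?username=alice -> event.queryStringParameters.username
      const username = event.queryStringParameters?.username;
      // A parameterized query keeps the WHERE clause safe from SQL injection.
      const result = await client.query('SELECT * FROM users WHERE username = $1', [username]);
      return { statusCode: 200, body: JSON.stringify(result.rows) };
    }
    // POST with a JSON body such as {"username": "alice"}
    const { username } = JSON.parse(event.body ?? '{}');
    await client.query('INSERT INTO users (username) VALUES ($1)', [username]);
    return { statusCode: 201, body: JSON.stringify({ created: username }) };
  } finally {
    await client.end();
  }
};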
You can code your Lambda function to connect to a RDS database. Now when you do this, there are some things to consider.
You need to configure your Lambda function to use the same VPC as the security group that RDS is using. This is discussed here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html.
Next from your Lambda function, you still have to set a connection to the database. How this is done is dependent on the programming language that you are using. For example, if you are using the Lambda Java runtime API, you can use a Java connection. See below.
Once you do this - you can connect to RDS and perform SQL statements. This makes it possible to write Lambda functions that can query data from RDS and then use that Lambda function within a larger cloud based workflow using AWS Step Functions.
Code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionHelper {
    private String url;
    private static ConnectionHelper instance;

    private ConnectionHelper() {
        url = "jdbc:mysql://formit.xxxxxxshym6k.us-west-2.rds.amazonaws.com:3306/mydb?useSSL=false";
    }

    public static Connection getConnection() throws SQLException {
        if (instance == null) {
            instance = new ConnectionHelper();
        }
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            return DriverManager.getConnection(instance.url, "root", "root1234");
        } catch (SQLException | ClassNotFoundException | InstantiationException | IllegalAccessException e) {
            e.printStackTrace();
        }
        return null;
    }

    public static void close(Connection connection) {
        try {
            if (connection != null) {
                connection.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Accessing table data from RDS using a Lambda function with an encrypted key (KMS) and environment variables
Step 1: Enable a key in KMS (Key Management Service).
Review your key policy and finish the KMS key creation:
{
  "Id": "key-consolepolicy-3",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::163806924483:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::163806924483:user/User1@gmail.com"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::163806924483:user/User1@gmail.com",
          "arn:aws:iam::163806924483:user/User2@gmail.com",
          "arn:aws:iam::163806924483:user/User3@gmail.com"
        ]
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow attachment of persistent resources",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::163806924483:user/User1.dilip@gmail.com",
          "arn:aws:iam::163806924483:user/User2@gmail.com",
          "arn:aws:iam::163806924483:user/User3@gmail.com"
        ]
      },
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    }
  ]
}
Step 2: Create a policy in IAM for KMS and assign it to each of your Lambda functions; the condition restricts decryption to the named functions:
"StringEquals": {
"kms:EncryptionContext:LambdaFunctionName": [
"LambdaFunction-1",
"LambdaFunction-2",
"LambdaFunction-3"
]
}
Step 3: Assign the policy created in Step 2 to your default Lambda role (the first Lambda function needs to be created to get the default Lambda role).
Step 4: Create the Lambda function.
Node.js code for the Lambda function:
const mysql = require('mysql');
const aws = require("aws-sdk");

const functionName = process.env.AWS_LAMBDA_FUNCTION_NAME;
let res;
let response = {};

exports.handler = async (event) => {
  reset_globals();
  // load env variables
  const rds_user = await kms_decrypt(process.env.RDS_USERNAME);
  const rds_pwd = await kms_decrypt(process.env.RDS_PASSWORD);
  // setup rds connection
  var db_connection = await mysql.createConnection({
    host: process.env.RDS_HOSTNAME,
    user: rds_user,
    password: rds_pwd,
    port: process.env.RDS_PORT,
    database: process.env.RDS_DATABASE
  });
  var sqlQuery = `SELECT doc_id from documents`;
  // getValues() stores the query results in the res/response globals
  await getValues(db_connection, sqlQuery);
}
async function getValues(db_connection, sql) {
  await new Promise((resolve, reject) => {
    db_connection.query(sql, function (err, result) {
      if (err) {
        response = {statusCode: 500, body: {message: "Database Connection Failed", error: err}};
        console.log(response);
        resolve();
      }
      else {
        console.log("Number of records retrieved: " + JSON.stringify(result));
        res = result;
        resolve();
      }
    });
  });
}
async function kms_decrypt(encrypted) {
  const kms = new aws.KMS();
  const req = {
    CiphertextBlob: Buffer.from(encrypted, 'base64'),
    EncryptionContext: { LambdaFunctionName: functionName }
  };
  const decrypted = await kms.decrypt(req).promise();
  let cred = decrypted.Plaintext.toString('ascii');
  return cred;
}

function reset_globals() {
  res = undefined;
  response = {};
}
Now you should see the KMS options in the Lambda console.
Step 5: Set the environment variables and encrypt them.
Lambda -> Functions -> Configuration -> Environment variables -> Edit
RDS_DATABASE docrds
RDS_HOSTNAME docrds-library.c1k3kcldebmp.us-east-1.rds.amazonaws.com
RDS_PASSWORD root123
RDS_PORT 3306
RDS_USERNAME admin
To decrypt the encrypted environment variables inside the Lambda function, use the async kms_decrypt() helper already shown in the Step 4 code above.
My RDS documents table has a doc_id column (shown in a screenshot in the original post), which I access using sqlQuery in the Lambda function:
var sqlQuery = `SELECT doc_id from documents`;
After testing the Lambda function, I get the doc_id records in the output.
If you get a module import error for mysql, you must add a layer:
errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'mysql'\nRequire stack:\n-
/var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
"Runtime.ImportModuleError: Error: Cannot find module 'mysql'",
You can configure your Lambda function to use additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.
To include libraries in a layer, place them in the directory structure that corresponds to your programming language:
Node.js – nodejs/node_modules
Python – python
Ruby – ruby/gems/2.5.0
Java – java/lib
First create a zip archive that contains the mysql module:
Create a folder with the same name as the Lambda function anywhere in your project, for example: mkdir lambda-function
In a terminal, inside that folder run: npm init
Then: npm install mysql
You should see a node_modules folder created.
Create another folder named nodejs inside the Lambda function folder: mkdir nodejs
Move the node_modules folder inside the nodejs folder.
Zip the nodejs folder [nodejs.zip] and upload it as a layer as shown below.
Then go to Lambda -> Layers -> Create layer.

How to validate for ObjectID

Using Joi schema validation, is it possible to validate against MongoDB ObjectID's?
Something like this could be great:
_id: Joi.ObjectId().required().error(errorParser),
const Joi = require('@hapi/joi')
Joi.objectId = require('joi-objectid')(Joi)

const schema = Joi.object({
  id: Joi.objectId(),
  name: Joi.string().max(100),
  date: Joi.date()
})
Check out https://www.npmjs.com/package/joi-objectid
I find that if I do
Joi.object({
  id: Joi.string().hex().length(24)
})
it works without installing any external library or using a RegEx.
The hex rule makes sure the string contains only hexadecimal characters, and the length rule makes sure that it is a string of exactly 24 characters.
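A quick usage sketch of this approach (the sample values are only illustrations):

import Joi from 'joi';

const schema = Joi.object({ id: Joi.string().hex().length(24) });

console.log(schema.validate({ id: '63edf556f383d108d54a68a0' }).error);  // undefined: valid
console.log(schema.validate({ id: 'not-an-object-id' }).error?.message); // fails: not hex, wrong length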
This package works if you are using the new version of Joi.
const Joi = require('joi-oid')

const schema = Joi.object({
  id: Joi.objectId(),
  name: Joi.string(),
  age: Joi.number().min(18),
})
package: https://www.npmjs.com/package/joi-oid
If you want a TypeScript version of the above library integrated with Express without installing anything:
import Joi from '@hapi/joi';
import { createValidator } from 'express-joi-validation';

export const JoiObjectId = (message = 'valid id') =>
  Joi.string().regex(/^[0-9a-fA-F]{24}$/, message);

const validator = createValidator({
  passError: true,
});

const params = Joi.object({
  id: JoiObjectId().required(),
});

router.get<{ id: string }>(
  '/:id',
  validator.params(params),
  (req, res, next) => {
    const { id } = req.params; // id has string type
    ....
  }
);
I share mine:
let id = Joi.string()
  .regex(/^[0-9a-fA-F]{24}$/)
  .message('must be an oid')
With the plain joi package you can use this:
const ObjectId = joi.object({
  id: joi.binary().length(12),
})
This uses a custom validator function, without any extra validation package. I have used the joi and mongodb packages to achieve it.
const Joi = require("joi");
const ObjectID = require("mongodb").ObjectID;
const schema = Joi.object({
id: Joi.string().custom((value, helpers) => {
const filtered = ObjectID.isValid(value)
return !filtered ? helpers.error("any.invalid") : value;
},
"invalid objectId", ).required(),
name: Joi.string(),
age: Joi.number().min(18),
})
I see many correct answers here, but I also want to express my opinion.
I am not against installing npm packages, but installing a whole package just to validate an object id is overkill. I know programmers are lazy, but c'mon :)
Here is my full code: a function that validates two object id properties, very simple:
function validateRental(rental) {
  const schema = Joi.object({
    movieId: Joi.string()
      .required()
      .regex(/^[0-9a-fA-F]{24}$/, 'object Id'),
    customerId: Joi.string()
      .required()
      .regex(/^[0-9a-fA-F]{24}$/, 'object Id'),
  })
  const { error } = schema.validate(rental)
  return {
    valid: error == null,
    message: error ? error.details[0].message : null,
  }
}
This way, if any of the properties contains an invalid id like this:
{
  "movieId": "123456",
  "customerId": "63edf556f383d108d54a68a0"
}
This will be the error message:
`"movieId" with value "123456" fails to match the object Id pattern`

graphql-compose-mongoose generating an error: Expected [object Object] to be a GraphQL schema

I'm new to graphql-compose.
I'm trying to launch a first service on a simple mongoose schema.
graphql.js:
import mongoose from 'mongoose'
import { composeWithMongoose } from 'graphql-compose-mongoose'
import { schemaComposer } from 'graphql-compose'

const db = require('../models/db')
//const mongoose = require('mongoose');
const folderDAO = mongoose.model('folder');
const customizationOptions = {}; // left it empty for simplicity, described below

const folderTC = composeWithMongoose(folderDAO, customizationOptions);

schemaComposer.rootQuery().addFields({
  folderOne: folderTC.getResolver('findOne'),
})

const graphqlSchema = schemaComposer.buildSchema()
console.log("Schema built: ", graphqlSchema)

export default graphqlSchema
Now in my server code, I have this:
const express = require('express');
const graphqlHTTP = require('express-graphql')
const GraphQLSchema = require('./app_api/routes/graphql')

app.use('/graphql', graphqlHTTP({
  schema: GraphQLSchema,
  graphiql: true,
  formatError: error => ({
    message: error.message,
    locations: error.locations,
    stack: error.stack ? error.stack.split('\n') : [],
    path: error.path
  })
}));
On graphiql, when I attempt the following query:
{
  folderOne(filter: {}, sort: _ID_ASC) {
    name
  }
}
I get the following error:
{
  "errors": [
    {
      "message": "Expected [object Object] to be a GraphQL schema.",
      "stack": [
        "Error: Expected [object Object] to be a GraphQL schema.",
        " at invariant (/Users/zied/work/share_place/node_modules/graphql/jsutils/invariant.js:19:11)",
        " at validateSchema (/Users/zied/work/share_place/node_modules/graphql/type/validate.js:55:60)",
        " at assertValidSchema (/Users/zied/work/share_place/node_modules/graphql/type/validate.js:80:16)",
        " at validate (/Users/zied/work/share_place/node_modules/graphql/validation/validate.js:58:35)",
        " at /Users/zied/work/share_place/node_modules/express-graphql/dist/index.js:139:52",
        " at <anonymous>",
        " at process._tickDomainCallback (internal/process/next_tick.js:228:7)"
      ]
    }
  ]
}
What could I be missing???
p.s: sorry I attempted to tag the question with graphql-compose-mongoose but the tag doesn't exist, so I tagged it with graphql-js
Actually the issue was here:
const GraphQLSchema = require('./app_api/routes/graphql')
has to be replaced with
const GraphQLSchema = require('./app_api/routes/graphql').default
since we exported it as default
More info can be found here: https://github.com/graphql-compose/graphql-compose-mongoose/issues/103
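For context, a short sketch of why .default is needed (illustrative only): when ES module syntax is compiled to CommonJS, a default export becomes a property named default on module.exports, so a bare require() returns the module wrapper object rather than the schema itself.

// graphql.js uses ES module syntax:
//   export default graphqlSchema
// which compiles to roughly: exports.default = graphqlSchema

// server code uses CommonJS:
const wrapper = require('./app_api/routes/graphql'); // { default: GraphQLSchema }
const GraphQLSchema = wrapper.default;               // the actual schema object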