How to explicitly allow additional fields when using stripUnknown in Yup?

I have generic framework code that validates incoming requests using Yup with stripUnknown: true so that excess fields are removed. However, I have one place where I explicitly want to allow any JSON object as input.
How can I explicitly allow one object within a schema to have any fields while otherwise using stripUnknown: true?
Things I've considered but haven't figured out how to implement:
- Use yup.object().test(...) or similar to explicitly allow the object
- Use yup.addMethod to add a method to yup.object() which would short-circuit the stripping
- Use yup.lazy to generate a schema which allows anything (but the type should allow nested JSON, not only top-level fields)
- Add a new top-level type yup.anyObject() which would allow any object

Allowing (and keeping) any value is actually as simple as:
const schema = yup.object().shape({
  json: yup.mixed()
});
This allows any value, not just an object. If you want to validate that json is an object containing anything, you can use yup.lazy to map it into a schema having yup.mixed() for every key existing in the object:
const schema = yup.object().shape({
  json: yup.lazy((value) =>
    yup.object().shape(
      // Declare yup.mixed() for every key present in the value, so that
      // stripUnknown finds no unknown keys to remove. The || {} guards
      // against the value being missing entirely.
      Object.keys(value || {}).reduce(
        (map, key) => ({ ...map, [key]: yup.mixed() }),
        {}
      )
    )
  )
});
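To illustrate, a minimal sketch of how this behaves together with stripUnknown (the payload below is made up):

// Hypothetical payload: "extra" is unknown at the top level, "json" is free-form.
schema
  .validate(
    { json: { anything: 1, nested: { deep: true } }, extra: 'stripped' },
    { stripUnknown: true }
  )
  .then((result) => {
    // "extra" is stripped, but everything under "json" is kept, because the
    // lazy schema declared a yup.mixed() entry for each of its keys.
    console.log(result); // { json: { anything: 1, nested: { deep: true } } }
  });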

Related

AngularFire valueChanges with idField and non-existent document

If I call valueChanges on a Firestore document that doesn't exist, it returns undefined:
this.afs.doc('bad_document_ref').valueChanges().subscribe(snapshot => {
  console.log(snapshot) // undefined
});
But if I call valueChanges on the same bad ref while passing in the idField parameter, it returns an object with just the id:
this.afs.doc('bad_document_ref').valueChanges({ idField: 'custom_doc_id' }).subscribe(snapshot => {
  console.log(snapshot) // { custom_doc_id: 'bad_document_ref' }
});
I would like for the two above examples to return the same thing. I can do this by adding a pipe:
this.afs.doc('bad_document_ref').valueChanges({ idField: 'custom_doc_id' })
  .pipe(map(snapshot => {
    if (!snapshot) return undefined;
    // If the only key present is the injected idField, the document doesn't exist.
    if (Object.keys(snapshot).length === 1 && Object.keys(snapshot)[0] === 'custom_doc_id') {
      return undefined;
    }
    return snapshot;
  }))
  .subscribe(snapshot => {
    console.log(snapshot) // undefined
  });
Is there a reason why the first two examples don't return the same thing? It seems like the logical thing to do, for the sake of consistency. Maybe there is a reason I'm not thinking of for why they would return different values?
The valueChanges() method is basically the current state of your collection. You can listen for changes on the collection's documents by calling valueChanges() on the collection reference. It returns an Observable of data as a synchronized array of JSON objects. All snapshot metadata is stripped and just the document data is included.
When you pass an options object with an idField key containing a string, like .valueChanges({ idField: 'propertyId' }), it returns JSON objects with their document ID mapped to a property with the name provided by idField.
When the document doesn't actually exist, you would expect it to return nothing. In the first piece of code, you are not providing idField and the document doesn't exist, so the observable emits undefined, which is justified. However, when you specify idField in the second piece of code, you are basically saying that when the data is returned, you want the document's ID added to it. If there is no data, nothing should be returned, which is the point you raised and which is quite justified. In other words, if the document does not exist, it should ideally emit undefined even when the idField parameter is specified.
A GitHub issue pointing to the same problem says that the appropriate behavior is addressed in the version 7 API. There is another GitHub issue to follow on this as well.
I am using:
angularFirestore.collection<Item>('items');
Note that I type the mapped object with <Item>; you may be able to use the same approach with your doc() call.
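For example, a minimal sketch of a typed doc() call (the Item interface and document path here are hypothetical):

interface Item {
  name?: string;
  custom_doc_id?: string;
}

// Typing doc<Item>() means the observable emits Item | undefined,
// so the missing-document case also shows up in the types.
this.afs.doc<Item>('items/bad_document_ref')
  .valueChanges({ idField: 'custom_doc_id' })
  .subscribe(item => console.log(item));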

Resolving auto-generated typescript-mongodb types for GraphQL output

I'm using the typescript-mongodb plugin to graphql-codegen to generate Typescript types for pulling data from MongoDB and outputting it via GraphQL on Node.
My input GraphQL schema looks like this
type User @entity {
  id: ID @id,
  firstName: String @column @map(path: "first_name"),
  ...
The generated output Typescript types look correct
export type User = {
  __typename?: 'User',
  id?: Maybe<Scalars['ID']>,
  firstName?: Maybe<Scalars['String']>,
  ...
And the corresponding DB object
export type UserDbObject = {
  _id?: Maybe<String>,
  first_name: Maybe<string>,
  ...
The problem is when actually sending back the mongo document as a UserDbObject I do not get the fields mapped in the output. I could write a custom resolver that re-maps the fields back to the User type, but that would mean I'm mapping the fields in two different places.
i.e. I do not get mapped fields from a resolver like this
userById: async (_root: any, args: QueryUserByIdArgs, _context: any): Promise<UserDbObject> => {
  const result = await connectDb().then((db) => {
    return db.collection<UserDbObject>('users').findOne({ '_id': args.id }).then((doc) => {
      return doc;
    });
  });
  ...
  return result as UserDbObject;
}
};
Is there a way to use the typescript-mongodb plugin to only have to map these fields in the schema, then use the auto-generated code to resolve them?
You can use the mappers feature of codegen to map between your GraphQL types and your model types.
See:
https://graphql-code-generator.com/docs/plugins/typescript-resolvers#mappers---overwrite-parents-and-resolved-values
https://graphql-code-generator.com/docs/plugins/typescript-resolvers#mappers-object
Since all codegen plugins are independent and not linked together, you need to set this up manually, with something like:
config:
  mappers:
    User: UserDbObject
This makes the typescript-resolvers plugin use UserDbObject everywhere (as the parent value and as the return value).
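For instance, a minimal sketch of what the resolvers could look like with that mapping in place (QueryUserByIdArgs, UserDbObject, and connectDb are taken from the question; the User field resolvers are an assumption):

// With "User: UserDbObject" mapped, User field resolvers receive the raw
// DB object as their parent, so the column-to-field mapping lives here only.
const resolvers = {
  Query: {
    userById: async (_root: unknown, args: QueryUserByIdArgs): Promise<UserDbObject | null> => {
      const db = await connectDb();
      return db.collection<UserDbObject>('users').findOne({ _id: args.id });
    },
  },
  User: {
    id: (parent: UserDbObject) => parent._id,
    firstName: (parent: UserDbObject) => parent.first_name,
  },
};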
If you wish to automate this, you can either use the codegen programmatically (https://graphql-code-generator.com/docs/getting-started/programmatic-usage), or create a .js config file instead of a .yaml one that builds the config section according to your needs.

How to insert jsonb[] data into column using pg-promise

Given a table with a column of type jsonb[], how do I insert a json array into the column?
Using the provided formatters :array or :json doesn't work in this instance, unless I am missing the correct combination or something.
const links = [
  {
    title: 'IMDB',
    url: 'https://www.imdb.com/title/tt0076759'
  },
  {
    title: 'Rotten Tomatoes',
    url: 'https://www.rottentomatoes.com/m/star_wars'
  }
];
const result = await db.none(`INSERT INTO tests (links) VALUES ($1:json)`, [links]);
You do not need the library's :json filter in this case, because you need an array of JSON objects, not a single JSON value containing an array of objects.
The former is formatted correctly by default, which then only needs ::json[] type casting:
await db.none(`INSERT INTO tests(links) VALUES($1::json[])`, [links]);
Other Notes
- Use pg-monitor, or the library's query event, to log the queries being executed, for easier diagnostics.
- Method none can only resolve with null, so there is no point in storing the result in a variable.
- Library pg-promise does not have any :array filter; see the supported filters.
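As a sanity check, you can preview the generated SQL with pgp.as.format. A sketch (the exact JSON escaping in the output is abbreviated here):

const pgp = require('pg-promise')();

// Default formatting turns the array of objects into a Postgres array
// of JSON strings, which the ::json[] cast then converts.
const sql = pgp.as.format('INSERT INTO tests(links) VALUES($1::json[])', [links]);
console.log(sql);
// => INSERT INTO tests(links) VALUES(array['{"title":"IMDB",...}',...]::json[])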

get route only with specified parameter

I am new to MongoDB and CRUD APIs.
I have created my first database and inserted some data. I can do get, post and delete requests.
Now I want to request a 'get' by adding a parameter, so I do the following:
router.get('/:story_name', async function (req, res, next) {
  const selectedStory = await loadStoryToRead()
  res.send(await selectedStory.find({}).toArray())
})
Say that story_name is S1C1;
I can do http://localhost:3000/api/whatever/s1c1 to get the data.
I would have expected to retrieve the data ONLY by using the specified parameter; however, I can use the ID, the date, or any other field found in the JSON file to get the data.
for example I can do
http://localhost:3000/api/whatever/5d692b6b21d5fdac2... // the ID
or
http://localhost:3000/api/whatever/2019-08-30T13:58:03.035Z ... // the created_at date
and obtain the same result.
Why is that?
How can I make sure that if I use router.get('/:story_name' ... I can retrieve the data only if I use the 'story_name' parameter?
Thanks!
* UPDATE *
my loadStoryToRead() looks like this:
async function loadStoryToRead () {
  const client = await mongodb.MongoClient.connect(
    'mongodb+srv://...', {
      useNewUrlParser: true
    })
  return client.db('read').collection('selectedStory')
}
I will try to reformulate my question.
I want to ensure that the data is retrieved only by adding the 'story_name' parameter in the URL and not by adding any other parameter within the file.
The reading that I have done suggested to add the parameter to the get request, but when I do it, it doesn't matter what parameter I enter, I can still retrieve the data.
The delete request, however, is very specific. If I use router.delete('/:id'... the only way to delete the file is by using the ID parameter.
I would like to obtain the same with a get request and not using the 'id' but by using 'story_name'.
You can use regular expression capabilities for pattern-matching strings in queries.
The syntax is:
db.<collection>.find({<fieldName>: /<string>/})
For example, you can use:
var re = new RegExp(req.params.story_name, "g");
db.users.find({$or: [{"name": re}, {"_id": re} /* ...other fields */]});
You can combine $and and $or according to your requirements.
To read more, follow the link:
https://docs.mongodb.com/manual/reference/operator/query/regex/
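Applied to the route from the question, a minimal sketch that matches on story_name only (assuming your documents store the name under that field):

router.get('/:story_name', async function (req, res) {
  const selectedStory = await loadStoryToRead()
  // Filter on story_name only, so passing an ID or a date in the URL
  // simply returns an empty result.
  const stories = await selectedStory
    .find({ story_name: req.params.story_name })
    .toArray()
  res.send(stories)
})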

Given GraphQL schema, is it possible to do client-side pre-mutation validation?

I have a Relay app and it shares a GraphQL schema with the server. For every mutation, it queries the server, and the server returns back with the error message about what field value is invalid. But given that schema is present on the client, too, is it possible to do client-side validation against this schema?
A pragmatic solution could be to use Yup together with Formik and manually create the Yup schema object around your input type, which is shared on both the front end and the back end.
While you're not validating 1:1 against the schema produced by the Relay compiler, it is still a pragmatic way to validate on the client side.
JavaScript solution: Create a validation schema based on the custom input type, and pass the validationSchema to Formik:
import { object, boolean } from 'yup';

const Schema = object().shape({
  coolOrWhat: boolean()
});

return (
  <Formik
    initialValues={{
      coolOrWhat: true
    }}
    validationSchema={Schema}
    ...
  >
    {/* form inputs here */}
  </Formik>
)
TypeScript solution: create the object for validation, infer its type, and annotate the Formik component with it when instantiating:

import { object, string, InferType } from 'yup';

const Schema = object({
  foo: string()
});
export type SchemaType = InferType<typeof Schema>;

type Props = {
  onConfirm: (value: SchemaType) => void;
  onCancel: () => void;
};

<Formik<SchemaType>
  validationSchema={Schema}
  ...>
  ...
</Formik>