How to set fetchPolicy globally on apollo-client queries? - graphql-js

I have a few mutations that should trigger some refetchQueries, but I need those queries to have a fetchPolicy other than the default.
Is there a way to set fetchPolicy globally instead of per query, so that I can avoid setting fetchPolicy on each individual query?

It is now possible!
const defaultOptions = {
  watchQuery: {
    fetchPolicy: 'cache-and-network',
    errorPolicy: 'ignore',
  },
  query: {
    fetchPolicy: 'network-only',
    errorPolicy: 'all',
  },
  mutate: {
    errorPolicy: 'all',
  },
};

const client = new ApolloClient({
  link,
  cache,
  defaultOptions,
});
See the Apollo Client documentation on default options.
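These defaults apply to operations fired through the client, and per-call options still take precedence, so you can override the global policy where needed. A minimal sketch, assuming a placeholder GET_ITEMS query document:

// GET_ITEMS is hypothetical; the per-call fetchPolicy overrides
// the 'network-only' default configured above
client.query({
  query: GET_ITEMS,
  fetchPolicy: 'cache-first',
});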

Related

Does migrating from MongoDB to SQL break basic Strapi queries?

const PostPopulateObj = {
  path: 'posts',
  model: 'Post',
  select: 'id order title alias',
  // only populate published posts
  match: { published_at: { $ne: null } },
};

const GroupPopulateObj = {
  path: 'groups',
  model: 'Group',
  select: 'id order label posts groups',
};

module.exports = {
  async getNavigationByAlias(ctx) {
    const { alias } = ctx.params;
    const nav = await strapi.query('navigation').find({ alias }, [
      {
        ...GroupPopulateObj,
        populate: [PostPopulateObj, {
          ...GroupPopulateObj,
          populate: PostPopulateObj,
        }],
      },
      PostPopulateObj,
    ]);
    return nav.length > 0 ? nav : null;
  },
};
I have this, and using PostgreSQL instead of MongoDB breaks the above query. But my understanding is that the migration shouldn't break basic queries, only custom ones, as described in the documentation.
https://docs-v3.strapi.io/developer-docs/latest/development/backend-customization.html#queries
https://github.com/strapi/migration-scripts/tree/main/v3-mongodb-v3-sql
I used the scripts and was able to repopulate the db, but like I said I am getting different results: I now get some generic default post (a converted null post?) instead of the 2 specific posts. The post returned by Postgres doesn't seem to exist in the db at all; I'm not sure what's going on, but for some reason no error is returned.
A little below that section, the docs mention custom queries and how to use Bookshelf and Mongoose. As I understand it, Mongoose was only used for custom queries, but the code above doesn't use Bookshelf or Mongoose directly at all, so it should work.
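One possible explanation for the breakage: the populate objects above (path/model/select/match) are Mongoose-specific, and the Bookshelf connector used for PostgreSQL expects relation names instead. A hedged sketch of what the SQL-side call might look like (the relation names are assumptions based on the models above, and support for nested population varies):

// Hypothetical Bookshelf-style populate: plain relation names instead of
// Mongoose populate objects; the published_at match filter would have to
// move into the query params or be applied after fetching
const nav = await strapi.query('navigation').find({ alias }, [
  'posts',
  'groups',
  'groups.posts',
]);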

How do I perform a count on a relation with a where clause in prisma?

I have the following query, which gives all posts and a count of all comments. Now I'd like to get a count of only those comments that have the approved field set to true, but I can't seem to figure this out.
prisma.post.findMany({
  include: {
    _count: { select: { Comment: true } },
  },
});
Thanks for any help.
You would need to use a raw query to achieve this, as filtering on _count for relations is not supported yet.
Here's the feature request tracking it: Ability to filter count in "Count Relation Feature"
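In the meantime, a hedged sketch of the raw-query workaround (the "Post"/"Comment" table and column names are assumptions based on a default Prisma naming setup):

// Count only approved comments per post with $queryRaw
const postsWithApprovedCount = await prisma.$queryRaw`
  SELECT p.id, COUNT(c.id) AS "approvedComments"
  FROM "Post" p
  LEFT JOIN "Comment" c ON c."postId" = p.id AND c."approved" = true
  GROUP BY p.id
`;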
[Edit: 14-Nov-2022]
Prisma has added support for filteredRelationCount since version 4.3.0.
Enable it in your schema file:
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["filteredRelationCount"] // add this line
}
and then query:
await prisma.post.findMany({
  select: {
    _count: {
      select: {
        comment: { where: { approved: true } },
      },
    },
  },
})
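For reference, each element of the result then carries only the filtered count, roughly in this shape (values are illustrative):

// Approximate result: only comments with approved: true are counted
[
  { _count: { comment: 3 } },
  { _count: { comment: 0 } },
]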

MongoDB: is it possible to capture TTL events with Change Stream to emulate a scheduler (cronjob)?

I'm new to MongoDB and I'm looking for a way to do the following:
I have a collection holding a number of available "things" to be used.
A user can "save" a thing, which decrements the number of available things, but they only have a limited time to use it before it expires.
If it expires, the thing has to go back to the collection, incrementing the count again.
It would be ideal if there were a way to monitor expiration dates in Mongo, but in my searches I've only found TTL (time to live) indexes for automatically deleting entire documents.
However, what I need is the "event" of the expiration. Then I was wondering if it would be possible to capture this event with Change Streams, so I could use the event to increment the "things" again.
Is it possible or not? Or would there be a better way of doing what I want?
I was able to use Change Streams and TTL to emulate a cronjob. I've published a post explaining what I did in detail, with credits, at:
https://www.patreon.com/posts/17697287
But, basically, anytime I need to schedule an "event" for a document, when I'm creating the document I also create an event document in parallel. This event document will have as its _id the same id as the first document.
Also, for this event document I will set a TTL.
When the TTL expires I will capture its "delete" change with Change Streams. And then I'll use the documentKey of the change (since it's the same id as the document I want to trigger) to find the target document in the first collection, and do anything I want with the document.
I'm using Node.js with Express and Mongoose to access MongoDB.
Here is the relevant part to be added to App.js:
const mongoose = require('mongoose');
const { ReplSet } = require('mongodb-topology-manager');

run().catch(error => console.error(error));

async function run() {
  console.log(new Date(), 'start');
  const bind_ip = 'localhost';

  // Starts a 3-node replica set on ports 31000, 31001, 31002;
  // the replica set name is "rs0".
  const replSet = new ReplSet('mongod', [
    { options: { port: 31000, dbpath: `${__dirname}/data/db/31000`, bind_ip } },
    { options: { port: 31001, dbpath: `${__dirname}/data/db/31001`, bind_ip } },
    { options: { port: 31002, dbpath: `${__dirname}/data/db/31002`, bind_ip } }
  ], { replSet: 'rs0' });

  // Initialize the replica set
  await replSet.purge();
  await replSet.start();
  console.log(new Date(), 'Replica set started...');

  // Connect to the replica set
  const uri = 'mongodb://localhost:31000,localhost:31001,localhost:31002/test?replicaSet=rs0';
  await mongoose.connect(uri);
  const db = mongoose.connection;
  db.on('error', console.error.bind(console, 'connection error:'));
  db.once('open', function () {
    console.log('Connected correctly to server');
  });

  // To work around "MongoError: cannot open $changeStream for non-existent
  // database: test" for this example
  await mongoose.connection.createCollection('test');

  // *** we will add our scheduler here *** //
  const Item = require('./models/item');
  const ItemExpiredEvent = require('./models/scheduledWithin');

  const deleteOps = {
    $match: {
      operationType: 'delete'
    }
  };

  ItemExpiredEvent.watch([deleteOps]).on('change', data => {
    // *** treat the event here *** //
    console.log(new Date(), data.documentKey);
    // documentKey is { _id: ... }, so look the item up by its _id
    Item.findById(data.documentKey._id, function (err, item) {
      console.log(item);
    });
  });

  // The TTL set in ItemExpiredEvent will trigger the change stream handler above
  console.log(new Date(), 'Inserting item');
  Item.create({ foo: 'foo', bar: 'bar' }, function (err, item) {
    ItemExpiredEvent.create({ _id: item._id }, function (err, event) {
      if (err) console.log('error: ' + err);
      console.log('event inserted');
    });
  });
}
And here is the code for models/scheduledWithin:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var ScheduledWithin = new Schema({
  _id: mongoose.Schema.Types.ObjectId,
}, { timestamps: true });
// timestamps: true will automatically create a "createdAt" Date field

// Expire (delete) event documents 90 seconds after creation
ScheduledWithin.index({ createdAt: 1 }, { expireAfterSeconds: 90 });

module.exports = mongoose.model('ScheduledWithin', ScheduledWithin);
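Tying this back to the original question, the change handler above is the place to return the expired "thing" to the pool. A minimal sketch, assuming a hypothetical available counter field on the Item model (note that the TTL monitor runs on a background cycle, typically every 60 seconds, so the event can fire up to a minute after the nominal expiry):

// Inside the 'change' handler: put the expired thing back in the pool.
// "available" is a hypothetical counter field, not part of the models above.
Item.findByIdAndUpdate(
  data.documentKey._id,
  { $inc: { available: 1 } },
  function (err, item) {
    if (err) return console.error(err);
    console.log(new Date(), 'restored availability for', item._id);
  }
);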
Thanks for the detailed code.
I have two partial alternatives, just to give some ideas.
1. Given that we at least get the _id back: if you only need a specific key from the deleted document, you can set the _id manually when you create the event document, and the delete event will at least carry that information back to you.
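A hedged sketch of this idea (the collection name expiryEvents and the thing variable are assumptions):

// Reuse a meaningful key as the event document's _id, so the delete event's
// documentKey carries it back to the change-stream handler
await db.collection('expiryEvents').insertOne({
  _id: thing._id,          // same id as the document we care about
  createdAt: new Date()    // a TTL index on createdAt handles the expiry
});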
2. (Requires MongoDB 4.0.) A bit more involved: this method takes advantage of the oplog history and opens a watch stream back at the moment of creation (if you can calculate it), via the startAtOperationTime option.
You'll need to check how far back your oplog history goes to see whether you can use this method:
https://docs.mongodb.com/manual/reference/method/rs.printReplicationInfo/#rs.printReplicationInfo
Note: I'm using the mongodb library, not mongoose
// https://mongodb.github.io/node-mongodb-native/api-bson-generated/timestamp.html
const { Timestamp } = require('mongodb');

const MAX_TIME_SPENT_SINCE_CREATION = 1000 * 60 * 10; // 10mn, depends on your situation

const cursor = db.collection('items')
  .watch([{
    $match: {
      operationType: 'delete'
    }
  }]);

cursor.on('change', function (change) {
  // create another cursor, back in time; clusterTime is a BSON Timestamp
  // whose high 32 bits are seconds since the epoch
  const startSeconds = change.clusterTime.getHighBits() -
    Math.floor(MAX_TIME_SPENT_SINCE_CREATION / 1000);

  const subCursor = db.collection('items')
    .watch([{
      $match: {
        operationType: 'insert'
      }
    }], {
      fullDocument: 'updateLookup',
      // (low, high) = (increment, seconds) in the legacy BSON Timestamp API;
      // newer bson versions use new Timestamp({ t, i }) instead
      startAtOperationTime: new Timestamp(0, startSeconds)
    });

  subCursor.on('change', function (creationChange) {
    // filter the insert events until we find the creation event for our
    // document; compare ObjectIds with equals(), not ===
    if (creationChange.documentKey._id.equals(change.documentKey._id)) {
      console.log('item', JSON.stringify(creationChange.fullDocument, null, 2));
      subCursor.close();
    }
  });
});

Perform a facet search query with Algolia autocomplete

My index objects have a city field and I'd like to retrieve these with autocomplete, but the documentation seems to be missing on how to perform such a query (only basic search documentation is available). I found a prototype, IndexCore.prototype.searchForFacetValues, in autocomplete.js, but I have no idea how to use it.
You should be able to use the following source:
var client = algoliasearch("YourApplicationID", "YourSearchOnlyAPIKey");
var index = client.initIndex("YourIndex");

autocomplete("#search-input", { hint: false }, [
  {
    source: function(query, callback) {
      index
        .searchForFacetValues({
          facetName: "countries",
          facetQuery: query
        })
        .then(function(answer) {
          // the facet search response exposes facetHits, not hits
          callback(answer.facetHits);
        })
        .catch(function() {
          callback([]);
        });
    },
    displayKey: "value",
    templates: {
      suggestion: function(suggestion) {
        // facet hits come back as { value, highlighted, count }
        return suggestion.highlighted;
      }
    }
  }
]);
This uses the searchForFacetValues method to get the results.
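Note that searching facet values only works on attributes declared as searchable facets in the index settings; a minimal sketch, using the city attribute from the question:

// One-time index configuration (can also be done from the Algolia dashboard;
// requires an admin API key, not the search-only key)
index.setSettings({
  attributesForFaceting: ["searchable(city)"]
});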

GraphQL & MongoDB cursors

I'm confused about what should be the relation between GraphQL's cursors and MongoDB's cursors.
I'm currently working on a mutation that creates an object (a Mongo document) and adds it to an existing connection (a Mongo collection). When adding the object, the mutation returns the added edge, which should look like:
{
  node,
  cursor
}
While node is the actual added document, I'm confused about what should be returned as the cursor.
This is my Mutation:
const CreatePollMutation = mutationWithClientMutationId({
  name: 'CreatePoll',
  inputFields: {
    title: {
      type: new GraphQLNonNull(GraphQLString),
    },
    multi: {
      type: GraphQLBoolean,
    },
    options: {
      type: new GraphQLNonNull(new GraphQLList(GraphQLString)),
    },
    author: {
      type: new GraphQLNonNull(GraphQLID),
    },
  },
  outputFields: {
    pollEdge: {
      type: pollEdgeType,
      resolve: (poll => ({
        // cursorForObjectInConnection was used when I tested with mock JSON data;
        // this doesn't work now since db.getPolls() is async
        cursor: cursorForObjectInConnection(db.getPolls(), poll),
        node: poll,
      })),
    },
  },
  mutateAndGetPayload: ({ title, multi, options, author }) => {
    const { id: authorId } = fromGlobalId(author);
    return db.createPoll(title, options, authorId, multi); // returns a promise
  },
});
Thanks!
Probably a bit late for you, but maybe it will help someone else stumbling across this problem.
Just return a promise from your resolve method. Then you can create your cursor using your actual polls array. Like so:
resolve: (poll =>
  db.getPolls().then(polls => {
    return { cursor: cursorForObjectInConnection(polls, poll), node: poll };
  })
)
But be careful: your poll and the objects in polls now have different origins and are not strictly equal. Since cursorForObjectInConnection uses JavaScript's indexOf, your cursor would probably end up being null. To prevent this, you should find the object's index yourself and use offsetToCursor to construct your cursor, as discussed here.
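A hedged sketch of that approach, using graphql-relay's offsetToCursor and comparing by id instead of object identity (the _id field is an assumption based on the MongoDB context):

const { offsetToCursor } = require('graphql-relay');

// resolve for pollEdge: locate the freshly created poll by id, then
// build the cursor from its offset within the connection's array
resolve: (poll =>
  db.getPolls().then(polls => {
    const offset = polls.findIndex(p => String(p._id) === String(poll._id));
    return { cursor: offsetToCursor(offset), node: poll };
  })
)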