Is there a way to cancel a streamed, long running query in knex.js?
I tried stream.emit('close'), but it doesn't seem to close the query: a subsequent knex.destroy() call never resolves.
const query = knex('tablename')
const stream = query.stream({ batchSize: 2000 })
process.on('SIGINT', () => stream.emit('close', 128 + 2))
process.on('SIGTERM', () => stream.emit('close', 128 + 15))
stream
  .on('data', onData)
  .on('close', (exitCode = 0) => {
    knex.destroy()
      .then(() => console.log('This is never resolved if the query has finished'))
      .catch(err => console.error('could not close connection gracefully', err))
    console.log('Finished')
    process.exitCode = exitCode
  })
  .on('end', () => {
    console.log('End')
    stream.emit('close')
  })
I'm attempting to fetch from my localhost URL, but all of the console.log() calls except for "done" do nothing, and my .catch isn't receiving an error.
fetch(tURL, {
method: 'POST',
body: JSON.stringify(post),
mode: 'cors',
headers:{
'content-type': 'application/json'
}
}).then(res => {
if (!res.ok) {
throw new Error(`HTTP error! Status: ${res.status}`);
}
form.reset();
console.log("Response received from server");
console.log(res.json);
return res.json();
})
.then(npost => {
console.log("Parsing response as JSON...");
form.style.display = '';
console.log(npost);
})
.catch(error => console.error(error));
console.log('done');
})
I put in several debugging logs to see what was being reached, but I get nothing.
It looks like this is caused by a syntax error: the `})` at the end of your file should not be there. I think it slipped in because the formatting is fairly difficult to read. If you count the parentheses between the fetch and the end, there is an extra set closing a block that doesn't exist. If you delete that `})`, your code will work; however, the 'done' log will then run before the fetch is complete, which I assume you don't want. To solve both problems, delete the stray `})` and add this to the end of the chain:
.then(() => console.log('done'))
Original code
fetch(tURL, {
method: 'POST',
body: JSON.stringify(post),
mode: 'cors',
headers:{
'content-type': 'application/json'
}
}).then(res => {
if (!res.ok) {
throw new Error(`HTTP error! Status: ${res.status}`);
}
form.reset();
console.log("Response received from server");
console.log(res.json);
return res.json();
})
.then(npost => {
console.log("Parsing response as JSON...");
form.style.display = '';
console.log(npost);
})
.catch(error => console.error(error));
// This is where the error is, the catch statement was the last "block" in the callback chain, so the `})` is closing a "block" that doesn't exist.
// Delete the semicolon above and uncomment the next line to fix
// .then(() => {
console.log('done');
})
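To see the ordering issue in isolation, here is a runnable sketch that uses a plain resolved Promise in place of fetch (so no network or DOM is needed): a log statement placed after the chain runs before the async work completes, while one chained with .then runs last.

```javascript
// fakeFetch stands in for fetch(): it resolves asynchronously.
const fakeFetch = () => Promise.resolve({ ok: true });

const order = [];
fakeFetch()
  .then(() => order.push('response handled'))
  .catch((err) => console.error(err))
  .then(() => order.push('done')); // chained: runs after the work above settles

order.push('after the chain'); // synchronous: runs before either .then

setTimeout(() => console.log(order), 0);
// [ 'after the chain', 'response handled', 'done' ]
```

This is exactly why moving console.log('done') into a chained .then makes it fire at the right time.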
I am using RTK Query with TypeScript in my React application and it's working fine; however, Storybook is not able to mock data for RTK Query.
I am trying to mock the store object as shown in this Storybook document.
example -
export const Test = Template.bind({});
Test.decorators = [
(story) => <Mockstore data={myData}>{story()}</Mockstore>,
];
.
.
.
const customBaseQuery = (
args,
{ signal, dispatch, getState },
extraOptions
) => {
return { data: [] }; // <--- NOT SURE ABOUT THIS
};
const Mockstore = ({ myData, children }) => (
<Provider
store={configureStore({
reducer: {
[myApi.reducerPath]: createApi({
reducerPath: 'myApi',
baseQuery: customBaseQuery,
endpoints: (builder) => ({
getMyData: myData, //<-- my mock data
}),
}).reducer,
},
middleware: (getDefaultMiddleware) =>
getDefaultMiddleware().concat(myApi.middleware),
})}
>
{children}
</Provider>
);
Since the RTK Query hook is autogenerated, I am not sure how to mock it in Storybook. Instead of using the mock data, Storybook is trying to fetch the actual data.
Please help me.
You'd do
endpoints: (builder) => ({
  getMyData: builder.query({
    queryFn: () => ({ data: myData }), // <-- my mock data; note the parentheses around the object literal
  }),
}),
Alternatively, you could leave the store setup just as it is and use msw to mock the real API.
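As an aside, the parentheses around the returned object literal matter: an arrow function whose body starts with { is parsed as a block, not an object, so without them queryFn would silently return undefined. A tiny runnable illustration:

```javascript
// '{ data: 42 }' after '=>' is parsed as a function block ('data:' becomes a
// label), so this returns undefined rather than an object.
const wrong = () => { data: 42 };

// Wrapping the literal in parentheses forces it to be parsed as an expression.
const right = () => ({ data: 42 });

console.log(wrong()); // undefined
console.log(right()); // { data: 42 }
```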
This Mongoose delete method seems to work OK locally with HttpRequester:
router.delete('/', (req, res) => {
  Book.findOneAndRemove({ title: req.body.title })
    .then(() => res.json({ 'book deleted': 'success' }))
    .catch(err => console.log('Couldn\'t delete book:', err));
});
but the mLab collection still shows the document. How do I get it deleted remotely too? findOneAndDelete() didn't make a difference.
The complete repo is on https://github.com/ElAnonimo/booklist
findOneAndRemove had issues earlier.
findByIdAndRemove works fine.
router.delete('/', (req, res) => {
  Book.findOne({ title: req.body.title })
    .then((doc) => {
      if (doc) return Book.findByIdAndRemove(doc._id);
    })
    .then(() => res.json({ 'book deleted': 'success' }))
    .catch(err => console.log('Couldn\'t delete book:', err));
});
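The shape of that chain, where the removal only happens when the lookup actually found a document, can be demonstrated with plain stubs (fakeFindOne and fakeRemoveById below are hypothetical stand-ins, not Mongoose APIs):

```javascript
// Stand-in for Book.findOne: resolves to a doc or null.
const fakeFindOne = (title) =>
  Promise.resolve(title === 'known' ? { _id: 42, title } : null);

// Stand-in for Book.findByIdAndRemove: records which ids were removed.
const removed = [];
const fakeRemoveById = (id) => {
  removed.push(id);
  return Promise.resolve();
};

fakeFindOne('known')
  .then((doc) => {
    // Only remove when a document was actually found.
    if (doc) return fakeRemoveById(doc._id);
  })
  .then(() => console.log('removed:', removed)); // removed: [ 42 ]
```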
Or, even better, you can do the following:
router.delete('/', (req, res) => {
  Book.deleteOne({ title: req.body.title })
    .then(() => res.json({ 'book deleted': 'success' }))
    .catch(err => console.log('Couldn\'t delete book:', err));
});
Have you changed your MongoDB connection URI to point at mLab? Make sure you use mongodb://<dbuser>:<dbpassword>@ds12xxxx.mlab.com:27342/[database_name], not the local 'mongodb://localhost/[database_name]'.
If you have changed it, please use deleteOne (https://mongoosejs.com/docs/api.html#model_Model.deleteOne); it works well.
I'm trying to clean up 2 collections before each test. I'm using mocha --watch to rerun tests while editing the test source files. The first run always executes as expected, but consecutive runs give a "Topology was destroyed" error from MongoDB (indicated via the result of the HTTP request).
I am not really sure why deleteMany deletes my inserted object in consecutive runs.
describe('myCollection1 related tests', () => {
  // myCollection1 documents should refer to a valid myCollection2 document.
  var foo;
  const exampleObject = { name: 'TEST OBJECT', attr1: 'TO' };

  beforeEach(() => {
    return Promise.all([
      mongo.db('mydb').collection('myCollection1').deleteMany({}), // clear collection 1
      mongo.db('mydb').collection('myCollection2').deleteMany({}) // clear collection 2
        .then(() => mongo.db('mydb').collection('myCollection2').insertOne(exampleObject) // and add a sample object
          .then((value) => {
            foo = value.ops[0]; // save this as a test-specific variable so I can use it in my tests.
            return Promise.resolve();
          })),
    ]);
  });

  it('should create a related object', (done) => {
    chai.request(server)
      .post('/api/v1/foos/')
      .send({ related: foo._id })
      .then((res) => {
        res.should.have.status(200);
        res.body.should.be.an('object').with.all.keys('status', 'errors', 'data');
        done();
      }).catch((err) => {
        done(err);
      });
  });
});
I spotted an issue with the promise structure in your beforeEach. I'm not sure whether it is intentional, but I'm afraid it is the culprit. I would restructure it as below:
beforeEach(() => {
  return Promise.all([
    mongo.db('mydb').collection('myCollection1').deleteMany({}),
    mongo.db('mydb').collection('myCollection2').deleteMany({})
  ]) // close the Promise.all here
    .then(() => mongo.db('mydb').collection('myCollection2').insertOne(exampleObject)) // close `then` here
    .then((value) => {
      foo = value.ops[0];
    });
});
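The effect of that restructuring can be shown with timers standing in for the database calls (fakeDelete and fakeInsert are stand-ins, not driver APIs): the two cleanups run in parallel, and the insert is only issued once both have settled.

```javascript
const log = [];

// Stand-ins for the two deleteMany calls, resolving after different delays.
const fakeDelete = (name, ms) =>
  new Promise((resolve) =>
    setTimeout(() => { log.push(`deleted ${name}`); resolve(); }, ms));

// Stand-in for insertOne(exampleObject).
const fakeInsert = () => {
  log.push('inserted fixture');
  return Promise.resolve({ _id: 1 });
};

Promise.all([fakeDelete('myCollection1', 20), fakeDelete('myCollection2', 5)])
  .then(() => fakeInsert())
  .then(() => console.log(log));
// [ 'deleted myCollection2', 'deleted myCollection1', 'inserted fixture' ]
```

In the original nesting, the insert raced against myCollection1's cleanup; here it is guaranteed to come last.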
Hope it helps
When I try to use continue-on-error for insert_many in Mongoid, it doesn't work.
I set the following unique index:
db.push_notifications.createIndex({ actor_vid: 1,campaign_id: 1 },{ unique: true, partialFilterExpression: { campaign_id: { $exists: true } } })
PushNotification.collection.insert_many([{:campaign_id => "1",:actor_vid => 9},{:campaign_id => "1",:actor_vid => 8}],:continue_on_error => true, :safe => false)
PushNotification.collection.insert_many([{:campaign_id => "1",:actor_vid => 9},{:campaign_id => "1",:actor_vid => 10}],:continue_on_error => true, :safe => false)
throws
Mongo::Error::BulkWriteError: Mongo::Error::BulkWriteError
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/bulk_write/result.rb:184:in `validate!'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/bulk_write/result_combiner.rb:73:in `result'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/bulk_write.rb:65:in `execute'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/collection.rb:385:in `bulk_write'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/collection.rb:363:in `insert_many'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/query_cache.rb:168:in `insert_many_with_clear_cache'
from (irb):133
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/railties-4.2.6/lib/rails/commands/console.rb:110:in `start'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/railties-4.2.6/lib/rails/commands/console.rb:9:in `start'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/railties-4.2.6/lib/rails/commands/commands_tasks.rb:68:in `console'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/railties-4.2.6/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
from /home/deploy/.bundler/notification_service/ruby/2.2.0/gems/railties-4.2.6/lib/rails/commands.rb:17:in `<top (required)>'
from script/rails:6:in `require'
from script/rails:6:in `<main>'
OR
What is the MongoDB equivalent of MySQL's INSERT IGNORE? I need to perform an insert_many operation while bypassing errors on unique keys.
continue_on_error is not an option for insert_many; use unordered inserts instead. By specifying ordered: false, the inserts happen in an unordered fashion and the driver attempts every document. Wrapping the call in a begin/rescue block ensures it won't break on the resulting exception, so you get a MySQL INSERT IGNORE equivalent.
If you're using Rails, this is how your code should look:
begin
  PushNotification.collection.insert_many([{:campaign_id => "1", :actor_vid => 10}, {:campaign_id => "1", :actor_vid => 11}, {:campaign_id => "1", :actor_vid => 12}], {:ordered => false})
rescue => ex
  puts ex.message
end

begin
  PushNotification.collection.insert_many([{:campaign_id => "1", :actor_vid => 10}, {:campaign_id => "1", :actor_vid => 11}, {:campaign_id => "1", :actor_vid => 13}], {:ordered => false})
rescue => ex
  puts ex.message
end
So, after this code executes, every non-duplicate document has been inserted, and any call that hit a duplicate key raises (and rescues) a Mongo::Error::BulkWriteError instead of aborting the remaining inserts. Note that each insert_many needs its own rescue; with a single begin/rescue around both calls, the second call would be skipped as soon as the first one raised.