Scoped redirect when using Express.js Routers - redirect

When using Express Routers, a route prefix can be provided:
var express = require("express");
var router = express.Router();
app.use("/scope", router);
Let us say that I have some CRUD routes in my Router.
router.post("/", function(req, res) {
var result = create(req);
res.redirect(result.id);
});
router.put("/:id", function(req, res) {
var result = update(req);
res.redirect(result.id);
});
router.get("/:id", function(req, res) {
var result = findById(req.params.id);
res.render("show.html", {data: result});
});
When I make a request to the PUT route above, the response is a redirect to "/scope/[id]", as it should be. When I make a request to the POST route, the response is a redirect to "/[id]" instead.
I am not certain where to find the root of the issue, nor how to correct it.

I will answer the question anyway, for the benefit of anyone else who might encounter this issue.
What I suspect your original issue was: since you had strict routing disabled (Express's default), your client-side code was likely POSTing to /scope (which with non-strict routing is treated the same as /scope/). Within your POST handler you did a relative redirect using res.redirect(result.id). Say result.id is 123; a relative redirect from /scope would then take you to /123.
NOTE: Don't forget that Express has little to do with the actual redirects, which are taken care of client-side by the browser.
The issue could easily have been solved by ensuring your client-side code POSTed to /scope/, after which a relative redirect to the id would take you to /scope/id, just as you expected.
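If you would rather not depend on how the client forms the request URL, here is a minimal sketch of an alternative, assuming Express 4, where req.baseUrl holds the path the router is mounted on:
router.post("/", function (req, res) {
  var result = create(req);
  // req.baseUrl is "/scope" when the router is mounted via app.use("/scope", router),
  // so this redirects to /scope/<id> regardless of a trailing slash in the request URL
  res.redirect(req.baseUrl + "/" + result.id);
});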

Related

intercepting PUT routes on /resources and forwarding internally to /resources/:id

I want to mock a backend with a PUT API slightly different from the standard PUT /resources/:id.
I found a working but ugly solution: intercepting PUT '/tasks', extracting the id from the JSON body and redirecting to '/tasks/:id'.
With:
server.put('/tasks/', (req, res, next) => {
res.redirect(302, 'http://localhost:3300/tasks/' + req.body.id);
});
But how can I do this cleanly? How can I forward the request internally to the standard json-server processing for /tasks/:id without redirecting?
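One possible approach (a minimal sketch, not from the original thread): rewrite the request URL in a middleware registered before json-server's router, so the standard /tasks/:id handling applies without any client-side redirect. It assumes json-server's stock create/router/bodyParser API and a db.json data file:
const jsonServer = require('json-server');
const server = jsonServer.create();
const router = jsonServer.router('db.json'); // assumed data file

server.use(jsonServer.bodyParser);

// Rewrite PUT /tasks (no id in the path) to PUT /tasks/:id using the id from the body
server.use((req, res, next) => {
  if (req.method === 'PUT' && (req.path === '/tasks' || req.path === '/tasks/') && req.body && req.body.id) {
    req.url = '/tasks/' + req.body.id;
  }
  next();
});

server.use(router);
server.listen(3300);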

Is there a way to save a ParseObject without making an HTTP request to the REST API?

I didn't find much about this topic, so I wonder whether it is an easy task to achieve or actually not possible. My problem is that there are a lot of HTTP requests on my server even when a Cloud function is called only once, so I suppose that all the object updates / saves / queries are made through the REST API. There are so many HTTP requests that several hundred of them time out, I suppose because of the huge traffic being generated.
Is there a way to save a ParseObject by executing the query directly against MongoDB? If that's not possible at the moment, can you give me some hints on whether there are already helper functions to convert a ParseQuery and a ParseObject to their MongoDB equivalents, so that I can use the MongoDB driver directly?
It's really important for my application to reduce HTTP request traffic at the moment.
Any idea? Thanks!
EDIT:
Here is an example to reproduce the concept:
Make a cloud function:
Parse.Cloud.define('hello', async (req) => {
let testClassObject = new Parse.Object('TestClass');
await testClassObject.save(null, {useMasterKey: true});
let query = new Parse.Query('TestClass');
let testClassRecords = await query.find({useMasterKey: true});
return testClassRecords;
});
Make a POST request:
POST http://localhost:1337/parse/functions/hello
Capture HTTP traffic on port 1337 using Wireshark:
You can see that for one POST request, two more are made because of the save / query code. My goal would be to avoid those two HTTP calls and instead query the database directly, so that less traffic goes through the whole web server stack.
Link to the Github question: https://github.com/parse-community/parse-server/issues/6549
The Parse Server directAccess option should do the magic for you. Please make sure you are initializing Parse Server like this:
const api = new ParseServer({
...
directAccess: true
});
...
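For context, a fuller initialization sketch; the databaseURI, appId, masterKey and cloud path below are placeholders rather than values from the question, and the mounting style assumes the classic pre-5.x parse-server Express setup:
const express = require('express');
const ParseServer = require('parse-server').ParseServer;

const app = express();

const api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // placeholder connection string
  appId: 'myAppId',                             // placeholder
  masterKey: 'myMasterKey',                     // placeholder
  serverURL: 'http://localhost:1337/parse',
  cloud: './cloud/main.js',                     // placeholder path to Cloud Code
  directAccess: true // SDK calls made inside Cloud Code go straight to the data layer instead of over HTTP
});

app.use('/parse', api);
app.listen(1337);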

RESTful implementation and general information

I have been reading a lot lately, and experimenting even more with web development. There are some things that I simply can't understand, so any help is appreciated.
I am not trying to get my homework done for me. I have some holes in my knowledge that I would like to fill. Please help me out with your views :)
REST questions:
Reading the documentation, this (Node.js / Express) example is perfectly understandable:
EXAMPLE ONE (get):
app.get('/', function(req, res) {
res.send('please select a collection, e.g., /collections/messages')
})
My explanation: when the root of the server is hit, send the following message.
EXAMPLE TWO (get):
app.get('/collections/:collectionName/:id', function(req, res, next) {
req.collection.findOne({name: req.collection.id(req.params.id)},
function(e, result){
if (e) return next(e)
res.send(result)
})
})
My explanation: when the URL is hit, take the id from the URL (it is available in req.params.id) and run a search based on it (against MongoDB).
EXAMPLE THREE (post):
app.post('/collections/:collectionName', function(req, res, next) {
req.collection.insert(req.body, {}, function(e, results){
if (e) return next(e)
res.send(results)
})
})
My explanation: when the URL is hit, take the payload (JSON in this case) from req.body and insert it as a new document.
Questions:
1. Are examples one and two both RESTful?
2. I am now totally confused by params.id. I understand that POST data is transmitted in req.body... what is params.id? Does it contain URL variables, such as :id?
3. Are my explanations correct?
4. Is example three also RESTful, even though POST is used?
5. In example three, '/collections/:collectionName': why is ':collectionName' passed in the URL? I could have placed it in req.body as a parameter (along with the new data) and taken it from there. What is the benefit of doing it this way?
Thank you
1. An API must use HATEOAS to be RESTful. In the first example, if / is the entry point of your API, the response should contain links to the available collections, not a human-readable string like that. That's definitely not RESTful.
2. Exactly.
3. They're OK, except that there's nothing in the third example implying it's a JSON body. It should check the Content-Type header sent by the client.
4. REST isn't dependent on HTTP. As long as you're using the HTTP methods as they were standardized, it's fine. POST is the method to use for any action that isn't standardized, so it's fine to use POST for anything if there isn't a more specific method. For instance, it's not correct to use POST for retrieval, but it's fine to use it for creating a new resource when you don't have the full representation.
5. POST means the data in the body is subordinate to the resource at the target URI. If collectionName were in the POST body, that would mean you were POSTing to /collections, which would make more sense for creating a new collection, not a new item of a collection.
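To illustrate point 1, here is a minimal sketch of what a HATEOAS-style entry point could look like; the link relations and the users collection are hypothetical, not taken from the question:
app.get('/', function (req, res) {
  // Return links to the available collections instead of a human-readable string
  res.send({
    _links: {
      self: { href: '/' },
      messages: { href: '/collections/messages' },
      users: { href: '/collections/users' } // hypothetical collection
    }
  });
});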

node.js and socket.io: different connections for different "sessions"

I've got a node.js application that 'streams' tweets to users. At the moment, it just searches Twitter for a hard-coded string, but I'd like to allow users to configure this in the URL (e.g. by visiting /?q=stackoverflow).
At the moment, my code looks a bit like this:
app.get('/', function (req, res) {
// page rendering skipped
io.sockets.on('connection', function (socket) {
twit.stream('user', {track: 'stackoverflow'}, function(stream) {
stream.on('data', function (data) {
socket.volatile.emit('tweet', data);
});
});
});
});
The question is, how do I make it so that each user can see a different stream of tweets simultaneously? At the moment, it works fine in a single browser tab, but it falls over as soon as a second one is opened - and the error is fairly deep down inside socket.io. Am I misusing it?
I haven't fully got my head around socket.io yet, so that could be the issue.
Thanks in advance!
Every time a new request comes in, you are redefining the connection callback with io.sockets.on - you should move that block of code outside of app.get, after the initialization of the io object.
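A rough sketch of that restructuring, assuming the 0.x-era socket.io API used in the question; the 'track' event name, the per-socket cleanup and the template rendering are assumptions, not part of the original answer, and twit is the same Twitter client instance as in the question:
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io').listen(server); // 0.x-style initialization

app.get('/', function (req, res) {
  // page rendering skipped; req.query.q can be passed to the template
  res.render('index', { q: req.query.q || 'stackoverflow' });
});

// Registered once, outside any route handler
io.sockets.on('connection', function (socket) {
  socket.on('track', function (term) { // client emits its search term after connecting
    twit.stream('user', { track: term }, function (stream) {
      stream.on('data', function (data) {
        socket.volatile.emit('tweet', data);
      });
      socket.on('disconnect', function () {
        if (stream.destroy) stream.destroy(); // stop streaming for this client
      });
    });
  });
});

server.listen(3000);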

node.js: expressjs with mongoose

I'm working on my first node.js / express / mongoose app and I'm running into a problem caused by node.js's asynchronous mechanisms. It seems I'm not doing things correctly...
Here is the test route I defined using express:
app.get('/test', function(req, res){
var mod = mongoose.model('MyModel');
mod.find({},function(err, records){
records.forEach(function(record){
console.log('Record found:' + record.id);
// res.send('Thing retrieved:' + record.id);
});
});
});
When I issue a request to http://localhost/test, I'd like to get the list of records of type 'MyModel' in the response.
The code above works fine, but when it comes to returning the whole list to the client it does not work (the commented-out res.send line): only the first record is returned.
I'm very new to node.js, so I don't know whether nesting several callback functions inside the first callback of app.get is the right approach. How can I have the whole list returned?
Any idea?
What you should be doing is:
mod.find({},function(err, records){
res.writeHead(200, {'Content-Type': 'text/plain'});
records.forEach(function(record){
res.write('Thing retrieved:' + record.id);
});
});
Please always check the documentation as well:
http://nodejs.org/docs/v0.3.8/api/http.html#response.write
I missed that you were using Express; the send function is part of Express and extends Node's ServerResponse object (my bad).
But my answer still applies: Express's send function sends the data using ServerResponse.end(), so the socket gets closed and you cannot send any more data, whereas the write function uses the native one.
You may also want to call res.end() when the request is fully complete, as some items within Express may otherwise be affected.
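As an alternative, here is a small sketch that builds the whole response first and sends it once; it assumes the same callback-style Mongoose model as in the question and an Express version where res.send accepts a string body:
app.get('/test', function (req, res) {
  var mod = mongoose.model('MyModel');
  mod.find({}, function (err, records) {
    if (err) {
      res.writeHead(500);
      return res.end();
    }
    // Build the complete body before sending, so send/end is only called once
    var body = records.map(function (record) {
      return 'Thing retrieved:' + record.id;
    }).join('\n');
    res.send(body);
  });
});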