I am trying to run a simple MongoDB findOne query in a Cloudflare Worker using the mongodb driver. Cloudflare Workers gives me 10ms of CPU time, which should be plenty, but during preview/publish it throws an error.
I have tried installing these npm modules:
npm i mongodb mongodb-core dgram fs
var MongoClient = require('mongodb').MongoClient;

// Response metadata, declared outside the try block so the catch block can use it too
const init = {
  status: 200,
  headers: { 'Access-Control-Allow-Origin': '*', 'content-type': 'application/json' },
};

try {
  var db = await MongoClient.connect('mongodb+srv://mongoURL', { useNewUrlParser: true, useUnifiedTopology: true });
  var dbo = db.db("test");
  var result = await dbo.collection("testcollection").findOne();
  return new Response(JSON.stringify(result), init);
} catch (e) {
  // result is never set when connect() fails, so return the error instead
  console.log(e);
  return new Response(JSON.stringify({ error: e.message }), { ...init, status: 500 });
}
Error thrown is here - https://pastebin.com/xMKKjdZF
Currently, Cloudflare Workers does not support raw TCP/UDP, only HTTP/HTTPS. Hence you can only connect to databases that offer HTTP(S) interfaces. MongoDB's protocol is not HTTP-based, so you'll need to find some sort of HTTP API proxy you can put in front of it. (Also note that Cloudflare Workers is not based on Node.js, so in general Node modules that use Node's system APIs will not work.)
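For illustration, here is a minimal sketch of what a Worker can do instead: call the database through an HTTP(S) endpoint using fetch, which is the Workers-native HTTP client. The proxy URL and the /findOne path below are hypothetical placeholders for whatever HTTP API you put in front of MongoDB.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Hypothetical HTTP proxy sitting in front of MongoDB -- not a real service
  const resp = await fetch('https://mongo-proxy.example.com/findOne', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ db: 'test', collection: 'testcollection' }),
  });
  const result = await resp.json();
  return new Response(JSON.stringify(result), {
    status: 200,
    headers: { 'Access-Control-Allow-Origin': '*', 'content-type': 'application/json' },
  });
}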
My code:
// Assumes the http package is imported: import 'package:http/http.dart' as http;
void checkState() async {
  print("CTC");
  var url = "http://localhost:3000";
  try {
    var respX = await http.get(url);
    print("response arrived: ${respX.statusCode}");
  } catch (err) {
    print("request failed: $err");
  }
}
But apparently this is not possible:
https://github.com/flutter/flutter/issues/43015#issuecomment-543835637
I am using Google Chrome for debugging. Simply pasting http://localhost:3000 into the same browser connects to the URL just fine.
Is there any way to do it?
This issue was not with Flutter. It was the browser's CORS policy, combined with the server not sending CORS headers, that blocked the request. My backend is hosted on a Node.js server with Express. Here is what I did to solve it:
const express = require('express');
const app = express();

// Allow cross-origin requests from any origin
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  next();
});
You can change 'Access-Control-Allow-Origin' to the domain you are calling from if you want to; otherwise it will allow requests from everywhere.
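For example, a stricter variant might allow only a single origin and also answer CORS preflight requests itself. This is just a sketch; the origin URL is a placeholder for wherever your app is served from.

// Only allow the app's own origin (placeholder URL) and answer preflight requests
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', 'http://localhost:8080');
  res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // preflight handled, no body needed
  }
  next();
});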
Remember, the localhost of your emulator is not the localhost of your machine. To test an API running on your machine you have to point at your computer's IP address (on the Android emulator, for example, the host machine is reachable at 10.0.2.2).
I have been doing some research and I'm not able to find a good answer about using Knex JS within a Lambda function:
How do I use Knex with AWS Lambda? #1875
Serverless URL Shortener with Apex and AWS Lambda
Use Promise.all() in AWS lambda
Here is what I have in my index.js:
const knex = require('knex')({
  client: 'pg',
  connection: {...},
});

exports.handler = (event, context, callback) => {
  console.log('event received: ', event);
  console.log('knex connection: ', knex);

  knex('goals')
    .then((goals) => {
      console.log('received goals: ', goals);
      knex.client.destroy();
      return callback(null, goals);
    })
    .catch((err) => {
      console.log('error occurred: ', err);
      knex.client.destroy();
      return callback(err);
    });
};
I can connect and execute my code fine locally, but I'm running into an interesting error when it's deployed to AWS: the first call always succeeds, but every call after that fails. I think this is related to the knex client being destroyed and then reused on the next call. If I re-upload my index.js, it goes back to working for one call and then failing.
I believe this can be resolved somehow using promises, but this is my first time working with Lambda, so I'm not familiar with how it manages the connection to RDS on subsequent calls. Thanks in advance for any suggestions!
For me, it worked on my local machine but not after deploying, which misled me for a while.
It turns out the RDS inbound source was not open to my Lambda function. I found the solution at AWS Lambda can't connect to RDS instance, but I can locally?: either change the RDS inbound source to 0.0.0.0/0 or use a VPC.
After updating RDS inbound source, I can use Lambda with Knex successfully.
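For illustration only, opening that inbound rule from the AWS CLI could look like the line below; the security-group ID is a placeholder and port 5432 assumes a PostgreSQL instance. Note that 0.0.0.0/0 exposes the database to the whole internet, so the VPC approach is safer for production.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 0.0.0.0/0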
The Lambda runtime I am using is Node.js 8.10 with these packages:
knex: 0.17.0
pg: 7.11.0
The code below, using async, also just works for me:
const Knex = require('knex');
const pg = Knex({ ... });

module.exports.submitForm = async (event) => {
  const { fields } = event['body-json'] || {};
  return pg('surveys')
    .insert(fields)
    .then(() => {
      return { status: 200 };
    })
    .catch(err => {
      return { status: 500 };
    });
};
Hopefully this will help people who run into the same issue in the future.
The most reliable way of handling database connections in AWS Lambda is to connect and disconnect from the database within the invocation process itself.
In your code above, since you already disconnected after the first invocation, the second one no longer has a connection.
To fix it, just move your instantiation of knex inside the handler:
exports.handler = (event, context, callback) => {
  console.log('event received: ', event);

  // Connect inside the handler so every invocation gets a fresh connection
  const knex = require('knex')({
    client: 'pg',
    connection: {...},
  });
  console.log('knex connection: ', knex);

  knex('goals')
    .then((goals) => {
      console.log('received goals: ', goals);
      // Disconnect before returning the result
      knex.client.destroy();
      return callback(null, goals);
    })
    .catch((err) => {
      console.log('error occurred: ', err);
      // Disconnect before returning the error
      knex.client.destroy();
      return callback(err);
    });
};
There are ways to reuse an existing connection, but success rates vary widely depending on database server configuration and production load.
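For completeness, here is a minimal sketch of the reuse pattern alluded to above: keep the knex instance in module scope so a frozen-and-thawed Lambda container can reuse its connection pool, and never call destroy(). The connection settings are elided as in the question, and the pool sizing is an assumption on my part.

// Module scope: survives between invocations when AWS reuses the container
const knex = require('knex')({
  client: 'pg',
  connection: { /* same connection settings as above */ },
  pool: { min: 0, max: 1 } // assumption: one tiny pool per container is enough
});

exports.handler = async (event) => {
  // No knex.client.destroy() here; the pool stays alive for the next invocation
  const goals = await knex('goals');
  return goals;
};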
I hit the exact same issue: I used destroy() in an AWS Lambda function (await knex.destroy() at the bottom) and suddenly all my Lambdas were failing.
Because I did not suspect it, I spent hours searching for the cause and even started investigating Lambda + VPC + NAT, etc. It turns out that AWS freezes the Lambda execution environment between invocations, so if you destroy the connection, the next handler invocation will try to reuse the destroyed connection.
Solution: do not use .destroy() at the end of the Lambda, and redeploy.
I have created a hybrid application with Ionic, MongoJS, and AngularJS (MEAN stack).
My application worked fine locally: my mongod (the Mongo service) and mongo shell both ran locally on my PC. I also have a server.js (Node) which runs locally.
Now I would like to use MongoLab (MongoDB as a service) to move my database from local to online.
I intended to change just the connection string, but for some reason my HTTP GET request now returns undefined.
My code:
server.js
var express = require('express');
var app = express();
var mongojs = require('mongojs');
var bodyParser = require('body-parser');

//var db = mongojs('nzbaienfurtdb', ['nzbaienfurtdb']); // This is my old local mongojs connection, which worked fine.
// MongoLab connection string; note the '@' between the credentials and the host
var databaseUrl = 'mongodb://dbuser:password@ds045604.mongolab.com:45604/nzbaienfurtdb';
var db = mongojs(databaseUrl, ['nzbaienfurtdb']); // database online with MongoLab

app.use(express.static(__dirname + "/www"));
app.use(bodyParser.json());

app.get('/nzbaienfurtdb', function (req, res) {
  console.log("I received a GET request");
  db.nzbaienfurtdb.find(function (err, docs) {
    console.log(docs);
    res.json(docs);
  });
});

app.listen(3000);
console.log("server running on 3000");
This is part of my GET request from a service:
service.js
return {
  getUsers: function () {
    $http.get("/nzbaienfurtdb")
      .success(function (data, status, headers, config) {
        headers("Cache-Control", "no-cache, no-store, must-revalidate");
        headers("Pragma", "no-cache");
        headers("Expires", 0);
        users = angular.fromJson(data);
      })
      .error(function (data, status, headers, config) {
        console.log('Data could not be loaded, try again later');
      });
    return users;
  }
};
MongoLab has already been set up.
My questions:
Why do I get undefined from my HTTP GET request?
What happens to my server.js file when I deploy the Ionic app to, for example, an Android phone? Is the server running on the device?
Since changing the db variable I also get the following error message in my Chrome console:
--------- ERROR CODE:
SyntaxError: Unexpected end of input
at Object.parse (native)
at Object.fromJson (http://localhost:3000/lib/ionic/js/ionic.bundle.js:8764:14)
at http://localhost:3000/js/userServices.js:23:27
at http://localhost:3000/lib/ionic/js/ionic.bundle.js:15737:11
at wrappedCallback (http://localhost:3000/lib/ionic/js/ionic.bundle.js:19197:81)
at wrappedCallback (http://localhost:3000/lib/ionic/js/ionic.bundle.js:19197:81)
at http://localhost:3000/lib/ionic/js/ionic.bundle.js:19283:26
at Scope.$eval (http://localhost:3000/lib/ionic/js/ionic.bundle.js:20326:28)
at Scope.$digest (http://localhost:3000/lib/ionic/js/ionic.bundle.js:20138:31)
at Scope.$apply (http://localhost:3000/lib/ionic/js/ionic.bundle.js:20430:24)
I hope somebody can help me out; I have been fighting this for ages!
Thank you in advance, guys!
This issue has finally been resolved!
I had to enable API access in my configuration on the MongoLab website.
I'm using Yeoman, Grunt, and Bower to construct a platform for building a frontend independently of a backend. The idea is that all of my AngularJS controllers, services, factories, etc. live in this project and get injected afterwards into my server-side codebase based on the result of grunt build.
My question is:
How can I mock endpoints so that the Grunt server responds to the same endpoints as my (Rails) app will?
At the moment I am using:
angular.module('myApp', ['ngResource'])
  .run(['$rootScope', function ($rootScope) {
    $rootScope.testState = 'test';
  }]);
And then in each of my individual services:
var mockJSON = {'foo': 'myMockJSON'};
And on every method:
if ($rootScope.testState == 'test') {
  return mockJSON;
} else {
  // real service logic with $q/$http goes here
}
Then after grunt build, testState = 'test' gets removed.
This is clearly a relatively janky architecture. How can I avoid it? How can I have Grunt respond to the same endpoints as my app (some of which have dynamic params), apply some logic (if necessary), and serve out a JSON file (possibly dependent on path params)?
I've fixed this issue by using express to write a server that responds with static JSON.
First I created a directory in my project called 'api'. Within that directory I have the following files:
package.json:
{
  "name": "mockAPI",
  "version": "0.0.0",
  "dependencies": {
    "express": "~3.3.4"
  }
}
Then I run npm install in this directory.
index.js:
module.exports = require('./lib/server');
lib/server.js:
var express = require('express');
var app = express();

app.get('/my/endpoint', function (req, res) {
  res.json({'foo': 'myMockJSON'});
});

module.exports = app;
and finally in my global Gruntfile.js:
connect: {
  options: {
    port: 9000,
    hostname: 'localhost',
  },
  livereload: {
    options: {
      middleware: function (connect, options) {
        return [
          lrSnippet,
          mountFolder(connect, '.tmp'),
          mountFolder(connect, yeomanConfig.app),
          require('./api')
        ];
      }
    }
  },
Then the services make the requests, and the express server serves the correct JSON.
After grunt build, the express server is simply replaced by a rails server.
As of grunt-contrib-connect v0.7.0 you can also just add your custom middleware to the existing middleware stack without having to rebuild it manually:
livereload: {
  options: {
    open: true,
    base: [
      '.tmp',
      '<%= config.app %>'
    ],
    middleware: function (connect, options, middlewares) {
      // inject a custom middleware into the array of default middlewares
      middlewares.push(function (req, res, next) {
        if (req.url !== '/my/endpoint') {
          return next();
        }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        // use valid JSON (double quotes); a single-quoted body is not parseable JSON
        res.end(JSON.stringify({ foo: 'myMockJSON' }));
      });
      return middlewares;
    }
  }
},
See https://github.com/gruntjs/grunt-contrib-connect#middleware for the official documentation.
Alternatively, you can use grunt-connect-proxy to proxy everything that is missing from your test server to an actual backend.
It's quite easy to install; the one thing to remember when adding the proxy to your livereload connect middleware is to add it last, like this:
middleware: function (connect) {
  return [
    lrSnippet,
    mountFolder(connect, '.tmp'),
    mountFolder(connect, yeomanConfig.app),
    proxySnippet
  ];
}
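For context, here is a sketch of what the matching proxies block in the connect config might look like, assuming the real backend runs on localhost:3000 and the API lives under /api (the path and ports are placeholders):

connect: {
  options: {
    port: 9000,
    hostname: 'localhost'
  },
  proxies: [
    {
      context: '/api',   // requests starting with /api ...
      host: 'localhost', // ... are forwarded to this host ...
      port: 3000,        // ... on this port
      changeOrigin: true
    }
  ],
  // livereload target with the proxySnippet middleware shown above goes here
},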
grunt-connect-prism is similar to the Ruby project VCR. It provides an easy way for front-end developers to record HTTP responses returned by their API (or some other remote source) and replay them later. It's essentially an HTTP cache, but for developers working on a single-page application (SPA). You can also generate stubs for API calls that don't exist and populate them however you want.
It's useful for mocking complex and high-latency API calls during development. It's also useful when writing e2e tests for your SPA alone, removing the server from the equation, which results in much faster execution of your e2e test suite.
Prism works by adding a custom connect middleware to the connect server provided by the grunt-contrib-connect plugin. While in 'record' mode it will generate a file per response on the filesystem with content like the following:
{
  "requestUrl": "/api/ponies",
  "contentType": "application/json",
  "statusCode": 200,
  "data": {
    "text": "my little ponies"
  }
}
DISCLAIMER: I'm the author of this project.
You can use an Apache proxy to connect your REST server with gruntjs.
Apache would do this:
proxy / -> gruntjs
proxy /service -> REST server
You would use your application through Apache, and the angular.js application would think it is talking to itself, so there is no cross-domain problem; a sketch of the configuration is below.
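A minimal sketch of such an Apache virtual host, assuming mod_proxy and mod_proxy_http are enabled, gruntjs serves on port 9000, and the REST server on port 8080 (both ports are placeholders); the more specific /service path must come before the catch-all:

<VirtualHost *:80>
    # forward API calls to the REST server first (most specific path wins)
    ProxyPass /service http://localhost:8080/service
    ProxyPassReverse /service http://localhost:8080/service

    # everything else goes to the grunt dev server
    ProxyPass / http://localhost:9000/
    ProxyPassReverse / http://localhost:9000/
</VirtualHost>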
Here is a great tutorial on how to set this up:
http://alfrescoblog.com/2014/06/14/angular-js-activiti-webapp-with-activiti-rest/
Just my alternative way, based on Abraham P's answer. It does not need express installed within the 'api' folder, and it lets me separate the mock services into one file per resource. For example, my 'api' folder contains 3 files:
api\
  index.js    // assigns all the "modules" so we can simply require them
  user.js     // all mocking for user
  product.js  // all mocking for product
user.js:
var user = function (req, res, next) {
  if (req.method === 'POST' && req.url.indexOf('/user') === 0) {
    res.end(
      JSON.stringify({
        'id': '5463c277-87c4-4f1d-8f95-7d895304de12',
        'role': 'admin'
      })
    );
  } else {
    next();
  }
};

module.exports = user;
product.js:
var product = function (req, res, next) {
  if (req.method === 'POST' && req.url.indexOf('/product') === 0) {
    res.end(
      JSON.stringify({
        'id': '5463c277-87c4-4f1d-8f95-7d895304de12',
        'name': 'test',
        'category': 'test'
      })
    );
  } else {
    next();
  }
};

module.exports = product;
index.js just assigns all the "modules" and we simply require that.
module.exports = {
  product: require('./product.js'),
  user: require('./user.js')
};
My Gruntfile.js:
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: 'localhost',
    livereload: 35729
  },
  livereload: {
    options: {
      open: true,
      middleware: function (connect) {
        return [
          connect.static('.tmp'),
          connect().use(
            '/bower_components',
            connect.static('./bower_components')
          ),
          connect.static(appConfig.app),
          require('./api').user,
          require('./api').product
        ];
      }
    }
  }
This is a very weird problem with "connect-mongo".
In my server, I have two scripts.
1) Creating the express server with a session backed by a Mongo data store; it has no problem connecting or creating the session:
var express = require('express');
var MongoStore = require('connect-mongo');

var app = express.createServer(
  express.session({
    secret: cfg.wiki_session_secret,
    store: new MongoStore({
      db: 'mydatabase',
      host: '10.10.10.10',
      port: 27017
    })
  })
);
2) Just creating the store without express:
var mongo = require('mongodb');
var MongoStore = require('connect-mongo');

var options = { db: 'mydatabase' };
var store = new MongoStore(options, function () {
  var db = new mongo.Db(options.db, new mongo.Server('10.10.10.10', 27017, {}));
  db.open(function (err) {
    db.collection('sessions', function (err, collection) {
      callback(store, db, collection); // callback is defined elsewhere (omitted here)
    });
  });
});
That throws the following connection error:
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: Error connecting to database
at /home/eauser/node_modules/connect-mongo/lib/connect-mongo.js:106:13
at /home/eauser/node_modules/connect-mongo/node_modules/mongodb/lib/mongodb/db.js:79:30
at [object Object].<anonymous> (/home/eauser/node_modules/connect-mongo/node_modules/mongodb/lib/mongodb/connections/server.js:113:12)
at [object Object].emit (events.js:64:17)
at Array.<anonymous> (/home/eauser/node_modules/connect-mongo/node_modules/mongodb/lib/mongodb/connection.js:166:14)
at EventEmitter._tickCallback (node.js:126:26)
I just don't know why.
connect-mongo is middleware for the Connect framework, which Express is based on.
So you must use the middleware with the Express or Connect framework; otherwise it won't work. It's not written to be a standalone session library.
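To illustrate the point, here is a minimal sketch of connect-mongo used the supported way, wired into the Express middleware chain (same store options as the question's first script, which already works; the secret is a placeholder):

var express = require('express');
var MongoStore = require('connect-mongo');

var app = express.createServer(
  // The store only works when plugged into the session middleware like this
  express.session({
    secret: 'keyboard cat', // placeholder secret
    store: new MongoStore({ db: 'mydatabase', host: '10.10.10.10', port: 27017 })
  })
);

app.listen(3000);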
You can use mongoose to connect instead. Install it with npm:
npm install mongoose
or install it globally:
npm install -g mongoose
app.js
var mongoose = require("mongoose");
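As a rough sketch of what the connection itself could look like, using the host and database name from the question (the event handlers are optional):

var mongoose = require("mongoose");

// Connect to the same database the question uses
mongoose.connect('mongodb://10.10.10.10:27017/mydatabase');

mongoose.connection.on('error', function (err) {
  console.log('mongoose connection error: ' + err);
});

mongoose.connection.once('open', function () {
  console.log('mongoose connected');
});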
This module has a callback in the constructor which is called when the database is connected and the collection is initialized, so it won't work the way you expect.
I had the same problem as you, and I wanted the same interface that you are aiming for here. So I wrote another module called YAMS - Yet Another Mongo Store. This is an example with YAMS:
var MongoClient = require("mongodb").MongoClient;
var Yams = require('yams');

var store = new Yams(function (done) {
  // this will be called once; you must return the sessions collection.
  MongoClient.connect('mongodb://localhost/myapp', function (err, db) {
    if (err) return done(err);
    var sessionsCollection = db.collection('sessions');
    // use TTL in mongodb; the document will be automatically expired when the session ends.
    sessionsCollection.ensureIndex({ expires: 1 }, { expireAfterSeconds: 0 }, function () {});
    done(null, sessionsCollection);
  });
});

app.use(express.session({
  secret: 'black whisky boycott tango 2013',
  store: store
}));
This is in my opinion more flexible than the connect-mongo middleware.