Database access: singleton or close - MongoDB

I have been working on a project for a while and I am close to finishing it.
However, some technical questions are now coming up.
I am dealing with MongoDB and a Spring Data layer.
However, it is not the specific database that matters so much as the question behind it.
I am building a REST API with JAX-RS.
I decided to put all my endpoints in prototype scope and my services in request scope (because some services can be used more than once in the same request).
Then comes the question of the database.
- Some people tell me that a singleton is the best approach, so that there is only one connection; but on the other hand, if traffic grows, all requests will be stuck waiting at the database entry point.
- There is also the solution of closing the connection. I implemented a filter which performs some processing (like renewing tokens if needed), and I could plug the connection close in there. But some people say that it is really costly to open and close a connection.
I found some answers, but they are more related to the client side (phones, for example), where the constraints are not the same.

From the MongoDB Java driver documentation (http://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/connect-to-mongodb/):
The MongoClient() instance represents a pool of connections to the database; you will only need one instance of class MongoClient even with multiple threads.
I might be wrong, but I don't think Mongo takes advantage of multiple connections (they run sequentially anyway), making them pointless.
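For what it's worth, here is a minimal sketch of the documented pattern with the 3.x Java driver and Spring: one singleton MongoClient owns the connection pool and is handed out to the request-scoped services. This is only an illustration, not the original project's code; the class, host, and database names are placeholders.

// Minimal sketch (placeholder names): one application-scoped MongoClient
// shared by every request-scoped service.
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoConfig {

    // Spring beans are singletons by default, so the whole application shares
    // this one client -- and therefore one connection pool.
    @Bean(destroyMethod = "close")
    public MongoClient mongoClient() {
        return new MongoClient("localhost", 27017);
    }

    // Request-scoped services can inject this handle; MongoDatabase objects
    // are thread-safe and cheap to obtain from the client.
    @Bean
    public MongoDatabase database(MongoClient mongoClient) {
        return mongoClient.getDatabase("mydb");
    }
}

Because the client maintains a pool rather than a single socket, the singleton does not mean requests queue behind one connection; the pool size can be tuned through MongoClientOptions (connectionsPerHost) if the defaults prove too small.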

Related

EF Core and caching of results

I'm working on a WebSocket application. When the client connects to the server, the WebSocket session gets one DbContext from dependency injection:
services.AddDbContext<Db>();
This DbContext will be the same for the whole WebSocket session. The problem is that the DbContext will cache results, so if the WebSocket session is open for, say, two hours and reads the same data twice while that data has been changed outside the DbContext, the DbContext will give back invalid data as the response to the query (the cached result from the last query). There are several examples of how to avoid this, but it has to be done on every query. That is not really practical, and somewhere in the code it might be forgotten, leaving a chance of getting invalid data.
Is there some way to permanently disable caching?
I think you are trying to use Entity Framework in a very wrong way. DbContext is not supposed to work like this, and it is not a cache per se, although it keeps some data in memory for you.
In your case I would suggest that you either:
Query the database every time, as you suggested.
Or, even better:
Take advantage of a proper caching mechanism.
The decision between SQL Server and a caching mechanism comes down to how long you want to keep the data and how often you want to query it. If the data is permanent and not queried very often, use SQL Server. If it only lives for a couple of hours and you query it very often, caching is the better choice.
As a caching mechanism you can use:
The default MemoryCache, but it has quite limited functionality and is restricted to the application level, so if you run multiple instances of your application this solution will not work out.
A distributed cache solution, like Redis, which supports a lot of functionality and lets you connect many instances of your application (see the sketch below).
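The thread is .NET-specific, but the cache-aside idea the answer recommends is language-agnostic. Purely as an illustration (shown in Java to match the other examples in this compilation, with made-up names), a small time-limited cache in front of the database looks roughly like the sketch below; for multiple application instances you would swap the in-memory map for a distributed store such as Redis.

// Illustrative cache-aside sketch with a TTL -- not EF Core code. The loader
// function stands in for "query the database"; all names are hypothetical.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TtlCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Serve the cached value while it is still fresh; otherwise reload it from
    // the backing store (the database) and cache the new copy.
    public V get(K key, Function<K, V> loader) {
        Entry<V> entry = entries.get(key);
        long now = System.currentTimeMillis();
        if (entry == null || entry.expiresAtMillis < now) {
            V value = loader.apply(key);
            entries.put(key, new Entry<>(value, now + ttlMillis));
            return value;
        }
        return entry.value;
    }
}

Usage would be something like cache.get(key, k -> repository.load(k)) with a TTL of a few minutes, so stale reads are bounded by the TTL rather than by the lifetime of a long-lived context.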

RethinkDB - How to stream data to the browser

Context
Greetings,
One day I randomly found RethinkDB and I was really fascinated by the whole real-time changes thing. In order to learn how to use this tool I quickly spun up a container running RethinkDB and started a small project. I wanted to make something very simple, so I thought about creating a service in which speakers can create rooms and the audience can ask questions. Other users can upvote questions to let the speaker know which ones are the best. Obviously this project has a lot of real-time needs that I believe are best satisfied by RethinkDB.
Design
I wanted to use a very specific set of tools for this. The backend would be made in Laravel Lumen, the frontend in Vue.js, and the database, of course, would be RethinkDB.
The problem
RethinkDB, it seems, is not designed to be exposed to the end user directly, even when no security concern exists.
Assuming that the user only needs to see the questions and their upvotes in real time, no write permissions are needed, and if a user changed the room ID nothing bad would happen since the rooms are all publicly accessible.
Therefore something is needed to await data updates and push them through a socket to the client (socket.io or Pusher, for example).
Given that the backend is written in PHP, I cannot tell Lumen to stay awake and wait for data updates. From what I have seen in online tutorials, a secondary system should be used that listens for changes and then pushes them (a Node.js service, for example).
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and that it defeats the purpose of RethinkDB.
If I have to send the action from the client's computer (the user asks a question), save it to the database, have a script that listens for changes, then push the changes to socket.io, and finally have the client (Vue.js) act when a new event arrives, what is the point of having a real-time database in the first place?
I could avoid all this headache simply by having the Lumen app push the event directly to socket.io and use any other database system instead.
I really can't understand the point of all this. I am not experienced with NoSQL databases by any means, but I really want to experiment with them.
Thank you.
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and that it defeats the purpose of RethinkDB.
RethinkDB has no built-in mechanism to transfer data to end users, and it has no access control (in the conventional sense) either. The common way, as you said, is to spin up one or more Node instances running socket.io. On each instance you can listen on your RethinkDB change streams and use socket.io's broadcast functionality. That is the usual approach, but since RethinkDB's change streams are quite well optimized, you could also open a change stream for every incoming socket.io connection.

Using MongoDB in AWS Lambda with the mLab API

Usually you can't use MongoDB in Lambdas because Lambda functions are stateless and operations on MongoDB require a connection, so you suffer a large performance hit from setting up a DB connection each time a function runs.
A solution I have thought of is to use mLab's REST API (http://docs.mlab.com/data-api/); that way I don't need to open a new connection each time my Lambda function is called.
One problem I can see is that mLab's REST service could become a bottleneck, plus I'm relying on it never going down.
Thoughts?
I have a couple of alternative suggestions for you on this, only because I've never used mLab.
Set up http://restheart.org/ and have it sit between your Lambda microservices and your MongoDB instance. I've used this with pretty decent success on another project. It does come with the downside of now having an EC2 instance to maintain. However, setting up RESTHeart is pretty easy, and the crew maintaining and supporting it is pretty great.
You can set up a Lambda function that pays the cost of connecting and keeps a connection open. All of your other microservices can then call that Lambda function for the data they need. If it is hit more frequently, you will not have to pay the cost of the DB connection as often. However, that first connection can be pretty brutal, so you may need something to keep it warm. You will also have the potential issue of connections never getting properly closed and eventually running out.
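To make the second option concrete, the usual trick is to create the client outside the handler so that a warm Lambda container reuses the same connection pool across invocations. This is a rough sketch only (Java runtime, 3.x Mongo driver); the handler, database, and collection names, and the MONGODB_URI environment variable, are all made up for illustration.

// Sketch: the MongoClient lives in a static field, so it is created once per
// container. Only cold starts pay the connection cost; warm invocations reuse it.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class GetItemHandler implements RequestHandler<String, String> {

    // Hypothetical connection string supplied through the environment.
    private static final MongoClient CLIENT =
            new MongoClient(new MongoClientURI(System.getenv("MONGODB_URI")));

    @Override
    public String handleRequest(String id, Context context) {
        MongoCollection<Document> items =
                CLIENT.getDatabase("mydb").getCollection("items");
        Document doc = items.find(new Document("_id", id)).first();
        return doc == null ? null : doc.toJson();
    }
}

The warm-up and "connections never closed" caveats above still apply: the pool is only released when the container itself is recycled.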
Those two options aside, if mLab is hosting your DB, you have already put a lot of faith in their ability to keep a system alive. If they can't keep an API up, that lack of faith should extend to their ability to keep your DB alive as well.

Mysqli - Should I reconnect every time?

I'm developing a project, and every time I need to send a query to MySQL I'm opening a new connection.
Is this right or should I only connect once? How should I proceed?
Thank you
You probably should not open a new connection for every query.
There are exceptions to every general rule, of course, but typically you should connect once, sometime before the first query, and then re-use the same mysqli connection object for multiple queries during the given PHP request.
There is no limit to the number of queries you can run in serial using a given connection. The only limitation is that you can run only one query at a time.
Think of it this way: if you were writing a PHP script to simply read a file, and you knew you were going to read multiple lines from the file, you would keep the file handle open and make multiple read requests from it before you close the file. You would not re-open the file every time you wanted to read from it during a single PHP request.
The overhead of opening new connections to the database is reasonably low (at least for MySQL), but if you have an opportunity to easily reduce that overhead, it's likely worth it to do so.
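This thread is about mysqli, but the connect-once pattern itself is not PHP-specific. Purely as an illustration (in Java/JDBC, to match the other examples in this compilation, with a placeholder URL and credentials), the shape of it is: open one connection, run every query of the request over it, and close it once at the end.

// Illustration only (JDBC rather than mysqli): a single connection serves
// several queries and is closed exactly once. URL and credentials are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ConnectOnceExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb";

        // try-with-resources closes the connection after all queries have run.
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            try (PreparedStatement ps =
                         conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }

            // The second query reuses the same connection instead of reconnecting.
            try (PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM posts")) {
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                }
            }
        }
    }
}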
Re your comment:
You're right, there's no way to keep your $mysqli object across pages. That's what is meant by the term request. Objects and resources are cleaned up at the end of a request.
When you said you were creating a new database connection for every query, I assume you meant that if you execute more than one SQL query during a single request (that is, a single page view), you would create a new $mysqli object for each query. That would be unnecessary.
There's one other way you can reuse the database connections from one page view to the next. That is to use persistent connections. This doesn't preserve the $mysqli object -- you still have to run new mysqli on each page load. But internally it is reusing the database connection from a previous PHP request.
All you have to do to open a persistent connection in this way is to add the prefix p: to your hostname.
Servers and databases have a finite number of connections available. If every one of your users keeps a connection open for no reason (like while they are reading a blog post on a page that has already loaded), it will cap the number of people who can connect to your project in production. Unless there is a very specific need to keep a connection open, I recommend not doing so.
Again though, it really depends on the scope of your project. If you are just talking about a single page of a website, typically it's fine to leave the connection open until you are done loading the page.

Client-Server Applications for iPhone

I have a question regarding this topic, about client-server applications:
1) Is it necessary to load the database directly into the application?
Suppose I have a DB in the back end and my application has to connect to that DB and display the results in the view; do I need to add the DB into the application directly for this?
2) Can we access any DB or a file on a remote server and show the required results (without adding that particular DB or file into the application directly)? How can we do this?
I saw a similar question on Stack Overflow; one answer was to use a plist, but I am new to this. I am browsing the net but not able to get clear results. I have lost many interviews because of this question.
Thanks,
1) Is it necessary to load the database directly into the application? Suppose I have a DB in the back end and my application has to connect to that DB and display the results in the view; do I need to add the DB into the application directly for this?
I'm not sure I understand this question. No, you don't need to load a database directly into a client in a client-server architecture. Normally, when I think of a design where a server has a database, I imagine there's some kind of way for the client to query the server for information. Perhaps it's making HTTP requests, which the server parses into a query, runs the query, and then returns the results (perhaps in XML form?).
2) Can we access any DB or a file on a remote server and show the required results (without adding that particular DB or file into the application directly)? How can we do this?
Are you asking if it's possible, in general, to access a server database from a client? Yes, of course. (See above, re: HTTP Requests).
Any arbitrary file? That depends on how the server is set up. Again, HTTP is one protocol that works that way; if you send an HTTP query like "GET someimage.png HTTP/1.0", the server could just be grabbing the whole file someimage.png and sending it back in the response. (Technically, it's not necessarily snarfing a whole file -- it could be creating that PNG dynamically, since there's nothing in the HTTP protocol that says it must be sending an existing file -- but that's outside the scope of your question.)
I have lost many interviews because of this question.
Not to sound too snarky, but interviews are often won and lost not because you don't know the answer, but because you can't communicate effectively. You haven't phrased your question(s) here particularly well.