I am just getting into Node, and looking at some options for connecting to SQL Server. A lot of the demos I have seen are simple "here's how to connect with this query...". However, I haven't found much on how connection pooling is managed.
Tedious and node-tds are built on the TDS protocol. I read through the documentation for FreeTDS and how it manages connection pooling. Are these related?
I also found another extension, T-SQL FTW, which was written in C# with a C++ wrapper that allows it to communicate with Node. Since it uses ADO.NET managed code, I'm wondering if this is a better option for stable connection pooling through Node, and whether there are other solid options available with benchmarks and more elaborate documentation?
Connection pooling is normally implemented as a wrapper around a connection provider (such as Tedious or node-tds). Pooling is a separate concern, and I'm not sure that it belongs in the connection provider.
I'd suggest that you pick the driver that you want to use, and then write a simple connection pooling solution for it. It's a fairly straightforward task.
[I'm the author of Tedious.]
So, I've created a MySQL database on AWS RDS and written a client desktop application in .NET that uses Sockets to authenticate with, connect to, and manipulate the database, using the endpoint ("xxxxxx.rds.amazon.com") and a username/password. This works great.
I was trying to see if I could accomplish something similar in client-side JavaScript. It seems like the analogous API available there is WebSockets, which I am familiarizing myself with. However, it seems to me (mostly from the absence of how-tos on the web) that the endpoint ("xxxxxx.rds.amazon.com") is accessible via Sockets but not via WebSockets, and that there is no alternate route to my MySQL database for WebSockets. Is this correct?
It makes sense that these are two different types of servers, but generally speaking, are internet resources served out to Sockets but not to WebSockets? If that is true, they are not as analogous as I originally thought; is WebSockets mostly good for communicating between WebSocket clients and servers that I create myself, or can it be used to access good stuff already existing on the internet, as Sockets can?
(Note: this isn't asking for opinions on the best way to do this, I'm just confirming my impression of the specific way these technologies are used.)
Thank you.
I'm developing a system with several client computers and one server that hosts the central database. Each client performs its CRUD operations directly against the database using Entity Framework, over the local network. The only real challenge I have with this is versioning (EF migrations).
On the other hand, I have heard about a similar architecture where only an application server talks to the database, and clients use a WCF service for all CRUD operations and never access the database directly.
What would be the advantages of taking the WCF approach? It seems like there would be twice as much development work for not too much payoff, not to mention poorer performance. And as far as versioning goes, you can't escape it; you now have to migrate EF and version your WCF service. But people must choose this architecture for a reason and I'm curious as to why.
For me, the most important difference between centralized and distributed database access is the ability to make good use of connection pooling (https://msdn.microsoft.com/pl-pl/library/8xx3tyca(v=vs.110).aspx).
SQL Server has a limited number of simultaneously open connections (https://msdn.microsoft.com/en-us/library/ms187030.aspx). If each of your applications uses a connection pool (which EF does by default), an opened connection is returned to the pool instead of being closed, so you can end up with, say, 10 open connections held by each running client application.
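To make the pooling point concrete, here is a minimal sketch (the server, database, table, and pool sizes are made-up placeholders) of how each client process that uses ADO.NET/EF keeps its own pool, whose size you can cap in the connection string:

```csharp
using System;
using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        // Hypothetical connection string: pooling is on by default in ADO.NET, and
        // "Max Pool Size" caps how many physical connections this process can hold.
        const string connectionString =
            "Server=central-db;Database=AppDb;Integrated Security=true;" +
            "Min Pool Size=1;Max Pool Size=10";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // taken from this process's pool (or created if the pool is empty)

            using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        } // Dispose/Close returns the connection to the pool; it is not physically closed
    }
}
```

With, say, ten client machines each holding a pool of up to 10 connections, the server can see up to 100 open connections even while the applications are mostly idle, whereas a single application server multiplexes all clients through one pool.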
I'm reading "Pro WF 4.5" published by APress, which seems to say unequivocally that in order to persist state in a long-running workflow after a server crash/shutdown (anything that'd clear memory), a SQL Server back-end is required for persistence.
A lot of the MSDN stuff I see online seems to contradict this. For example, the article linked below.
https://msdn.microsoft.com/en-us/library/dd851337.aspx
What is the real scoop, from someone actually using WF? TIA.
There is a built-in instance store for SQL Server (https://msdn.microsoft.com/en-us/library/system.activities.durableinstancing.sqlworkflowinstancestore(v=vs.110).aspx), but there is nothing stopping you from creating your own - https://msdn.microsoft.com/en-us/library/ee829481(v=vs.110).aspx
That way you could use any persistence you like.
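To show how the built-in store is typically wired up, here is a minimal sketch (the connection string and workflow are placeholders, and the database is assumed to already contain the instance store schema created by the SqlWorkflowInstanceStoreSchema.sql and SqlWorkflowInstanceStoreLogic.sql scripts that ship with .NET):

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        // Placeholder connection string pointing at a database that has the
        // workflow instance store schema installed.
        var store = new SqlWorkflowInstanceStore(
            "Server=.;Database=WFPersistence;Integrated Security=true");

        // A trivial placeholder workflow that goes idle for five minutes.
        var app = new WorkflowApplication(new Sequence
        {
            Activities = { new Delay { Duration = TimeSpan.FromMinutes(5) } }
        });

        app.InstanceStore = store;

        // When the workflow goes idle, persist it and unload it from memory,
        // so it survives a process crash or shutdown.
        app.PersistableIdle = e => PersistableIdleAction.Unload;

        app.Run();
        Console.ReadLine();
    }
}
```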
Implementing your own durable instance store for WF can be done, but my experience is that it is difficult. I ended up with a provider created by Devart; they created a provider for Oracle databases. You can find more information here: https://www.devart.com/dotconnect/oracle/docs/WorkflowInstanceStore.html
I could be totally misunderstanding Entity Framework here. I want to use it in my latest project (how else do you learn?). The problem is that the IBM i driver doesn't have support for it built in. Is it possible to create that support from scratch? Is it worth it?
It sounds like you'd be writing your own ADO.NET data provider to connect to IBM DB2 for i. Microsoft provides documentation for creating your own provider and a sample.
The data provider would be responsible for communicating with the database, so I'm not sure how you'd accomplish that. Either you'd be implementing your own connection to the database server running on the i (maybe you can port the SQL piece of JTOpen), or you'd be delegating your calls to the IBM-provided data provider (if that's even possible) or other data access method.
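To give a feel for the scope of that work, here is a bare-bones sketch of the connection class such a provider would need (all names are hypothetical, and the members that would actually talk to the IBM i are left as stubs):

```csharp
using System.Data;
using System.Data.Common;

// Hypothetical provider skeleton: the real work is implementing these members
// against DB2 for i, either by speaking the wire protocol yourself or by
// delegating to another data access layer.
public class Db2iConnection : DbConnection
{
    private ConnectionState _state = ConnectionState.Closed;

    public override string ConnectionString { get; set; }

    public override string Database => "MYLIB";     // stub
    public override string DataSource => "MYIBMI";  // stub
    public override string ServerVersion => "7.2";  // stub
    public override ConnectionState State => _state;

    public override void Open()
    {
        // TODO: establish the physical connection to the IBM i here.
        _state = ConnectionState.Open;
    }

    public override void Close()
    {
        // TODO: tear down (or pool) the physical connection here.
        _state = ConnectionState.Closed;
    }

    public override void ChangeDatabase(string databaseName)
    {
        // TODO: switch the default library/collection.
    }

    protected override DbCommand CreateDbCommand()
    {
        // TODO: return a command object that executes SQL over this connection.
        throw new System.NotImplementedException();
    }

    protected override DbTransaction BeginDbTransaction(IsolationLevel isolationLevel)
    {
        // TODO: start a transaction on the physical connection.
        throw new System.NotImplementedException();
    }
}
```

A full provider also needs command, parameter, data reader, and factory classes, which is why delegating to an existing data access layer (if possible) is far less work than implementing the protocol yourself.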
I couldn't decide whether I thought this was (1) a huge pain in the butt or (2) an opportunity for an open source project. (I guess it could be both.) It seems like it'd be easier to lobby IBM to make this part of their stock provider. You might complain about it on MIDRANGE-L and see if people will take up the cause.
Disclaimer: I am a newbie in the .NET world, so maybe there's an easier way to accomplish what you're trying to do.
Last time I used Npgsql (version 1.0), it was very slow. Is there any other alternative to Npgsql?
Version 1.0 is three years old. Try to use the newest one.
Npgsql is an excellent connector. Just upgrade to the newest version. Make sure you take a look at the documentation; it is really good. That will solve the speed issue.
You asked about an alternative, so I also have to recommend another good connector: dotConnect for PostgreSQL. It is made by Devart. There is a simple free edition as well as a fully robust paid connector. The paid one has LINQ and Entity Framework support.
http://www.devart.com/dotconnect/postgresql/
I have experience with the .NET MySQL connector. What you are describing seems to be a DNS issue. If you are using a hostname in your connection string and are able to change it to an IP address, try that and see if your delay goes away.
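If you want to test that theory, here is a quick sketch with the MySQL connector (the hostname, IP address, and credentials are placeholders):

```csharp
using MySql.Data.MySqlClient;

class DnsCheck
{
    static void Main()
    {
        // Hypothetical endpoint: if connecting by hostname is slow...
        var byHostname = "Server=db.example.com;Database=mydb;Uid=app;Pwd=secret;";

        // ...try the resolved IP address instead. If the delay disappears,
        // the problem is DNS resolution, not the connector.
        var byIpAddress = "Server=203.0.113.25;Database=mydb;Uid=app;Pwd=secret;";

        using (var conn = new MySqlConnection(byIpAddress))
        {
            conn.Open();
        }
    }
}
```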
npgsql is still the choice for .NET when connecting to PostgreSQL.
Since version 1.0, the connector has improved drastically; check out this presentation from Shay Rojansky. It is not the latest, but the performance boost was already quite impressive as of November 2018.
If you are upgrading from an old version, read the release notes of the latest one carefully; breaking changes might affect your code.
Also, I strongly recommend looking into optimizing PostgreSQL itself. I work with it daily in a distributed enterprise environment with massive workloads; it can be tuned and tweaked with a dramatic impact on overall performance.
As #yojimbo87 said, upgrade to a newer connector version and try that.
Use Entity Framework Core; Npgsql has an Entity Framework (EF) Core provider.
Use Postgres 11.
Check your connection pool settings.
Like most ADO.NET providers, Npgsql uses connection pooling by default. When you Close() the NpgsqlConnection object, an internal object representing the actual underlying connection that Npgsql uses goes into a pool to be re-used, saving the overhead of creating another unnecessarily.
This suits most applications well, as it's common to want to use a connection several times in the space of a second.
If it doesn't suit you, including the option Pooling=false in your connection string will override this default, and Close() will indeed close the actual connection.
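For illustration, a small sketch with placeholder host and credentials showing both behaviours via Npgsql's connection string parameters:

```csharp
using Npgsql;

class PoolingExample
{
    static void Main()
    {
        // Default: pooled. Close()/Dispose() returns the physical connection to the pool.
        var pooled = "Host=localhost;Username=app;Password=secret;Database=mydb;" +
                     "Minimum Pool Size=1;Maximum Pool Size=20";

        using (var conn = new NpgsqlConnection(pooled))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand("SELECT version()", conn))
            {
                System.Console.WriteLine(cmd.ExecuteScalar());
            }
        } // the physical connection goes back to the pool here

        // Opt out: with Pooling=false, Close()/Dispose() really closes the physical connection.
        var unpooled = "Host=localhost;Username=app;Password=secret;Database=mydb;Pooling=false";

        using (var conn = new NpgsqlConnection(unpooled))
        {
            conn.Open();
        } // the physical connection is closed here
    }
}
```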
Npgsql has an Entity Framework (EF) Core provider. It behaves like other EF Core providers (e.g. SQL Server), so the general EF Core docs apply here as well. If you're just getting started with EF Core, those docs are the best place to start.
Development happens in the Npgsql.EntityFrameworkCore.PostgreSQL repository, all issues should be reported there.
https://www.npgsql.org/efcore/index.html
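As a minimal sketch of what using that provider looks like (the entity, context, and connection string are made up for illustration; the package is Npgsql.EntityFrameworkCore.PostgreSQL):

```csharp
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    // UseNpgsql comes from the Npgsql.EntityFrameworkCore.PostgreSQL package;
    // the connection string here is a placeholder.
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseNpgsql("Host=localhost;Database=mydb;Username=app;Password=secret");
}

class Program
{
    static void Main()
    {
        using (var db = new AppDbContext())
        {
            db.Database.EnsureCreated();
            db.Blogs.Add(new Blog { Name = "Hello Npgsql" });
            db.SaveChanges();
        }
    }
}
```

Beyond the provider registration, everything else (migrations, LINQ queries, change tracking) works the same as with the SQL Server provider, which is why the general EF Core docs apply.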