How to count the number of queries passed from the Prisma client? - postgresql

I'm using Node as the backend for my application and PostgreSQL as the database, which I access through Prisma. Now I'm worried about the DB hits on each API call. Is there any method to find the number of queries passed through the Prisma client, or to see the queries themselves? Or should I log something in PostgreSQL for that?
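One way to do this is Prisma's built-in query-event logging. A minimal sketch (the `user` model and the request wrapper are hypothetical; adapt them to your schema):

```typescript
import { PrismaClient } from '@prisma/client';

// Emit query events instead of printing them, so we can count them ourselves.
const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
});

let queryCount = 0;

prisma.$on('query', (e) => {
  queryCount += 1;
  // e.query is the SQL sent to Postgres; e.duration is its time in ms.
  console.log(`#${queryCount} (${e.duration}ms): ${e.query}`);
});

// Hypothetical request handler: reset the counter per request to see
// how many queries a single API call costs.
async function handleRequest() {
  queryCount = 0;
  await prisma.user.findMany(); // hypothetical model from your schema
  console.log(`queries for this request: ${queryCount}`);
}
```

Alternatively, on the PostgreSQL side you can set `log_statement = 'all'` to log every statement, or enable the `pg_stat_statements` extension to get aggregate counts per query shape.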

Related

How to set up multi-tenancy using row level security on Postgres with knex

I am architecting a database where I expect to have thousands of tenants, where some data will be shared between tenants. I am currently planning on using Postgres with row level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/workarounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
Perhaps one solution might be to cache a limited number of connections and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of a plain object, since a Map remembers insertion order.
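A minimal sketch of that idea, keyed by tenant id (the size limit, connection settings, and the per-tenant role are all placeholders):

```typescript
import { knex, Knex } from 'knex';

const MAX_CACHED = 50;                      // placeholder limit, below max_connections
const knexCache = new Map<string, Knex>();  // Map preserves insertion order

function getKnexForTenant(tenantId: string): Knex {
  const cached = knexCache.get(tenantId);
  if (cached) return cached;

  // At the limit, destroy the oldest cached connection pool first.
  if (knexCache.size >= MAX_CACHED) {
    const [oldestTenant, oldestKnex] = knexCache.entries().next().value!;
    oldestKnex.destroy();                   // closes the pool
    knexCache.delete(oldestTenant);
  }

  const instance = knex({
    client: 'pg',
    connection: {
      host: 'localhost',                    // placeholder settings
      database: 'app',
      user: `tenant_${tenantId}`,           // hypothetical per-tenant role for RLS
      password: 'secret',
    },
  });
  knexCache.set(tenantId, instance);
  return instance;
}
```

A FIFO eviction like this matches the suggestion above; a true LRU would additionally re-insert an entry on each hit so that recently used tenants survive longer.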

RDS Data API BatchExecute taking significantly longer than standard connection

I have an AWS Lambda function that needs to insert several thousand rows of data into an RDS PostgreSQL database within a serverless cluster. Previously I used a normal database connection with psycopg2, but I switched to the RDS Data API in order to improve performance. However, using the Data API, BatchExecuteStatement exceeds the 5-minute Lambda limit and still fails to commit the transaction in this time. Meanwhile, the psycopg2 solution, which uses a different transfer protocol, inserts all the data in under 30 seconds.
How is this possible? Shouldn't the Data API give superior performance, since it doesn't need to establish a connection? Can I change any settings to make the RDS Data API perform suitably?
I don't believe I am hitting any of the data size limits, because the Lambda times out rather than explicitly throwing an error. I also know that the connection is succeeding, as other small queries execute successfully.
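For what it's worth, one workaround people try is splitting the rows across several smaller BatchExecuteStatement calls so that no single call runs long. A sketch with the AWS SDK for JavaScript v3 (the boto3 equivalent is batch_execute_statement); the ARNs, table, and chunk size are placeholders:

```typescript
import {
  RDSDataClient,
  BatchExecuteStatementCommand,
} from '@aws-sdk/client-rds-data';

const client = new RDSDataClient({ region: 'us-east-1' }); // placeholder region

// Insert rows in smaller batches so no single Data API call runs long.
async function insertAll(rows: { id: number; name: string }[]) {
  const parameterSets = rows.map((r) => [
    { name: 'id', value: { longValue: r.id } },
    { name: 'name', value: { stringValue: r.name } },
  ]);

  const CHUNK = 500; // placeholder; tune against the Data API request limits
  for (let i = 0; i < parameterSets.length; i += CHUNK) {
    await client.send(
      new BatchExecuteStatementCommand({
        resourceArn: 'arn:aws:rds:...',          // placeholder cluster ARN
        secretArn: 'arn:aws:secretsmanager:...', // placeholder secret ARN
        database: 'mydb',                        // placeholder database name
        sql: 'INSERT INTO my_table (id, name) VALUES (:id, :name)',
        parameterSets: parameterSets.slice(i, i + CHUNK),
      }),
    );
  }
}
```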

How can we query data in Postgres and Oracle in Scala JDBC using a single getConnection?

I'm trying to query tables that live in both Oracle and Postgres. I have used two getConnection methods, but when I try to do join operations it gives me an error. The error occurs because I am querying a single ResultSet, which is tied to one particular database connection (either Postgres or Oracle). Can we pass two database connections in a single getConnection() method?
Note: written in Scala.
JDBC essentially works by sending a query to a database server and presenting the server's response to the developer as a ResultSet. In other words, it does not execute the query itself but hands it off to the database, which executes it. Hence JDBC cannot magically execute a single query that combines tables from different database servers.
Some databases support linking multiple servers together. In that case you would have to configure the databases to know about each other, then connect to one of them and send it a query that references the linked server in the correct format (which is different for every vendor). However, not all vendors support linked servers, and even fewer support linking to servers of other vendors.
Another option is using something like Spark, which has its own query engine. Spark can use JDBC to download the data from both servers to your machine or Spark cluster and then execute the query locally.
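The non-Spark fallback is the same idea done by hand: run one query per connection and join the rows in application code. The thread is about Scala/JDBC, but the pattern is identical in any client; here is a sketch using Node drivers, with hypothetical tables and placeholder connection settings:

```typescript
import { Client } from 'pg';
import oracledb from 'oracledb';

// Hypothetical tables: orders(customer_id, total) in Postgres and
// CUSTOMERS(ID, NAME) in Oracle. All connection settings are placeholders.
async function joinAcrossDatabases() {
  const pg = new Client({ connectionString: 'postgres://user:pass@localhost/app' });
  await pg.connect();
  const ora = await oracledb.getConnection({
    user: 'scott',
    password: 'tiger',
    connectString: 'localhost/XEPDB1',
  });

  // One query per server; each statement only ever sees its own connection.
  const orders = (await pg.query('SELECT customer_id, total FROM orders')).rows;
  const customers = (
    await ora.execute('SELECT id, name FROM customers', [], {
      outFormat: oracledb.OUT_FORMAT_OBJECT,
    })
  ).rows as { ID: number; NAME: string }[];

  // The "join" happens here, in memory: index one side, probe with the other.
  const nameById = new Map(customers.map((c) => [c.ID, c.NAME]));
  const joined = orders.map((o) => ({
    ...o,
    customerName: nameById.get(o.customer_id),
  }));

  await pg.end();
  await ora.close();
  return joined;
}
```

This works for small result sets; once either side no longer fits in memory, you're back to linked servers or an engine like Spark.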

Mapping Form.io responses to a PostgreSQL DB

I am considering using Form.io Open Source, which would be installed on AWS as the front end of an existing app that has a PostgreSQL DB.
I am very new to this, and I cannot tell whether the resquel library allows you to map form responses directly to a SQL database, or whether it maps form responses that have already been saved to Form.io's Mongo database over to a SQL database.
I really do not want to use MongoDB at all, and I am hoping that form responses can be mapped directly without being saved to MongoDB first.
Any insight into this is greatly appreciated.
Thank you.

REST API vs Sqoop

I was trying to import data from MySQL into HDFS.
I was able to do it with Sqoop, but this can also be done by fetching the data from an API.
My question is: when should I use a REST API to load data into HDFS instead of Sqoop?
Please give some differences, with use cases!
You could use Sqoop to pull data from MySQL into HBase, then put a REST API over HBase (on Hadoop)... That would be not much different from a REST API over MySQL.
Basically, you're comparing two different things. Hadoop is not meant to replace traditional databases or N-tier user-facing applications; it is just a more distributed, fault-tolerant place to store large amounts of data.
And you typically wouldn't use a REST API to talk to a database and then put those values into Hadoop, because that wouldn't be distributed: all the database results would go through a single process.
Sqoop (SQL <=> Hadoop) is basically used for loading data from an RDBMS into HDFS.
It is a direct connection to the database, where you can even append/modify/delete data in tables using the sqoop eval command if privileges are not defined properly for the user accessing the DB from Sqoop.
Using a REST web service API, by contrast, we can fetch data from various databases (NoSQL or RDBMS alike) connected internally via code.
Consider calling a getUsersData RESTful web service with a curl command: the service is specifically designed only to provide user data, and it doesn't allow you to append/modify/update any part of the DB, irrespective of the database type (RDBMS/NoSQL).
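To make the comparison concrete, the REST route looks roughly like this: pull rows from a hypothetical read-only getUsersData service and push them into HDFS through the WebHDFS REST API. All URLs and paths below are placeholders:

```typescript
// Node 18+: fetch is available globally.
async function restToHdfs() {
  // 1. Fetch from the read-only REST service described above.
  const res = await fetch('http://api.example.com/getUsersData'); // hypothetical endpoint
  const users: unknown[] = await res.json();
  const payload = users.map((u) => JSON.stringify(u)).join('\n');

  // 2. WebHDFS CREATE is a two-step protocol: the NameNode replies with a
  //    redirect pointing at the DataNode that should receive the data.
  const nameNodeUrl =
    'http://namenode.example.com:9870/webhdfs/v1/data/users.json?op=CREATE&overwrite=true';
  const step1 = await fetch(nameNodeUrl, { method: 'PUT', redirect: 'manual' });
  const dataNodeUrl = step1.headers.get('location');
  if (!dataNodeUrl) throw new Error('expected a redirect from the NameNode');

  // 3. Write the payload to the DataNode. Every byte passes through this
  //    single process -- unlike Sqoop, which runs distributed map tasks.
  await fetch(dataNodeUrl, { method: 'PUT', body: payload });
}
```

Note how everything funnels through the one Node process here; that is exactly the single-process bottleneck described above, and it is why Sqoop is the better fit for bulk loads.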