What is the best way to put the content of a user-uploaded CSV file into a database? [closed] - postgresql

What is the best way to insert the content of a CSV file (up to 800 MB) uploaded by a web-application user into a PostgreSQL database?
I see three options:
An INSERT statement for each row of the file
An INSERT statement for multiple rows (batches of, e.g., 1,000 rows)
Store a temporary file and load it with the PostgreSQL COPY command (there is a shared directory between the servers where the application and the database are located)
Which way is better? Or is there perhaps another way?
Additional details:
I use Java 8 and JSP
Database: PostgreSQL 9.5
I use Apache Commons FileUpload to handle the multipart data and Apache Commons CSV to parse the file

Definitely NOT a single INSERT for each row. Relying on the PostgreSQL COPY command should be the fastest option.
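One way to follow that advice from Java (not spelled out in the answer above) is to stream the upload straight into COPY ... FROM STDIN through the PostgreSQL JDBC driver's CopyManager API, which also removes the need for a temp file or the shared directory, since the database server never has to read the file itself. A minimal sketch, in which the JDBC URL, credentials, table and column names are placeholders:

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CsvCopyLoader {
    // Streams the uploaded CSV into the table and returns the number of rows copied.
    public static long load(InputStream uploadedCsv) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/mydb", "user", "password");
             Reader reader = new InputStreamReader(uploadedCsv, StandardCharsets.UTF_8)) {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            // HEADER skips the first line; adjust table, columns and options to the real file layout.
            return copy.copyIn(
                    "COPY my_table (col1, col2, col3) FROM STDIN WITH (FORMAT csv, HEADER true)",
                    reader);
        }
    }
}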

Related

Restore a database changing the schema on DB2 [closed]

I have a database dump where the table schema is db2inst2. I would like to restore this database on a different server, but using db2inst1 as the destination schema.
Is there a way to do it using db2 restore command?
If not, is there a way to change the schema of all tables after the restore?
You can use the ADMIN_COPY_SCHEMA procedure to copy all objects from one schema to another.
Once the copy has completed and you have verified everything, you can use ADMIN_DROP_SCHEMA to drop the old schema.
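For reference, a rough JDBC sketch of such a call, assuming the eight-argument form of ADMIN_COPY_SCHEMA documented by IBM and the IBM Data Server JDBC driver; the connection URL, credentials and error-table names are placeholders to adapt.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CopySchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "db2inst1", "password");
             CallableStatement cs = conn.prepareCall(
                     "CALL SYSPROC.ADMIN_COPY_SCHEMA(?, ?, ?, ?, ?, ?, ?, ?)")) {
            cs.setString(1, "DB2INST2");    // source schema
            cs.setString(2, "DB2INST1");    // target schema
            cs.setString(3, "COPY");        // copy the object definitions and load the data
            cs.setNull(4, Types.VARCHAR);   // object owner: keep the original owners
            cs.setNull(5, Types.VARCHAR);   // source tablespaces: use defaults
            cs.setNull(6, Types.VARCHAR);   // target tablespaces: use defaults
            cs.setString(7, "ERRORSCHEMA"); // schema of the error table (in/out parameter)
            cs.registerOutParameter(7, Types.VARCHAR);
            cs.setString(8, "ERRORTABLE");  // name of the error table (in/out parameter)
            cs.registerOutParameter(8, Types.VARCHAR);
            cs.execute();
            // Rows that could not be copied, if any, are recorded in ERRORSCHEMA.ERRORTABLE.
        }
    }
}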

Postgres schema compare that works with declarative partitions [closed]

Are there any tools that can compare the schemas of two Postgres databases, produce an SQL script for the differences, and handle declaratively partitioned tables correctly?
I've been searching high and low. DataGrip 2018.2 is able to generate DDL that correctly reproduces a declaratively partitioned table and all of the partitions, but it does not generate a script. Migra (a Python tool) generates a script, but treats partitions as standalone tables.
I had a similar issue, though mine was related to table inheritance in PostgreSQL, and I tried the following two options with success:
1st option:
pg_dump -s db1 > first
pg_dump -s db2 > second
diff first second
(obviously this won't generate SQL to remedy the differences)
2nd option:
TiCodeX SQL Schema Compare
(https://www.ticodex.com)
It's a nice tool that runs on Windows, Linux and Mac and can compare the schemas of MS-SQL, MySQL and PostgreSQL databases.
Easy to use and effective. It may help you.

MeteorJS (MongoDB) with Spark [closed]

I am using the MeteorJS framework for one of my projects.
I have built a basic web app with MeteorJS before, and it works perfectly fine when it's just the client, the server and MongoDB.
In this project, I want the MongoDB instance (which comes built in with MeteorJS) to be populated with data from Apache Spark.
Basically, Apache Spark will process some data and inject it into MongoDB.
Is this doable?
Can you point me to the right tutorial for this?
How complex is this to implement?
Thanks in advance for your help.
Yes, this is very possible and quite easy. That said, it won't be done via MeteorJS; it would be part of the Apache Spark job and would be configured there.
Using the MongoDB Spark Connector, taking data from a DataFrame or an RDD and saving it to MongoDB is easy.
First you would configure how and where the data is written:
// Configure where to save the data
val writeConfig = WriteConfig(Map("uri" -> "mongodb://localhost/databaseName.collectionName"))
With RDDs you should convert the elements into Documents via a map function, e.g.:
val documentRDD = rdd.map(data => new Document("value", data.toString)) // map each element into an org.bson.Document ("value" is just a placeholder field name)
MongoSpark.save(documentRDD, writeConfig)
If you are using DataFrames it's much easier as you can just provide a DataFrameWriter and writeConfig:
MongoSpark.save(dataFrame.write, writeConfig)
There is more information in the documentation, and there are examples in the GitHub repo.

Redis DB export/import [closed]

Does anybody know a good solution for export/import in Redis?
Generally I need to dump the DB from one server (and edit the dump in some cases), then load it onto another one (e.g. localhost).
Maybe some scripts?
Redis supports two persistence file formats: RDB and AOF.
RDB is a dump like the one you asked about. You can call SAVE to force an RDB snapshot. It will be stored under the dbfilename setting you have, or as dump.rdb in the current working directory if that setting is missing.
More Info:
http://redis.io/topics/persistence
If you want one server to load the content from another server, no dump is required. You can use SLAVEOF to sync the data and, once it's up to date, call SLAVEOF NO ONE.
More information on replication can be found at this link: http://redis.io/topics/replication
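For illustration, here is a rough sketch of that replication route driven from Java with the Jedis client; Jedis itself, the host names and the ports are my own assumptions and not part of the answer above.

import redis.clients.jedis.Jedis;

public class RedisMigrationSketch {
    public static void main(String[] args) {
        // "source.example.com" is a placeholder for the server that currently holds the data.
        try (Jedis target = new Jedis("localhost", 6379)) {
            target.slaveof("source.example.com", 6379); // start replicating from the source server
            // ... wait until the target reports that the initial sync has finished ...
            target.slaveofNoOne();                      // detach the target so it becomes a standalone master
            target.save();                              // optionally force an RDB snapshot (dump.rdb) on the target
        }
    }
}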
You can try my dump util, rdd; it extracts data from or inserts data into Redis, and can split, merge, filter and rename.

What is an MDF file? [closed]

Is this like an “embedded” database of sorts? A file containing a built-in database?
SQL Server databases use two files: an MDF file, known as the primary database file, which contains the schema and data, and an LDF file, which contains the logs. See Wikipedia. A database may also use secondary database files, which normally have a .ndf extension.
As John S. indicates, these file extensions are purely convention; you can use whatever you want, although I can't think of a good reason to do so.
More info is available on MSDN and in Beginning SQL Server 2005 Administration (Google Books).
Just to make this absolutely clear for all:
A .MDF file is “typically” a SQL Server data file; however, it is important to note that it does NOT have to be.
This is because .MDF is nothing more than a recommended/preferred naming convention; the extension itself does not actually dictate the file type.
To illustrate this, if someone wanted to create their primary data file with an extension of .gbn, they could go ahead and do so without issue.
To clarify the preferred naming conventions:
.mdf - Primary database data file.
.ndf - Other database data files, i.e. non-primary.
.ldf - Transaction log file.