What is an MDF file? [closed]

Is this like an “embedded” database of sorts? A file containing a built-in database?

SQL Server databases use two files: an MDF file, known as the primary database file, which contains the schema and data, and an LDF file, which contains the logs. See Wikipedia. A database may also use secondary database files, which normally use the .ndf extension.
As John S. indicates, these file extensions are purely a convention: you can use whatever you want, although I can't think of a good reason to do that.
More info on MSDN here and in Beginning SQL Server 2005 Administration (Google Books) here.

Just to make this absolutely clear for all:
A .MDF file is “typically” a SQL Server data file; however, it is important to note that it does NOT have to be.
This is because .MDF is nothing more than a recommended/preferred notation; the extension itself does not actually dictate the file type.
To illustrate this, if someone wanted to create their primary data file with an extension of .gbn, they could go ahead and do so without issue (see the sketch after the list below).
To clarify the preferred naming conventions:
.mdf - Primary database data file.
.ndf - Secondary database data files, i.e. non-primary.
.ldf - Transaction log file.
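A minimal JDBC sketch of that point, assuming the Microsoft SQL Server JDBC driver, a local instance, and hypothetical file paths under C:\Data; it creates a database whose primary data file uses a .gbn extension, and SQL Server accepts it because the extension is only a convention:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateDbWithCustomExtension {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and credentials; adjust for your server.
        String url = "jdbc:sqlserver://localhost;databaseName=master;encrypt=true;trustServerCertificate=true";

        // The primary data file gets a .gbn extension instead of .mdf;
        // only the FILENAME value decides the name on disk.
        String createDb =
            "CREATE DATABASE DemoDb "
          + "ON PRIMARY (NAME = DemoDb_data, FILENAME = 'C:\\Data\\DemoDb.gbn') "
          + "LOG ON (NAME = DemoDb_log, FILENAME = 'C:\\Data\\DemoDb_log.ldf')";

        try (Connection conn = DriverManager.getConnection(url, "sa", "<password>");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(createDb);
        }
    }
}

SQL Server will happily use the .gbn file afterwards; only tooling that filters file dialogs by extension will notice the difference.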

Related

What is the best way to put the content of a user-uploaded CSV file into a database [closed]

What is the best way to insert the content of the file (.csv, up to 800 MBytes) uploaded by a web-application user into the PostgreSQL database?
I see three options:
Insert statement for each file row
Insert statement for multiple rows (insert batches containing e.g. 1000 rows)
Store a temp file and load it using the PostgreSQL COPY command (I have a shared directory between the servers where the application and database are located)
Which way is better? Or maybe there is another way?
Additional details:
I use Java 8 and JSP
Database: PostgreSQL 9.5
To handle multipart data I use Apache Commons FileUpload and Apache Commons CSV to parse the file
Definitely NOT a single INSERT for each row. Relying on the PostgreSQL COPY command should be the fastest option.
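A rough sketch of that approach without the shared directory, streaming the uploaded CSV straight into COPY ... FROM STDIN through the PostgreSQL JDBC driver's CopyManager (the table and column names are made up; adapt them to your schema):

import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class CsvCopyLoader {

    // Streams a CSV reader straight into a table with COPY ... FROM STDIN,
    // avoiding both per-row INSERTs and a temp file on a shared directory.
    public static long load(Reader csv, String jdbcUrl, String user, String password) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password)) {
            CopyManager copyManager = new CopyManager(conn.unwrap(BaseConnection.class));
            return copyManager.copyIn(
                "COPY uploads (col_a, col_b, col_c) FROM STDIN WITH (FORMAT csv, HEADER true)",
                csv);
        }
    }
}

Because the Reader is streamed, the 800 MB file never has to land on the database server's filesystem, and the load runs as one server-side COPY.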

Restore a database changing the schema on DB2 [closed]

I have a database dump where the table schema is db2inst2. I would like to restore this database on a different server, but using db2inst1 as the destination schema.
Is there a way to do it using the db2 restore command?
If not, is there a way to change the schema of all tables after the restore?
You can use the ADMIN_COPY_SCHEMA procedure to copy all objects from one schema to another.
Once the copy completes and you have verified everything, you can use ADMIN_DROP_SCHEMA to drop the old schema.
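A rough sketch of the call over JDBC, assuming the eight-parameter SYSPROC.ADMIN_COPY_SCHEMA signature documented for DB2 LUW (the connection details, error-table names, and the choice of 'COPY' mode are placeholders to adapt):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CopySchema {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the restored database.
        String url = "jdbc:db2://localhost:50000/MYDB";

        try (Connection conn = DriverManager.getConnection(url, "db2inst1", "<password>");
             CallableStatement call = conn.prepareCall(
                 "CALL SYSPROC.ADMIN_COPY_SCHEMA(?, ?, ?, ?, ?, ?, ?, ?)")) {
            call.setString(1, "DB2INST2");   // source schema from the restored dump
            call.setString(2, "DB2INST1");   // target schema
            call.setString(3, "COPY");       // copy DDL and data ('DDL' would copy structure only)
            call.setString(4, null);         // object owner: keep the original owner
            call.setString(5, null);         // source tablespaces (null = same)
            call.setString(6, null);         // target tablespaces (null = same)
            call.setString(7, "ERRSCHEMA");  // schema for the error table (INOUT)
            call.setString(8, "ERRTAB");     // error table name, must not exist yet (INOUT)
            call.registerOutParameter(7, Types.VARCHAR);
            call.registerOutParameter(8, Types.VARCHAR);
            call.execute();
        }
    }
}

Check the error table afterwards: any object that could not be copied is listed there.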

CodeFluent Entities Deployment Guide Best Practice [closed]

I'm seeking any best-practice documentation you might have that describes your recommendations for deploying applications which have been built on CodeFluent Entities. We're using CodeFluent (licensed through the University of Western Sydney) for our projects with the Australian Consortium for Classification Development (https://www.accd.net.au) and would like to avoid using other third-party tools if possible. I've had a quick look on SoftFluent's new website's Knowledge Center but haven't found anything which addresses this issue.
CodeFluent Entities provides two ways to update the database schema, and SQL Server also has one.
Pivot Runner
http://blog.codefluententities.com/2013/10/10/the-new-sql-server-pivot-script-producer/
Generation time: The SQL Server Pivot Script Producer generates an XML file that describes the schema of the database (table, columns, keys, stored procedures, etc.).
Deployment time: The Pivot Runner reads this file and updates the target database to match the target schema.
You can run the PivotRunner using the provided client CodeFluent.Runtime.Database.Client.exe or use your own program:
// Point the runner at the pivot XML file generated at build time, give it
// a connection string for the target database, and run it; it alters the
// database in place so the schema matches the pivot description.
PivotRunner runner = new PivotRunner(pivotPath);
runner.ConnectionString = "<SQL Server connection string>";
runner.Run();
SQL Server producer diff engine
The SQL Server Producer generates a diff script, which you can run on the target database.
Data-tier Application (dacpac)
(not CodeFluent Entities related)
A data-tier application (DAC) defines all the SQL Server Database Engine schema and instance objects (such as tables, views, and logins) required to support an application. A DAC is built into a DAC package, which is an XML file containing a manifest that defines all the Database Engine objects used by the application, and is used to deploy the DAC. A DAC simplifies the management of the data-tier objects by providing a single unit for deployment and management.
https://technet.microsoft.com/en-us/library/ee240739.aspx
https://technet.microsoft.com/en-us/library/ee635209.aspx

Communicating with Informix from PostgreSQL? [closed]

Please help me to setup connectivity from PostgreSQL to Informix (latest versions for both). I would like to be able to perform a query on Informix from PostgreSQL. I am looking for a solution that will not require data exports (from Informix) and imports (to PostgreSQL) for every query.
I am very new in PostgreSQL and need detailed instructions.
As Chris Travers said, what you're seeking to do is not easy to do.
In theory, if you were working with Informix and needed to access PostgreSQL, you could (buy and) use the Enterprise Gateway Manager (EGM) and use the ODBC driver for PostgreSQL to allow Informix to connect to PostgreSQL. The EGM would do its utmost to appear to be another Informix database while actually accessing PostgreSQL. (I've not validated that PostgreSQL is supported, but EGM basically needs an ODBC driver to work, so there shouldn't be any problem — 'famous last words', probably.) This will include an emulation of 2PC (two-phase commit); not perfect, but moderately close.
For the converse connection (working with PostgreSQL and connecting to Informix), you will need to look to the PostgreSQL tool suite — or other sources.
You haven't said which version you are using. There are some limitations to be aware of but there are a wide range of choices.
Since you say this is import/export, I will assume that read-only options are not sufficient. That rules out PostgreSQL 9.1's foreign data wrapper system.
Depending on your version David Fetter's DBI-Link may suit your needs since it can execute queries on remote tables (see https://github.com/davidfetter/DBI-Link). It hasn't been updated in a while but the implementation should be pretty stable and usable across versions. If that fails you can write stored procedures in an untrusted language (PL/PythonU, PL/PerlU, etc) to connect to Informix and run the queries there. Note there are limits regarding transaction handling you will run into in this case so you may want to run any queries on the other tables using deferred constraint triggers so everything gets run at commit time.
Edit: A cleaner way occurred to me: use foreign data wrappers for import and a separate client app for export.
In this approach, you are going to have four basic components but this will be loosely coupled and subject to proper transactional controls. You can even use two-phase commit if you want. The four components are (not providing a complete working example here but at least a roadmap to one):
Foreign data wrappers for data import, allowing you to see data from Informix.
Views of data to be exported.
External application which manages the export aspect, written in a language of your choice. This listens on a channel like LISTEN export_informix (a minimal listener sketch follows after the two-stage list below).
Triggers on the underlying tables that make up the view of data to be exported, which raise NOTIFY export_informix.
The notifications are raised on commit, so you basically have two stages to your transaction:
Write data in PostgreSQL, flag data to be exported. Commit.
Read data from PostgreSQL, export to Informix. Commit on both sides (TPC?).
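A minimal sketch of that external export application (component 3), using the PostgreSQL JDBC driver's notification API; the connection details, the channel handling, and the Informix push itself are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class ExportListener {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the PostgreSQL side.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/exportdb", "exporter", "<password>");
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("LISTEN export_informix");
        }

        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            // A trivial query makes the driver read any pending notifications.
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("SELECT 1");
            }
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    // Read the flagged rows from the export view, push them to
                    // Informix (e.g. over Informix JDBC), then commit both sides.
                    System.out.println("Export requested on channel " + n.getName());
                }
            }
            Thread.sleep(500);
        }
    }
}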

Redis DB export/import [closed]

Does anybody know a good solution for export/import in Redis?
Generally I need to dump a DB (and possibly edit the dump) from one server and load it onto another (e.g. localhost).
Maybe some scripts?
Redis supports two persistence file formats: RDB and AOF.
RDB is a dump like the one you asked for. You can call SAVE (or BGSAVE) to force an RDB snapshot. It will be written to the file named by your dbfilename setting, or dump.rdb in the current working directory if that setting is missing.
More Info:
http://redis.io/topics/persistence
If you want a server to load the content from another server, no dump is required. You can use SLAVEOF to sync the data, and once it is up to date, run SLAVEOF NO ONE.
More information on replication can be found in this link: http://redis.io/topics/replication
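If you prefer to drive this from code, here is a rough sketch using the Jedis client (Jedis is an assumption, since the question does not mention a language); it temporarily makes the local instance a replica of the source and then detaches it:

import redis.clients.jedis.Jedis;

public class RedisMigration {
    public static void main(String[] args) {
        // Hypothetical hosts: copy data from source.example.com into a local Redis.
        try (Jedis local = new Jedis("localhost", 6379)) {
            // Make the local instance a replica of the source server.
            local.slaveof("source.example.com", 6379);

            // ...wait here until replication has caught up (poll INFO in real code),
            // then detach so the local instance becomes a standalone copy.
            local.slaveofNoOne();

            // Optionally force an RDB snapshot so dump.rdb on disk is current.
            local.save();
        }
    }
}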
You can try my dump utility, rdd; it extracts data from or inserts data into Redis, and can split, merge, filter, and rename dumps.