I have two directories of detached SQL Server .mdf and .ldf files, respectively. Are there any T-SQL or PowerShell scripts that can pick these files up and attach them to a SQL Server instance without manually specifying every file?
Occasionally, unattached data and log files are dumped to these locations, so ideally I would not have to update the script every time.
You can schedule a job to execute the following T-SQL statements, and it will do the job for you:
USE [master]
GO
CREATE DATABASE [DataBase_Name] ON
( FILENAME = N'C:\Data\DataBase_Name_Data.mdf' ), --<---- Path to your .mdf file------
( FILENAME = N'C:\Data\DataBase_Name_Log.ldf' ) --<---- Path to your .ldf file------
FOR ATTACH
GO
Found the solution:
http://gallery.technet.microsoft.com/scriptcenter/Attach-all-the-databases-ace9ed34
A few tweaks here and there and it did exactly what I was looking for.
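For anyone who prefers not to rely on the linked script, here is a minimal PowerShell sketch of the same idea. The directory paths, the instance name, and the Name_Data.mdf / Name_Log.ldf naming convention are all assumptions to adapt; it uses Invoke-Sqlcmd from the SqlServer module:

$dataDir  = 'C:\Data'   # hypothetical directory holding the .mdf files
$logDir   = 'C:\Logs'   # hypothetical directory holding the .ldf files
$instance = '.'         # target SQL Server instance

Get-ChildItem -Path $dataDir -Filter '*.mdf' | ForEach-Object {
    # Assumes files are named DatabaseName_Data.mdf / DatabaseName_Log.ldf
    $dbName = $_.BaseName -replace '_Data$', ''
    $ldf    = Join-Path $logDir ($dbName + '_Log.ldf')
    $query  = @"
IF DB_ID(N'$dbName') IS NULL
    CREATE DATABASE [$dbName] ON
    ( FILENAME = N'$($_.FullName)' ),
    ( FILENAME = N'$ldf' )
    FOR ATTACH;
"@
    Invoke-Sqlcmd -ServerInstance $instance -Query $query
}

The DB_ID guard skips databases that are already attached, so the script can run on a schedule without erroring on files it has processed before.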
I have copied a test mydb.sql to the directory
/docker-entrypoint-initdb.d
and it works fine.
Now I'd like to create my demo db and insert data. I have
mydb_1_struct.sql -- contains the db structure
mydb_2_data.sql -- contains the db data
I need to execute these scripts in strict order: structure first, then data.
What is the order of script execution?
According to the docker-entrypoint script,
it processes the scripts in alphabetical order:
docker_process_init_files /docker-entrypoint-initdb.d/*
So you can just use the naming you mention (mydb_1, mydb_2, etc.).
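One caveat, since the ordering comes from plain glob expansion: the sort is lexical, so once you have ten or more scripts you should zero-pad the prefixes. A quick illustration with hypothetical file names:

# Lexical sorting means "mydb_10_*.sql" would run before "mydb_2_*.sql",
# so zero-padded prefixes keep the intended order.
ls /docker-entrypoint-initdb.d
# mydb_01_struct.sql  mydb_02_data.sql  mydb_10_extra.sql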
I created a mapping that pulls data from a flat file showing usage data for specific SSRS reports. The file is overwritten each day with the previous day's usage data. My issue is that sometimes a report doesn't have any usage for that day, and my ETL sends me a "Failed" email because there wasn't any data in the source. I'd like to stop the job from running if there is no data in the source, or otherwise prevent it from failing.
--Thanks
A simple way to solve this is to create a "Passthrough" mapping that only contains a flat file source, source qualifier, and a flat file target.
You would create a session that runs this mapping at the beginning of your workflow and have it read your flat file source. The target can just be a dummy flat file that you keep overwriting. Then you would have this condition in the link to your next session that would actually process the file:
$s_Passthrough.SrcSuccessRows > 0
Yes, there are several ways you can do this.
You can provide an empty file to the ETL job when there is no source data. To do this, use a pre-session command like touch <filename> in the Informatica workflow. This will create an empty file named <filename> if one is not present, and the workflow will run successfully with 0 rows.
If you have a script that triggers the Informatica job, then you can put a check there as well like this:
if [ -e <filename> ]
then
pmcmd ...
fi
This will skip the job when the file does not exist.
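Putting the check together with the trigger, a minimal wrapper might look like the sketch below; the file path and the service, domain, folder, and workflow names are placeholders to replace:

#!/bin/sh
# Start the workflow only when the source file exists and is non-empty;
# -s (unlike -e) also rejects a zero-byte file.
SRC=/data/inbound/usage.dat    # hypothetical source file
if [ -s "$SRC" ]
then
    pmcmd startworkflow -sv IntSvc -d Domain -f MyFolder wf_LoadUsage
fi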
Have another session before the actual data load: read the file, use a FALSE filter condition, and write to some dummy target. Link this one to the session you already have and set the following link condition:
$yourDummySessionName.SrcSuccessRows > 0
How do I restore a .sql file to SQL Server 2008? I do not have a .bak file.
I tried searching everywhere but can't find a solution.
Please help if anyone knows.
.sql files are typically run using SQL Server Management Studio. They are basically saved SQL statements and could contain anything, so you don't "import" them; more precisely, you "execute" them, even though the script may indeed insert data.
To restore you need a backup, i.e. a .bak file. If you don't have the .bak file, then either create one or accept that there is no way to restore your database; a .sql file is not a backup and can only be executed, not restored.
EDIT:
As commented by the OP, the solution that helped them is:
C:\Users\Administrator>sqlcmd -S . -E -i C:\Users\Administrator\Desktop\scr.sql
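Here -S . targets the local default instance, -E uses Windows authentication, and -i names the input script. If the script does not select its own database, sqlcmd's -d switch can point it at one (MyTargetDb below is a placeholder):

C:\Users\Administrator>sqlcmd -S . -E -d MyTargetDb -i C:\Users\Administrator\Desktop\scr.sql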
I am trying to back up a kdb+ database, including all scripts and resource files. I can copy tables with the command below, but this doesn't include scripts and dependency files. Is there any way to copy an entire kdb+ database, or is there a tool available for this?
The copy-tables command:
h:hopen hsym `$"localhost:5050"
{[x;y] @[`.;y;:;x y]}[h;] each h"tables[]"  / fetch each table from h and set it in the root namespace
You can save and load contexts (taken from http://code.kx.com/q4m3/12_Workspace_Organization/#126-saving-and-loading-contexts):
`:currentws set value `.
That will include the functions that are currently loaded. Presumably your scripts already exist as files on disk.
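For individual tables there is also a simpler set/get round trip; a minimal sketch, assuming a trades table and a hypothetical backup path:

`:backup/trades set trades    / serialize the table to a file on disk
trades:get `:backup/trades    / read it back into memory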
Using PostgreSQL's COPY statement, we can load data from a text file into a database table as below:
COPY CME_ERROR_CODES FROM E'C:\\Program Files\\ERROR_CODES\\errcodes.txt' DELIMITER AS '~'
The above statement is run from a machine with a PostgreSQL client, whereas the server is on another Windows machine. Running the statement fails with: ERROR: could not open file "C:\Program Files\ERROR_CODES\errcodes.txt" for reading: No such file or directory.
After some research, I observed that the COPY statement looks for the loader file (errcodes.txt) on the PostgreSQL server's machine at the same path (C:\Program Files\ERROR_CODES). To test this, I created the same folder structure on the server's machine and put the errcodes.txt file there. Then the COPY statement worked. This seems like a very tough constraint of the COPY statement.
Is there any setting to avoid this, or is this simply the behavior of the COPY statement? I didn't find any information in the PostgreSQL documentation.
Here's the standard solution:
COPY foo (i, j, k) FROM stdin;
1<TAB>2<TAB>3
\.
The data must be properly escaped and tab-separated.
Actually, it is in the docs; even the grammar definition includes STDIN. See: http://www.postgresql.org/docs/9.1/static/sql-copy.html
If you're using a programming language with COPY support, you will have pg_putcopy or a similar function, so you don't have to worry about escaping and concatenation.
Hints how to do this manually in Python -> Recreating Postgres COPY directly in Python?
The Perl way -> http://search.cpan.org/dist/DBD-Pg/Pg.pm#COPY_support
Hope this helps.
From the documentation:
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server. When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.
If you want to copy from a file on the local machine to the server, use the \copy command.
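For example, assuming psql as the client, the statement from the question becomes the line below (the forward slashes are deliberate: Windows accepts them in paths, and they sidestep psql's backslash-escape handling; \copy opens the file on the client and streams it over the connection):

\copy CME_ERROR_CODES FROM 'C:/Program Files/ERROR_CODES/errcodes.txt' DELIMITER '~'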