I have spent a few hours searching with no luck today.
What I need to do is write a T-SQL query that will be executed from within SSMS and that is supposed to save its output to a file.
I have full admin access to the SQL Server 2008 R2 instance as well as to the OS (I believe Windows Server 2008).
I cannot use the command line, batch files, etc.
It needs to be done straight from SSMS as the execution of a T-SQL script.
It doesn't even have to be pretty :-)
It just needs to do the job.
I would very much appreciate any help.
Regards
Mariusz
Query > Results To > Results to File
You can also do a select all on the results pane, and right-click the top-left cell. There's an option to "Save Results As," which will allow you to save the results in CSV format.
If you're looking for something more advanced, I would suggest looking into SSIS (SQL Server Integration Services), which is the replacement for DTS. This is what's usually used for data file exports (DFEs).
EDIT
If you need to export the results to file via script, try this:
INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\Test.xls;', 'SELECT productid, price FROM [Sheet1$]')
SELECT productid, price FROM dbo.product
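Note that OPENROWSET writes into an existing workbook: c:\Test.xls must already exist with a Sheet1 worksheet whose first row contains the productid and price column headers. You will also usually need the Ad Hoc Distributed Queries option turned on first; a minimal sketch:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;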
I can think of two options, given your constraints:
BCP with the queryout option would normally be the best choice, but I spaced on bcp being a command-line utility, so that's out.
Create a SQL Agent job with a single step that runs an OS command: a sqlcmd call that runs the query and redirects the output to a file. After creating the job, the script runs it and then deletes the job and any history. That is a blecherous solution that would delight Rube Goldberg, but it satisfies the requirement of doing everything through SSMS as an automated approach.
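A rough T-SQL sketch of that second option (job name, database, query, and output path below are all placeholders, and the wait/cleanup handling is deliberately simplistic):

    USE msdb;
    GO
    -- Throwaway job with one CmdExec step: sqlcmd runs the query and writes the output file.
    EXEC dbo.sp_add_job @job_name = N'OneOffExport';
    EXEC dbo.sp_add_jobstep
         @job_name  = N'OneOffExport',
         @step_name = N'Export to file',
         @subsystem = N'CmdExec',
         @command   = N'sqlcmd -S . -E -d MyDb -Q "SELECT * FROM dbo.MyTable" -o "C:\Exports\MyTable.txt" -s "," -W';
    EXEC dbo.sp_add_jobserver @job_name = N'OneOffExport', @server_name = N'(local)';
    -- Kick it off, give it time to finish (poll sysjobhistory in real use), then clean up.
    EXEC dbo.sp_start_job @job_name = N'OneOffExport';
    WAITFOR DELAY '00:00:30';
    EXEC dbo.sp_delete_job @job_name = N'OneOffExport';
    GO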
If you're within SSMS, the option to output to a text file is one of the "Results to ..." options on the toolbar alongside the execute button. Are you trying to do something more programmatic?
If you have permission to execute xp_cmdshell from SSMS, you may be able to take advantage of some of the techniques suggested by the pseudonymous Phil Factor at http://www.simple-talk.com/sql/t-sql-programming/the-tsql-of-text-files/.
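If xp_cmdshell is available, the classic pattern is to shell out to bcp with queryout; a hedged sketch (database, table, and file path are placeholders, and enabling xp_cmdshell needs sysadmin rights and may be against policy):

    -- Enable xp_cmdshell (skip if already enabled).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;

    -- bcp queryout: -c character mode, -t, comma delimiter, -S server, -T trusted connection.
    EXEC xp_cmdshell 'bcp "SELECT productid, price FROM MyDb.dbo.product" queryout "C:\Exports\product.csv" -c -t, -S . -T';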
Related
How can I export the result of a SELECT query from mainframe DB2 to a CSV file in batch mode?
I have tried File Manager in online mode and it works, but I need to use batch mode for better performance.
I can also use ISQL, but I don't know which parameters I have to use to create a CSV file.
Thanks
If all else fails and you don't mind a little programming, then coding your own program that runs the query and writes CSV is EXTREMELY easy.
I mention this because this might be better for you than relying on some tool.
As you're looking for improved performance, I'd suggest you CALL the DSNUTILU stored procedure with the UNLOAD utility, using the DELIMITED COLDEL ',' and SHRLEVEL CHANGE ISOLATION UR parameters to produce CSV and to maximise concurrency on your DB2 for z/OS table. There are many other options depending on your requirements.
For reference, refer to "DSNUTILU stored procedure" and "Syntax and options of the UNLOAD control statement".
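A hedged sketch of what such a call can look like from SQL (the utility ID, tablespace, table, and the exact UNLOAD options are illustrative only; check the documentation above for the variants your environment needs):

    -- DSNUTILU parameters: utility ID, restart value, utility control statement,
    -- and an output return code (shown here as a parameter marker).
    CALL SYSPROC.DSNUTILU(
      'UNLDCSV',
      'NO',
      'UNLOAD TABLESPACE MYDB.MYTS FROM TABLE MYSCHEMA.MYTABLE
         SHRLEVEL CHANGE ISOLATION UR
         DELIMITED COLDEL '','' CHARDEL ''"''',
      ?);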
On IBM i (iSeries) you have the CPYTOIMPF command; maybe on z/OS too.
I use iSeries Navigator for ad-hoc data extraction from the DB2 database. The only issue is automation. Is there a way I could automate the SQL code to run at a specific time? I know there is the Advanced Job Scheduler, but I'm not sure how the SQL can be added to the scheduler. Can anyone help?
IBM added a Run SQL Statements (RUNSQL) CL command at v7.1.
Prior to that, you could store SQL statements in source files and run them with the Run SQL Statements (RUNSQLSTM) command.
Neither of the above allows an SQL SELECT to be run by itself. For data extraction, you'd want INSERT INTO tbl (SELECT <...> FROM <...>).
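For example, a minimal sketch (library, table, and column names are assumptions):

    INSERT INTO MYLIB.ORDER_EXTRACT (ORDER_ID, CUSTOMER_ID, ORDER_TOTAL)
      SELECT ORDER_ID, CUSTOMER_ID, ORDER_TOTAL
      FROM MYLIB.ORDERS
      WHERE ORDER_DATE >= CURRENT DATE - 30 DAYS;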
For reporting SELECTs, your best bet is to create a Query Manager query (*QMQRY object) and form (*QMFORM object) via Start DB2 UDB Query Manager (STRQM), which can then be run by the Start Query Management Query (STRQMQRY) command. Query Manager (QM) is SQL based, unlike the older Query/400 product. The QM manual is here.
One last option is the db2 utility available via QShell.
Don't waste effort creating an extract that's a day out of date, and going out of business because the job scheduler hasn't updated the file system.
Real businesses need real-time data.
Just create an SQL view on the iSeries that pulls together the info you need.
Query the view externally in real time. Even if you need the last 30 days, last month, or year to date, these are all simple views to create.
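A minimal sketch of such a view (library, table, and column names are assumptions):

    CREATE VIEW MYLIB.ORDERS_LAST_30_DAYS AS
      SELECT ORDER_ID, CUSTOMER_ID, ORDER_TOTAL, ORDER_DATE
      FROM MYLIB.ORDERS
      WHERE ORDER_DATE >= CURRENT DATE - 30 DAYS;

The external application then just selects from MYLIB.ORDERS_LAST_30_DAYS and always sees current data.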
I rely on SQLDeveloper to edit and export a schema.
It works like a charm, and I can run import with sqlplus.
I have tried using sqlplus to generate the same schema export, with no result.
I cannot use the Oracle expdp tool, because I need an ASCII file to be able to diff it.
So the only option I have is SQLDeveloper.
I would like to automate the export (data + DDL) with a cron job on a Linux box, but I can't find a way to use SQLDeveloper from a command line to generate the export.
Any clue?
Short answer: no.
For just the schema side of things, you may want to check out show create table equivalent in oracle sql, which will get you the SQL source of the DDL.
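That linked approach essentially comes down to DBMS_METADATA.GET_DDL; a hedged sqlplus sketch (the schema name and object types are assumptions):

    -- Widen output so the DDL CLOBs aren't truncated.
    SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0 LINESIZE 32767
    SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner)
    FROM   all_objects
    WHERE  owner = 'MYSCHEMA'
    AND    object_type IN ('TABLE', 'VIEW', 'INDEX', 'SEQUENCE');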
Are you sure you want an ASCII file for the automated export of an entire DB though? I would be surprised if you really want to diff an entire export of a DB. This SO Answer may help a little though.
If you really want to get a full data dump plus DDL, you will have to write your own script that gets the DDL as described in the first link, and then SELECT * from each table and process each result into an SQL INSERT.
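A deliberately naive sketch of the data half for a single table (table and column names are assumptions, and quoting, NULLs, and dates are glossed over):

    -- Emit one INSERT statement per row of the source table.
    SELECT 'INSERT INTO employees (id, name) VALUES ('
           || id || ', '''
           || REPLACE(name, '''', '''''')
           || ''');'
    FROM employees;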
There's a facility in MySQL Workbench's EER Modelling mode to write an SQL script that's stored with the model. But I've looked all over the place and can't see any way of executing such a script, other than by copying and pasting it into a window of the query mode. There's a menu item Scripting/Run Script, but it doesn't seem to actually do anything. Surely there must be some application of the scripts section of the model beyond just storing SQL text?
Running arbitrary SQL code during forward engineering or synchronization is not possible. The only code that gets executed is the SQL to create the objects and to fill tables with data specified in the Inserts section of the table editor.
Running an SQL script in general is of course possible and also trivial. Simply open a connection to your server (you should have one created on the home screen; if not, do this first). Then in the editor toolbar there's a button to open a script. Use that to open the file (if you have a separate SQL file). If you want to run code that is stored in the model (as an SQL file), you have to copy/paste it over.
I have written a VBScript to extract data from Active Directory into a record set. I'm now wondering what the most efficient way is to transfer the data into a SQL database.
I'm torn between:
Writing it to an Excel file and then firing an SSIS package to import it, or...
Within the VBScript, iterating through the dataset in memory and submitting 3000+ INSERT commands to the SQL database
Would the latter option result in 3000+ round trips communicating with the database and therefore be the slower of the two options?
Sending an insert row by row is always the slowest option. This is what is known as Row by Agonizing Row or RBAR. You should avoid that if possible and take advantage of set based operations.
Your other option, writing to an intermediate file is a good option, I agree with #Remou in the comments that you should probably pick CSV rather than Excel if you are going to choose this option.
I would propose a third option. You already have the design in VB contained in your VBScript. You should be able to convert this easily to a Script Component in SSIS. Create an SSIS package, add a Data Flow task, add a Script Component (as a data source {example here}) to the flow, write your fields out to the output buffer, and then add a SQL destination and save yourself the step of writing to an intermediate file. This is also more secure, as you don't have your AD data on disk in plaintext anywhere during the process.
You don't mention how often this will run or if you have to run it within a certain time window, so it isn't clear that performance is even an issue here. "Slow" doesn't mean anything by itself: a process that runs for 30 minutes can be perfectly acceptable if the time window is one hour.
Just write the simplest, most maintainable code you can to get the job done and go from there. If it runs in an acceptable amount of time then you're done. If it doesn't, then at least you have a clean, functioning solution that you can profile and optimize.
If you already have it in a dataset, and if it's SQL Server 2008+, create a user-defined table type and send the whole dataset in as an atomic unit.
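A minimal T-SQL sketch of the server side of that idea (type, procedure, table, and column names are assumptions); the client then passes the whole extract as a single table-valued parameter in one round trip:

    -- Table type describing the rows coming from the AD extract.
    CREATE TYPE dbo.AdUserList AS TABLE
    (
        SamAccountName NVARCHAR(128),
        DisplayName    NVARCHAR(256),
        Email          NVARCHAR(256)
    );
    GO
    -- The whole set arrives as @Users and is inserted as one set-based operation.
    CREATE PROCEDURE dbo.ImportAdUsers
        @Users dbo.AdUserList READONLY
    AS
    BEGIN
        INSERT INTO dbo.AdUsers (SamAccountName, DisplayName, Email)
        SELECT SamAccountName, DisplayName, Email
        FROM @Users;
    END;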
And if you go the SSIS route, I have a post covering Active Directory as an SSIS Data Source