Please help me solve the following problem:
I have a view in an Oracle database, and I want to take the output of that view and store it in a .txt file in another folder on the UNIX box.
The output generated from the view is a report, and I want to save that report in .txt format in a folder on the UNIX box. Oracle runs on the UNIX box.
I thought you might be able to use Data Pump, but maybe the easiest way is to just run something like this through SQL*Plus, Oracle's standard command-line client:
set long 10000
set termout off
set trimspool off
set feedback off
set heading off
spool test.txt
select a ||','||b||','||c from myview;
spool off;
If you put this in a file called extractSql.sql, then you could run:
${ORACLE_HOME}/bin/sqlplus -L ${USER}/${PASS}@${DB_SERVER} @extractSql.sql
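Since you want the report in a specific folder rather than wherever sqlplus was started, you can spool to an absolute path instead; the path below is only an example:
-- in extractSql.sql, replace the spool line with the target directory
spool /u01/reports/myview_report.txt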
I'm wondering how to specify a file path for my tick setup to save to when .u.endofday is sent from the tickerplant. Currently, when this message is sent, the RDB is saved to the working directory where the tick.q file is.
Is there a way to pass in a file path so that it is saved to ../../HDB rather than ../../Tick?
In the vanilla r.q script, the tables are saved down using
.Q.hdpf[`$":",.u.x 1;`:.;x;`sym]
where the second parameter is the directory that the tables are saved to.
`:.
represents the current directory. You can change it to something else, for example `:/home/data/hdb
https://code.kx.com/q/ref/dotq/#qhdpf-save-tables
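For example, keeping the rest of the call from r.q unchanged (`.u.x 1` is the HDB port and x is the date passed to .u.end), the save call becomes something like:
/ sketch: the same .Q.hdpf call, pointed at an HDB directory instead of `:. (the current directory)
.Q.hdpf[`$":",.u.x 1;`:/home/data/hdb;x;`sym]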
If you are using the plain r.q script, referring to
https://github.com/KxSystems/kdb-tick/blob/master/tick/r.q
There is a comment under .u.rep suggesting that you modify the 'system cd' command, where you can specify any directory you like. This changes the working directory inside the r.q process, so when .Q.hdpf is called it will save the tables to that directory. The RDB calls .u.rep on startup.
.u.rep:{(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd ",1_-10_string first reverse y};
/ HARDCODE \cd if other than logdir/db
You could have
system "cd /home/data/hdb"
which will change the current directory to this location
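Putting that together, a modified .u.rep might look like the sketch below (untested; it simply replaces the computed cd with the hard-coded directory, per the HARDCODE \cd comment):
/ sketch: .u.rep with the save directory hard-coded
.u.rep:{(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd /home/data/hdb"};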
Depending on your setup there are a couple of ways to do this.
But I think the most efficient would be for you to look at the .u.end function that is called in your RDB and see what save-down function is used there.
Find the place where .u.end is defined on the RDB and look at the save-down functions.
Look for .Q.dpft, which is the most likely, or a set command.
Documentation on the .Q.dpft:
https://code.kx.com/q/ref/dotq/#qdpft-save-table
The first argument that is fed in is the directory path, so you could add a directory there in the form of
hsym `$"/path/path/HDB"
Which returns
`:/path/path/HDB
as a file symbol to pass to the function.
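As a rough illustration (the table name here is hypothetical, not taken from your setup), a save-down call using such a path could look like:
/ sketch: save the trade table to /path/path/HDB under today's date partition, parted on sym
.Q.dpft[hsym `$"/path/path/HDB";.z.d;`sym;`trade]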
Tables might be saved down in different ways, but that is the most likely one.
There are also different ways to choose a directory with a par.txt file that is loaded in, so it is useful to check whether a par.txt file is loaded by calling .Q.par on the RDB:
.Q.par[`:.;.z.d;`]
If the answer is just:
`:./2020.05.09/
then it is using the directory you launched the script from.
Here you can find some more documentation on this:
https://code.kx.com/q/kb/partition/
I hope you are all well.
My question is about the procedure for opening multiple compressed raw data files.
My files' names are ordered so I have for example : o_equities_20080528.tas.zip o_equities_20080529.tas.zip o_equities_20080530.tas.zip ...
Thank you all in advance.
How much work this will be depends on whether:
You have enough space to extract all the files simultaneously into one folder
You need to be able to keep track of which file each record has come from (i.e. you can't tell just from looking at a particular record).
If you have enough space to extract everything and you don't need to track which records came from which file, then the simplest option is to use a wildcard infile statement, allowing you to import the records from all of your files in one data step:
infile "c:\yourdir\o_equities_*.tas" <other infile options as per individual files>;
This syntax works regardless of OS - it's a SAS feature, not shell expansion.
If you have enough space to extract everything in advance but you need to keep track of which records came from each file, then please refer to this page for an example of how to do this using the filevar option on the infile statement:
http://www.ats.ucla.edu/stat/sas/faq/multi_file_read.htm
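A rough sketch of that filevar approach, in case the link is unavailable (the date range, path, and input statement are placeholders to adapt to your files):
/* read each daily file in turn and record which file each row came from */
data all;
  length fname source_file $200;
  do d = '28MAY2008'd to '30MAY2008'd;
    fname = cats('c:\yourdir\o_equities_', put(d, yymmddn8.), '.tas');
    infile dummy filevar=fname end=done truncover;  /* add other infile options as per individual files */
    do while(not done);
      input field1 field2 field3;  /* replace with the real input statement */
      source_file = fname;         /* keep track of the originating file */
      output;
    end;
  end;
  stop;
  drop d fname;
run;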
If you don't have enough space to extract everything in advance, but you have access to 7-zip or another archive utility, and you don't need to keep track of which records came from each file, you can use a pipe filename and extract to standard output. If you're on a Linux platform then this is very simple, as you can take advantage of shell expansion:
filename cmd pipe "nice -n 19 gunzip -c /yourdir/o_equities_*.tas.zip";
infile cmd <other infile options as per individual files>;
On Windows it's the same sort of idea, but as you can't use shell expansion, you have to construct a separate filename for each zip file, or use some of 7-Zip's more arcane command-line options, e.g.:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y";
This will extract all files from all of the matching archives to standard output. You can narrow this down further via the 7-zip command if necessary. You will have multiple header lines mixed in with the data - you can use findstr to filter these out in the pipe before SAS sees them, or you can just choose to tolerate the odd error message here and there.
Here, the -an tells 7-zip not to read the zip file name from the command line, and the -ai tells it to expand the wildcard.
If you need to keep track of what came from where and you can't extract everything at once, your best bet (as far as I know) is to write a macro to process one file at a time, using the above techniques and add this information while you're importing each dataset.
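A skeleton of that macro approach might look like this (dataset names, the 7-Zip command, and the input statement are placeholders to adapt):
/* read one archive via a pipe, tag each row with its source, and append to a combined dataset */
%macro read_one(zipfile);
  filename cmd pipe "7z.exe e -so -y ""&zipfile.""";
  data one;
    length source_file $200;
    infile cmd dlm=',' dsd truncover firstobs=2;  /* other infile options as per individual files */
    input field1 field2 field3;                   /* replace with the real input statement */
    source_file = "&zipfile.";                    /* keep track of the originating archive */
  run;
  proc append base=all data=one force;
  run;
%mend read_one;

%read_one(C:\yourdir\o_equities_20080528.tas.zip)
%read_one(C:\yourdir\o_equities_20080529.tas.zip)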
I have a script I created to help with converting a video and then uploading it to our website. Our videos all have a standard filename format to help with setting them up correctly (day, month, year; e.g. 09OCT2013.m4v). They get filed into directories by year, then month, then day (e.g. 2013/oct/09OCT2013/09OCT2013.m4v). Right now, my script starts by asking for user input for the year, then the month, then the actual file name for the folder. What I want to do is take the file that has already been created, drop it onto the script, and have the script take the name apart and put the file into the appropriate folder (e.g. drop the file 12JUN2012.m4v onto the script and the script automatically puts it into 2012/jun/12JUN2012/). Is there any possible way to do this in Terminal? Please let me know if any part of my question is unclear.
Assuming that you're using bash:
for file in "$@"
do
    dd=${file:0:2}    # day, e.g. 12
    mm=${file:2:3}    # month, e.g. JUN
    yy=${file:5:4}    # year, e.g. 2012
    mv "$file" "$yy/$mm/$file"
done
If the file needs to be moved further, or arrives with a longer pathname, you can adjust the script, but the basic idea of splitting up the last component of the file name using the substring notation holds.
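To match the layout in the question exactly (lowercase month directory plus a folder named after the file), a slightly extended sketch, assuming the files are passed as arguments and the year/month tree lives under the current directory:
for file in "$@"
do
    mm=$(printf '%s' "${file:2:3}" | tr '[:upper:]' '[:lower:]')  # month, lowercased: JUN -> jun
    yy=${file:5:4}                                                # year, e.g. 2012
    name=${file%.m4v}                                             # 12JUN2012
    mkdir -p "$yy/$mm/$name"
    mv "$file" "$yy/$mm/$name/$file"
done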
I'm trying to use BCP to dump data from a CDC function into a .dat file. I'm using the following query (which works in SQL Server 2008 R2):
USE LEESWIJZER
DECLARE @begin_time datetime
      , @end_time datetime
      , @from_lsn binary(10)
      , @to_lsn binary(10)
SET @end_time = '2013-07-05 12:00:00.000';
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (@from_lsn, @to_lsn, 'all')
(edited for readability, used in BCP as single string)
my BCP string is:
BCP "Query above" queryout "C:\temp\LWRCONTRIBUTIES.dat" -w -t ";|" -r \n -T -S {server\\instance} -o "C:\temp\LWRCONTRIBUTIES.log"
As you can see I want a resulting .dat file in unicode, and a log file. I'm guessing the "ChangeDTS" column added to the function outcome is causing my problem. Error message reads: "[Microsoft][SQL Native Client]Host-file columns may be skipped only when copying into the Server".
It may be resolved using a format file, but since this code needs to run daily, likely more than once a day, and the tables are subject to change, I'm reluctant to constantly adjust my format files (there are 100's of tables needing the same procedure).
Furthermore, this is run on a client's database, and they won't like me creating views in it.
Anybody got any idea how I can create a text file (.dat) with a selected number of columns from a CDC function?
Found the answer: regardless of which version of bcp is used, bcp can't handle declarations, it seems. If I edit those out, it works like a charm.
However, according to someone on a different forum, BCP should be able to handle declarations of variables. So I'm happy it works for me now, but still confused about why it does now and didn't before.
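For reference, the declaration-free version of the query is just the original with the variable lookups inlined; a sketch (column list and cut-off date unchanged from above):
USE LEESWIJZER
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (
       sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
     , sys.fn_cdc_map_time_to_lsn('largest less than or equal', '2013-07-05 12:00:00.000')
     , 'all')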
I have a crystal report which contains a list of absolutely referenced text files. There is one text file referenced in each body line.
e.g.
line1 c:\file1.txt
line2 c:\file2.txt
Is there any way to display the contents of these files in Crystal?
i.e. I would like each crystal body line to show the text from the referenced text file.
I'm using Crystal reports 11 with a non-standard database connector (dataflex).
You would need to set up a file DSN (in XP it's under Control Panel/Administrative Tools/Data Sources (ODBC)) and then use the file DSN (Microsoft Text Driver) as the datasource for an ODBC (RDO) connection.
I set this test scenario up on mine like the following:
**File 1**
column1
1row1
1row2
1row3
**File 2**
column1
2row1
2row2
2row3
I set up the file DSN to point to the C drive, and in the datasource screen I added file1.txt and file2.txt to the selected tables. Then the easiest thing to do is clear the links between the tables so that it pulls every row. It will warn you that there are multiple starting points. I don't generally recommend this, but it will work in this case, and since it's not reporting off a database it probably isn't the end of the world. If you disregard the starting-point message and add the fields to the report, when you run it you should get the following output:
1row1 2row1
1row1 2row2
1row1 2row3
1row2 2row1
1row2 2row2
1row2 2row3
1row3 2row1
1row3 2row2
1row3 2row3
From this you can change your grouping to get the output that you need.
You can also use this same connection with subreports instead of doing this linking: have the main report pull the info from file1.txt and then put a subreport in the report footer that pulls from file2.txt. This option won't have the text collated, but you'd still have it in the same report.
Hope this helps some.
It's easier than you think. I just set up one myself before I wrote this to make sure I was giving you the right steps. Using CR version XI and a .txt file, I followed these steps:
For each text file you want to import, make a subsection in your report (e.g. DetailsA, DetailsB, etc.). If your list of text files is constantly changing (and I don't think it is, based on your description), you'll need another method.
Make sure your text file is comma delimited and the first row contains field names. If these text files are actually text (i.e. not tables), then just put a dummy variable name in the first row so Crystal will see the text as a table of data with just 1 row.
For each text file you want to display, create a new Subreport (Insert->Subreport)
In the database selection menu, go to "Create New Connection"->"Access/Excel (DAO)"
Under 'database type', you'll see a 'text' option at the bottom of the screen.
Choose your file.
Relax! (I'm in a good mood this morning, don't know why)
I guess if you have a function that takes a file name as an argument and returns the contents of that file - you could use that function in a Crystal Report formula.
I am not familiar with the current CR; it has been years since I last used it (I last used version 8). In the versions I did use, such a function was not built in. What you had to do back then was create a UFL (user function library) containing the functions you needed. If I remember correctly, you had to do this using COM.
In this day and age, I guess you can extend CR using some other mechanism, perhaps writing .NET code?
I suggest you search the CR documentation for the term UFL.
Another suggestion, then:
Create a new table FILECONTENTS (filename varchar primary key, contents blob)
Create a script that on a schedule populates this table with the filenames and contents of all the files (assuming that there is a finite number of files, and that you have a way of knowing about them)
Modify the report datasource query to join it with the FILECONTENTS table, and add the contents field to the report.
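A very rough sketch of what that might look like in SQL (table name as suggested above; the types, the blob column, and the existing datasource query are all assumptions to adapt to your database):
-- table populated on a schedule by the external script
CREATE TABLE FILECONTENTS (
    filename VARCHAR(260) PRIMARY KEY,
    contents VARCHAR(MAX)   -- or whatever BLOB/CLOB type your database offers
);

-- report datasource query, joined to pick up the file contents per row
SELECT r.*, fc.contents
FROM report_rows r                              -- hypothetical existing datasource
LEFT JOIN FILECONTENTS fc ON fc.filename = r.filename;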
You could set up a file DSN, but this is geared toward tabular file data, not text.
How big are these text files? You want to display the entire contents of each file?
There is probably no easy way to dynamically read in a file from within Crystal. You will most likely have to push a dataset containing the file contents to the report.