I am trying to export data to an existing CSV file.
I have been using these methods to export the data:
Microsoft.Jet.OLEDB.4.0
SQLCMD
Data Export Wizard
However, I cannot find any parameter/option to append the exported data to an existing file. Is there any way to do this? Thanks.
Note: this answer is biased towards *nix operating systems; I'm not too familiar with Windows.
If you can run your SQL query via the command line, either
using a scripting language with a library that creates an MSSQL connection (an example of this is a node.js program I authored, https://github.com/skilbjo/aqtl, but any tool will do), or
a Windows binary that runs something like sqlcmd from the command line,
then you can just pipe the output and append it to the CSV file. For example:
$ node runquery.js myquery.sql >> existing_csv_file.csv
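If you are on Windows and using SQLCMD (as in the question), the same append-via-redirection idea should work, since >> also appends in cmd.exe. A rough sketch only; the server, database, and query below are placeholders to replace with your own:
rem placeholders: the -S server, -d database, and query are examples only
sqlcmd -S myserver -d mydb -E -Q "SET NOCOUNT ON; SELECT col1, col2 FROM dbo.mytable" -s "," -W -h -1 >> existing_csv_file.csv
Here -s "," sets a comma separator, -h -1 suppresses the header row (useful when appending to a file that already has one), and SET NOCOUNT ON keeps the row-count message out of the output.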
I'm new to pgAdmin and GIS DBs in general. I want to upload a CSV file to pgAdmin v4.1 and I'm trying to understand the logic for doing so. I am able to do this by creating a new table under the desired DB and then manually defining the columns (name, type, etc.); only then am I able to load the CSV into pgAdmin using the GUI. This seems a rather cumbersome way to import a CSV file: say I have a CSV file with 200 columns, it is not practical to define them all manually. There must be a way to tell pgAdmin "this is the CSV file, now work out the columns yourself, infer (or at least assume) the column types, and create a new table", much like how pandas reads a CSV in Python. As I'm new to this topic, please elaborate in your answer/comment as much as possible.
NO: unfortunately, you can only import a CSV after the table is created.
YES: there is no GUI method, but:
There is a command-line utility called pgFutter which will do exactly what you want. Here are the binaries.
You can write a function that does that. Here is an example.
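If you are comfortable doing this outside pgAdmin, a hedged sketch of the same idea is to let pandas infer the column types and create the table for you (this assumes pandas, SQLAlchemy and the psycopg2 driver are installed; the connection string, CSV path and table name below are placeholders):
import pandas as pd
from sqlalchemy import create_engine

# placeholder connection string -- replace user/password/host/dbname with your own
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/dbname")

# pandas reads the CSV and infers a dtype for each column, even with 200 of them
df = pd.read_csv("data.csv")

# to_sql creates the table from the inferred dtypes and loads the rows
df.to_sql("my_new_table", engine, if_exists="fail", index=False)
After that, the new table is visible in pgAdmin like any other.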
I would look into using GDAL to upload your CSV file into PostGIS.
I used this recently to do a similar job.
ogr2ogr -f "PostgreSQL" -lco GEOMETRY_NAME=geometry -lco FID=gid PG:"host=127.0.0.1 user=username dbname=dbname password=********" postgres.vrt -nln th_new_data_2019 -t_srs EPSG:27700
The command above uploads a CSV to PostGIS and transforms the coordinate system.
-f format_name:
output file format name; some possible values are:
-f "ESRI Shapefile"
-f "TIGER"
-f "MapInfo File"
-f "GML"
-f "PostgreSQL
-lco NAME=VALUE:
Layer creation option (format specific)
-nln name:
Assign an alternate name to the new layer
-t_srs srs_def:
target spatial reference set. The coordinate systems that can be passed are anything supported by the OGRSpatialReference.SetFromUserInput() call, which includes EPSG PCS and GCSes (i.e. EPSG:4296), PROJ.4 declarations (as above), or the name of a .prj file containing well known text.
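For a plain, non-spatial CSV, roughly the same command should work with the CSV file itself as the source. A sketch only: the connection details are placeholders, and AUTODETECT_TYPE is a CSV-driver open option in recent GDAL versions that asks it to guess column types, so verify it against your GDAL version:
# placeholders for connection details; -oo AUTODETECT_TYPE=YES asks the CSV driver to guess column types
ogr2ogr -f "PostgreSQL" PG:"host=127.0.0.1 user=username dbname=dbname password=********" mydata.csv -oo AUTODETECT_TYPE=YES -nln my_new_table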
The best and simplest guide for installing GDAL that I have used is:
https://sandbox.idre.ucla.edu/sandbox/tutorials/installing-gdal-for-windows
I am trying to export DataStage job designs with executables. Below are the screenshots of the export options I use in the GUI.
These are the two commands I use:
dsexport.exe /h=XX /U=XX /p=XX projectXXX /job=XXX jobname.dsx
dsexport.exe /h=XX /U=XX /p=XX projectXXX /job=XXX /EXEC /APPEND jobname.dsx
The file generated by the commands is bigger than the one from the GUI. Does anyone know how to use the dsexport command to export jobs with the same options as in the GUI screenshots? Much appreciated. I am using Designer V8.5.
C:\IBM\InformationServer\Clients\Classic>dsexport /d={ip address of server} /u={user id} /p={password} /job={job to export} {Project where job is located in} {FileName.dsx}
Try this; it will export a single dsx file with all the information.
P.S. I am using version 11.3.
As you can see, the GUI excludes some read-only files which are not excluded on the command line; this is why there is a difference in file size.
You have "Include Dependent Items" unchecked in the GUI. The command line will include dependent items by default (i.e. shared containers or routines). You can disable this behaviour on the command line by using the /NODEPENDENTS command switch.
I am looking to batch download a large number of files (>800). I have a text file with the list of all the filenames. These filenames are then used to derive the URL from which they can be downloaded. I had been parsing through the file with a python script, using subprocess to wget the files.
wget ftp://ftp.name.of.site/filename-prefix/filename/filename+suffix
However, for reasons unknown to me, wget is failing to connect properly. I wanted to know if I could essentially use an FTP program that would work in a similar manner, i.e. no login required, staying within the command line.
Edit:
What's in my text file:
ERS032033
ERS032214
ERS032234
ERS032223
ERS032218
The ERS### act as the prefix. The whole thing is the filename. The final file (i.e. filename+suffix) would look something like: ERS032033_1.fastq.gz
Submitting the correct url is not the problem.
Since you are using Python, I suggest dropping the subprocess approach and using the urllib module instead:
import urllib
handle = urllib.urlopen('ftp://ftp.name.of.site/filename-prefix/filename/filename+suffix')
print handle.read()
handle.close()
This assumes you are using Python 2 (use urllib.request for Python 3).
If you simply need batch download, urllib.urlretrieve is a cleaner approach.
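A minimal sketch of that for your file list, in Python 2 to match the snippet above (the list filename and the exact FTP path layout are assumptions based on the question, so adjust them to the real layout):
import urllib

# placeholder: the text file containing one accession (e.g. ERS032033) per line
with open('filenames.txt') as f:
    for line in f:
        name = line.strip()
        if not name:
            continue
        # derive the URL from the accession as described in the question;
        # the prefix path here is an assumption -- adjust to the real directory layout
        url = 'ftp://ftp.name.of.site/filename-prefix/%s/%s_1.fastq.gz' % (name, name)
        # urlretrieve downloads straight to a local file, with no login step
        urllib.urlretrieve(url, '%s_1.fastq.gz' % name)
        print 'Downloaded', name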
Is there any MS-DOS command to get the version of an executable (or DLL) file?
Of course there is a simple command ;-)
wmic /node:"servername" datafile where name='c:\\Program Files (x86)\\Symantec\\Symantec Endpoint Protection\\smc.exe' get version
You can omit /node to perform the check on the local computer, and if you omit "get version" you get all the values and column names. Of course, the standard wmic parameters are available, like /output:filename, /append:filename or /format:csv, and you can use #list.txt instead of a server name to perform the check on a list of machines.
Either use PowerShell (see Get file version in PowerShell), or Windows Explorer, or write your own utility; I don't think that MS-DOS supports this natively.
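If you go the PowerShell route, a one-line sketch (the path below is just an example file):
# example path only; VersionInfo is available on any file object returned by Get-Item
(Get-Item 'C:\Windows\notepad.exe').VersionInfo.FileVersion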
You could load the executable as a binary file and read the PE headers manually...
http://www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx
You can try Resource Hacker with the following syntax:
reshack.exe -extract "path\to\my\file.dll," ver.rc, VERSIONINFO, , && findstr FILEVERSION ver.rc
Beware of commas. Make sure you can create ver.rc.
I can tell hbase to disable and delete particular tables using:
disable 'tablename'
drop 'tablename'
But I want to delete all the tables in the database without hardcoding the names of any of the tables. Is there a way to do this? I want to do this through the command-line utility ./hbase shell, not through Java or Thrift.
disable_all and drop_all have been added as commands in the HBase Ruby shell; they were added in JIRA HBASE-3506. These commands take a regex of table names to disable/drop, and they will ask for confirmation before continuing. That should make dropping lots of tables pretty easy and not require outside libraries or scripting.
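For example, from the hbase shell (the '.*' regex here would match every table, so check the list shown at the confirmation prompt before agreeing):
disable_all '.*'
drop_all '.*'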
I have a handy script that does exactly this, using the Python Happybase library:
import happybase

# connect to the local HBase Thrift server (default host/port)
c = happybase.Connection()

for table in c.tables():
    # a table must be disabled before HBase will let you delete it
    c.disable_table(table)
    c.delete_table(table)
    print "Deleted: " + table
You will need Happybase installed to use this script, and you can install it as:
sudo easy_install happybase
You can pipe commands to the bin/hbase shell command. From there you can use some scripting to grab the table names and pipe the disable/delete commands back to hbase.
i.e.
echo "list" | bin/hbase shell | ./filter_table_names.pl > table_names.txt
./turn_table_names_into_disable_delete_commands.pl table_names.txt | bin/hbase shell
There is a hack.
Open the $HBASE_HOME/lib/ruby/shell/commands/list.rb file and add the line below at the bottom of the command method.
return list
After that, the list command returns an array of the names of all tables.
Then just do this:
list.each {|t| disable t;drop t}
I'm not deleting tables through the hbase shell, but I delete them from the command line by:
- deleting my Hadoop distributed filesystem directory, then
- creating a new, clean Hadoop distributed filesystem directory, then
- formatting my Hadoop distributed filesystem with 'hadoop namenode -format', then
- running start-all.sh and start-hbase.sh
Reference:
http://hadoop.apache.org/common/docs/r0.20.1/api/overview-summary.html#overview_description
If you're looking for something that will do this in a 'one-liner' via a shell script you can use this method:
$ echo 'list.each {|t| disable t; drop t}; quit;' | hbase shell
NOTE: the above was run from a Bash shell prompt. It echoes the commands into the hbase shell, loops through all the tables returned by the list command, disabling & dropping each one as it iterates through the array, and then quits once it's done.