need to load table data to xml file in mysql - mysql-workbench

I am currently using this command to load table data into an XML file in MySQL:
mysql --user=username--password=pwd test123 --xml 'select * from TableName' > XmlFileName.xml
It generates an XML file in the following format:
<row>
<column1>value1</column1>
<column2>value2</column2>
</row>
but I want to generate the XML file like this:
<row>
<field name='column1'>value1</field>
<field name='column2'>value2</field>
</row>
You can refer to this link (I need to generate the XML in the third format shown there):
https://docs.oracle.com/cd/E17952_01/refman-5.5-en/load-xml.html

You need mysqldump, try:
mysqldump --xml --user=username --password=pwd test123 TableName
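For what it's worth, mysqldump --xml wraps each row's values in <field name="..."> elements, which is exactly the format you want. The output should look roughly like this (a sketch; the exact wrapper elements and attributes vary by version):
<mysqldump>
<database name="test123">
<table_data name="TableName">
<row>
<field name="column1">value1</field>
<field name="column2">value2</field>
</row>
</table_data>
</database>
</mysqldump>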

There are some problems with the command you pasted in your question:
You need the -e flag.
The database name should be included in the query, like database.table.
You are missing a space between the username and the --password flag.
It should look like this:
mysql --user=username --password=pwd --xml -e 'SELECT * FROM DbName.TableName' > XmlFileName.xml
This should give the desired output, as will the other answer posted here by Rodney Salcedo, which uses mysqldump.

To export data as XML, try:
mysql -uroot -p --xml -e "SELECT * FROM table_name" > table_name.xml
To import XML data, try (note that LOAD XML needs the INFILE keyword and a quoted file name):
LOAD XML LOCAL INFILE 'table_name.xml' INTO TABLE table_name;
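By default, LOAD XML looks for rows wrapped in <row> tags, which matches the --xml output above. If your file uses a different row tag, you can name it explicitly (a sketch; LOCAL assumes the file sits on the client machine):
LOAD XML LOCAL INFILE 'table_name.xml' INTO TABLE table_name ROWS IDENTIFIED BY '<row>';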
Have a look at tools like MySQL Workbench, which can help you export the data obtained after executing a particular query. I recently looked at SQLyog, which sports the same feature; the difference is that the latter has a good GUI.

Related

How to use pgbench?

I have a table on pgadmin4 which consists of 100,000 rows and 23 columns. I need to benchmark PostgreSQL on this specific table using pgbench, but I can't understand what parameters I should use. The database name is desdb and the table is called test.
PgAdmin4 is not a database server, it is a client. You don't have tables "on" pgadmin4, pgadmin4 is just one way of accessing tables which are on an actual server.
You don't benchmark tables, you benchmark queries. Knowing nothing about the table other than its name, all I could propose for a query is something like:
select * from test
Or
select count(*) from test
You could put that in a file test.sql, then run:
pgbench -n -f test.sql -T60 -P5 desdb
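(Here -n skips the vacuum that pgbench would otherwise try to run on its own standard tables, -f points at your custom script, -T60 runs the test for 60 seconds, and -P5 prints a progress report every 5 seconds.)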
If you are like me and don't like littering your filesystem with bunches of tiny files of no particular interest, and if you use the bash shell, you can skip creating a test.sql file and instead make it dynamic:
pgbench -n -f <(echo 'select * from test') -T60 -P5 desdb
Whether that is a meaningful query to be benchmarking, I don't know. Do you care about how fast you can read (and then throw away) all columns for all rows in the table?
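If the table has an indexed key column, a more meaningful script would benchmark point lookups with a random parameter. A sketch for a reasonably recent pgbench, assuming a hypothetical integer column id covering 1 to 100000:
\set id random(1, 100000)
SELECT * FROM test WHERE id = :id;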
You can find more details regarding pgbench at: https://www.cloudbees.com/blog/tuning-postgresql-with-pgbench.

How to export table data from PostgreSQL (pgAdmin) to CSV file?

I am using pgAdmin version 4.3 and I want to export one table's data to a CSV file. I used this query:
COPY (select * from product_template) TO 'D:\Product_template_Output.csv' DELIMITER ',' CSV HEADER;
but it shows this error:
a relative path is not allowed to use COPY to a file
How can I resolve this problem? Any help, please?
From the query editor, once you have executed your query, you just have to click on the "Download as CSV (F8)" button or use F8 key.
Source pgAdmin 4 Query Toolbar
Use absolute paths, or cd to a known location so that you can ignore the path.
For example, cd into your Documents directory and then run the commands there.
If you are able to cd into your Documents directory, then the command would look like this:
Assuming you want to use psql from the command line:
cd ~/Documents && psql -h host -d dbname -U user
\COPY (select * from product_template) TO 'Product_template_Output.csv' DELIMITER ',' CSV HEADER;
The result would be Product_template_Output.csv in your current working directory (the Documents folder).
Again using psql.
You have to remove the double quotes:
COPY (select * from product_template) TO 'D:\Product_template_Output.csv'
DELIMITER ',' CSV HEADER;
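If you keep hitting the path error, remember that COPY ... TO writes a file on the database server, not on your client machine, which is why relative (and unreachable) paths are rejected. psql's \copy runs the same query but writes through the client instead, so any local path works (a sketch):
\copy (SELECT * FROM product_template) TO 'Product_template_Output.csv' CSV HEADER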
If your pgAdmin instance resides on a remote server, the aforementioned solutions might not be handy if you do not have remote access to the server. In this case, simply select all the query data and copy it, then open an Excel file and paste it in. Simple!
You might have a tough time if your query result is too large, though.
Try this command:
COPY (select * from product_template) TO 'D:\Product_template_Output.csv' WITH CSV;
In pgAdmin, the export option is available in the File menu. Execute the query, and then we can view the data in the Output pane. Click on the menu File -> Export from the query window.
psql to export data:
COPY noviceusers(code, name) TO 'C:\noviceusers.csv' DELIMITER ',' CSV HEADER;
See https://www.novicetechie.com/2019/12/export-postgresql-data-in-to-excel-file.html for reference.
1. Write your query to select data in the query tool and execute it.
2. Click on the download button in the pgAdmin top bar.
3. Rename the file to your liking.
4. Select which folder to save the file in.
Congrats!!!

What is the purpose of the sql script file in a tar dump?

In a tar dump
$ tar -tf dvdrental.tar
toc.dat
2163.dat
...
2189.dat
restore.sql
After extraction
$ file *
2163.dat: ASCII text
...
2189.dat: ASCII text
restore.sql: ASCII text, with very long lines
toc.dat: PostgreSQL custom database dump - v1.12-0
What is the purpose of restore.sql?
toc.dat is binary, but I can open it and it looks like an SQL script too. How different are the purposes of restore.sql and toc.dat?
The following quote from the documentation doesn't answer my question:
with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects
in a machine-readable format that pg_restore can read.
Since a tar dump contains restore.sql besides the .dat files, what is the difference between the script files restore.sql and toc.dat in a tar dump and the single SQL script file of a plain dump?
Thanks.
restore.sql is not used by pg_restore. See this comment from src/bin/pg_dump/pg_backup_tar.c:
* The tar format also includes a 'restore.sql' script which is there for
* the benefit of humans. This script is never used by pg_restore.
toc.dat is the table of contents. It contains the commands to create and drop each object in the dump and is used by pg_restore to create the objects. It also contains the COPY statements that load the data from the *.dat files.
You can extract the table of contents in human-readable form with pg_restore -l, and you can edit the result to restore only specific objects with pg_restore -L.
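For example (a sketch, using the dvdrental.tar dump from the question):
pg_restore -l dvdrental.tar > dump.list
# edit dump.list, commenting out unwanted entries with a leading ';'
pg_restore -L dump.list -d dvdrental dvdrental.tar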
The <number>.dat files are the files containing the table data, they are used by the COPY statements in toc.dat and restore.sql.
This looks like a script to restore the data to PostgreSQL. The script was created using pg_dump.
If you'd like to restore, please have a look at pg_restore.
The .dat files contain the data to be restored by the \copy commands in the SQL script.
The toc.dat file is not referenced inside the SQL file. If you try to peek inside using cat toc.dat | strings, you'll find that it contains data very similar to the SQL file, but with a few more internal IDs.
I think it might have been intended to work without the SQL at some point, but that's not how it works right now; see the code that generates the TOC here.

In SAS, How Do I Create/Alter Postgres tables?

I have SAS code that will write to a Postgres table if it is already created but still empty. How can I create/alter a Postgres table from SAS (or using a script that pulls in SAS macro variables) if it does not exist or already has data? The number of fields may change. Currently, I use the filename option along with a pipe to write to the Postgres table.
filename pgout pipe %unquote(%bquote(')/data/dwight/IFS6.2/app/PLANO/sas_to_psql.sh
%bquote(")&f_out_schema.%bquote(").&file_name.
%bquote(')
)
;
I've tried using this version, but it does not work:
filename pgout pipe %unquote(%bquote(')/data/dwight/IFS6.2/app/PLANO/sas_to_psql.sh
%bquote('')CREATE TABLE mdo_backend.fob_id_desc
SELECT * FROM &library_name..&file_name.
%bquote(")&f_out_schema.%bquote(").&file_name./('')/
%bquote(')
)
;
This is the script I use:
LOAD_TO_PSQL.SH
#!/bin/bash
. /data/projects/ifs/psql/config.sh
psql -d $DB -tAq -c "COPY $1 FROM STDIN USING DELIMITERS '|'"
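One hedged way to extend this pattern for CREATE/ALTER: add a companion script that runs arbitrary DDL through psql before you stream the data, and call it from SAS with a second filename pipe. A sketch (ddl_to_psql.sh and its usage are hypothetical, not part of the original setup):
DDL_TO_PSQL.SH
#!/bin/bash
. /data/projects/ifs/psql/config.sh
# run one SQL statement passed as the first argument, e.g. a CREATE TABLE or ALTER TABLE
psql -d $DB -c "$1"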

Import from .sql file to cassandra table

I am new to the Cassandra database. I have created a keyspace and a table. Then,
I would like to execute a large number of queries that are saved inside a test.sql file (I don't want to import from CSV). My test.sql file content looks like this:
INSERT INTO contact_detail (<fields>) VALUES(<values>);
INSERT INTO contact_detail (<fields>) VALUES(<values>);
INSERT INTO contact_detail (<fields>) VALUES(<values>);
INSERT INTO contact_detail (<fields>) VALUES(<values>);
I have saved this file into the Cassandra bin directory.
How can I run this file and execute all the insert statements? I need to insert all the values into the Cassandra database; how is that possible?
Save the file on one of the nodes, then run cqlsh -f <file name> -k <keyspace name> on that node.
For more info check here: http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cqlsh.html
You can use the SOURCE command from inside cqlsh to run any CQL script (the file path is relative to the directory from which you launched cqlsh).
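For example (a sketch; your_keyspace stands in for whatever keyspace you created):
cqlsh -k your_keyspace
cqlsh:your_keyspace> SOURCE 'test.sql';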