Get data from .CDB file - cdb

I would like to get data from a .cdb file. Is it possible to retrieve data from a .cdb file without knowing the key names?

If you are talking about CDB constant database files, the cdbdump program will dump all data in cdbmake format on standard output.
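
If you need the same thing from a script, a minimal sketch in Python can simply shell out to cdbdump. This assumes the cdb tools are installed and uses data.cdb as a placeholder file name:

    import subprocess

    # cdbdump reads a constant database on stdin and writes every record to
    # stdout in cdbmake format (+klen,dlen:key->data), so no key names are needed.
    with open("data.cdb", "rb") as f:
        result = subprocess.run(["cdbdump"], stdin=f,
                                capture_output=True, check=True)

    print(result.stdout.decode("utf-8", errors="replace"))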

Related

Convert OpenStreetMap POI Data to CSV

I am looking to extract some Point of Interest (POI) data from OpenStreetMap in a tabular format. I used this link, navigated to the relevant country, and downloaded the file:
http://download.geofabrik.de/
What I get is a file with the .osm.pbf extension. However, I think it is possible to download the files in other formats like .shp.zip and .osm.bz2. Is there some way that I can convert this data into a tabular format like a CSV file?
I came across a tool called Osmosis which can be used to manipulate data in these formats, but I am not sure if it can be used for this purpose:
https://wiki.openstreetmap.org/wiki/Osmosis
I was able to install it successfully on my Windows machine, though.
To be frank, I am not even sure if this gives me what I want.
In essence, what I am looking for is Sri Lankan POI data that contains the following attributes:
https://data.humdata.org/dataset/hotosm_lka_points_of_interest
If converting this file does not give me data in this format, then I am open to other approaches as well. What is the best way to go about acquiring this data?
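
One possible route, offered only as a hedged sketch rather than a tested recipe: instead of Osmosis, the pyosmium Python package can read an .osm.pbf extract directly, which makes it straightforward to dump tagged nodes to CSV. The file name and the choice of the amenity/name tags below are assumptions on my part:

    import csv
    import osmium  # pip install osmium (the pyosmium package)

    class PoiWriter(osmium.SimpleHandler):
        """Write every node carrying an 'amenity' tag as one CSV row."""
        def __init__(self, writer):
            super().__init__()
            self.writer = writer

        def node(self, n):
            amenity = n.tags.get("amenity")
            if amenity:
                self.writer.writerow([n.id, n.location.lat, n.location.lon,
                                      amenity, n.tags.get("name", "")])

    with open("pois.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["osm_id", "lat", "lon", "amenity", "name"])
        # sri-lanka-latest.osm.pbf is a placeholder for the Geofabrik download
        PoiWriter(writer).apply_file("sri-lanka-latest.osm.pbf")

POIs stored as ways or relations would need the corresponding way()/relation() handlers as well, so treat this as a starting point only.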

Is it possible to convert a Postgres dump to CSV?

I have a Postgres 12.7 dump file (binary format), 300 GB; for simplicity, assume it contains one table.
I want to read the data without installing a server. It would be nice to read it by converting to CSV format on the fly.
Is it possible to read from the dump in any way, preferably with C# or Java?
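
For what it's worth, one hedged approach that avoids a running server: if the dump is in pg_dump's custom format, the pg_restore client alone can emit the table contents as a COPY block on standard output, which a small script can rewrite as CSV while streaming. A rough Python sketch; the dump file name is a placeholder and the code does not unescape backslash sequences inside values:

    import csv
    import subprocess
    import sys

    # With neither -d nor -f given, pg_restore writes a plain SQL script to
    # stdout. --data-only emits the rows as a COPY block: tab-separated
    # lines with \N for NULL, terminated by a line containing "\.".
    proc = subprocess.Popen(["pg_restore", "--data-only", "dump.backup"],
                            stdout=subprocess.PIPE, text=True)

    writer = csv.writer(sys.stdout)
    in_copy = False
    for line in proc.stdout:
        line = line.rstrip("\n")
        if line.startswith("COPY "):
            in_copy = True
            continue
        if line == "\\.":
            in_copy = False
            continue
        if in_copy:
            writer.writerow("" if field == "\\N" else field
                            for field in line.split("\t"))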

Store .sav file into RDBMS including metadata

I want to know the best approach to store the data from a .sav file in an RDBMS database without losing any of the metadata model or the actual response data.
Note first that you can save all the metadata in a sav file where you have deleted all the data and then reapply the metadata to a new, similar sav file using APPLY DICTIONARY.
Otherwise, you would need to create tables in the database for the various attributes. That's easy for variable labels, formats, measurement level, and missing value codes. For value labels it would take a bit more work.
One possible approach would be to use OMS to capture the output from CODEBOOK (without any statistics) as data files and then export those files to the database.
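
Outside of SPSS itself, one alternative worth mentioning (purely as a sketch, not part of the answer above) is the pyreadstat Python package, which reads a .sav file together with its dictionary; the responses and the variable-level metadata can then be written to separate database tables. The file name, connection string, and table layout here are assumptions:

    import pandas as pd
    import pyreadstat
    from sqlalchemy import create_engine

    # read_sav returns the response data plus a metadata container holding
    # variable labels, value labels, formats, measurement levels, and so on.
    df, meta = pyreadstat.read_sav("survey.sav")                   # placeholder file

    engine = create_engine("postgresql://user:pass@localhost/db")  # placeholder DSN

    # Actual responses in one table.
    df.to_sql("responses", engine, if_exists="replace", index=False)

    # Variable-level attributes in a second table.
    pd.DataFrame({
        "variable": meta.column_names,
        "label": meta.column_labels,
        "measure": [meta.variable_measure.get(v) for v in meta.column_names],
    }).to_sql("variable_metadata", engine, if_exists="replace", index=False)

    # Value labels need a third table (one row per variable/value pair),
    # which mirrors the extra work mentioned above.
    value_rows = [(var, val, lab)
                  for var, labels in meta.variable_value_labels.items()
                  for val, lab in labels.items()]
    pd.DataFrame(value_rows, columns=["variable", "value", "label"]) \
        .to_sql("value_labels", engine, if_exists="replace", index=False)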

Dynamically create table from csv

I am faced with a situation where we get a lot of CSV files from different clients, but there is always some mismatch between the column count and column lengths in the files and what our target table is expecting.
What is the best way to handle frequently changing CSV files? My goal is to load these CSV files into a Postgres database.
I checked the \copy command in Postgres, but it does not have an option to create the table.
You could try creating a pg_dump compatible file instead which has the appropriate "create table" section and use that to load your data instead.
I recommend using an external ETL tool like CloverETL, Talend Studio, or Pentaho Kettle for data loading when you're having to massage different kinds of data.
\copy is really intended for importing well-formed data in a known structure.
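
If a full ETL tool is more than you need, a small loader script can supply the table creation that \copy lacks: read the header row, create an all-text staging table from it, then stream the file in. A rough Python/psycopg2 sketch; the file name, table name, connection string, and the all-text typing are assumptions:

    import csv
    import psycopg2

    CSV_PATH = "client_feed.csv"          # placeholder file name
    TABLE = "staging_client_feed"         # placeholder table name

    with open(CSV_PATH, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f))

    # Create every column as text so changing counts and lengths never break
    # the load; casting and validation happen in a later step.
    cols = ", ".join('"{}" text'.format(c.strip().lower().replace(" ", "_"))
                     for c in header)

    conn = psycopg2.connect("dbname=mydb user=me")   # placeholder connection
    with conn, conn.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS {}".format(TABLE))
        cur.execute("CREATE TABLE {} ({})".format(TABLE, cols))
        with open(CSV_PATH, newline="", encoding="utf-8") as f:
            # copy_expert streams the file from the client, much like \copy.
            cur.copy_expert(
                "COPY {} FROM STDIN WITH (FORMAT csv, HEADER true)".format(TABLE), f)
    conn.close()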

Archiving text log files in postgresql

We are writing a testing framework from scratch using Perl. Each test case writes a log file and we are planning to archive the resulting log files created by each test case for reporting purposes.
We are using a PostgreSQL database for storing the results. How do I archive the text log files in the PostgreSQL database? I googled and found that the bytea datatype can be used to store files in binary format. If I do so, how do I retrieve them back as text?
Any ideas will be appreciated.
If your log files are text files, then you should use the TEXT datatype to store them. If the log files are binary (or, perhaps, compressed text files), then you'd want to use BYTEA. In either case, you can INSERT and SELECT them just like any other column type when using DBI. If they're really large then you might want to play with the LongReadLen DBI parameter and read the DBI manual section on BLOBs.
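
The question is about Perl and DBI, but purely to illustrate the column choice, here is an equivalent sketch in Python with psycopg2; the table name, columns, file name, and connection details are assumptions:

    import psycopg2

    conn = psycopg2.connect("dbname=testresults user=me")   # placeholder connection
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS test_logs (
                id        serial PRIMARY KEY,
                test_name text NOT NULL,
                log_text  text   -- switch to bytea for binary or compressed logs
            )""")

        # Storing a text log file: pass the decoded string like any other value.
        with open("testcase_42.log", encoding="utf-8") as f:   # placeholder file
            cur.execute("INSERT INTO test_logs (test_name, log_text) VALUES (%s, %s)",
                        ("testcase_42", f.read()))

        # Retrieving it back as text is an ordinary SELECT.
        cur.execute("SELECT log_text FROM test_logs WHERE test_name = %s",
                    ("testcase_42",))
        print(cur.fetchone()[0])
    conn.close()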