Manifold/PostGIS data manipulation and export

I'm currently working on a GIS database project using Manifold Ultimate.
I am able to import data from PostGIS via the database console, and edit the data as a table object within Manifold.
How do I 'commit' these changes back to PostGIS?
I am required to submit the exported database. What format is expected for a PostGIS export and how is the exporting done?

#mdsumner is correct. Linking the PostGIS data is the way to go.
If you have exported the complete table and edited records, it's not simple to replace the data already present in PostGIS with a new export. The new export will fail until you have deleted all the tables, indexes, triggers and sequences whose names are derived from the name of the exported drawing (with inconsistent handling of lower case). It's not enough to drop the table alone.
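As a rough, hedged sketch of the clean-up needed before a fresh export ('my_drawing' here is only a placeholder for the exported drawing's name):
DROP TABLE IF EXISTS my_drawing CASCADE;  -- removes the table plus its dependent indexes and triggers
SELECT relname FROM pg_class WHERE relkind = 'S' AND relname ILIKE 'my_drawing%';  -- list leftover sequences derived from the drawing name
-- then DROP SEQUENCE each name the query returns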
Note that with Manifold's linked storage model there is no client-side buffer of edited, added or deleted records that gets written back when a transaction is committed. Every edit to every single column is written to PostGIS immediately.
Concerning your second question: that depends on the target system. Manifold exports geometries of the generic GEOMETRY type, while other PostGIS clients may only digest a single type (point, line or polygon). You can edit the type in "geometry_columns.type" as long as you have added only one type of object to the drawing.
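For example, a minimal sketch with placeholder names: on older PostGIS releases geometry_columns is an ordinary table you can update, while on PostGIS 2.x and later it is a view, so you constrain the geometry column itself instead.
UPDATE geometry_columns SET type = 'POLYGON'
 WHERE f_table_name = 'my_drawing' AND f_geometry_column = 'geom';  -- legacy PostGIS only
ALTER TABLE my_drawing
  ALTER COLUMN geom TYPE geometry(Polygon, 4326);  -- PostGIS 2.x+; replace 4326 with the column's actual SRID
Either way this is only safe if the drawing really contains a single geometry type, as noted above.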

I think that if you imported the data it is no longer linked to the DB, and you would need to export it and replace what is in the DB. If you link the data, the edits you make are committed "live", as the data is not a copy but remains stored by the DB.
I'm not that familiar with this, but that's what the Database Console topic in help describes.

Related

Is there a way to link (not import!) a dbf table into a PostgreSQL database?

I need to link an external dbf table into an existing PostgreSQL database. I managed to import that table into PostgreSQL, but that creates a copy and therefore redundant data. The dbf table is the attribute table of a shapefile, but I don't need the spatial aspect of it.
The exercise is part of a project to move data from an MS Access database to PostgreSQL, hoping that the data then becomes accessible from QGIS. The dbf table is at the moment linked into the MS Access database and used in queries (views) which I want to re-build in PostgreSQL.
I found lots of posts about importing dbf tables into PostgreSQL, but nothing that would work for linking a dbf table. The closest I got was the Foreign Data Wrapper, but I didn't manage to use it for my purpose. I'm using PostgreSQL with pgAdmin 4.24.
Many thanks
The exercise is part of a project to move data from an MS Access database to PostgreSQL hoping that the data then become accessible from QGIS.
If you must use PostgreSQL in order to provide access to your spatial data from QGIS, I see no other option than importing the shapefile into PostgreSQL (PostGIS). If for whatever reason you do not need the geometries, you can drop the geometry column after importing the shapefile into the database:
ALTER TABLE table_name DROP COLUMN column_name;
Alternative scenario:
If we're talking about static shapefiles and you don't really need to use PostgreSQL, you can use GeoServer to publish the shapefile via Web Feature Service (WFS) - that is at least what I do in small projects. The easiest option would be to copy the shapefiles into a so-called GeoServer Data Directory and publish them afterwards. After that you'd be able to access the data from QGIS using its WFS client.
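As for the Foreign Data Wrapper route mentioned in the question, a minimal sketch using the ogr_fdw extension (assuming it is installed; the server name and path are placeholders) would keep the dbf linked rather than copied:
CREATE EXTENSION ogr_fdw;
CREATE SERVER dbf_source FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource '/path/to/shapefile_folder', format 'ESRI Shapefile');
IMPORT FOREIGN SCHEMA ogr_all FROM SERVER dbf_source INTO public;  -- exposes each layer as a foreign table
Whether this is workable depends on the database server having file access to the dbf, which is often the sticking point.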

How to copy Tableau Data Extract logic?

Someone in my org created a Data Extract. There is an issue in one of the worksheets that uses it, and we suspect it's due to a mistake in how the Union was built.
But since it's a Data Extract, I can't see the UI for the data merge. Is there any way to take a current Data Extract and view the logic that creates it?
Download the extract from the server (I'm assuming you're using Tableau Server), then open that extract using Tableau Desktop. You should be able to see the details of it.
Before going too deep into extract details, note that extracts are not intended to be permanent systems of record for data - just an efficient way to work with query results for optimized reporting. So in general, you should always be able to throw away the extract and look at the original source - or recreate the extract on command. But life isn't always perfect so ...
If you use Tableau Desktop to look at your worksheet and check the data source icon at the top of the data pane in the left sidebar, do you see an icon for your data source that looks like two databases, one on top of (shadowing) the other? If so, you can right-click the data source icon and view its properties to see the source database table or file path. You can then even try disabling the extract to view the original source data.
If instead you see a single database icon, you have a "naked" extract where the reference to the original source has been discarded (unless it is stored in the catalog mentioned below).
If your organization purchased the Data Management Add-on for Tableau Server (strongly recommended), then if your data source is published to Tableau Server you can trace its history and origin by exploring the Tableau Catalog. That is especially valuable if the extract was built by a Tableau Prep Flow.
If instead, someone built the extract another way, say by writing a custom app using the Tableau Data Extract API, then the answer is to find that program.
One last point: in recent versions of Tableau, extracts are stored in an efficient relational-style database file format called Hyper. A Hyper extract can either be a single table (say, serializing the results of a query that joins multiple tables) or contain multiple tables (say, caching individual tables and deferring the join until later).
That may not be relevant to your question, but could turn out to matter as you reverse engineer how the extract was created.

When you create a free form report in MicroStrategy, is it possible to do the mapping automatically?

When you finish the free form query in MicroStrategy, the next step is to map the columns.
Is there any way to do it automatically? At least generate the list of columns with their names.
Thanks!!!!
Sadly, this isn't possible. You will have to map all columns manually.
While this functionality isn't possible with freeform reporting specifically, MicroStrategy Data Import will give you the ability to create Data Import cubes. These cubes can be configured as live connections, meaning they execute against the selected data source every time they are used, rather than being your typical snapshot cube. Data Imports from a database can be sourced from a database query. This effectively allows you to write your own SQL, with the end result being a report whose columns you did not have to specify manually.

SSIS or TSQL for SQL Server/MySQL table comparison

I am new to SSIS and am after some assistance in creating an SSIS package to do a specific task. My data is stored remotely in a MySQL database and is downloaded to a SQL Server 2014 database. What I want to do is create a package where I can enter 2 dates that are compared against the create date/date modified per record on a number of tables, giving me a snapshot comparison of the MySQL data against the SQL Server data, so that I can see whether any rows are missing from my local SQL Server database or need to be updated. Some tables have no dates, so for those I just want a record count of what, if anything, is missing between the two. If this is better achieved through T-SQL I am happy to hear other suggestions, or about sites where something similar has been done.
In relation to your query, Tab:
"Hi Tab, what happens at the moment is that our master data is stored in a MySQL database, and the data was downloaded to a SQL Server database as a one-off. I currently have an SSIS package that uses the MAX ID, which can be found on most of the tables, to work out which records are new, and it just downloads or updates them. What I want to do is run separate checks on the tables to make sure that nothing was missed during the download and everything is in sync. In an ideal world I would like to pass a date range, say a calendar week, into an SSIS package or T-SQL stored procedure; this would then check for any differences between the remote MySQL database tables and the local SQL Server tables. It does not currently have to do anything but identify issues; correcting them may come later, or changes would need to be made to the existing sync package. Hope this makes more sense."
Thanks P
To do this, you need to implement a Type 1 Slowly Changing Dimension style data flow in SSIS. There are a number of ways to do this, including a built-in transformation aptly called the Slowly Changing Dimension transformation. While this is easy to set up, it is a pain to maintain and it runs horrendously slowly.
There are numerous ways to set this up using other transformations or even SQL merge statements which are detailed here: https://bennyaustin.wordpress.com/2010/05/29/alternatives-to-ssis-scd-wizard-component/
I would recommend that you use Lookup transformations: they perform better than the Slowly Changing Dimension transformation, while offering better diagnostics and error handling than the faster SQL merge statement.
Before you do this you will need to add a Checksum or Hashbytes column to your SQL Server data for ease of comparison with the incoming MySQL data.
In short, calculate some sort of repeatable checksum as the data is downloaded into SQL Server, then use it in an SSIS Lookup, matching on the row key, to check for changes. Where the checksum value differs for the same row key, the row needs updating; where there is no matching row key in your SQL Server data, you need to insert the new row.
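If you end up doing the check in T-SQL instead, a minimal sketch of the comparison might look like the following; the staging schema is assumed to hold the rows pulled from MySQL for the chosen date range, and the table, key and column names are hypothetical placeholders:
-- rows that are in the MySQL download but missing locally, or present with different values
SELECT s.OrderID,
       CASE WHEN t.OrderID IS NULL THEN 'missing locally' ELSE 'differs' END AS Issue
FROM staging.Orders AS s
LEFT JOIN dbo.Orders AS t ON t.OrderID = s.OrderID
WHERE t.OrderID IS NULL
   OR HASHBYTES('SHA2_256', CONCAT(s.Status, '|', s.Amount, '|', s.ModifiedDate))
   <> HASHBYTES('SHA2_256', CONCAT(t.Status, '|', t.Amount, '|', t.ModifiedDate));
-- for tables with no dates, a plain count difference is enough to flag missing rows
SELECT (SELECT COUNT(*) FROM staging.Orders) - (SELECT COUNT(*) FROM dbo.Orders) AS RowCountDifference;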

Using Data Compare to copy one database over another

I've used the Data Compare tool to update the schema between the same DBs on different servers, but what if so many things have changed (including data) that I simply want to REPLACE the target database?
In the past I've just used T-SQL: taken a backup, then restored it onto the target with the REPLACE option and/or MOVE if the data and log files are on different drives. I'd rather have an easier way to do this.
You can use Schema Compare (also by Red Gate) to compare the schema of your source database to a blank target database (and update), then use Data Compare to compare the data in them (and update). This should leave you with the target the same as the source. However, it may well be easier to use the backup/restore method in that instance.
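For reference, the backup/restore route mentioned in the question boils down to something like this (a sketch only; the database name, paths and logical file names are placeholders):
BACKUP DATABASE SourceDb TO DISK = N'C:\Backups\SourceDb.bak' WITH INIT;
RESTORE DATABASE TargetDb FROM DISK = N'C:\Backups\SourceDb.bak'
WITH REPLACE,  -- overwrite the existing target database
     MOVE N'SourceDb_Data' TO N'D:\Data\TargetDb.mdf',  -- relocate data and log files if drives differ
     MOVE N'SourceDb_Log'  TO N'E:\Logs\TargetDb.ldf';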