PostGIS Conversion Issues - postgresql

I am having an issue using PostGIS (1.5.4) data. It may be that I'm just not familiar enough with this technology to see the obvious (I'm a regular expert with nearly 4 hours of experience), but I am running into an error that I have been unable to solve with Google.
I have a table which includes Polygon data (and yes, I checked: the column type is geometry, not polygon, the Postgres native type). The problem arises when I try to run a query on the table to find which shape contains a particular point.
I am using the following query:
SELECT *
FROM geo_shape
WHERE ST_Contains(geoshp_polygon, POINT(-97.4388046000, 38.1112251000));
The error I receive is 'ERROR: function st_contains(geometry, point) does not exist'. I tried using a CAST() function, but got 'ERROR: cannot cast type geometry to polygon'. I'm guessing the issue has to do with the way the data is stored: pgAdmin shows it as hex data. I tried using ST_GeomFromHEXEWKB() just on a hunch, but received 'ERROR: function st_geomfromhexewkb(geometry) does not exist'.
I'm fairly confused as to what the issue is here, so any ideas at all would be much appreciated.

ST_Contains needs (geometry, geometry) as arguments...
Give this a try...
SELECT * FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
GeomFromText('POINT(-97.4388046000 38.1112251000)'));
Edited to correct the comma issue in the point data (WKT separates coordinates with a space, not a comma). ST_GeomFromText will also work; kinda curious what the difference is there.
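For what it's worth, GeomFromText is the old pre-ST_ alias; it still works in PostGIS 1.5 but was removed in PostGIS 2.0, so ST_GeomFromText is the safer spelling. A minimal sketch reusing the question's table and column names (the 4326 SRID argument is an assumption and must match the SRID of geoshp_polygon):
SELECT * FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
ST_GeomFromText('POINT(-97.4388046000 38.1112251000)', 4326));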

You cannot mix PostgreSQL's geometric data types with PostGIS's geometry type, which is why you see that error. I suggest using one of PostGIS's geometry constructors to help out:
SELECT *
FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
ST_SetSRID(ST_MakePoint(-97.4388046000, 38.1112251000), 4326));
Or a really quick text way is to piece together the well-known text:
SELECT 'SRID=4326;POINT(-97.4388046000 38.1112251000)'::geometry AS geom;
(this will output the hex-encoded EWKB for the geometry type).
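The same shortcut drops straight into the original query; a sketch, assuming geoshp_polygon is likewise stored with SRID 4326:
SELECT * FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
'SRID=4326;POINT(-97.4388046000 38.1112251000)'::geometry);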

Related

Get PostGIS geometry field on Drill

I have a table with a geometry column, and if I query it using PostGIS, it shows the records correctly (see the PostGIS query screenshot).
The problem is when I execute the query using Apache Drill: it shows all the records fine except the geometry one, which comes back as null (see the Drill query screenshot).
Reviewing the logs shows the following error:
WARN o.a.d.e.store.jdbc.JdbcRecordReader - Ignoring column that is
unsupported. org.apache.drill.common.exceptions.UserException:
UNSUPPORTED_OPERATION ERROR: A column you queried has a data type that
is not currently supported by the JDBC storage plugin. The column's
name was geom_multipolygon and its JDBC data type was OTHER
I tested creating the Drill storage plugin with postgis-jdbc-2.2.1.jar and postgresql-42.1.4.jar but the same error is shown.
If I use:
cast(geom_multipolygon as varchar(255))
it shows the varchar representation of the geometry. Another option is getting the MULTIPOLYGON text and transforming it to Drill binary using ST_GeomFromText(geom), but we need the binary format directly from PostGIS, so those approaches won't do.
We have seen this page: https://github.com/k255/drill-gis/issues/1 but the proposed solution doesn't work for us, so I think there must be another way to achieve this.
UPDATE: I finally found the way to make Drill show the geo fields: change the column's data type in PostGIS from geometry to bytea. It seems to be a compatibility issue. This way we can perform geospatial queries in Drill, but in PostGIS those fields are no longer geometries, so they cannot be indexed and treated as such.
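For reference, a minimal sketch of that conversion (the table name geo_table is a placeholder; ST_AsEWKB keeps the SRID inside the binary, while ST_AsBinary would give plain WKB):
ALTER TABLE geo_table
  ALTER COLUMN geom_multipolygon TYPE bytea
  USING ST_AsEWKB(geom_multipolygon);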

Oracle DB link - where clause evaluation

I have a DB2 data source and an Oracle 12c target.
Oracle has a DB link to DB2 defined, which works in general.
Now I have a huge table in DB2 which has a timestamp column (let's call it ROW_CHANGED) for row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED >'2016-08-01 10:00:00'
on the DB2 side returns exactly 1 row after about 90 seconds, which is fine.
Now I try the same query from Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local table to a remote one, which is not my case.
In my desperation, I tried the DRIVING_SITE hint, without effect.
Now I wonder when the WHERE part of the query gets evaluated. Since I have to use Oracle syntax and not DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
ROW_CHANGED is a hidden column in DB2, if that matters.
Thanks for any hint in advance.
Update
Thanks @all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected to circumvent implicit conversions that way.
Without the explicit conversion I ran into ORA-28534 (Heterogeneous Services preprocessing error), and I have no hope of touching the DB config within a reasonable time.
The explain plan, by the way, did not reveal much. It showed a FULL hint and no conversion on the predicates. It did show the ROW_CHANGED column as Date, though; I wonder why.
I tried Justin's suggestion to use a bind variable, but got ORA-28534 again. The next thing I did was to wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
  v_tmstmp TIMESTAMP := '01.08.16 10:00:00';
begin
  INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
  SELECT SRC_PK, ROW_CHANGED
  FROM lib.tbl@dblink_name
  WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as in DB2 itself. (The date format is DD.MM.YY here since that is unfortunately the default.)
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general. Even my original query finishes in no time now.
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between the two databases: there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this were a numeric column, a bind variable would almost certainly get pushed. In this case, it probably takes some playing around to figure out exactly what data type to use for your variable so that it works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
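A sketch of the passthrough approach, reusing the link and table names from the question; the statement string is sent verbatim to DB2, and the v_pk type is a guess that should be matched to SRC_PK's actual type:
DECLARE
  c BINARY_INTEGER;
  n BINARY_INTEGER;
  v_pk VARCHAR2(100); -- assumed type for SRC_PK
BEGIN
  c := dbms_hs_passthrough.open_cursor@dblink_name;
  dbms_hs_passthrough.parse@dblink_name(c,
    'SELECT SRC_PK FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
  LOOP
    n := dbms_hs_passthrough.fetch_row@dblink_name(c);
    EXIT WHEN n = 0;
    dbms_hs_passthrough.get_value@dblink_name(c, 1, v_pk);
    -- process v_pk here, e.g. insert into a local staging table
  END LOOP;
  dbms_hs_passthrough.close_cursor@dblink_name(c);
END;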

Tableau rawsqlagg_real

Could somebody please give me a little guidance on the RAWSQLAGG_REAL function in Tableau? What is the right syntax for it when it is used to get data from MySQL?
I used it as per my understanding, but I am getting the error "No such column [__measure__3]".
Code:
RAWSQLAGG_REAL("select count(Film Id) from flavia.TableforThe_top_10percent_of_the_user where count(distinct(User Id)) = %1",[it sucks])
I see a few issues here:
Instead of WHERE, use HAVING.
You have column names like Film Id; quote them, e.g. `Film Id` (backticks, since this is MySQL).
Though I must say that it is better to do this with LOD calculations, as Tableau will be able to do better query optimization that way. Plus it is less error prone and much easier to write.
I find another issue here, in addition to using HAVING instead of WHERE: the filter value should be numeric, or the operator should be LIKE and not =.
where count(distinct(User Id)) = %1
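Putting the fixes together, a hedged sketch of the calculated field (backticks assumed for MySQL identifiers; HAVING without GROUP BY treats the whole table as one group):
RAWSQLAGG_REAL("SELECT COUNT(`Film Id`) FROM flavia.TableforThe_top_10percent_of_the_user HAVING COUNT(DISTINCT `User Id`) = %1", [it sucks])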

Postgres pgrouting2 Dijkstra shortest path returns edges that don't exist

I have been struggling with a very strange problem for a few days now and can't find a solution.
I am using PostgreSQL 9.3 with the PostGIS 2 and pgRouting 2 extensions. I have imported OSM data for a city and created the topology network successfully with the pgr_createTopology() function.
I can successfully find a shortest path with the Dijkstra algorithm by executing, for example (ignore the simplified cost function):
SELECT * from pgr_dijkstra(
'SELECT id, source, target, st_length(way) as cost FROM planet_osm_roads',
5744, 5900, false, false
)
and getting the following result (seq, id1, id2, cost):
0;5744;178191032;428.359590042932
1;5749;177327184;61.7533237967002
2;5821;177327456;544.454553269731
3;5833;177338744;51.1559809959342
4;5871;177338880;71.0702814120015
5;5900;-1;0
The problem is that the returned id2 values, which correspond to the ids of the edges, are not present in the planet_osm_roads table. In fact, those values cannot be found in any column of the planet_osm_roads or planet_osm_roads_vertices_pgr tables. Am I missing something? Maybe someone has faced the same problem before.
Thank you all in advance
What kind of values do you have for edge ids?
pgRouting only supports 32-bit integer values; if your ids are larger, they will get silently truncated. This is a known problem.
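A quick sanity check against the edge query from the question (2147483647 is the 32-bit signed integer maximum):
SELECT id FROM planet_osm_roads
WHERE id > 2147483647
LIMIT 10;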

PostGIS geography query returns a string value

I have a strange issue. The lonlat column in my app works well on the development server: its output is in the form POINT(X Y). But when I move the data to the production server, the output is strange!
ActionView::Template::Error (undefined method `lon' for "0101000020E6100000541B9C887E7A52C02920ED7F80614440":String):
The lonlat value, which is encoded with SRID 4326, is being read as a string. I am almost certain the data was corrupted during the migration from development to production, because this was not a problem before the migration.
Does anyone know what about the database schema or column may cause this issue?
A geometry field stores its data as WKB. To see the WKT representation you need to change your query to something like
select ST_Astext(the_geom) as geometry from table
However, I don't know why your development environment does some kind of implicit conversion between WKB binary data and WKT strings. What versions of Postgres and PostGIS are you using?
What language is your app server written in?
Is that ActiveRecord you're using?
I suggest you try something like
float ST_X(geometry a_point);
to make sure you can read the data properly and determine whether the problem is in the data field or somewhere else.
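A sketch of that check, with the table name app_table as a placeholder (the cast matters if lonlat is a geography column, since ST_X expects a point geometry):
SELECT ST_X(lonlat::geometry) AS lon,
ST_Y(lonlat::geometry) AS lat
FROM app_table LIMIT 5;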
I would also try doing the pg_dump in a single step if you determine the problem is with the geometry column.
You can use pg_dump with the option
--exclude-table-data=pattern
where the pattern may be schema-qualified, e.g. --exclude-table-data=schema.tablename
This will bring the whole schema definition but exclude that table's data, so you can then bring only the data from the tables you need.
Turns out that when I killed the connection to the server to migrate the data, Rails did not set the schema search path (meaning it did not discover the PostGIS extension) upon reconnecting. I had to restart the server to solve the problem.