OpenStreetMap delete country data

I have installed OpenStreetMap on my server. On the first installation I imported only my country, but after a month I added another country to the same map (PostgreSQL database) using this command:
osm2pgsql -a --slim -d osm_map -C 1600 --hstore -S openstreetmap-carto-2.10.0/openstreetmap-carto.style spain-latest.osm.bz2
But now I want to delete all the Spain data from the database.
Is that possible using a script? I couldn't find anything on the web.
I'm pretty sure I have to delete from planet_osm_line, planet_osm_nodes, etc., but maybe there is already a script for doing it.
Thank you!

OK, I found a solution: I created my own script to delete data from the database using a polygon.
The only table I missed is planet_osm_rels. I found this link: relation in OSM, which explains how planet_osm_rels relates to the other tables. It is a small table, but you need to parse the text in its members column, extracting the n and w prefixes to find the corresponding node and way IDs.
delete from planet_osm_line where osm_id in (select id from planet_osm_nodes WHERE st_contains(
ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (lon / 10000000.0) || ' ' || (lat / 10000000.0) || ')',4326)));
delete from planet_osm_polygon where osm_id in (select id from planet_osm_nodes WHERE st_contains(
ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (lon / 10000000.0) || ' ' || (lat / 10000000.0) || ')',4326)));
delete from planet_osm_point where osm_id in (select id from planet_osm_nodes WHERE st_contains(
ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (lon / 10000000.0) || ' ' || (lat / 10000000.0) || ')',4326)));
delete from planet_osm_roads where osm_id in (select id from planet_osm_nodes WHERE st_contains(
ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (lon / 10000000.0) || ' ' || (lat / 10000000.0) || ')',4326)));
delete from planet_osm_ways where id in (select w.id from planet_osm_nodes n INNER JOIN planet_osm_ways w ON n.id = ANY (w.nodes) WHERE st_contains(
ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (n.lon / 10000000.0) || ' ' || (n.lat / 10000000.0) || ')',4326)));
And finally we delete nodes:
delete from planet_osm_nodes n where st_contains(ST_GeomFromText('POLYGON((110.59 -11.33484,110.59 -38.50,157 -38.50,157 -11.33484,110.59 -11.33484))',4326),
st_pointfromtext('POINT ('|| (n.lon / 10000000.0) || ' ' || (n.lat / 10000000.0) || ')',4326));
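For planet_osm_rels, the members column mentioned above has to be parsed by hand. A minimal Python sketch of that parsing (the alternating ref/role layout is my assumption about the legacy osm2pgsql middle format, and extract_member_ids is an illustrative helper, not part of any tool):

```python
def extract_member_ids(members):
    """Extract node and way IDs from a planet_osm_rels members array.

    In the (assumed) legacy osm2pgsql middle format, members alternates
    a typed reference ('n<id>' node, 'w<id>' way, 'r<id>' relation)
    with that member's role string.
    """
    node_ids, way_ids = [], []
    for ref in members[::2]:  # even positions are refs, odd positions are roles
        if ref.startswith('n'):
            node_ids.append(int(ref[1:]))
        elif ref.startswith('w'):
            way_ids.append(int(ref[1:]))
    return node_ids, way_ids

# Example: a relation with two way members and one node member.
nodes, ways = extract_member_ids(['w100', 'outer', 'w200', 'inner', 'n300', 'admin_centre'])
```

The extracted IDs can then be matched against the node and way IDs already deleted by the polygon queries above.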


Postgres: Parsing query for database objects

I've got a procedure which runs for too long.
It parses queries into arrays and then searches for intersections with objects in the database.
In the first temp table I split every statement into an array.
The second combines all possible database objects into arrays.
In the third I look for intersections between the arrays.
The procedure currently analyzes a 3-month time period.
I don't want to reduce that time period, though maybe I will if nothing else helps.
I've read that a GIN index on an array may help. What do you think?
Maybe you did it another way?
Database: PostgreSQL 11
CREATE TEMP TABLE temp_array_data
AS
(
SELECT id,
pid,
regexp_split_to_array(query, '\s+') as query
FROM t_stat_session
WHERE query_start::DATE BETWEEN pdtQueryDateFrom AND pdtQueryDateTo
AND duration IS NOT NULL
);
CREATE TEMP TABLE temp_sys_objects_data
AS
(
SELECT string_to_array(schemaname || '.' || tablename, '.') object_arr1,
string_to_array(schemaname || '.' || tablename, ',') object_arr2,
schemaname,
tablename object_name,
'T' AS object_type
FROM pg_catalog.pg_tables
UNION ALL
SELECT string_to_array(schemaname || '.' || viewname, '.') object_arr1,
string_to_array(schemaname || '.' || viewname, ',') object_arr2,
schemaname,
viewname object_name,
'VW' AS object_type
FROM pg_catalog.pg_views
UNION ALL
SELECT string_to_array(schemaname || '.' || matviewname, '.') object_arr1,
string_to_array(schemaname || '.' || matviewname, ',') object_arr2,
schemaname,
matviewname object_name,
'MVW' AS object_type
FROM pg_catalog.pg_matviews
);
CREATE TEMP TABLE temp_data_for_final
AS
(
SELECT id,
pid,
schemaname,
object_name,
object_type,
1 cnt
FROM temp_array_data adta,
temp_sys_objects_data
WHERE (ARRAY [object_arr1] && ARRAY [query] OR ARRAY [object_arr2] <@ ARRAY [query])
);
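The intersection step can be prototyped outside the database to sanity-check the matching logic. A rough Python equivalent of the three temp-table steps (the names and data are illustrative, not the actual t_stat_session content):

```python
def find_referenced_objects(query, object_names):
    """Return the known database objects mentioned in a SQL query.

    Splits the query on whitespace (like regexp_split_to_array in the
    procedure) and intersects the tokens with the object names.
    """
    tokens = set(query.split())
    return sorted(tokens & set(object_names))

# Illustrative object list and query text.
objects = {'public.t_stat_session', 'sales.orders', 'public.pg_tables'}
query = 'SELECT * FROM sales.orders WHERE id = 1'
refs = find_referenced_objects(query, objects)
```

Like the SQL version, this only matches names that appear as whole whitespace-separated tokens, so a name followed by a comma would be missed; splitting on a wider delimiter class would make the match more robust.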

Compare two tables and find the missing column using left join

I want to compare the two tables employees and employees_a and find the columns that are missing from the table employees_a.
select a.Column_name
From User_tab_columns a
LEFT JOIN User_tab_columns b
ON upper(a.table_name) = upper(b.table_name)||'_A'
AND a.column_name = b.column_name
Where upper(a.Table_name) = 'EMPLOYEES'
AND upper(b.table_name) = 'EMPLOYEES_A'
AND b.column_name is NULL
;
But this doesn't seem to be working; no rows are returned.
My employees table has the below columns
emp_name
emp_id
base_location
department
current_location
salary
manager
employees_a table has below columns
emp_name
emp_id
base_location
department
current_location
I want to find the remaining two columns and add them to the employees_a table.
I have more than 50 tables like this to compare, find the missing columns, and add those columns to their respective "_a" tables.
Missing columns? Why not use the MINUS set operator? It seems way simpler, e.g.
select column_name from user_tab_columns where table_name = 'EMP_1'
minus
select column_name from user_tab_columns where table_name = 'EMP_2'
First, check whether user_tab_columns contains the columns of your tables (in my case user_tab_columns was empty and I had to use all_tab_columns):
select a.Column_name
From User_tab_columns a
Where upper(a.Table_name) = 'EMPLOYEES'
Second, remove the line AND upper(b.table_name) = 'EMPLOYEES_A', because upper(b.table_name) is NULL when a column is not found. You already have b.table_name in the JOIN condition of the SELECT.
select a.Column_name
From User_tab_columns a
LEFT JOIN User_tab_columns b
ON upper(a.table_name) = upper(b.table_name)||'_A'
AND a.column_name = b.column_name
Where upper(a.Table_name) = 'EMPLOYEES'
AND b.column_name is NULL
You do not need any joins and can use:
select 'ALTER TABLE EMPLOYEES_A ADD "'
|| Column_name || '" '
|| CASE MAX(data_type)
WHEN 'NUMBER'
THEN 'NUMBER(' || MAX(data_precision) || ',' || MAX(data_scale) || ')'
WHEN 'VARCHAR2'
THEN 'VARCHAR2(' || MAX(data_length) || ')'
END
AS sql
From User_tab_columns
Where Table_name IN ('EMPLOYEES', 'EMPLOYEES_A')
GROUP BY COLUMN_NAME
HAVING COUNT(CASE table_name WHEN 'EMPLOYEES' THEN 1 END) = 1
AND COUNT(CASE table_name WHEN 'EMPLOYEES_A' THEN 1 END) = 0;
Or, for multiple tables:
select 'ALTER TABLE ' || MAX(table_name) || '_A ADD "'
|| Column_name || '" '
|| CASE MAX(data_type)
WHEN 'NUMBER'
THEN 'NUMBER(' || MAX(data_precision) || ',' || MAX(data_scale) || ')'
WHEN 'VARCHAR2'
THEN 'VARCHAR2(' || MAX(data_length) || ')'
END
AS sql
From User_tab_columns
Where Table_name IN ('EMPLOYEES', 'EMPLOYEES_A', 'SOMETHING', 'SOMETHING_A')
GROUP BY
CASE
WHEN table_name LIKE '%\_A' ESCAPE '\'
THEN SUBSTR(table_name, 1, LENGTH(table_name) - 2)
ELSE table_name
END,
COLUMN_NAME
HAVING COUNT(CASE WHEN table_name NOT LIKE '%\_A' ESCAPE '\' THEN 1 END) = 1
AND COUNT(CASE WHEN table_name LIKE '%\_A' ESCAPE '\' THEN 1 END) = 0;
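If you ever need the same comparison outside the database, the idea is easy to sketch in a client script. A minimal Python version (missing_column_ddl and the column dictionaries are illustrative, not tied to the Oracle data dictionary views):

```python
def missing_column_ddl(base_cols, audit_cols, audit_table):
    """Generate ALTER TABLE statements for columns present in the base
    table but missing from its audit ('_A') table.

    base_cols and audit_cols map column name -> data type declaration,
    e.g. as fetched from user_tab_columns.
    """
    return [
        f'ALTER TABLE {audit_table} ADD "{name}" {decl}'
        for name, decl in base_cols.items()
        if name not in audit_cols
    ]

# Illustrative column lists for the two tables in the question.
employees = {'EMP_NAME': 'VARCHAR2(100)', 'SALARY': 'NUMBER(10,2)',
             'MANAGER': 'VARCHAR2(100)'}
employees_a = {'EMP_NAME': 'VARCHAR2(100)'}
ddl = missing_column_ddl(employees, employees_a, 'EMPLOYEES_A')
```

Looping this over the 50 table pairs would produce the full set of ALTER statements to review before running.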

Total size of database is bigger than sum of each table size in Postgresql

Total DB size:
5620sam=# SELECT pg_size_pretty(pg_database_size('5620sam'));
pg_size_pretty
----------------
72 GB
(1 row)
File sizes on disk:
# du --apparent-size -h /var/lib/postgresql/9.5/main/
<...>
74G /var/lib/postgresql/9.5/main/
But if I sum the sizes of each table (with indexes) separately, I get about a third of that, ~20 GB:
5620sam=# SELECT
table_name,
pg_size_pretty(table_size) AS table_size,
pg_size_pretty(indexes_size) AS indexes_size,
pg_size_pretty(total_size) AS total_size
FROM (
SELECT
table_name,
pg_table_size(table_name) AS table_size,
pg_indexes_size(table_name) AS indexes_size,
pg_total_relation_size(table_name) AS total_size
FROM (
SELECT ('"' || table_schema || '"."' || table_name || '"') AS table_name
FROM information_schema.tables
) AS all_tables
ORDER BY total_size DESC
) AS pretty_sizes;
The output sums to ~20 GB.
What happened to the other 50 gigabytes? Where can I find them? VACUUM FULL does not help; I run it every night.

How do I find all the NUMERIC columns in a table and do a SUM() on them?

I have a few tables in Netezza, DB2 and PostgreSQL databases which I need to reconcile, and the best way we have come up with is to do a SUM() across all the numeric columns of each table on all 3 databases.
Does anyone have a quick and simple way to find all the columns which are NUMERIC, INTEGER or BIGINT and then run a SUM() on all of them?
For comparing the results I can do it manually, but does someone have a way to capture these results in a common table and automatically check the differences in the SUMs?
For DB2 you can use this metadata query, which will help you find the data type of each column:
SELECT
COLUMN_NAME || ' ' || REPLACE(REPLACE(DATA_TYPE,'DECIMAL','NUMERIC'),'CHARACTER','VARCHAR') ||
CASE
WHEN DATA_TYPE = 'TIMESTAMP' THEN ''
ELSE
' (' ||
CASE
WHEN CHARACTER_MAXIMUM_LENGTH IS NOT NULL THEN CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(30))
WHEN NUMERIC_PRECISION IS NOT NULL THEN CAST(NUMERIC_PRECISION AS VARCHAR(30)) ||
CASE
WHEN NUMERIC_SCALE = 0 THEN ''
ELSE ',' || CAST(NUMERIC_SCALE AS VARCHAR(3))
END
ELSE ''
END || ')'
END || ',' "SQLCOL",
COLUMN_NAME,
DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE, ORDINAL_POSITION
FROM SYSIBM.COLUMNS
WHERE TABLE_NAME = 'insert your table name'
AND TABLE_SCHEMA = 'insert your table schema'
ORDER BY ORDINAL_POSITION
For Netezza, I got the following query:
SELECT 0 AS ATTNUM, 'SELECT' AS SQL
UNION
SELECT ATTNUM, 'SUM(' || ATTNAME || ') AS S_' || ATTNAME || ',' AS COLMN
FROM _V_RELATION_COLUMN RC
WHERE NAME = '<table-name>'
AND FORMAT_TYPE= 'NUMERIC'
UNION
SELECT 10000 AS ATTNUM, ' 0 AS FLAG FROM ' || '<table-name>'
ORDER BY ATTNUM
I am still looking for a way to do this across DB2 and PostgreSQL.
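Since the metadata queries differ per database, one option is to generate the SUM() statement client-side from whatever column list each database's catalog returns. A minimal sketch (build_sum_query and the type set are my assumptions; adjust the type names to each database):

```python
def build_sum_query(table, columns):
    """Build a reconciliation query that sums every numeric column.

    columns is a list of (name, data_type) pairs, e.g. the output of a
    catalog/metadata query; non-numeric columns are skipped.
    """
    numeric_types = {'NUMERIC', 'DECIMAL', 'INTEGER', 'BIGINT', 'SMALLINT'}
    sums = [f'SUM({name}) AS s_{name.lower()}'
            for name, dtype in columns if dtype.upper() in numeric_types]
    return f"SELECT {', '.join(sums)} FROM {table}"

# Illustrative column metadata for one table.
cols = [('ID', 'BIGINT'), ('NAME', 'VARCHAR'), ('AMOUNT', 'NUMERIC')]
sql = build_sum_query('ORDERS', cols)
```

Running the generated statement against each of the three databases and diffing the result rows gives the reconciliation check described above.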

Issue in creating a function in PostgreSQL using date_trunc

Here is a sample of my code:
v_sql_main:= ' SELECT min_createdate, max_createdate, createdate, customerid::integer, deviceid::integer, null::bigint as sourceip, null::bigint as sourceip_int, service, total, end_recordid::bigint '||
' FROM ( '||
' SELECT min(date_trunc( '||quote_literal('HOUR')||' , firstoccurrence)) as min_createdate, '||
' max(date_trunc( '||quote_literal('HOUR')||' , firstoccurrence)) as max_createdate, '||
' date_trunc( '||quote_literal('DAY')||' , firstoccurrence) as createdate, '||
' customerid::integer, '||
' deviceid::integer, '||
' service, '||
case when v_days < 4 then
' count(1) as total '
else
' sum(summcount) as total '
end ||', max(recordid) as end_recordid'
' FROM '|| v_tablename||
' LEFT OUTER JOIN '|| v_child_tablename||
' ON ' ||v_tablename||'.SERVICE_ID = '|| v_child_tablename||'.SERVICE_ID '||
' WHERE '||
' customerid = v_customerid AND '||
' deviceid = v_deviceid AND '||
' date_trunc( '||quote_literal('DAY')||' , firstoccurrence) = date_trunc( '||quote_literal('DAY')||' ,now()- interval '1 day') '||
' group by date_trunc( '||quote_literal('DAY')||' , firstoccurrence), customerid, deviceid, service ) as a order by total desc limit 10;;';
When I try to execute this, I get the following error:
ERROR: syntax error at or near "1"
LINE 144: ...unc( '||quote_literal('DAY')||' ,now()- interval '1 day') '|...
What I need is to get yesterday's date (date - 1).
Thanks in advance,
SHABEER
Replace the line:
' date_trunc( '||quote_literal('DAY')||' , firstoccurrence) = date_trunc( '||quote_literal('DAY')||' ,now()- interval '1 day') '||
with:
' date_trunc( '||quote_literal('DAY')||' , firstoccurrence) = date_trunc( '||quote_literal('DAY')||' ,now()- interval '' 1 day'') '||
Please take a look at the interval syntax documentation.
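To see why the doubled quotes fix it: inside a single-quoted SQL string literal, each '' collapses to one literal quote character. The collapse can be reproduced in Python (the strings below only illustrate the parsing; they are not executed SQL):

```python
# In a single-quoted SQL string literal, '' collapses to one quote
# character. The broken line ended the literal right after "interval '",
# leaving the bare token 1 outside any string: hence "syntax error near 1".
fixed_literal = "' ,now()- interval ''1 day'') '"

# What the SQL parser stores after collapsing the doubled quotes:
parsed = fixed_literal.replace("''", "'")
```

After the collapse, the dynamic statement contains the well-formed fragment interval '1 day', which is what the EXECUTE sees.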