WSO2 DAS does not support Postgres? - postgresql

I'm using API Manager 1.10.0 and DAS 3.0.1.
I'm trying to set up Postgres for DAS. There is no postgresql.sql script, so I used oracle.sql.
But I get this exception:
[2016-08-11 15:06:25,079] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
java.lang.RuntimeException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:194)
at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:731)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:55)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:42)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:41)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:38)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$.schemaString(carbon.scala:38)
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:180)
... 26 more
The CREATE TABLE script for API_REQUEST_SUMMARY is:
CREATE TABLE API_REQUEST_SUMMARY (
api character varying(100)
, api_version character varying(100)
, version character varying(100)
, apiPublisher character varying(100)
, consumerKey character varying(100)
, userId character varying(100)
, context character varying(100)
, max_request_time decimal(30)
, total_request_count integer
, hostName character varying(100)
, year SMALLINT
, month SMALLINT
, day SMALLINT
, time character varying(30)
, PRIMARY KEY(api,api_version,apiPublisher,consumerKey,userId,context,hostName,time)
);
How can I make this work with Postgres?

I had to define the column max_request_time as bigint instead of decimal(30).
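A minimal sketch of that fix, assuming the rest of the API_REQUEST_SUMMARY script stays exactly as posted above and only the offending column type changes:
-- Postgres accepts this conversion without a USING clause, since numeric
-- casts to bigint by assignment:
ALTER TABLE API_REQUEST_SUMMARY
ALTER COLUMN max_request_time TYPE bigint;
Alternatively, recreate the table with max_request_time bigint from the start; either way the Spark writer no longer encounters a decimal(30) target column.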

Related

type "hstore" is only a shell

I am trying to set up automatic audit logging in Postgres using triggers and trigger functions. For this I want to create the table logged_actions in the audit schema. When I run the following query:
CREATE TABLE IF NOT EXISTS audit.logged_actions
(
event_id bigint NOT NULL DEFAULT nextval('audit.logged_actions_event_id_seq'::regclass),
schema_name text COLLATE pg_catalog."default" NOT NULL,
table_name text COLLATE pg_catalog."default" NOT NULL,
relid oid NOT NULL,
session_user_name text COLLATE pg_catalog."default",
action_tstamp_tx timestamp with time zone NOT NULL,
action_tstamp_stm timestamp with time zone NOT NULL,
action_tstamp_clk timestamp with time zone NOT NULL,
transaction_id bigint,
application_name text COLLATE pg_catalog."default",
client_addr inet,
client_port integer,
client_query text COLLATE pg_catalog."default",
action text COLLATE pg_catalog."default" NOT NULL,
row_data hstore,
changed_fields hstore,
statement_only boolean NOT NULL,
CONSTRAINT logged_actions_pkey PRIMARY KEY (event_id),
CONSTRAINT logged_actions_action_check CHECK (action = ANY (ARRAY['I'::text, 'D'::text, 'U'::text, 'T'::text]))
)
I have already created the extension "hstore", but the query is not executed and an error message appears stating:
ERROR: type "hstore" is only a shell LINE 17: row_data hstore
That's a cryptic way of saying the hstore extension isn't loaded. You need to create extension hstore before you can use it.
Note that jsonb more-or-less makes hstore obsolete.
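A minimal sketch of that prerequisite, assuming the extension simply was never created in this particular database:
-- Run this in the same database (not just on the same server) as audit.logged_actions:
CREATE EXTENSION IF NOT EXISTS hstore;
-- If the extension lives in a non-default schema, either add that schema to
-- search_path or schema-qualify the type in the column definition, e.g.
-- row_data extensions.hstore   (the schema name here is hypothetical)
After that, the CREATE TABLE statement from the question should run as posted.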

CREATE TABLE DB2 (SQLSTATE: 42601, SQLCODE: -104): DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601

I am trying to create a DB2 table with the following statement:
CREATE TABLE ACQ_FAH_DEV.fah_balance_ledger
(
"ACTIVE" VARCHAR(10),
INPUT_BY VARCHAR(32),
INPUT_TIME DATE,
AMENDED_BY VARCHAR(32),
AMENDED_TIME DATE,
ENTITY VARCHAR(20),
ACCOUNT_CODE VARCHAR(20),
ACCOUNT_NAME VARCHAR(255),
PORTFOLIO_CODE VARCHAR(100),
OM_LOAD_RUN_ID VARCHAR(128) NOT NULL,    
OM_LOAD_TMST TIMESTAMP(6) NOT NULL,    
BM_BUSINESS_INTERVAL_TYP VARCHAR(20) NOT NULL,
BM_BUSINESS_INTERVAL_TMST TIMESTAMP(6) NOT NULL,    
BM_BUSINESS_INTERVAL_START_END_FLAG VARCHAR(10) NOT NULL,    
SM_SOURCE_SYSTEM_CD VARCHAR(16) NOT NULL,    
OM_UNIQUE_ROW_ID BIGINT NOT NULL,    
OM_USER_ID VARCHAR(100) NOT NULL,    
OM_VERSION_ID SMALLINT NOT NULL
)
ORGANIZE BY COLUMN IN ACQ_FAH_DEV
DISTRIBUTE BY HASH(
ACCOUNT_CODE,
OM_VERSION_ID,
BM_BUSINESS_INTERVAL_TYP
);
but I am running into this error:
(SQLSTATE: 42601, SQLCODE: -104): DB2 SQL Error: SQLCODE=-104,
SQLSTATE=42601, SQLERRMC=(;LOAD_RUN_ID VARCHAR;BINARY, DRIVER=4.26.14
SQLSTATE 42601: A character, token, or clause is invalid or missing.
SQL0104N An unexpected token "(" was found following "LOAD_RUN_ID
VARCHAR". Expected tokens may include: "BINARY".
The error seems innocuous, but I am not able to see why this is failing.
For Db2 LUW, the tablespace clause has to come before the ORGANIZE BY clause (I assume IN ACQ_FAH_DEV refers to which tablespace to put the table in). Try:
CREATE TABLE ACQ_FAH_DEV.fah_balance_ledger
(
"ACTIVE" VARCHAR(10),
INPUT_BY VARCHAR(32),
INPUT_TIME DATE,
AMENDED_BY VARCHAR(32),
AMENDED_TIME DATE,
ENTITY VARCHAR(20),
ACCOUNT_CODE VARCHAR(20),
ACCOUNT_NAME VARCHAR(255),
PORTFOLIO_CODE VARCHAR(100),
OM_LOAD_RUN_ID VARCHAR(128) NOT NULL,
OM_LOAD_TMST TIMESTAMP(6) NOT NULL,
BM_BUSINESS_INTERVAL_TYP VARCHAR(20) NOT NULL,
BM_BUSINESS_INTERVAL_TMST TIMESTAMP(6) NOT NULL,
BM_BUSINESS_INTERVAL_START_END_FLAG VARCHAR(10) NOT NULL,
SM_SOURCE_SYSTEM_CD VARCHAR(16) NOT NULL,
OM_UNIQUE_ROW_ID BIGINT NOT NULL,
OM_USER_ID VARCHAR(100) NOT NULL,
OM_VERSION_ID SMALLINT NOT NULL
)
IN ACQ_FAH_DEV
ORGANIZE BY COLUMN
DISTRIBUTE BY HASH(
ACCOUNT_CODE,
OM_VERSION_ID,
BM_BUSINESS_INTERVAL_TYP
);
Documentation for CREATE TABLE statement

ERROR: extra data after last expected column - COPY

When I try to import the data with delimiter |, I receive the error:
ERROR: extra data after last expected column
I am able to load the data if I remove the double or single quotes from the problematic field in the sample data below, but my requirement is to load all the data without removing anything.
This is my copy command:
COPY public.dimingredient FROM '/Users//Downloads/archive1/test.txt'
DELIMITER '|' NULL AS '' CSV HEADER ESCAPE AS '"' ;
My table:
public.dimingredient
(
dr_id integer NOT NULL,
dr_loadtime timestamp(6) without time zone NOT NULL,
dr_start timestamp(6) without time zone NOT NULL,
dr_end timestamp(6) without time zone NOT NULL,
dr_current boolean NOT NULL,
casnumber character varying(100) COLLATE pg_catalog."default" NOT NULL,
ingredientname character varying(300) COLLATE pg_catalog."default" NOT NULL,
matchingstrategy character varying(21) COLLATE pg_catalog."default",
percentofconfidence double precision,
disclosurestatus character varying(42) COLLATE pg_catalog."default",
issand character varying(1) COLLATE pg_catalog."default",
sandmeshsize character varying(20) COLLATE pg_catalog."default",
sandquality character varying(20) COLLATE pg_catalog."default",
isresincoated character varying(1) COLLATE pg_catalog."default",
isartificial character varying(1) COLLATE pg_catalog."default",
CONSTRAINT dimingredient_pkey PRIMARY KEY (dr_id)
)
my data:
5144|2016-07-01 13:34:25.1001891|1900-01-01 00:00:00.0000000|9999-12-31 23:59:59.9999999|True|93834|"9-octadecenamide,n,n-bis(2-hydroxyethyl)-, (9z)"|"NO CAS MATCH FOUND"||Disclosed|||||
5145|2016-07-01 13:34:25.1001891|1900-01-01 00:00:00.0000000|9999-12-31 23:59:59.9999999|True|93834|"9-octadecenamide,n,n-bis-2(hydroxy-ethyl)-,(z)""|"NO CAS MATCH FOUND"||Disclosed|||||
Omitting the empty line in your sample data, I get a different error message with 9.6, to wit:
ERROR: unterminated CSV quoted field
CONTEXT: COPY dimingredient, line 3: "5145|2016-07-01 13:34:25.1001891|1900-01-01 00:00:00.0000000|9999-12-31 23:59:59.9999999|True|93834|..."
Strangely enough, that error message has been there since CSV COPY was introduced in version 8.0, so I wonder how your data are different from the data you show above.
The error message is easily explained: There is an odd number of quotation characters (") in the second line.
Since two doubled quotes in a quoted string are interpreted as a single double quote (" is escaped as ""), the fields in the second line are:
5145
2016-07-01 13:34:25.1001891
1900-01-01 00:00:00.0000000
9999-12-31 23:59:59.9999999
True
93834
9-octadecenamide,n,n-bis-2(hydroxy-ethyl)-,(z)"|NO CAS MATCH FOUND||Disclosed|||||
... and then COPY hits the end of file while parsing a quoted string. Hence the error.
The solution is to use an even number of " characters per field.
If you need a " character in a field, either choose a different QUOTE or quote the field and double the ".
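A hedged sketch of both options against the table and file from this question (the control-character QUOTE in the second option is an assumption that the byte never occurs in the data):
-- Option 1: keep the default QUOTE '"' and fix the offending line so every
-- literal " inside a quoted field is doubled:
-- ...|"9-octadecenamide,n,n-bis-2(hydroxy-ethyl)-,(z)"""|"NO CAS MATCH FOUND"|...

-- Option 2: pick a QUOTE byte that never appears in the file:
COPY public.dimingredient
FROM '/Users//Downloads/archive1/test.txt'
WITH (FORMAT csv, DELIMITER '|', NULL '', HEADER, QUOTE E'\x01');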

MariaDB -> Error: 1062, Duplicate entry '0' for key 'PRIMARY'

So I'm trying to import GeoLiteCity data into my table like so:
mysqlimport --fields-terminated-by="," --fields-optionally-enclosed-by="\"" --lines-terminated-by="\n" --host=localhost --user=user --password=passw database_name /var/www/html/GeoLiteCity_20150804/geoip_city.csv
But I keep getting this error:
Error: 1062, Duplicate entry '0' for key 'PRIMARY'
I saw that questions about this error have been asked before, but I simply don't understand the answers. I'm not much of a guru (I'm a volunteer IT guy) and I have no idea how to resolve this. I tried using this instead:
LOAD DATA LOCAL INFILE '/var/www/html/GeoLiteCity_20150804/geoip_city_ips.csv' INTO TABLE geoip_city_ips;
But then it simply filled the table with NULL in all the columns.
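For what it's worth, LOAD DATA defaults to tab-separated fields, so pointing it at a comma-separated file without a FIELDS clause can leave most columns NULL. A hedged sketch with explicit options matching the mysqlimport flags above (the IGNORE count is an assumption about how many header lines the GeoLite file carries):
LOAD DATA LOCAL INFILE '/var/www/html/GeoLiteCity_20150804/geoip_city.csv'
INTO TABLE geoip_city
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES;  -- assumed: one copyright line plus one header line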
My table structure:
--
-- Table structure for table geoip_city
CREATE TABLE IF NOT EXISTS geoip_city (
locID int(10) unsigned NOT NULL,
country char(8) COLLATE utf8_unicode_ci DEFAULT NULL,
region char(8) COLLATE utf8_unicode_ci DEFAULT NULL,
city varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
postalCode char(32) CHARACTER SET latin1 DEFAULT NULL,
latitude double DEFAULT NULL,
longitude double DEFAULT NULL,
dmaCode char(8) CHARACTER SET latin1 DEFAULT NULL,
areaCode char(8) CHARACTER SET latin1 DEFAULT NULL,
PRIMARY KEY (locID),
KEY Index_Country (country)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=FIXED;
Some lines from geoip_city:
717543,"MX","32","Zacatecas","98051",22.7833,-102.5833,,
717544,"MX","26","Cananea","84624",30.9500,-110.3000,,
717545,"MX","07","Valles","79040",26.6667,-100.6833,,
717546,"DE","02","Berg","88276",47.9667,11.3500,,
717547,"DE","09","Schwalbach","65824",49.3000,6.8167,,
717548,"RU","48","Moscow","129233",55.7522,37.6156,,
717549,"MX","28","Reynosa","88520",26.0833,-98.2833,,
717550,"PH","40","San Jose","5100",12.4558,121.0459,,
717551,"ES","56","Tarragona","43070",41.1167,1.2500,,
717552,"GB","Z6","","",51.9167,-0.6500,,
Well, I'm guessing this is a MariaDB issue then, since nobody replied? Would going back to Debian solve the issue?

PostgreSQL COPY from CSV with delimiter "|"

Please, can anyone help me solve this problem?
I'd like to create a table in a Postgres database with data from a CSV file with delimiter "|". While trying to use the COPY command (or the Import tool) I get this error:
ERROR: extra data after last expected column
CONTEXT: COPY twitter, line 2: ""Sono da Via Martignacco
http://t.co/NUC6MP0z|"<a href=""http://foursquare.com"" rel=""nofollow"">f..."
The first 2 lines of CSV:
txt|"source"|"ulang"|"coords"|"tweettime_wtz"|"country"|"id"|"userid"|"in_reply_user_id"|"in_reply_status_id"|"uname"|"ucreationdate"|"utimezone"|"followers_count"|"friends_count"|"x_coords"|"y_coords"
Sono da Via Martignacco http://t.co/NUC6MP0z|"foursquare"|"it"|"0101000020E6100000191CA9E7726F2A4026C1E1269F094740"|"2012-05-13 10:00:45+02"|112|201582743333777411|35445264|""|""|"toffo93"|"2009-04-26 11:00:03"|"Rome"|1044|198|13.21767353|46.07516943
For this data I have created a table "twitter" in Postgres:
CREATE TABLE public.twitter
(
txt character varying(255),
source character varying(255),
ulang character varying(255),
coords geometry(Point,4326),
tweettime_wtz character varying(255),
country integer,
userid integer NOT NULL,
in_reply_user_id character varying(255),
in_reply_status_id character varying(255),
uname character varying(255),
ucreationdate character varying(255),
utimezone character varying(255),
followers_count integer,
friends_count integer,
x_coords numeric,
y_coords numeric,
CONSTRAINT id PRIMARY KEY (userid)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.twitter
OWNER TO postgres;
Any ideas, guys?
The destination table contains 16 columns, but your file contains 17.
It seems to be the id field that is missing.
Try defining your table as:
CREATE TABLE public.twitter
(
txt character varying(255),
source character varying(255),
ulang character varying(255),
coords geometry(Point,4326),
tweettime_wtz character varying(255),
country integer,
id character varying,
userid integer NOT NULL,
in_reply_user_id character varying(255),
in_reply_status_id character varying(255),
uname character varying(255),
ucreationdate character varying(255),
utimezone character varying(255),
followers_count integer,
friends_count integer,
x_coords numeric,
y_coords numeric,
CONSTRAINT twitter_pk PRIMARY KEY (userid)
)
WITH (
OIDS=FALSE
);
Change the data type of the id field as needed.
My solution:
So the problem was in my CSV file: it contained quote characters that I could not see when I opened the CSV in Excel. There I saw the lines this way:
txt|"source"|"ulang"|"coords"|"tweettime_wtz"|"country"|"id"|"userid"|"in_reply_user_id"|"in_reply_status_id"|"uname"|"ucreationdate"|"utimezone"|"followers_count"|"friends_count"|"x_coords"|"y_coords"
Sono da Via Martignacco http://t.co/NUC6MP0z|"foursquare"|"it"|"0101000020E6100000191CA9E7726F2A4026C1E1269F094740"|"2012-05-13 10:00:45+02"|112|201582743333777411|35445264|""|""|"toffo93"|"2009-04-26 11:00:03"|"Rome"|1044|198|13.21767353|46.07516943
But when I opened the CSV in Notepad I saw it differently:
"txt"|"source"|"ulang"|"coords"|"tweettime_wtz"|"country"|"id"|"userid"|"in_reply_user_id"|"in_reply_status_id"|"uname"|"ucreationdate"|"utimezone"|"followers_count"|"friends_count"|"x_coords"|"y_coords"
"Sono da Via Martignacco http://t.co/NUC6MP0z"|"foursquare"|"it"|"0101000020E6100000191CA9E7726F2A4026C1E1269F094740"|"2012-05-13 10:00:45+02"|112|201582743333777411|35445264|""|""|"toffo93"|"2009-04-26 11:00:03"|"Rome"|1044|198|13.21767353|46.07516943
"
So I deleted all the quotes (in Notepad, saving the file as CSV), so that the text became:
txt|source|ulang|coords|tweettime_wtz|country|id|userid|in_reply_user_id|in_reply_status_id|uname|ucreationdate|utimezone|followers_count|friends_count|x_coords|y_coords
Sono da Via Martignacco http://t.co/NUC6MP0z|<a href=http://foursquare.com rel=nofollow>foursquare</a>|it|0101000020E6100000191CA9E7726F2A4026C1E1269F094740|2012-05-13 10:00:45+02|112|201582743333777411|35445264|||toffo93|2009-04-26 11:00:03|Rome|1044|198|13.21767353|46.07516943
Only after this was I able to use the Import tool in pgAdmin without any problem!
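For reference, a hedged sketch of a COPY invocation that should cope with the original quoted file directly, assuming the corrected 17-column table from the answer above (the path is a placeholder):
COPY public.twitter
FROM '/path/to/tweets.csv'  -- hypothetical path
WITH (FORMAT csv, DELIMITER '|', QUOTE '"', HEADER);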