So I'm trying to import GeoLite City data into my table like so:
mysqlimport --fields-terminated-by="," --fields-optionally-enclosed-by="\"" --lines-terminated-by="\n" --host=localhost --user=user --password=passw database_name /var/www/html/GeoLiteCity_20150804/geoip_city.csv
But I keep getting this error:
Error: 1062, Duplicate entry '0' for key 'PRIMARY'
I saw that questions relating to this error have been asked before, but I simply don't understand the answers. I'm not much of a guru (I'm a volunteer IT guy) and I have no idea how to resolve this. I tried using this instead:
LOAD DATA LOCAL INFILE '/var/www/html/GeoLiteCity_20150804/geoip_city_ips.csv' INTO TABLE geoip_city_ips;
But then it simply filled every column in the table with NULL.
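For comparison, spelling the same format options out in LOAD DATA would, as far as I understand, look roughly like this (untested; the terminators are just copied from the mysqlimport call above, using the geoip_city file and table):
-- same file and table as the mysqlimport attempt, with the CSV format made explicit
LOAD DATA LOCAL INFILE '/var/www/html/GeoLiteCity_20150804/geoip_city.csv'
INTO TABLE geoip_city
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';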
My table structure:
--
-- Table structure for table geoip_city
CREATE TABLE IF NOT EXISTS geoip_city (
locID int(10) unsigned NOT NULL,
country char(8) COLLATE utf8_unicode_ci DEFAULT NULL,
region char(8) COLLATE utf8_unicode_ci DEFAULT NULL,
city varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
postalCode char(32) CHARACTER SET latin1 DEFAULT NULL,
latitude double DEFAULT NULL,
longitude double DEFAULT NULL,
dmaCode char(8) CHARACTER SET latin1 DEFAULT NULL,
areaCode char(8) CHARACTER SET latin1 DEFAULT NULL,
PRIMARY KEY (locID),
KEY Index_Country (country)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=FIXED;
Some lines from geoip_city:
717543,"MX","32","Zacatecas","98051",22.7833,-102.5833,,
717544,"MX","26","Cananea","84624",30.9500,-110.3000,,
717545,"MX","07","Valles","79040",26.6667,-100.6833,,
717546,"DE","02","Berg","88276",47.9667,11.3500,,
717547,"DE","09","Schwalbach","65824",49.3000,6.8167,,
717548,"RU","48","Moscow","129233",55.7522,37.6156,,
717549,"MX","28","Reynosa","88520",26.0833,-98.2833,,
717550,"PH","40","San Jose","5100",12.4558,121.0459,,
717551,"ES","56","Tarragona","43070",41.1167,1.2500,,
717552,"GB","Z6","","",51.9167,-0.6500,,
Well, I'm guessing this is a MariaDB issue then, since nobody replied? Would going back to Debian solve the issue?
In a Laravel 9 application, I have algolia/scout-extended 2.0.
Running this command on my local machine:
php artisan scout:optimize
I got several files like config/scout-search-pages.php, and I modified one of them:
'searchableAttributes' => [
'page_title',
'page_slug',
'page_content',
'page_content_shortly',
'page_author_name',
'page_author_email',
'page_price',
'page_categories',
'page_created_at',
],
'customRanking' => ['asc(page_price)', 'desc(page_created_at)'],
After clearing the cache I ran an import, and afterwards I see the set of rows/columns I expected: https://prnt.sc/Ew0BdaH3dvON
But on the “Configuration” tab I see an empty “Searchable attributes” list, and I have to add fields manually.
I expected these fields to be filled in automatically from the "searchableAttributes" option, but they are not.
After I added more columns under "Add a Searchable Attribute", I see an "unordered/unordered" option for the fields.
How does it work and which option do I have to select?
The source table on MySQL 8 side has this structure:
CREATE TABLE `search_pages` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`page_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`page_slug` varchar(260) COLLATE utf8mb4_unicode_ci NOT NULL,
`page_content` mediumtext COLLATE utf8mb4_unicode_ci NOT NULL,
`page_content_shortly` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`page_author_name` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`page_author_email` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`page_price` decimal(9,2) NOT NULL DEFAULT '0.00',
`page_categories` json DEFAULT NULL,
`page_created_at` timestamp NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `search_pages_page_slug_unique` (`page_slug`),
KEY `search_pages_page_author_name_page_title_index` (`page_author_name`,`page_title`)
) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I am trying to set up automatic audit logging in Postgres using triggers and trigger functions. For this I want to create the table logged_actions in the audit schema. When I run the following query:
CREATE TABLE IF NOT EXISTS audit.logged_actions
(
event_id bigint NOT NULL DEFAULT nextval('audit.logged_actions_event_id_seq'::regclass),
schema_name text COLLATE pg_catalog."default" NOT NULL,
table_name text COLLATE pg_catalog."default" NOT NULL,
relid oid NOT NULL,
session_user_name text COLLATE pg_catalog."default",
action_tstamp_tx timestamp with time zone NOT NULL,
action_tstamp_stm timestamp with time zone NOT NULL,
action_tstamp_clk timestamp with time zone NOT NULL,
transaction_id bigint,
application_name text COLLATE pg_catalog."default",
client_addr inet,
client_port integer,
client_query text COLLATE pg_catalog."default",
action text COLLATE pg_catalog."default" NOT NULL,
row_data hstore,
changed_fields hstore,
statement_only boolean NOT NULL,
CONSTRAINT logged_actions_pkey PRIMARY KEY (event_id),
CONSTRAINT logged_actions_action_check CHECK (action = ANY (ARRAY['I'::text, 'D'::text, 'U'::text, 'T'::text]))
)
I have already created the extension "hstore", but the query is not executed and an error message appears stating:
ERROR: type "hstore" is only a shell
LINE 17: row_data hstore
That's a cryptic way of saying the hstore extension isn't loaded. You need to run CREATE EXTENSION hstore before you can use the type.
Note that jsonb more-or-less makes hstore obsolete.
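If you do stay with hstore, a minimal sketch of the order of operations (run in the same database that will hold the audit table; extensions are installed per database, not per server):
-- make the hstore type fully available in this database
CREATE EXTENSION IF NOT EXISTS hstore;
-- quick sanity check that the type is now usable
SELECT 'a=>1, b=>2'::hstore;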
I am migrating my database from MySQL to PostgreSQL. While creating a table I got an error which I can't resolve. My MySQL query is like this:
MySQL Query
CREATE TABLE `configuration` (
`Name` varchar(300) NOT NULL,
`Value` varchar(300) default NULL,
`CType` char(1) default NULL,
`Size` int(11) default NULL,
`CGroup` varchar(50) default NULL,
`RestartReq` char(1) NOT NULL default 'Y',
`Display` char(1) NOT NULL default 'Y',
PRIMARY KEY (`Name`),
KEY `CType` (`CType`),
CONSTRAINT `configuration_ibfk_1` FOREIGN KEY (`CType`) REFERENCES `conftype` (`CType`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;
PostgreSQL Query
CREATE TABLE configuration (
Name varchar(300) PRIMARY KEY,
Value varchar(300) default NULL,
CType char(1) default NULL,
Size integer default NULL,
CGroup varchar(50) default NULL,
RestartReq char(1) NOT NULL default 'Y',
Display char(1) NOT NULL default 'Y',
KEY CType (CType),
CONSTRAINT `configuration_ibfk_1` FOREIGN KEY (CType) REFERENCES conftype (CType)
)
Running the file with:
psql -h localhost -p 5432 -U postgres -f ps.sql testdb
The error I'm getting:
psql:ps.sql:40: ERROR: syntax error at or near "(" at character 287
psql:ps.sql:40: LINE 9: KEY CType ('CType'),
From the MySQL documentation:
KEY is normally a synonym for INDEX.
In PostgreSQL you have to create the index separately from the table:
CREATE TABLE configuration (
name varchar(300) PRIMARY KEY,
value varchar(300),
ctype char(1),
size integer,
cgroup varchar(50),
restartreq boolean NOT NULL DEFAULT true,
display boolean NOT NULL DEFAULT true,
CONSTRAINT configuration_ibfk_1 FOREIGN KEY (ctype) REFERENCES conftype (ctype)
);
CREATE INDEX conf_key ON configuration(ctype);
A few other points:
PostgreSQL identifiers (mainly table and column names) are case-insensitive unless double-quoted; unquoted names are folded to lower case. The standard approach is to put identifiers in lower case and keywords in upper case.
Using a varchar(300) as a PRIMARY KEY is usually not a good idea for performance reasons. Consider adding a serial surrogate key instead; see the sketch after this list.
The default value of a column is NULL when nothing is specified, so no need to specify DEFAULT NULL.
PostgreSQL has a boolean data type.
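A sketch of what that surrogate-key variant could look like (keeping name unique is an assumption, carried over from it being the old primary key):
CREATE TABLE configuration (
    id         serial PRIMARY KEY,             -- surrogate key
    name       varchar(300) NOT NULL UNIQUE,   -- still unique, just no longer the PK
    value      varchar(300),
    ctype      char(1) REFERENCES conftype (ctype),
    size       integer,
    cgroup     varchar(50),
    restartreq boolean NOT NULL DEFAULT true,
    display    boolean NOT NULL DEFAULT true
);
CREATE INDEX configuration_ctype_idx ON configuration (ctype);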
I have recently upgraded to MySQL 5.7 and was trying to run replication from a 5.6 master. However, the replication fails with the following error:
Error 'Cannot get geometry object from data you send to the GEOMETRY field' on query.
It turns out it also happens when I try to import data from a mysqldump. The table structure is as follows:
CREATE TABLE `locations` (
`location_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`country_id` int(10) unsigned NOT NULL,
`name` varchar(100) CHARACTER SET utf8 NOT NULL,
`locations_type_id` int(11) unsigned NOT NULL,
`parent_id` int(11) unsigned DEFAULT NULL,
`importance` decimal(3,2) NOT NULL DEFAULT '1.00',
`lat` decimal(10,7) DEFAULT NULL,
`lng` decimal(10,7) DEFAULT NULL,
`radius` decimal(6,3) DEFAULT NULL,
`polygon` polygon DEFAULT NULL,
PRIMARY KEY (`location_id`),
KEY `name` (`name`,`locations_type_id`,`parent_id`,`lat`,`lng`),
KEY `locations_type_id` (`locations_type_id`),
KEY `name_2` (`name`(8)),
KEY `country_id` (`country_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
It appears to me that the import is trying to insert some binary data into the polygon field, but in fairness I have no idea how to make it work.
Any ideas?
If you can re-run mysqldump, try adding the --hex-blob option to have all binary data exported as a hex dump.
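For example, something along these lines (a sketch; the connection details are placeholders, --hex-blob is the relevant part):
# re-dump just the locations table, with binary columns written as hex literals
mysqldump --hex-blob --host=localhost --user=user -p database_name locations > locations.sql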
I am developing a CakePHP app and I would like to use UUIDs as primary keys, since the application will be distributed across multiple databases. I would also like to take advantage of the integrated ACL framework in CakePHP 2.1.
I am following the tutorial and have modified the DB schema to the following:
CREATE TABLE acos (
id uuid NOT NULL,
parent_id uuid DEFAULT NULL,
model VARCHAR(255) DEFAULT '',
foreign_key uuid DEFAULT NULL,
alias VARCHAR(255) DEFAULT '',
lft uuid DEFAULT NULL,
rght uuid DEFAULT NULL,
PRIMARY KEY (id)
);
CREATE TABLE aros_acos (
id uuid NOT NULL,
aro_id uuid NOT NULL,
aco_id uuid NOT NULL,
_create CHAR(2) NOT NULL DEFAULT 0,
_read CHAR(2) NOT NULL DEFAULT 0,
_update CHAR(2) NOT NULL DEFAULT 0,
_delete CHAR(2) NOT NULL DEFAULT 0,
PRIMARY KEY(id)
);
CREATE TABLE aros (
id uuid NOT NULL,
parent_id uuid DEFAULT NULL,
model VARCHAR(255) DEFAULT '',
foreign_key uuid DEFAULT NULL,
alias VARCHAR(255) DEFAULT '',
lft uuid DEFAULT NULL,
rght uuid DEFAULT NULL,
PRIMARY KEY (id)
);
However now I am getting an error:
Error: SQLSTATE[42883]: Undefined function: 7 ERROR: function max(uuid) does not exist LINE 1: SELECT MAX("Aro"."rght") AS "rght" FROM "public"."aros" AS "... ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts.
The version of CakePHP is 2.1.0-beta and I'm using PostgreSQL with UUID data type.
Has anyone successfully used the CakePHP ACL framework with UUIDs? I would like to get this working with minimal modification to the CakePHP framework, for future supportability of this app.
There is no aggregate function max() defined for the data type UUID. No UUID is considered "bigger" than another UUID.
Consider the following demo:
CREATE TEMP TABLE t(id uuid);
INSERT INTO t VALUES
('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
,('b0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11');
SELECT max(id) FROM t;
Yields:
ERROR: function max(uuid) does not exist
LINE 1: SELECT max(id) FROM t;
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
You can circumvent the problem. Cast the id to text if you want the alphabetically biggest value:
SELECT max(id::text) FROM t;
Yields:
b0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11
But be aware that this is just the standard text representation of a UUID. The same UUID could be represented in many other forms.
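For instance, the same UUID written in upper and lower case compares equal as uuid but not as plain text (a small illustration of the point above):
-- equal as uuid: the type normalizes the representation
SELECT 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid
     = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;   -- true
-- not equal as text: plain case-sensitive string comparison
SELECT 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'
     = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';         -- false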