When I store a number having more than 23 digits in SQLite, the number format changes - android-sqlite

The table create query is:
public void onCreate(SQLiteDatabase db) {
    String table_track = "create table tracker_list(tracker_id String)";
    db.execSQL(table_track);
}
The number being inserted is: 9114901159818188003856
But when we fetch the data from SQLite it returns the converted number 9.1149E+19.
I want the same number back that was inserted.
Thanks in advance.

SQLite does not recognize String as a datatype, so the value is stored as a number.
Using TEXT or NVARCHAR should fix the problem in this case.
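For example, a minimal sketch of the corrected table, storing the tracker id as text so it round-trips unchanged:
create table tracker_list(tracker_id TEXT);
-- bind the value as a string, not as a numeric literal
insert into tracker_list values ('9114901159818188003856');
select tracker_id from tracker_list; -- returns 9114901159818188003856 unchanged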
These are datatypes that SQLite accepts:
CREATE TABLE ex2(
    a VARCHAR(10),
    b NVARCHAR(15),
    c TEXT,
    d INTEGER,
    e FLOAT,
    f BOOLEAN,
    g CLOB,
    h BLOB,
    i TIMESTAMP,
    j NUMERIC(10,5),
    k VARYING CHARACTER (24),
    l NATIONAL VARYING CHARACTER(16)
);
For more information see Datatypes In SQLite Version 2.

Related

How can I select just the first X characters from a text column

On my PostgreSQL DB I have a table that looks like below. The description column can store a string of any size.
What I'm looking for, is a way to select just the first X chars from the content of the description column, or the whole string if X > description.length
CREATE TABLE descriptions (
    id uuid NOT NULL,
    description text NULL
);
E.g.: if X = 100 chars and the description column stores a string of 150+ chars, then running select <some method on description> from descriptions should return just the first 100 chars of the description column.
Bonus if the approach proposed is extremely fast!
Use a type cast or use substring():
SELECT CAST(description AS varchar(100)), substring(description for 100)
FROM descriptions;
Or if you want to do it the "PostgreSQL way":
SELECT description::varchar(100), substr(description, 1, 100)
FROM descriptions;
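Since the question asks for speed: all of these are simple per-row string functions, so any of them will be fast. For completeness, PostgreSQL also has left():
SELECT left(description, 100)
FROM descriptions;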

How to store point (x, y) in a database?

I need to store a location (an x,y point) in my database, where the point can be null and X and Y are always less than 999. At the moment I'm using EF Core Code First and a PostgreSQL database, but I'd like to stay flexible so that I can switch to MSSQL without too much work. I'm not planning to move away from EF Core.
Right now I have two columns, LocationX and LocationY, both of type int?. I'm not sure this is a good solution, because technically the DB allows (X=2, Y=null), and it shouldn't: either both are null, or both are not.
My second option is to store it in a single string column like "123x321", with a max length of 7.
Is there a better way?
Thanks,
A check constraint could be used to enforce that both columns are NULL or NOT NULL at the same time:
CREATE TABLE t(
    id INT,
    x INT,
    y INT,
    CHECK((x IS NULL AND y IS NULL) OR (x IS NOT NULL AND y IS NOT NULL))
);
db<>fiddle demo
In addition to the check constraint suggested by #LukaszSzozda, you can restrict the x and y values with an additional check constraint on each. So, assuming they must also be in the range 0-999:
CREATE TABLE t(
    id INT,
    x INT constraint x_range check (x >= 0 and x <= 999),
    y INT constraint y_range check (y >= 0 and y <= 999),
    CHECK((x IS NULL AND y IS NULL) OR (x IS NOT NULL AND y IS NOT NULL))
);
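A quick sketch of how the combined constraints behave on insert:
-- accepted: both values set, or both NULL
insert into t values (1, 10, 20), (2, NULL, NULL);
-- rejected by the pair CHECK: only one of x, y is NULL
insert into t values (3, 2, NULL);
-- rejected by x_range: value outside 0-999
insert into t values (4, 1000, 5);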
As for your idea of storing a single string: very bad. Not only will you have the issue of separating the values every time you need them, it also allows distinctly invalid data. The values '1234567' and even 'abcdefg' are completely valid as far as the database is concerned, so your table definition must account for and eliminate them. With this, your table definition becomes:
create table txy
( xy_string varchar(7)
, constraint xy_format check( xy_string ~* '^\d{1,3}x\d{1,3}$')
);

insert into txy(xy_string)
values ('1x2'), ('354X512'), ('38x92');
Which is actually a reduction as it is back to a single constraint, but your queries now require something like:
select xy_string
, regexp_replace(xy_string, '^(\d+)(X|x)(\d+)','\1') x
, regexp_replace(xy_string, '^(\d+)(X|x)(\d+)','\3') y
from txy;
See demo here.
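If the regexp_replace() calls feel heavy, a sketch of a simpler alternative using split_part() (lower() folds the 'X' spelling to 'x'):
select xy_string
     , split_part(lower(xy_string), 'x', 1)::int as x
     , split_part(lower(xy_string), 'x', 2)::int as y
from txy;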
In short, never store groups of number values as a single delimited string. The additional work is just not worth it.

BYTEA to Integer using libpq

I have a table with two columns:
1) id SERIAL PRIMARY KEY
2) BYTEA
I am trying to fetch all the rows using:
PGresult *res = PQexecParams(conn, "select * from table", 0, NULL, NULL, NULL, NULL, 1);
The last argument = 1 specifies that the results should be in binary format.
Because of that, I am able to fetch the BYTEA column properly, but the id column is also returned in a format that I can't understand (probably raw binary). Is there a way to convert the id value returned by the PQexecParams call above to an integer? I am using the PQgetvalue API to fetch results.
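A hedged sketch of one workaround: cast the id to text in the query itself, so that even with binary results the value arrives as a readable string (the binary wire format of a text value is just its bytes). The table and BYTEA column names below are placeholders, since the question does not give them:
select id::text, data_col from my_table;
Alternatively, with resultFormat = 1 an int4 value arrives as 4 bytes in network byte order, which client code can convert with ntohl() after fetching it via PQgetvalue.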

Best way to sanitize some data for importing into postgresql?

I have two columns, one with a date in YYMMDD format and one with a time in HHMMSS format; they are strings like 150103 and 132244. There are close to a quarter of a billion records. What would be the best way to sanitize the data prior to importing into PostgreSQL? Is there a way to do this while importing, for instance?
Your data can be converted to timestamp with time zone using the function to_timestamp():
with example(d, t) as (
values ('150103', '132244')
)
select d, t, to_timestamp(concat(d, t), 'yymmddhh24miss')
from example;
   d    |   t    |      to_timestamp
--------+--------+------------------------
 150103 | 132244 | 2015-01-03 13:22:44+01
(1 row)
You can import a file into a table with temporary columns (d, t):
create table example(d text, t text);
copy example from ....
Then add a timestamp with time zone column, convert the data, and drop the redundant text columns:
alter table example add tstamp_column timestamptz;
update example
set tstamp_column = to_timestamp(concat(d, t), 'yymmddhh24miss');
alter table example drop d, drop t;
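If the final table already exists, a variant of the same conversion (a sketch; the events table and its tstamp column are assumed names) inserts straight from the staging table and skips the ALTER/UPDATE pass:
insert into events(tstamp)
select to_timestamp(concat(d, t), 'yymmddhh24miss')
from example;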

CSV file data into a PostgreSQL table

I am trying to create a database for MovieLens (http://grouplens.org/datasets/movielens/). We've got movies and ratings. Movies have multiple genres; I split those out into a separate table since it's a 1:many relationship. There's a many:many relationship as well, users to movies. I need to be able to query this table multiple ways.
So I created:
CREATE TABLE genre (
    genre_id serial NOT NULL,
    genre_name char(20) DEFAULT NULL,
    PRIMARY KEY (genre_id)
);

INSERT INTO genre VALUES
    (1,'Action'),(2,'Adventure'),(3,'Animation'),(4,'Children''s'),(5,'Comedy'),(6,'Crime'),
    (7,'Documentary'),(8,'Drama'),(9,'Fantasy'),(10,'Film-Noir'),(11,'Horror'),(12,'Musical'),
    (13,'Mystery'),(14,'Romance'),(15,'Sci-Fi'),(16,'Thriller'),(17,'War'),(18,'Western');
CREATE TABLE movie (
    movie_id int NOT NULL DEFAULT '0',
    movie_name char(75) DEFAULT NULL,
    movie_year smallint DEFAULT NULL,
    PRIMARY KEY (movie_id)
);

CREATE TABLE moviegenre (
    movie_id int NOT NULL DEFAULT '0',
    genre_id tinyint NOT NULL DEFAULT '0',
    PRIMARY KEY (movie_id, genre_id)
);
I don't know how to import my movies.csv with columns movie_id, movie_name and movie_genre. For example, the first row is (1;Toy Story (1995);Animation|Children's|Comedy).
If I INSERT manually, it should look like:
INSERT INTO moviegenre VALUES (1,3),(1,4),(1,5)
Because 3 is Animation, 4 is Children's and 5 is Comedy.
How can I import the whole data set this way?
You should first create a table that can ingest the data from the CSV file:
CREATE TABLE movies_csv (
    movie_id integer,
    movie_name varchar,
    movie_genre varchar
);
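The sample row uses semicolons as separators, so the import itself could look like this (a sketch; the file path is a placeholder):
COPY movies_csv FROM '/path/to/movies.csv' WITH (FORMAT csv, DELIMITER ';');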
Note that any single quotes (Children's) should be doubled (Children''s). Once the data is in this staging table you can copy the data over to the movie table, which should have the following structure:
CREATE TABLE movie (
    movie_id integer,            -- A primary key is implicitly NOT NULL and should not have a default
    movie_name varchar NOT NULL, -- A movie should have a name; varchar is more flexible
    movie_year integer,          -- A regular integer is more efficient
    PRIMARY KEY (movie_id)
);
Sanitize your other tables likewise.
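For instance, a sanitized moviegenre might look like this (the foreign key references are an assumption, but they match the relationships described in the question):
CREATE TABLE moviegenre (
    movie_id integer REFERENCES movie,
    genre_id integer REFERENCES genre,
    PRIMARY KEY (movie_id, genre_id)
);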
Now copy the data over, extracting the unadorned name and the year from the CSV name:
INSERT INTO movie (movie_id, movie_name, movie_year)
SELECT movie_id, parts[1], parts[2]::integer
FROM movies_csv, regexp_matches(movie_name, '([[:ascii:]]*)\s\(([\d]*)\)$') p(parts);
Here the regular expression says:
([[:ascii:]]*) - Capture all characters until the matches below
\s - Read past a space
\( - Read past an opening parenthesis
([\d]*) - Capture any digits
\) - Read past a closing parenthesis
$ - Match from the end of the string
So on the input "Die Hard 17 (John lives forever) (2074)" it creates a string array {'Die Hard 17 (John lives forever)', '2074'}. The scan has to be anchored at the end ($), assuming all movie titles end with their year of publication in parentheses, so that parentheses and numbers inside movie titles are preserved.
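As a quick sanity check, the expression can be tried directly on the question's first row:
select parts
from regexp_matches('Toy Story (1995)', '([[:ascii:]]*)\s\(([\d]*)\)$') p(parts);
-- {"Toy Story",1995}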
Now you can work on the movie genres. You have to split the string on the bar | using the regexp_split_to_table() function and then join to the genre table on the genre name:
INSERT INTO moviegenre
SELECT movie_id, genre_id
FROM movies_csv, regexp_split_to_table(movie_genre, '\|') p(genre) -- escape the |
JOIN genre ON genre.genre_name = p.genre;
After all is done and dusted you can drop the movies_csv staging table:
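DROP TABLE movies_csv;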