SQL Server 2012 autofill one column from another - tsql

I have a table where a user inputs name, dob, etc. and I have a User_Name column that I want automatically populated from other columns.
For example input is: Name - John Doe, DOB - 01/01/1900
I want the User_Name column to be automatically populated with johndoe01011900 (I already have the algorithm to concatenate the column parts to achieve the desired result)
I just need to know how (SQL, trigger) to have the User_Name column filled once the user completes inputting ALL target columns. What if the user skips around and does not input the data in order? Of course, the columns that are needed are (not null).

This should do it:
You can use a computed column with the following expression:
LOWER(REPLACE(Name, ' ', '')) + CONVERT(VARCHAR(10), DateOfBirth, 112)
Because a computed column is derived on every read (or on every write if marked PERSISTED), it is never stale, regardless of the order in which the user fills in the other columns. Note that style 112 renders the date as yyyymmdd (giving johndoe19000101); if you need the mmddyyyy form from your example, use REPLACE(CONVERT(VARCHAR(10), DateOfBirth, 101), '/', '') instead.
In the below sample I have used a temp table but this is the same for regular tables as well.
SAMPLE:
CREATE TABLE #temp(Name VARCHAR(100)
, DateOfBirth DATE
, CalcField AS LOWER(REPLACE(Name, ' ', '')) + CONVERT(VARCHAR(10), DateOfBirth, 112));
INSERT INTO #temp(Name
, DateOfBirth)
VALUES
('John Doe'
, '01/01/1900');
SELECT *
FROM #temp;
RESULT:
Name     | DateOfBirth | CalcField
---------+-------------+----------------
John Doe | 1900-01-01  | johndoe19000101
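If you specifically want the trigger route the question mentions, here is a minimal sketch; the table name dbo.Users and the key column UserId are assumptions:
CREATE TRIGGER trg_Users_SetUserName
ON dbo.Users
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Recompute User_Name for the affected rows only; since it is derived on
    -- every write, the order in which the user filled the columns is irrelevant.
    UPDATE u
    SET User_Name = LOWER(REPLACE(u.Name, ' ', ''))
                  + CONVERT(VARCHAR(10), u.DateOfBirth, 112)
    FROM dbo.Users u
    JOIN inserted i ON i.UserId = u.UserId
    WHERE u.Name IS NOT NULL
      AND u.DateOfBirth IS NOT NULL;
END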

Related

Extract all the values in jsonb into a row

I'm using PostgreSQL 11. I have a jsonb that represents a row of a table; it looks like
{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com",...,"thirdpartyauthenticationkey":{}}
Is there any method that could gather all the "values" of the jsonb into a string separated by ',' and without the keys?
The string I want to obtain from the jsonb above is like
(test, Root, 0, superadmin@ae.com, ..., {})
I need to keep the ORDER of those values as their keys appear in the jsonb. Could I do that with PostgreSQL?
You can use the jsonb_populate_record function (assuming your json data does match the users table). This will force the text value to match the order of your users table:
Schema (PostgreSQL v13)
CREATE TABLE users (
userid text,
rolename text,
loginerror int,
email text,
thirdpartyauthenticationkey json
)
Query #1
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
SELECT jsonb_populate_record(null::users, js),
jsonb_populate_record(null::users, js)::text AS record_as_text,
pg_typeof(jsonb_populate_record(null::users, js)::text)
FROM d
;
jsonb_populate_record              | record_as_text                     | pg_typeof
-----------------------------------|------------------------------------|----------
(test,Root,0,superadmin@ae.com,{}) | (test,Root,0,superadmin@ae.com,{}) | text
(other,User,324,nope@ae.com,{})    | (other,User,324,nope@ae.com,{})    | text
Note that if you're building this string only to insert it back into PostgreSQL, you don't need the text form at all, since the result of jsonb_populate_record will already match your table:
Query #2
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
INSERT INTO users
SELECT (jsonb_populate_record(null::users, js)).*
FROM d;
There are no results to be displayed.
Query #3
SELECT * FROM users;
userid | rolename | loginerror | email             | thirdpartyauthenticationkey
-------|----------|------------|-------------------|----------------------------
test   | Root     | 0          | superadmin@ae.com | {}
other  | User     | 324        | nope@ae.com       | {}
You can use jsonb_each_text() to get a set of text representations of the elements, string_agg() to aggregate them into a comma-separated string, and concat() to wrap that in parentheses.
SELECT concat('(', string_agg(value, ', '), ')')
FROM jsonb_each_text('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb) jet (key, value);
You didn't provide the DDL and DML of the table the JSON may reside in (if it does reside in one; that isn't clear from your question). The demonstration above therefore only uses the JSON you showed as a scalar. If you do have a table, you need to CROSS JOIN LATERAL and GROUP BY some key.
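A minimal sketch of that variant, assuming a hypothetical table my_table(id int, js jsonb):
SELECT t.id,
       concat('(', string_agg(jet.value, ', '), ')') AS values_csv
FROM my_table t
CROSS JOIN LATERAL jsonb_each_text(t.js) AS jet(key, value)
GROUP BY t.id;
-- Keep in mind jsonb stores keys in its own sorted order, so the aggregation
-- follows that storage order, not the order the keys were originally written in.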
Edit:
If you need to be sure the order is retained and you don't have it defined in a table's structure as @Marth's answer assumes, then you can of course extract every value manually in the order you need them:
SELECT concat('(',
              concat_ws(', ',
                        j->>'userid',
                        j->>'rolename',
                        j->>'loginerror',
                        j->>'email',
                        j->>'thirdpartyauthenticationkey'),
              ')')
FROM (VALUES ('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb)) v (j);

How deep can we go in levels of nested tables in Oracle 12c?

I am trying to do the following:
1) create or replace type transaction as object (date Date, description
varchar(30));
create or replace type T_transaction as table of transaction;
2) create or replace type account as object (id int, description varchar(30),
t_transaction T_transaction)
nested table t_transaction store as xxx1;
create or replace type T_account as table of account;
3) create or replace type user as object (id int, descr varchar(30), t_account
T_account)
nested table t_account store as xxx2;
create or replace type T_user as table of user;
4) create or replace table banks (name varchar(20), users T_user)
nested table users store as xxx3;
The first 2 types were created successfully, but "create or replace type account..." gives -> Warning: Type created with compilation errors.
Is there any advice for creating such a database using multiple levels of nested tables?
Edit:
I did some research on the subject (object nesting limitations) and here are my findings:
According to Database Limits,
every column of a nested table is in effect added to the columns of the host table and the maximum total number of columns in a table is 1000.
So this would be the official upper limit (in case every nested table had a single column).
However, when I did actual testing (on 11g and 12c), I wasn't able to create a table with a nesting depth of more than 50 because of the error
ORA-00036: maximum number of recursive SQL levels (50) exceeded.
Thus I conclude that the maximum possible depth of nesting is 50.
Initial answer:
I am not aware of limits on object nesting, but I think they should be reasonably permissive.
Your code fails because you made a few mistakes:
1. Using type names as column names (date, t_account, etc.);
2. Using the nested table clause in the wrong place.
The code should go like this:
create or replace type transaction_type as object (tx_date Date, description varchar2(30));
create or replace type transaction_tab as table of transaction_type;
create or replace type account_type as object (id int, description varchar2(30),
transactions transaction_tab);
create or replace type account_tab as table of account_type;
create or replace type user_type as object (id int, descr varchar2(30), accounts account_tab);
create or replace type user_tab as table of user_type;
CREATE table banks (name varchar(20), users user_tab)
nested table users store as xxx3 (
nested table accounts store as xxx2 (
nested table transactions store as xxx1
));
Checking
INSERT INTO banks VALUES (
'John', user_tab(
user_type(1
,'regular user'
, account_tab(
account_type(1
,'regular account'
, transaction_tab(transaction_type(
trunc(sysdate)
, 'regular transaction'))
))
)));
SQL> SELECT * FROM banks;
NAME
--------------------
USERS(ID, DESCR, ACCOUNTS(ID, DESCRIPTION, TRANSACTIONS(TX_DATE, DESCRIPTION)))
--------------------------------------------------------------------------------
John
USER_TAB(USER_TYPE(1, 'regular user', ACCOUNT_TAB(ACCOUNT_TYPE(1, 'regular accou
nt', TRANSACTION_TAB(TRANSACTION_TYPE('04-APR-18', 'regular transaction'))))))
Selecting nested table columns
SELECT b.name, u.id, u.descr, a.id, a.description
FROM banks b, table(b.users) u, table(u.accounts) a
WHERE u.descr = 'regular user' AND a.description = 'regular account'
NAME ID DESCR ID DESCRIPTION
----- --- ------------- --- ----------------
John 1 regular user 1 regular account
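To drill down to the innermost transactions level as well, the same TABLE() unnesting can simply be chained one step further (a sketch following the schema above):
SELECT b.name, u.id, a.id, t.tx_date, t.description
FROM banks b,
     table(b.users) u,
     table(u.accounts) a,
     table(a.transactions) t;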

Can the categories in the postgres tablefunc crosstab() function be integers?

It's all in the title. Documentation has something like this:
SELECT *
FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text);
I have two tables, lab_tests and lab_tests_results. All of the lab_tests_results rows are tied to the primary key id integer in the lab_tests table. I'm trying to make a pivot table where the lab tests (identified by an integer) are row headers and the respective results are in the table. I can't get around a syntax error at or around the integer.
Is this possible with the current set up? Am I missing something in the documentation? Or do I need to perform an inner join of sorts to make the categories strings? Or modify the lab_tests_results table to use a text identifier for the lab tests?
Thanks for the help, all. Much appreciated.
Edit: Got it figured out with the help of Dmitry. He had the data layout figured out, but I was unclear on what kind of output I needed. I was trying to get the pivot table to be based on batch_id numbers in the lab_tests_results table. I had to hammer out the base query and cast the data types.
SELECT *
FROM crosstab('SELECT lab_tests_results.batch_id, lab_tests.test_name, lab_tests_results.test_result::FLOAT
FROM lab_tests_results, lab_tests
WHERE lab_tests.id=lab_tests_results.lab_test AND (lab_tests.test_name LIKE ''Test Name 1'' OR lab_tests.test_name LIKE ''Test Name 2'')
ORDER BY 1,2'
) AS final_result(batch_id VARCHAR, test_name_1 FLOAT, test_name_2 FLOAT);
This provides a pivot table from the lab_tests_results table like below:
batch_id | test_name_1 | test_name_2
------------------------------------
batch1   | result1     | <null>
batch2   | result2     | result3
If I understand correctly your tables look something like this:
CREATE TABLE lab_tests (
id INTEGER PRIMARY KEY,
name VARCHAR(500)
);
CREATE TABLE lab_tests_results (
id INTEGER PRIMARY KEY,
lab_tests_id INTEGER REFERENCES lab_tests (id),
result TEXT
);
And your data looks something like this:
INSERT INTO lab_tests (id, name)
VALUES (1, 'test1'),
(2, 'test2');
INSERT INTO lab_tests_results (id, lab_tests_id, result)
VALUES (1,1,'result1'),
(2,1,'result2'),
(3,2,'result3'),
(4,2,'result4'),
(5,2,'result5');
First of all, crosstab() is part of the tablefunc extension; you need to enable it:
CREATE EXTENSION tablefunc;
You only need to run this once per database, as per this answer.
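On any recent PostgreSQL you can make that statement idempotent (an aside, not part of the linked answer):
CREATE EXTENSION IF NOT EXISTS tablefunc;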
The final query will look like this:
SELECT *
FROM crosstab(
'SELECT lt.name::TEXT, lt.id, ltr.result
FROM lab_tests AS lt
JOIN lab_tests_results ltr ON ltr.lab_tests_id = lt.id
ORDER BY 1, 2'
) AS ct(test_name text, result_1 text, result_2 text, result_3 text);
Explanation:
The crosstab() function takes the text of a query which should return 3 columns: (1) a row name, (2) a category for grouping, and (3) the value. The wrapping query simply selects everything crosstab() returns and defines the list of output columns after it (the part after AS): first the row name (test_name), then the values (result_1, result_2, result_3). In my query I get up to 3 results per test; if there are more than 3 results I won't see the extras, and if there are fewer than 3 the remaining columns are filled with nulls.
The result for this query is:
test_name | result_1 | result_2 | result_3
-------------------------------------------
test1     | result1  | result2  | <null>
test2     | result3  | result4  | result5

CSV file data into a PostgreSQL table

I am trying to create a database for movielens (http://grouplens.org/datasets/movielens/). We've got movies and ratings. Movies have multiple genres. I split those out into a separate table since it's a 1:many relationship. There's a many:many relationship as well, users to movies. I need to be able to query this table multiple ways.
So I created:
CREATE TABLE genre (
genre_id serial NOT NULL,
genre_name char(20) DEFAULT NULL,
PRIMARY KEY (genre_id)
)
INSERT INTO genre VALUES
(1,'Action'),(2,'Adventure'),(3,'Animation'),(4,'Children\s'),(5,'Comedy'),(6,'Crime'),
(7,'Documentary'),(8,'Drama'),(9,'Fantasy'),(10,'Film-Noir'),(11,'Horror'),(12,'Musical'),
(13,'Mystery'),(14,'Romance'),(15,'Sci-Fi'),(16,'Thriller'),(17,'War'),(18,'Western');
CREATE TABLE movie (
movie_id int NOT NULL DEFAULT '0',
movie_name char(75) DEFAULT NULL,
movie_year smallint DEFAULT NULL,
PRIMARY KEY (movie_id)
);
CREATE TABLE moviegenre (
movie_id int NOT NULL DEFAULT '0',
genre_id tinyint NOT NULL DEFAULT '0',
PRIMARY KEY (movie_id, genre_id)
);
I don't know how to import my movies.csv with columns movie_id, movie_name and movie_genre. For example, the first row is (1;Toy Story (1995);Animation|Children's|Comedy).
If I INSERT manually, it should look like:
INSERT INTO moviegenre VALUES (1,3),(1,4),(1,5)
because 3 is Animation, 4 is Children's and 5 is Comedy.
How can I import the whole data set this way?
You should first create a table that can ingest the data from the CSV file:
CREATE TABLE movies_csv (
movie_id integer,
movie_name varchar,
movie_genre varchar
);
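To load the file into this staging table you can use COPY (or \copy from psql if you lack server-side file access). A sketch, assuming a hypothetical file path and the semicolon delimiter shown in the example row:
COPY movies_csv (movie_id, movie_name, movie_genre)
FROM '/path/to/movies.csv'
WITH (FORMAT csv, DELIMITER ';');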
Note that any single quotes (Children's) should be doubled (Children''s). Once the data is in this staging table you can copy the data over to the movie table, which should have the following structure:
CREATE TABLE movie (
movie_id integer, -- A primary key has implicit NOT NULL and should not have default
movie_name varchar NOT NULL, -- Movie should have a name, varchar more flexible
movie_year integer, -- Regular integer is more efficient
PRIMARY KEY (movie_id)
);
Sanitize your other tables likewise.
Now copy the data over, extracting the unadorned name and the year from the CSV name:
INSERT INTO movie (movie_id, movie_name, movie_year)
SELECT movie_id, parts[1], parts[2]::integer
FROM movies_csv, regexp_matches(movie_name, '([[:ascii:]]*)\s\(([\d]*)\)$') p(parts);
Here the regular expression says:
([[:ascii:]]*) - Capture all characters until the matches below
\s - Read past a space
\( - Read past an opening parenthesis
([\d]*) - Capture any digits
\) - Read past a closing parenthesis
$ - Match from the end of the string
So on input "Die Hard 17 (John lives forever) (2074)" it creates a string array {'Die Hard 17 (John lives forever)', '2074'}. The scan has to be anchored at the end ($), assuming all movie titles end with the year of publication in parentheses, so that parentheses and numbers inside the movie titles themselves are preserved.
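A quick way to sanity-check the pattern against that title (the expected output is shown as a comment for illustration):
SELECT regexp_matches('Die Hard 17 (John lives forever) (2074)',
                      '([[:ascii:]]*)\s\(([\d]*)\)$');
-- {"Die Hard 17 (John lives forever)",2074}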
Now you can work on the movie genres. You have to split the string on the bar | using the regexp_split_to_table() function and then join to the genre table on the genre name:
INSERT INTO moviegenre
SELECT movie_id, genre_id
FROM movies_csv, regexp_split_to_table(movie_genre, '\|') p(genre) -- escape the |
JOIN genre ON genre.genre_name = p.genre;
After all is done and dusted you can delete the movies_csv table.

Dynamically Build the Statement

I want a dynamic search condition.
This is my table (it is generated dynamically; it is not a physical table):
id | Tablename | columnname   | Value
---|-----------|--------------|----------
1  | Company   | Company_name | Microsoft
2  | Address   | Pcity        | CA
3  | Phone     | Pnumber      | 100-4582
I want to search for the Value in the particular table. These tables already exist in the database (Company, Address, Phone). I want to dynamically pass the table name and column name and search for the Value.
Example:
Select c.Company_name from Company c join Address a on
a.companyid=c.companyid join phone p on p.companyid=c.companyid
where 1=1 and c.company_name like '%Microsoft%' and a.Pcity Like
'%CA%' and p.Pnumber like '%100-4582%'
I want to dynamically build the query and search for the condition in the Value column.
How can I do this?
Thanks.
What I understood is that you basically want to build a dynamic query, so I am providing a sample for you:
create proc dummy_proc
    @arg1 varchar(50),
    @arg2 int
as
begin
    declare @S varchar(max)
    set @S = 'select * from tablename where 1=1'
    if @arg1 <> ''
        set @S = @S + ' and column1 like ''%' + @arg1 + '%'''
    execute(@S)
end
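Note that concatenating @arg1 straight into the string leaves you open to SQL injection; a safer variant of the same sample (tablename and column1 are still placeholders) passes the value as a parameter via sp_executesql:
create proc dummy_proc_safe
    @arg1 varchar(50),
    @arg2 int
as
begin
    declare @S nvarchar(max)
    set @S = N'select * from tablename where 1=1'
    if @arg1 <> ''
        -- the pattern is assembled inside the dynamic batch, from the parameter
        set @S = @S + N' and column1 like ''%'' + @p1 + ''%'''
    exec sp_executesql @S, N'@p1 varchar(50)', @p1 = @arg1
end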
Hope it helps.