I had to make a really small and simple script that would fill a table with string values according to these criteria:
2 characters long
1st character is always numeric (0-9)
2nd character is (0-9) but also includes "X"
The values need to be inserted into a database table
The program would execute:
insert into table (code) values ('01');
insert into table (code) values ('02');
insert into table (code) values ('03');
insert into table (code) values ('04');
insert into table (code) values ('05');
insert into table (code) values ('06');
insert into table (code) values ('07');
insert into table (code) values ('08');
insert into table (code) values ('09');
insert into table (code) values ('0X');
And so on, until all 110 values were inserted.
My code (written just to get the job done, not to be minimal or efficient) was:
use strict;
use DBI;

my ($db1,$sql,$sth,%dbattr);
%dbattr=(ChopBlanks => 1,RaiseError => 0);
$db1=DBI->connect('DBI:mysql:','','',\%dbattr);

my @code;
for(0..9)
{
    $code[0]=$_;
    for(0..9)
    {
        $code[1]=$_;
        insert(@code);
    }
    insert($code[0],"X");
}

sub insert
{
    my $skip=0;
    foreach(@_)
    {
        if($skip==0)
        {
            $sql="insert into table (code) values ('".$_[0].$_[1]."');";
            $sth=$db1->prepare($sql);
            $sth->execute();
            $skip++;
        }
        else
        {
            $skip--;
        }
    }
}
exit;
I'm just interested to see a really succinct & precise version of this logic.
133 characters - non-strict
use DBI;$d=DBI->connect('DBI:mysql','','',{RaiseError=>1});for$a(0..9){for$b(0..9,'X'){$d->do("insert into table values('$a$b')");}}
152 characters - strict
use strict;use DBI;my$d=DBI->connect('DBI:mysql','','',{RaiseError=>1});for my$a(0..9){for my$b(0..9,'X'){$d->do("insert into table values('$a$b')");}}
Legible version of the 152-character string (with the Informix connect string I actually tested against):
use strict;
use DBI;
my $d=DBI->connect('DBI:Informix:stores','','',{RaiseError=>1});
foreach my $a (0..9)
{
    foreach my $b (0..9, 'X')
    {
        $d->do("insert into table values('$a$b')");
    }
}
Thought process
Given:
create table table(code char(2) not null);
And the Perl:
use strict;
use DBI;
my $d=DBI->connect('DBI:mysql','','',{RaiseError=>1});
my $h=$d->prepare("insert into table(code)values(?)");
foreach my $a (0..9)
{
    foreach my $b (0..9, 'X')
    {
        $h->execute("$a$b");
    }
}
I tested with Informix, so the connect string I actually used was "DBI:Informix:stores".
This solution is still readable, and thanks to RaiseError it is also error-checked (unless you want to add a transaction too; a sketch of that follows the golfed version below).
Code Golfing it, it becomes (182 characters):
use strict;use DBI;my$d=DBI->connect('DBI:mysql','','',{RaiseError=>1});my$h=$d->prepare("insert into table(code)values(?)");for my$a(0..9){for my$b(0..9,'X'){$h->execute("$a$b");}}
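For completeness, here is a rough sketch of what adding that transaction might look like. This is only an illustration: it assumes the table lives in a transactional engine (e.g. InnoDB on MySQL) and reuses the same placeholder connect string and table name as above.

use strict;
use DBI;

# Same insert loop as above, but batched inside a single transaction.
my $d = DBI->connect('DBI:mysql:', '', '',
                     { RaiseError => 1, AutoCommit => 0 });
my $h = $d->prepare('insert into table(code) values(?)');

eval {
    foreach my $a (0..9) {
        foreach my $b (0..9, 'X') {
            $h->execute("$a$b");
        }
    }
    $d->commit;      # make all 110 rows permanent at once
};
if ($@) {
    warn "insert failed, rolling back: $@";
    $d->rollback;    # undo the partial batch
}
$d->disconnect;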
The first part can be reduced to
for my $x (0..9) {
    for my $y (0..9,'X') {
        insert("$x$y");
    }
}
I really don't understand what $skip is doing in the second part. I would have written:
sub insert {
    my $code = shift;
    my $sql="insert into table (code) values ('$code');";
    my $sth=$db1->prepare($sql);
    $sth->execute();
}
You could just do it in SQL (for some values of SQL):
insert into table (code)
select concat(foo, bar)
from (select 0 foo union select 1 union select 2 union select 3 union select 4
      union select 5 union select 6 union select 7 union select 8 union select 9) foo
join (select 0 bar union select 1 union select 2 union select 3 union select 4
      union select 5 union select 6 union select 7 union select 8 union select 9
      union select 'X') bar;
Related
PostgreSQL gives me an error when I try to cast a TEXT column to an integer.
select pro_id::integer from mmp_promocjas_tmp limit 1;
This column contains only digits - a valid integer. How can "1" be an invalid integer?
select pro_id, length(pro_id) ,length(trim(pro_id)) from mmp_promocjas_tmp limit 1;
outputs:
1 | 2 | 2
The query select pro_id from mmp_promocjas_tmp where trim(pro_id) = '1' returns nothing.
I tried to remove whitespace, with no result:
select pro_id from mmp_promocjas_tmp where regexp_replace(trim(pro_id), '\s*', '', 'g')
There are probably spurious invisible characters in the column.
To make them visible, try a query like this:
select pro_id, c, lpad(to_hex(ascii(c)),4,'0') from (
    select pro_id, regexp_split_to_table(pro_id,'') as c
    from (select pro_id from mmp_promocjas_tmp limit 10) as s
) as g;
This will show the ID and each character it contains, both as a character and as its hexadecimal code.
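Once the extra characters are visible, one possible cleanup is to strip everything that is not a digit before casting. This is only a sketch, assuming the non-digit characters really are junk and may be discarded:

select nullif(regexp_replace(pro_id, '[^0-9]', '', 'g'), '')::integer as pro_id_int
from mmp_promocjas_tmp;

The nullif() guards against values that contain no digits at all, which would otherwise fail the cast.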
I am working on migrating data from one database to another for a hospital. In the old database, the doctor's specialty IDs are all in one column (swvar_specialties), each separated by commas. In the new database, each specialty ID will have its own column (example: Specialty1_PrimaryID, Specialty2_PrimaryID, Specialty3_PrimaryID, etc). I am trying to export the data out of the old database and split the values into these separate columns. I know I can use indexof and substring to do this - I just need help with the syntax.
So this query:
Select swvar_specialties as Specialty1_PrimaryID
From PhysDirectory
might return results similar to 39,52,16. I need this query to display Specialty1_PrimaryID = 39, Specialty2_PrimaryID = 52, and Specialty3_PrimaryID = 16 in the results. Below is my query so far. I will eventually have a join to pull the specialty names from the specialties table. I just need to get this worked out first.
Select pd.ref as PrimaryID, pd.swvar_name_first as FirstName, pd.swvar_name_middle as MiddleName,
pd.swvar_name_last as LastName, pd.swvar_name_suffix + ' ' + pd.swvar_name_degree as NameSuffix,
pd.swvar_birthdate as DateOfBirth,pd.swvar_notes as AdditionalInformation, 'images/' + '' + pd.swvar_photo as ImageURL,
pd.swvar_philosophy as PhilosophyOfCare, pd.swvar_gender as Gender, pd.swvar_specialties as Specialty1_PrimaryID, pd.swvar_languages as Language1_Name
From PhysDirectory as pd
The article Split function equivalent in T-SQL? provides some details on how to use a split function to split a comma-delimited string.
By modifying the table-valued function presented in this article to provide an identity column, we can target a specific row such as Specialty1_PrimaryID:
/*
    Splits string into parts delimited with specified character.
*/
CREATE FUNCTION [dbo].[SDF_SplitString]
(
    @sString nvarchar(2048),
    @cDelimiter nchar(1)
)
RETURNS @tParts TABLE ( id bigint IDENTITY, part nvarchar(2048) )
AS
BEGIN
    if @sString is null return

    declare @iStart int,
            @iPos int

    if substring( @sString, 1, 1 ) = @cDelimiter
    begin
        set @iStart = 2
        insert into @tParts
        values( null )
    end
    else
        set @iStart = 1

    while 1=1
    begin
        set @iPos = charindex( @cDelimiter, @sString, @iStart )
        if @iPos = 0
            set @iPos = len( @sString )+1
        if @iPos - @iStart > 0
            insert into @tParts
            values ( substring( @sString, @iStart, @iPos-@iStart ))
        else
            insert into @tParts
            values( null )
        set @iStart = @iPos+1
        if @iStart > len( @sString )
            break
    end
    RETURN
END
Your query can then utilise this split function as follows:
Select
pd.ref as PrimaryID,
pd.swvar_name_first as FirstName,
pd.swvar_name_middle as MiddleName,
pd.swvar_name_last as LastName,
pd.swvar_name_suffix + ' ' + pd.swvar_name_degree as NameSuffix,
pd.swvar_birthdate as DateOfBirth,pd.swvar_notes as AdditionalInformation,
'images/' + '' + pd.swvar_photo as ImageURL,
pd.swvar_philosophy as PhilosophyOfCare, pd.swvar_gender as Gender,
(Select part from SDF_SplitString(pd.swvar_specialties, ',') where id=1) as Specialty1_PrimaryID,
(Select part from SDF_SplitString(pd.swvar_specialties, ',') where id=2) as Specialty2_PrimaryID,
pd.swvar_languages as Language1_Name
From PhysDirectory as pd
Here's a simple query we do for ad hoc requests from our Marketing department on the leads we received in the last 90 days.
SELECT ID
,FIRST_NAME
,LAST_NAME
,ADDRESS_1
,ADDRESS_2
,CITY
,STATE
,ZIP
,HOME_PHONE
,MOBILE_PHONE
,EMAIL_ADDRESS
,ROW_ADDED_DTM
FROM WEB_LEADS
WHERE ROW_ADDED_DTM BETWEEN @START AND @END
They are asking for more derived columns to be added that show the number of previous occurrences of ADDRESS_1 where the EMAIL_ADDRESS matches. But they want this for different date ranges.
So the derived columns would look like this:
,COUNT_ADDRESS_1_LAST_1_DAYS
,COUNT_ADDRESS_1_LAST_7_DAYS
,COUNT_ADDRESS_1_LAST_14_DAYS
etc.
I've manually filled these derived columns using update statements when there were just a few. The above query is really just a sample of a much larger query with many more columns. The actual request has blossomed into 6 date ranges for 13 columns. I'm asking if there's a better way than using 78 additional update statements.
I think you will have a hard time writing a query that includes all of these 78 metrics per e-mail address without actually creating a query that hard-codes the different choices. However you can generate such a pivot query with dynamic SQL, which will save you some keystrokes and will adjust dynamically as you add more columns to the table.
The result you want to end up with will look something like this (but of course you won't want to type it):
;WITH y AS
(
    SELECT
        EMAIL_ADDRESS,
        /* aggregation portion */
        [ADDRESS_1] = COUNT(DISTINCT [ADDRESS_1]),
        [ADDRESS_2] = COUNT(DISTINCT [ADDRESS_2]),
        ... other columns
        /* end agg portion */
    FROM dbo.WEB_LEADS AS wl
    WHERE ROW_ADDED_DTM >= /* one of 6 past dates */
    GROUP BY wl.EMAIL_ADDRESS
)
SELECT EMAIL_ADDRESS,
    /* pivot portion */
    COUNT_ADDRESS_1_LAST_1_DAYS = *count address 1 from 1 day ago*,
    COUNT_ADDRESS_1_LAST_7_DAYS = *count address 1 from 7 days ago*,
    ... other date ranges ...
    COUNT_ADDRESS_2_LAST_1_DAYS = *count address 2 from 1 day ago*,
    COUNT_ADDRESS_2_LAST_7_DAYS = *count address 2 from 7 days ago*,
    ... other date ranges ...
    ... repeat for 11 more columns ...
    /* end pivot portion */
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;
This is a little involved, and it should all be run as one script, but I'm going to break it up into chunks to intersperse comments on how the above portions are populated without typing them. (And before long @Bluefeet will probably come along with a much better PIVOT alternative.) I'll enclose my interspersed comments in /* */ so that you can still copy the bulk of this answer into Management Studio and run it with the comments intact.
Code/comments to copy follow:
/*
First, let's build a table of dates that can be used both to derive labels for pivoting and to assist with aggregation. I've added the three ranges you've mentioned and guessed at a fourth, but hopefully it is clear how to add more:
*/
DECLARE @d DATE = SYSDATETIME();

CREATE TABLE #L(label NVARCHAR(15), d DATE);

INSERT #L(label, d) VALUES
(N'LAST_1_DAYS',  DATEADD(DAY, -1, @d)),
(N'LAST_7_DAYS',  DATEADD(DAY, -8, @d)),
(N'LAST_14_DAYS', DATEADD(DAY, -15, @d)),
(N'LAST_MONTH',   DATEADD(MONTH, -1, @d));
/*
Next, let's build the portions of the query that are repeated per column name. First, the aggregation portion is just in the format col = COUNT(DISTINCT col). We're going to go to the catalog views to dynamically derive the list of column names (except ID, EMAIL_ADDRESS and ROW_ADDED_DTM) and stuff them into a #temp table for re-use.
*/
SELECT name INTO #N FROM sys.columns
WHERE [object_id] = OBJECT_ID(N'dbo.WEB_LEADS')
AND name NOT IN (N'ID', N'EMAIL_ADDRESS', N'ROW_ADDED_DTM');
DECLARE @agg NVARCHAR(MAX) = N'', @piv NVARCHAR(MAX) = N'';

SELECT @agg += ',
' + QUOTENAME(name) + ' = COUNT(DISTINCT '
    + QUOTENAME(name) + ')' FROM #N;

PRINT @agg;
/*
Next we'll build the "pivot" portion (even though I am angling for the poor man's pivot - a bunch of CASE expressions). For each column name we need a conditional against each range, so we can accomplish this by cross joining the list of column names against our labels table. (And we'll use this exact technique again in the query later to make the /* one of 6 past dates */ portion work.)
*/
SELECT @piv += ',
COUNT_' + n.name + '_' + l.label
    + ' = MAX(CASE WHEN label = N''' + l.label
    + ''' THEN ' + QUOTENAME(n.name) + ' END)'
FROM #N as n CROSS JOIN #L AS l;

PRINT @piv;
/*
Now, with those two portions populated as we'd like them, we can build a dynamic SQL statement that fills out the rest:
*/
DECLARE @sql NVARCHAR(MAX) = N';WITH y AS
(
    SELECT
        EMAIL_ADDRESS, l.label' + @agg + '
    FROM dbo.WEB_LEADS AS wl
    CROSS JOIN #L AS l
    WHERE wl.ROW_ADDED_DTM >= l.d
    GROUP BY wl.EMAIL_ADDRESS, l.label
)
SELECT EMAIL_ADDRESS' + @piv + '
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;';

PRINT @sql;
EXEC sp_executesql @sql;
GO
DROP TABLE #N, #L;
/*
Now again, this is a pretty complex piece of code, and perhaps it can be made easier with PIVOT. But I think even @Bluefeet will write a version of PIVOT that uses dynamic SQL because there is just way too much to hard-code here IMHO.
*/
I want to copy a single record from a table, modify some of the fields in the record and insert into the same table. The table has 90 columns.
I thought of using insert..select in one statement, but there are 90 columns and I would need to list every column name in the select query. How can I do this in a better way with Perl DBI? Please provide an example.
Fetch and cache the column names for your table using the NAME statement attribute
my $sth = $dbh->prepare('SELECT * FROM my_table where 1=0');
$sth->execute;
my $cols = $sth->{NAME};
then use $cols to construct your insert...select using some replacement function to inject your modifications in the select.
my %mods_for_column = ( 'foo' => 'foo+10', 'bar' => 'trim(bar)' );
my $inscols = join(',', @$cols);
my $selcols = join(',',
    map { exists($mods_for_column{$_}) ? $mods_for_column{$_} : $_ } @$cols
);
my $sql = "insert into my_table ($inscols) select $selcols from my_table where mumble...";
OK, the umpteenth conditional column question:
I'm writing a stored proc that takes an input parameter that's mapped to one of several flag columns. What's the best way to filter on the requested column? I'm currently on SQL2000, but about to move to SQL2008, so I'll take a contemporary solution if one's available.
The table queried in the sproc looks like
ID  ...  fooFlag  barFlag  bazFlag  quuxFlag
--       -------  -------  -------  --------
01       1        0        0        1
02       0        1        0        0
03       0        0        1        1
04       1        0        0        0
and I want to do something like
select ID, name, description, ...
from myTable
where (colname like @flag + 'Flag') = 1
so if I call the sproc like exec uspMyProc @flag = 'foo' I'd get back rows 1 and 4.
I know I can't do the part in parens directly in SQL. In order to do dynamic SQL, I'll have to stuff the entire query into a string, concatenate the @flag param in the WHERE clause and then exec the string. Aside from the dirty feeling I get when doing dynamic SQL, my query is fairly large (I'm selecting a couple dozen fields, joining 5 tables, calling a couple of functions), so it's a big giant string all because of a single line in a 3-line WHERE filter.
Alternately, I could have 4 copies of the query and select among them in a CASE statement. This leaves the SQL code directly executable (and subject to syntax highlighting, etc.) but at the cost of repeating big chunks of code, since I can't use the CASE on just the WHERE clause.
Are there any other options? Any tricky joins or logical operations that can be applied? Or should I just get over it and exec the dynamic SQL?
There are a few ways to do this:
You can do this with a case statement.
select ID, name, description, ...
from myTable
where CASE
          WHEN @flag = 'foo' then fooFlag
          WHEN @flag = 'bar' then barFlag
      END = 1
You can use IF.
IF (@flag = 'foo') BEGIN
    select ID, name, description, ...
    from myTable
    where fooFlag = 1
END ELSE IF (@flag = 'bar') BEGIN
    select ID, name, description, ...
    from myTable
    where barFlag = 1
END
....
You can have a complicated where clause with a lot of parentheses.
select ID, name, description, ...
from myTable
where (@flag = 'foo' and fooFlag = 1)
   OR (@flag = 'bar' and barFlag = 1) OR ...
You can do this with dynamic sql:
DECLARE @SQL nvarchar(4000)

SELECT @SQL = N'select ID, name, description, ...
from myTable
where (colname like ''' + @flag + 'Flag'') = 1'

EXECUTE sp_ExecuteSQL @SQL, N''
There are more, but I think one of these will get you going.
"Alternately, I could have 4 copies of the query and select among them in a CASE statement."
You don't need to copy your entire query 4 times, just add all the possibilities into the where clauses in your single copy of the query:
select ID, name, description, ...
from myTable
where (@flag = 'foo' and fooFlag = 1) OR (@flag = 'bar' and barFlag = 1) OR ...
What I would do is CASE some variables at the beginning. Example:
DECLARE
    @fooFlag int,
    @barFlag int,
    @bazFlag int,
    @quuxFlag int

SET @fooFlag  = CASE WHEN @flag = 'foo'  THEN 1 ELSE NULL END
SET @barFlag  = CASE WHEN @flag = 'bar'  THEN 1 ELSE NULL END
SET @bazFlag  = CASE WHEN @flag = 'baz'  THEN 1 ELSE NULL END
SET @quuxFlag = CASE WHEN @flag = 'quux' THEN 1 ELSE NULL END

SELECT ID, name, description, ...
FROM myTable
WHERE (fooFlag >= ISNULL(@fooFlag, 0) AND fooFlag <= ISNULL(@fooFlag, 1))
  AND (barFlag >= ISNULL(@barFlag, 0) AND barFlag <= ISNULL(@barFlag, 1))
  AND (bazFlag >= ISNULL(@bazFlag, 0) AND bazFlag <= ISNULL(@bazFlag, 1))
  AND (quuxFlag >= ISNULL(@quuxFlag, 0) AND quuxFlag <= ISNULL(@quuxFlag, 1))
The good thing about this query is that, because the possible values for "flags" are bounded, you can calculate all your conditionals as prerequisites instead of wrapping columns in them. This guarantees a high-performance index seek on whichever columns are indexed, and doesn't require writing any dynamic SQL. And it's better than writing 4 separate queries for obvious reasons.
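For illustration, here is roughly what that WHERE clause boils down to when the proc is called with exec uspMyProc @flag = 'foo', so @fooFlag = 1 and the other three variables are NULL:

SELECT ID, name, description
FROM myTable
WHERE (fooFlag  >= 1 AND fooFlag  <= 1)   -- @fooFlag = 1: collapses to fooFlag = 1
  AND (barFlag  >= 0 AND barFlag  <= 1)   -- NULL flags: bounds 0..1 match any bit value,
  AND (bazFlag  >= 0 AND bazFlag  <= 1)   -- so these conditions filter nothing out
  AND (quuxFlag >= 0 AND quuxFlag <= 1)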
You could have a parameter for each possible flag column, then check if the parameter is null or the value in the column is equal to the parameter. Then you pass in a 1 for the flags that you want to check and leave the others null.
select id, name, description, ...
from myTable
where (@fooFlag is null or fooFlag = @fooFlag) AND
      (@barFlag is null or barFlag = @barFlag) AND
      ...
Honestly, though, this seems like an ideal candidate for building a dynamic LINQ query and skipping the SPROC once you get to SQL2008.
An int should be accepted as a varchar value:
declare @CompanyID as varchar(10) = '' -- or any other value

select * from EmployeeChatTbl chat
where chat.ConversationDetails like '%' + @searchKey + '%'
and
(
    (0 = CASE WHEN (@CompanyID = '') THEN 0 ELSE 1 END)
    or
    (chat.CompanyID = @CompanyID)
)
This works: when the CompanyID is present, filtering is based on it; otherwise, the filter is skipped.
where
    case when @value <> 0 then Field else 1 end
    =
    case when @value <> 0 then @value else 1 end