Progress OpenEdge how to sort by length of char - progress-4gl

Newbie question here.
How can I sort the employee table by LEN(FirstName) descending, then by LastName ascending?
Here is the code I have so far:
output to /usr2/appsrv/test/Test.txt.
def var dRow as char.
for each emp no-lock by emp.FirstName:
    dRow = substitute("&1,&2", emp.FirstName, emp.LastName).
    put unformatted dRow skip.
end.
output close.
I am on version 11.3.3
Thank you

for each emp by length(emp.firstName) descending by emp.lastName:
    display emp.firstName emp.lastName.
end.
Performance will stink if this is a large table, since sorting on an expression like LENGTH() cannot use an index.

for each emp no-lock by emp.FirstName descending by emp.LastName:
    display emp.FirstName emp.LastName.
end.


PHP: extract comma separated values from MySQL

My table is:
id | id_fruit | name
 2 |        1 | orange
 3 |        1 | apple
23 |        1 | banana
34 |        1 | ananas
35 |        3 | kiwi
I want to extract all names where id_fruit = 1 and print them in a form field as values, like this:
orange, apple, banana, ananas
I've tried
select concat_ws(', ', name) from fruits where id_fruit = 1
but it doesn't work.
How can I do it? Thanks!
Use the GROUP_CONCAT function in MySQL:
SELECT GROUP_CONCAT(name) FROM fruits WHERE id_fruit = 1;
I think you misunderstood how concat_ws() is used.
The MySQL documentation for that function explains that you must pass every argument to join in a single call,
so your SQL statement has no effect.
As Raja says, you can also do this in your back-end programming language (PHP, Python, or whatever you use):
just select without the concat_ws and do the join on the back-end side.
The main use of concat_ws() seems to be concatenating two fields from the same row in MySQL, such as the firstname and lastname (fields) of the same user (row).
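For instance, here is a quick sketch against the fruits table from the question (the users table in the last line is just a made-up example of the per-row case):
-- Aggregates name across all rows with id_fruit = 1 and returns one string:
-- 'orange, apple, banana, ananas'
SELECT GROUP_CONCAT(name ORDER BY id SEPARATOR ', ') AS names
FROM fruits
WHERE id_fruit = 1;
-- By contrast, CONCAT_WS joins several columns of one row, e.g.:
-- SELECT CONCAT_WS(' ', firstname, lastname) FROM users;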
You can also do it simply in PHP by looping over all the names like this.
PHP
$i = 0;
$fruits = "";
// $executedQuery is the result returned by mysqli_query()
while ($row = mysqli_fetch_assoc($executedQuery)) {
    if ($i == 0) {
        $fruits .= $row['name'];
        $i++;
    } else {
        $fruits .= ", " . $row['name'];
    }
}
And for SQL (this uses SQL Server's STUFF and FOR XML PATH):
SELECT STUFF(
    (SELECT ',' + name
     FROM fruits
     WHERE id_fruit = 1
     FOR XML PATH('')
    ), 1, 1, '');
BAM!

How to split a field that has carriage return

I have a field in my database table called ADDRESSFORMAT
1,The Lodge
Street
Town
Postcode
Where the contents are separated by a CHAR(13) and CHAR(10)
How would I go about creating fields in a query that would only pull back either the first line, second line...and so on?
The following is an in-line approach.
The Cross Apply B generates a "clean string". It will eliminate any number of repeating CRLFs and create a pipe-delimited string to be processed by Cross Apply C.
I should note that this method of eliminating repeating strings was demonstrated by Gordon Linoff several weeks back. Sorry I can't find the original post.
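To see what that replace chain does, here is a tiny standalone illustration (my own throwaway string, not from the question):
Select replace(replace(replace(replace(replace('a' + char(13) + char(10) + char(13) + char(10) + 'b', char(13), '|'), char(10), '|'), '|', '><'), '<>', ''), '><', '|')
Every CR and LF becomes '|', each '|' becomes '><', the '<>' pairs created by adjacent delimiters are removed, and the surviving '><' turns back into a single '|', so the two consecutive CRLFs collapse into a single delimiter and the result is 'a|b'.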
Example
Declare @YourTable table (ID int,ADDRESSFORMAT varchar(max))
Insert Into @YourTable values
(1,'The Lodge
Street
Town
Postcode')
Select A.ID
,C.*
From @YourTable A
Cross Apply (
Select CleanString = replace(replace(replace(replace(replace(ADDRESSFORMAT,char(13),'|'),char(10),'|'),'|','><'),'<>',''),'><','|')
) B
Cross Apply (
Select Pos1 = ltrim(rtrim(xDim.value('/x[1]','varchar(max)')))
,Pos2 = ltrim(rtrim(xDim.value('/x[2]','varchar(max)')))
,Pos3 = ltrim(rtrim(xDim.value('/x[3]','varchar(max)')))
,Pos4 = ltrim(rtrim(xDim.value('/x[4]','varchar(max)')))
,Pos5 = ltrim(rtrim(xDim.value('/x[5]','varchar(max)')))
,Pos6 = ltrim(rtrim(xDim.value('/x[6]','varchar(max)')))
,Pos7 = ltrim(rtrim(xDim.value('/x[7]','varchar(max)')))
,Pos8 = ltrim(rtrim(xDim.value('/x[8]','varchar(max)')))
,Pos9 = ltrim(rtrim(xDim.value('/x[9]','varchar(max)')))
From (Select Cast('<x>' + replace((Select replace(B.CleanString,'|','§§Split§§') as [*] For XML Path('')),'§§Split§§','</x><x>')+'</x>' as xml) as xDim) as A
) C
Returns
ID  Pos1       Pos2    Pos3  Pos4      Pos5  Pos6  Pos7  Pos8  Pos9
1   The Lodge  Street  Town  Postcode  NULL  NULL  NULL  NULL  NULL

How to represent spaces in Perl's DBI properly

I have a record in an Informix table. The table columns look like this:
acct_no integer,
suffix char(1),
meter_num char(20),
date_read datetime year to second not null ,
counter smallint,
reading integer not null ,
typeofread char(1),
estimated char(1),
time_billed datetime year to second
Using Informix's dbaccess tool:
select *
from ws_mtrread
where acct_no = 113091000
and suffix = " "
order by date_read desc;
this result (newest shown) is returned, and it works whether I use one or two spaces for suffix.
acct_no 113091000
suffix
meter_num 34153205
date_read 2013-09-09 23:31:15
counter 0
reading 1240
typeofread g
estimated
time_billed 2013-10-22 11:48:21
However, this Perl DBI query
my $sql_statement =
"select * ".
"from ws_mtrread ".
"where acct_no = ? ".
"and suffix = ? ".
"order by date_read desc ; ";
does not work. I can fetch the row without specifying $suffix, so I know the row exists.
I believe this is an error on my part in representing the suffix. In this example suffix is equal to a string of two spaces.
How do I represent spaces correctly, so the query works? Here is the rest of the code I used to fetch the row.
my $test_acct_no = 113091000;
my $suffix = " ";
my $pt_sel_hdl = $DBHdl->prepare($sql_statement);
$pt_sel_hdl->execute($test_acct_no, $DBHdl->quote($suffix));
my $ws_mtr_read_rec_ref = $pt_sel_hdl->fetchrow_hashref;
After the call, $ws_mtr_read_rec_ref is undefined.
Don't use DBI's quote method here:
$pt_sel_hdl->execute($test_acct_no, $DBHdl->quote($suffix));
When you use ? placeholders in your SQL, the database driver will correctly parameterize the query arguments that you pass to execute. You are probably creating a query that is searching for the literal string " " (including the quotes) when you want to search for just the two spaces.
So this should be all you need:
$pt_sel_hdl->execute($test_acct_no, $suffix);

TRIM not working with newlines and tabs from xpath() in PostgreSQL?

With this query
SELECT trim(title) FROM (
SELECT
unnest( xpath('//p[@class="secTitle1"]', xmlText )::varchar[] ) AS title
FROM t1
) as t2
and XML input text with lines and spaces,
<root>
...
<p class="x">
text text
text text
</p><p> ...</p>
...
</root>
The trim() has no effect (!). Is it a PostgreSQL bug? How can I apply fn:normalize-space() with the XPath? Do I need something like "WHERE title is not null"? (Oracle is simpler...) How can I do this simple query with PostgreSQL?
Workaround
I want a well-behaved built-in function, not a workaround... But I need to get work done and show results, so I am using a regular expression...
SELECT id, TRIM(regexp_replace(tit, E'[\\n\\r\\t ]+', ' ', 'g')) AS tit
FROM (
SELECT
id, -- xpath returns array of 1, 2, or more strings
unnest( xpath('//p[@class="secTitle1"]', texto )::VARCHAR[] ) AS tit
FROM t
) AS tmp
So a trim() that only handles plain spaces is not friendly and not very useful (!).
EDIT after @mu's comment
I tried
SELECT id, TRIM(tit, E'\\n\\r\\t') AS tit
and
SELECT id, TRIM(tit, '\n\r\t') AS tit
Neither works.
THE QUESTION REMAINS:
Is there a TRIM option or PostgreSQL configuration that makes TRIM behave as required here?
Can I use normalize-space() in the XPath? How?
I am using PostgreSQL 9.1; do I need to upgrade?
It works in 9.2, and it works on 8.4 too.
postgres=# select trim(unnest(string_to_array(e'\t\tHello\n\t\tHello\n\t\tHello', e'\n')), e'\t');
btrim
-------
Hello
Hello
Hello
(3 rows)
Your regexp replaces every occurrence of \n, \r, \t, or space anywhere in the string. TRIM, by contrast, takes a set of characters and strips them only from the ends of the string, and in a plain '\n\r\t' literal those characters are the literal backslash, n, r, and t, not the control characters. It has a different meaning than you expect.
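For example, here is a small sketch with a made-up string (the same input both ways), mirroring the workaround above:
SELECT
  trim(E'\t\thello\n\tworld\n', E'\t\n')                                    AS ends_only,   -- strips tabs/newlines from the ends only; the inner newline and tab survive
  trim(regexp_replace(E'\t\thello\n\tworld\n', E'[\\n\\r\\t ]+', ' ', 'g')) AS normalized;  -- collapses every whitespace run, then trims: 'hello world'
TRIM can never touch the whitespace inside the string, which is why the regexp_replace workaround is needed.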

Count previous occurrences of a value split by date ranges

Here's a simple query we do for ad hoc requests from our Marketing department on the leads we received in the last 90 days.
SELECT ID
,FIRST_NAME
,LAST_NAME
,ADDRESS_1
,ADDRESS_2
,CITY
,STATE
,ZIP
,HOME_PHONE
,MOBILE_PHONE
,EMAIL_ADDRESS
,ROW_ADDED_DTM
FROM WEB_LEADS
WHERE ROW_ADDED_DTM BETWEEN @START AND @END
They are asking for more derived columns to be added that show the number of previous occurrences of ADDRESS_1 where the EMAIL_ADDRESS matches. But they want it for different date ranges.
So the derived columns would look like this:
,COUNT_ADDRESS_1_LAST_1_DAYS
,COUNT_ADDRESS_1_LAST_7_DAYS
,COUNT_ADDRESS_1_LAST_14_DAYS
etc.
I've manually filled these derived columns using update statements when there were just a few. The above query is really just a sample of a much larger query with many more columns. The actual request has blossomed into 6 date ranges for 13 columns. I'm asking if there's a better way than using 78 additional update statements.
I think you will have a hard time writing a query that includes all of these 78 metrics per e-mail address without actually creating a query that hard-codes the different choices. However you can generate such a pivot query with dynamic SQL, which will save you some keystrokes and will adjust dynamically as you add more columns to the table.
The result you want to end up with will look something like this (but of course you won't want to type it):
;WITH y AS
(
SELECT
EMAIL_ADDRESS,
/* aggregation portion */
[ADDRESS_1] = COUNT(DISTINCT [ADDRESS_1]),
[ADDRESS_2] = COUNT(DISTINCT [ADDRESS_2]),
... other columns
/* end agg portion */
FROM dbo.WEB_LEADS AS wl
WHERE ROW_ADDED_DTM >= /* one of 6 past dates */
GROUP BY wl.EMAIL_ADDRESS
)
SELECT EMAIL_ADDRESS,
/* pivot portion */
COUNT_ADDRESS_1_LAST_1_DAYS = *count address 1 from 1 day ago*,
COUNT_ADDRESS_1_LAST_7_DAYS = *count address 1 from 7 days ago*,
... other date ranges ...
COUNT_ADDRESS_2_LAST_1_DAYS = *count address 2 from 1 day ago*,
COUNT_ADDRESS_2_LAST_7_DAYS = *count address 2 from 7 days ago*,
... other date ranges ...
... repeat for 11 more columns ...
/* end pivot portion */
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;
This is a little involved, and it should all be run as one script, but I'm going to break it up into chunks to intersperse comments on how the above portions are populated without typing them. (And before long @Bluefeet will probably come along with a much better PIVOT alternative.) I'll enclose my interspersed comments in /* */ so that you can still copy the bulk of this answer into Management Studio and run it with the comments intact.
Code/comments to copy follows:
/*
First, let's build a table of dates that can be used both to derive labels for pivoting and to assist with aggregation. I've added the three ranges you've mentioned and guessed at a fourth, but hopefully it is clear how to add more:
*/
DECLARE @d DATE = SYSDATETIME();
CREATE TABLE #L(label NVARCHAR(15), d DATE);
INSERT #L(label, d) VALUES
(N'LAST_1_DAYS', DATEADD(DAY, -1, @d)),
(N'LAST_7_DAYS', DATEADD(DAY, -8, @d)),
(N'LAST_14_DAYS', DATEADD(DAY, -15, @d)),
(N'LAST_MONTH', DATEADD(MONTH, -1, @d));
/*
Next, let's build the portions of the query that are repeated per column name. First, the aggregation portion is just in the format col = COUNT(DISTINCT col). We're going to go to the catalog views to dynamically derive the list of column names (except ID, EMAIL_ADDRESS and ROW_ADDED_DTM) and stuff them into a #temp table for re-use.
*/
SELECT name INTO #N FROM sys.columns
WHERE [object_id] = OBJECT_ID(N'dbo.WEB_LEADS')
AND name NOT IN (N'ID', N'EMAIL_ADDRESS', N'ROW_ADDED_DTM');
DECLARE @agg NVARCHAR(MAX) = N'', @piv NVARCHAR(MAX) = N'';
SELECT @agg += ',
' + QUOTENAME(name) + ' = COUNT(DISTINCT '
+ QUOTENAME(name) + ')' FROM #N;
PRINT @agg;
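/*
(Illustration only, not part of the original script.) Assuming WEB_LEADS has columns such as
FIRST_NAME, LAST_NAME and ADDRESS_1, the PRINT @agg above emits a fragment shaped like:

,
[FIRST_NAME] = COUNT(DISTINCT [FIRST_NAME]),
[LAST_NAME] = COUNT(DISTINCT [LAST_NAME]),
[ADDRESS_1] = COUNT(DISTINCT [ADDRESS_1])

...one line per column, ready to be spliced into the dynamic statement below.
*/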
/*
Next we'll build the "pivot" portion (even though I am angling for the poor man's pivot - a bunch of CASE expressions). For each column name we need a conditional against each range, so we can accomplish this by cross joining the list of column names against our labels table. (And we'll use this exact technique again in the query later to make the /* one of 6 past dates */ portion work.)
*/
SELECT @piv += ',
COUNT_' + n.name + '_' + l.label
+ ' = MAX(CASE WHEN label = N''' + l.label
+ ''' THEN ' + QUOTENAME(n.name) + ' END)'
FROM #N as n CROSS JOIN #L AS l;
PRINT @piv;
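/*
(Illustration only, not part of the original script.) For the same sample columns and the labels
inserted into #L above, the PRINT @piv output is made up of conditional aggregates shaped like:

,
COUNT_ADDRESS_1_LAST_1_DAYS = MAX(CASE WHEN label = N'LAST_1_DAYS' THEN [ADDRESS_1] END),
COUNT_ADDRESS_1_LAST_7_DAYS = MAX(CASE WHEN label = N'LAST_7_DAYS' THEN [ADDRESS_1] END)

...one entry per column/label combination; this becomes the pivot portion of the final query.
*/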
/*
Now, with those two portions populated as we'd like them, we can build a dynamic SQL statement that fills out the rest:
*/
DECLARE @sql NVARCHAR(MAX) = N';WITH y AS
(
SELECT
EMAIL_ADDRESS, l.label' + @agg + '
FROM dbo.WEB_LEADS AS wl
CROSS JOIN #L AS l
WHERE wl.ROW_ADDED_DTM >= l.d
GROUP BY wl.EMAIL_ADDRESS, l.label
)
SELECT EMAIL_ADDRESS' + @piv + '
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;';
PRINT @sql;
EXEC sp_executesql @sql;
GO
DROP TABLE #N, #L;
/*
Now again, this is a pretty complex piece of code, and perhaps it can be made easier with PIVOT. But I think even @Bluefeet will write a version of PIVOT that uses dynamic SQL because there is just way too much to hard-code here IMHO.
*/