how to replace all the email id records with progress 4gl - progress-4gl

The problem here is that I want to update the email IDs, for example from user#abc.com to user#xyz.com.
I have selected all the email IDs like this:
for each table where
    table.email matches "*" + "#abc.com" + "*" no-lock:
    display table.email.
end.
I can't use the REPLACE function, since the email IDs are of different lengths.
Is it possible to change the email IDs like this? Please share with me.

Replacing exactly "#abc.com" with "#xyz.com" is done like this:
/* You need to change NO-LOCK to EXCLUSIVE-LOCK if you want to update or change! */
FOR EACH table WHERE table.email MATCHES "*" + "#abc.com" + "*" EXCLUSIVE-LOCK:
    ASSIGN
        table.email = REPLACE(table.email, "#abc.com", "#xyz.com").
END.
But maybe you need to elaborate on your question - or is this all you want to do?
About performance
This query won't be very fast. MATCHES does not utilize any indexes, so the entire table will be scanned.
On later versions of Progress you can add a TABLE-SCAN option. This will increase speed, but not by a lot. If you do this you will have to remove the MATCHES expression from the query and do something like this:
FOR EACH table EXCLUSIVE-LOCK TABLE-SCAN:
    IF table.email MATCHES "*#abc.com*" THEN DO:
        /* etcetera */
    END.
END.
If this is a one-time thing to fix e-mail addresses, perhaps it doesn't need to be that fast? If it does, I suggest you add a logical field to the table (table.fixed) and create an index with that field in it. Then you can go through all the not-yet-fixed records very quickly.
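A minimal sketch of that idea, assuming a LOGICAL field table.fixed has been added to the table and put into an index (the field and index names are just illustrative):

/* Only touch records that have not been fixed yet - the WHERE clause can use the new index. */
FOR EACH table WHERE table.fixed = FALSE EXCLUSIVE-LOCK:
    ASSIGN
        table.email = REPLACE(table.email, "#abc.com", "#xyz.com")
        table.fixed = TRUE.
END.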

I tried it myself; I wrote it like this, and it worked:
def var cmail1 as char.
def var cmail2 as char.

assign
    cmail1 = "#abc.com"
    cmail2 = "#xyz.com".

for each table where
    table.email matches "*" + cmail1 + "*" exclusive-lock:
    assign
        table.email = replace(table.email, cmail1, cmail2).
end.
But the performance is low. If you have an alternative to this, please post.

Related

OpenEdge Progress 4GL Query returns (MISSING) after % sign

DEFINE TEMP-TABLE tt_pay_terms NO-UNDO
    FIELD pt_terms_code  LIKE payment_terms.terms_code
    FIELD pt_description LIKE payment_terms.description.

DEFINE VARIABLE htt AS HANDLE NO-UNDO.
htt = TEMP-TABLE tt_pay_terms:HANDLE.

FOR EACH platte.payment_terms
    WHERE (active = true
      AND system_id = "000000")
    NO-LOCK:
    CREATE tt_pay_terms.
    ASSIGN
        pt_terms_code  = payment_terms.terms_code
        pt_description = payment_terms.description.
END.

htt:WRITE-JSON("FILE", "/dev/stdout", FALSE).
I have written this query and it returns data like this
[pt_terms_code] => 0.4%!N(MISSING)ET46
[pt_description] => 0.4%! (MISSING)DAYS NET 46
While I believe (from using a SQL query) that the data should be
0.4%45NET46
0.4% 45 DAYS NET 46
I'm making an assumption that the % is probably some special character (as I've run into similar issues in the past). I've tried pulling all the data from the table (i.e., not creating a temp-table and populating it with only the two fields I want), and I get the same result.
Any suggestions around this issue?
I'm still very new to 4gl, so the above query might be terribly wrong. All comments and criticisms are welcome.
I suspect that if you try this:
FOR EACH platte.payment_terms NO-LOCK
    WHERE (active = true AND system_id = "000000"):
    DISPLAY
        payment_terms.terms_code
        payment_terms.description.
END.
You will see what the query actually returns. (WRITE-JSON is adding a layer after the query.) You will likely discover that your data contains something unexpected.
To my eye the "%" looks more like formatting -- the terms are likely 0.4%.
You then seem to have some issues in the contents of the description field. My guess is that there was a code page mismatch when the user entered the data and that there is gibberish in the field as a result.
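If you want to see exactly what is stored, a rough way (just a sketch, reusing the field and filter names from your question) is to dump every character of the description together with its code value - gibberish from a code page mismatch shows up immediately:

/* Show each character of the description and its code value. */
DEFINE VARIABLE i AS INTEGER NO-UNDO.

FOR EACH platte.payment_terms NO-LOCK
    WHERE active = true AND system_id = "000000":
    DO i = 1 TO LENGTH(payment_terms.description):
        MESSAGE i
                QUOTER(SUBSTRING(payment_terms.description, i, 1))
                ASC(SUBSTRING(payment_terms.description, i, 1)).
    END.
END.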

IN selector with asterisk * not working in report selection

What is the proper way to search a table for every record that starts in a similar way? I have tried:
"THESE. WORDS" IN {example_one.job_title} and {example_two.status} = "A"
But I need all combinations, including "THESE. WORDS*". Adding the asterisk doesn't work, I guess because of how IN works.
To summarize the information in the comments:
- To limit job_title to the list of values in "THESE. WORDS", you need your field on the left-hand side and the values on the right.
- For a wildcard match you may want {example_one.job_title} LIKE 'keyword*' (see the ABL sketch below).
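As an aside, if you ever write this filter in plain ABL code rather than in the report selector, a "starts with" condition is normally expressed with BEGINS (which can use an index) or MATCHES. A rough sketch, assuming example_one is an ordinary table with a job_title field:

/* Hypothetical ABL equivalent of the "starts with" selection. */
FOR EACH example_one NO-LOCK
    WHERE example_one.job_title BEGINS "THESE. WORDS":
    DISPLAY example_one.job_title.
END.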
If you found this information helpful, you can upvote and/or accept the answer.

progress 4gl: I want to avoid error messages while running the program

DEFINE TEMP-TABLE ttservice NO-UNDO
    FIELD ad-num AS CHARACTER
    INDEX ttprimary IS UNIQUE ad-num.

ASSIGN ttservice.ad-num = vehicles.ad-num NO-ERROR.
In this, how do I avoid error messages when I am adding duplicate records?
The situation is:
When I try to add a duplicate record to the temp-table it is not accepted, which is OK, but an error message is displayed while the program runs. I want to suppress that error message and avoid adding the duplicate records.
You can test for the existence of a duplicate key before you try to create it.
(Filling in the blanks.)
DEFINE TEMP-TABLE ttservice NO-UNDO
    FIELD ad-num AS CHARACTER   /* you have a "num" field defined as character? that's misleading */
    INDEX ttprimary IS UNIQUE ad-num.

for each vehicles no-lock:   /* perhaps ad-num is non-unique in the vehicles table? */

    find ttservice where ttservice.ad-num = vehicles.ad-num no-error.

    if available ttservice then
    do:
        message "oops!".   /* or whatever it is you want when a duplicate occurs... */
    end.
    else
    do:
        create ttservice.
        ASSIGN ttservice.ad-num = vehicles.ad-num.
    end.

end.
Here's another way to get unique ad-num values from the vehicles table:
DEFINE TEMP-TABLE ttservice NO-UNDO
    FIELD ad-num AS CHARACTER
    INDEX ttprimary IS UNIQUE ad-num.

FOR EACH vehicles NO-LOCK
    BREAK BY vehicles.ad-num:
    IF FIRST-OF(vehicles.ad-num) THEN
    DO:
        CREATE ttservice.
        ASSIGN ttservice.ad-num = vehicles.ad-num.
    END.
END.
Two valuable answers have already been added by great professionals, but I would like to add mine with minor changes.
def TEMP-TABLE ttservice NO-UNDO
    FIELD iservid AS INT
    INDEX tt-primary IS UNIQUE iservid.

VEHICLELOOP:
for each vehicles use-index <index-name> NO-LOCK:

    IF CAN-FIND(first ttservice
                where ttservice.iservid = vehicles.iservid) THEN
        NEXT VEHICLELOOP.
    ELSE DO:
        create ttservice.
        ASSIGN ttservice.iservid = vehicles.iservid.
    END.

END. /* VEHICLELOOP */
So I read the answers and think they're sufficient to fix your particular issue. But here's the general way of thinking you should assume when coding for Progress OpenEdge:
Adding NO-ERROR to statements (when they allow it) will "suppress errors", but sometimes errors are inevitable, and suppressing them does nothing for the stability of your application. Always think about handling them (and displaying errors is a part of that).
Whether you check for the existence of a record prior to creation, or simply set up your query so it does not iterate over undesired (repeated) records, is up to you. I advise you to check the performance of the different approaches (especially when doing a FOR EACH over a large table) to see which one is more satisfactory.
So here's my personal suggestion:
for each vehicles no-lock:
    if can-find(first ttService where ttService.ad-num = vehicles.ad-num) then
        next.

    create ttService.
    assign ttService.ad-num = vehicles.ad-num no-error.

    if error-status:error then
        message "Something went horribly wrong:" + error-status:get-message(1)
            view-as alert-box error.
end.
In my example above, the error will only be shown if the ASSIGN actually fails. That is not likely to happen; I just wanted to show how the usage of NO-ERROR (and handling it) works.
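As an aside on comparing the performance of the different approaches mentioned above, a quick and rough way to time a variant is ETIME. This is only an illustrative harness (it assumes the ttService temp-table with the ad-num field defined earlier), not part of the answer itself:

/* Reset the millisecond counter, run the variant being measured, then report. */
DEFINE VARIABLE iStart AS INTEGER NO-UNDO.
iStart = ETIME(TRUE).   /* resets ETIME to 0 */

FOR EACH vehicles NO-LOCK:
    IF CAN-FIND(FIRST ttService WHERE ttService.ad-num = vehicles.ad-num) THEN
        NEXT.
    CREATE ttService.
    ASSIGN ttService.ad-num = vehicles.ad-num.
END.

MESSAGE "Elapsed:" ETIME "ms" VIEW-AS ALERT-BOX INFORMATION.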
Anyway, hope it helps!

How to query same table twice in Progress OpenEdge Procedure?

In this example, I have two tables: Order Header (oe-hdr) and Location (location). The order header table contains two fields (sale-location-key and ship-location-key) which have an associated location name in the location table. If I were to use SQL to get my results, I would do something like this:
SELECT oe-hdr.order-num, oe-hdr.order-date, saleloc.location-name, shiploc.location-name
FROM oe-hdr,
     (SELECT oe-hdr.order-num, location.location-name
        FROM oe-hdr, location
       WHERE oe-hdr.sale-location-key = location.location-key) AS saleloc,
     (SELECT oe-hdr.order-num, location.location-name
        FROM oe-hdr, location
       WHERE oe-hdr.ship-location-key = location.location-key) AS shiploc
WHERE oe-hdr.order-num = saleloc.order-num
  AND oe-hdr.order-num = shiploc.order-num
Does anyone know how to replicate this in a Progress procedure?
Define two buffers for "location" and then do a for-each with a link to the buffers:
DEFINE BUFFER saleloc FOR location.
DEFINE BUFFER shiploc FOR location.

FOR EACH oe-hdr NO-LOCK,
    EACH saleloc
        WHERE saleloc.location-key = oe-hdr.sale-location-key NO-LOCK,
    EACH shiploc
        WHERE shiploc.location-key = oe-hdr.ship-location-key NO-LOCK:

    DISPLAY
        oe-hdr.order-num
        oe-hdr.order-date
        saleloc.location-name
        shiploc.location-name
        WITH DOWN.
END.
One note - if the sale or ship-to location doesn't exist in the location table, then the entire record will not be displayed. You'll need a different approach if you need that functionality - it will involve moving the "linking" to a pair of FIND statements inside the FOR EACH block, as sketched below.
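A minimal sketch of that FIND-based variant, reusing the buffer and field names from the answer above (the empty-string fallback is just for illustration):

DEFINE BUFFER saleloc FOR location.
DEFINE BUFFER shiploc FOR location.
DEFINE VARIABLE cSaleName AS CHARACTER NO-UNDO FORMAT "x(30)" LABEL "Sale location".
DEFINE VARIABLE cShipName AS CHARACTER NO-UNDO FORMAT "x(30)" LABEL "Ship location".

FOR EACH oe-hdr NO-LOCK:
    /* Outer-join style lookups: the order is shown even when a location is missing. */
    FIND FIRST saleloc WHERE saleloc.location-key = oe-hdr.sale-location-key NO-LOCK NO-ERROR.
    FIND FIRST shiploc WHERE shiploc.location-key = oe-hdr.ship-location-key NO-LOCK NO-ERROR.

    ASSIGN
        cSaleName = (IF AVAILABLE saleloc THEN saleloc.location-name ELSE "")
        cShipName = (IF AVAILABLE shiploc THEN shiploc.location-name ELSE "").

    DISPLAY
        oe-hdr.order-num
        oe-hdr.order-date
        cSaleName
        cShipName
        WITH DOWN.
END.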
To overcome Tim's point about the missing addresses, you could have a function or method (if using OO code) that returns the location-name and use that in the DISPLAY. It would allow for better error handling on that front. Not sure about the performance impact, though.
Just a thought.

whoosh doesn't search for short words like "C#"

I am using Whoosh to index over 200,000 books, but I have encountered some problems with it.
The Whoosh query parser returns NullQuery for words like "C#" and "C++" that contain meta-characters, and also for some other short words. These words are used in the title and body of some documents, so I am not using the keyword type for them. I guess the problem is in the analysis or query-parsing phase of searching or indexing, but I can't touch my data blindly. Can anyone help me correct this issue? Thanks.
I fixed the problem by creating a StandardAnalyzer with a regex pattern that meets my requirements. Here is the regex pattern:
'\w+[#+.\w]*'
This makes tokenizing of the fields work successfully, and searching goes well too.
But when I use queries like "some query++*" or "some##*", the parsed query ends up as a single Every query - just the '*'. I also found that this is not related to my analyzer; it is Whoosh's default behavior. So here is my new question: is this behavior correct, or is it a bug?
Note: removing the WildcardPlugin from the query parser solves this problem, but I also need the WildcardPlugin.
Now I am using the following code:
from whoosh import analysis
from whoosh.util import rcompile

# for matching words like: '.NET', 'C++' and 'C#'
word_pattern = rcompile(r'(\.|[\w]+)(\.?\w+|#|\+\+)*')

# I don't need words shorter than two characters, so I don't change the minsize default
analyzer = analysis.StandardAnalyzer(expression=word_pattern)
... now in my schema:
...
title = fields.TEXT(analyzer=analyzer),
...
This solves my first problem, yes. But the main problem is in searching: I don't want to let users search using the Every query or *, yet when I parse queries like C++* I end up with an Every(*) query. I know there is some problem, but I can't figure out what it is.
I had the same issue and found out that StandardAnalyzer() uses minsize=2 by default. So in your schema, you have to tell it otherwise.
import whoosh.fields
import whoosh.analysis

schema = whoosh.fields.Schema(
    name = whoosh.fields.TEXT(stored=True, analyzer=whoosh.analysis.StandardAnalyzer(minsize=1)),
    # ...
)