VFP 5 COPY TO command - Unicode

I need to copy my cursor's data into a text file encoded as UTF-8.
My current command is:
COPY TO (FILE NAME) DELIMITED WITH CHARACTER ";"
By default the text file is saved as ANSI; how can I make it save as UTF-8?
EDIT: I am using VFP 5.

I'm not sure, but try using StrConv():
STRCONV(FILETOSTR(FILE NAME), 10)

1. Convert all Character and Memo fields to UTF-8:
update table1 set field1 = STRCONV(field1, 9)
This converts all non-ANSI characters to UTF-8 encoding.
2. Export the result with your COPY TO command.
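Putting both steps together, a minimal sketch could look like this (table, field and file names are placeholders, and STRCONV() may not be available in a version as old as VFP 5):
* Hedged sketch: convert the character data in place, then export it
USE table1
UPDATE table1 SET field1 = STRCONV(field1, 9)
COPY TO ("utf8_output.txt") DELIMITED WITH CHARACTER ";"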

To expand on Oleg's suggestion, you can cycle through all fields in the given table by...
USE C:\SomePath\YourTable.dbf
* Get a list of all fields in the table's structure
lnF = AFIELDS( laF, "YourTable" )
lcUpdFlds = ""
* Prepare a variable that holds the comma placed between multiple fields,
* but the first time through it holds the "SET" keyword instead.
lcNextFld = "set "
FOR lnI = 1 TO lnF
    * Is it a character-based field?
    IF laF[ lnI, 2] = "C" OR laF[ lnI, 2] = "M"
        lcFld = laF[ lnI, 1]
        lcUpdFlds = lcUpdFlds + lcNextFld + lcFld + " = STRCONV( " + lcFld + ", 9) "
        * Any subsequent character-based fields will have a comma
        * added between them.
        lcNextFld = ", "
    ENDIF
ENDFOR
UPDATE YourTable &lcUpdFlds
Modified to do ONE update command that hits ALL columns, instead of running multiple updates... especially on a LARGER table.
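With, say, two character fields NAME and NOTES (hypothetical names), lcUpdFlds makes that last line expand to a single statement along these lines:
UPDATE YourTable set NAME = STRCONV( NAME, 9) , NOTES = STRCONV( NOTES, 9)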

How to create comma separated values in Progress OpenEdge

Newbie question here.
I need to create a list, but my problem is: what is the best way to avoid starting with a comma?
e.g.:
output to /usr2/appsrv/test/Test.txt.
def var dTextList as char.
for each emp no-lock:
    dTextList = dTextList + ", " + emp.Name.
end.
put unformatted dTextList skip.
output close.
Then my end result is:
, jack, joe, brad
What is the best way to get rid of the leading comma?
Thank you.
Here's one way:
ASSIGN
    dTextList = dTextList + ", " WHEN dTextList > ""
    dTextList = dTextList + emp.Name
    .
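In the context of the original loop, a minimal sketch would be (variable and file names follow the question):
def var dTextList as char no-undo.

output to /usr2/appsrv/test/Test.txt.
for each emp no-lock:
    assign
        dTextList = dTextList + ", " when dTextList > ""
        dTextList = dTextList + emp.Name
        .
end.
put unformatted dTextList skip.
output close.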
This does it without any conditional logic:
for each emp no-lock:
    csv = csv + emp.Name + ",".
end.
csv = right-trim( csv, "," ).
Or you can do this:
for each emp no-lock:
    csv = substitute( "&1,&2", csv, emp.Name ).
end.
csv = trim( csv, "," ).
Which also has the advantage of playing nicely with unknown values (the ? value...)
TRIM() trims both sides, LEFT-TRIM() only does leading characters and RIGHT-TRIM() gets trailing characters.
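A quick illustration of the difference (the values are made up):
message
    trim(",a,b,", ",")        /* a,b  */
    left-trim(",a,b,", ",")   /* a,b, */
    right-trim(",a,b,", ",")  /* ,a,b */
    view-as alert-box.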
My vanilla list:
output to /usr2/appsrv/test/Test.txt.
def var dTextList as char no-undo.
for each emp no-lock:
    dTextList = substitute( "&1, &2", dTextList, emp.Name ).
end.
put unformatted substring( dTextList, 3 ) skip.
output close.
substitute() prevents unknowns from wiping out the list
keep list delimiter checking outside of the loop
generally leave the list delimiter prefixed unless the prefix really needs to go, as in this case when outputting it
If you use delimited lists often, you may want to consider creating a list class to move this irrelevant noise out of your code, so that you can simply add an item to a list and export the list without tinkering with these details every time.
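For example, a minimal sketch of such a class (assuming OO ABL is available; the class and method names here are made up):
/* ListBuilder.cls - hypothetical helper for building delimited lists */
class ListBuilder:

    define private variable cList  as character no-undo.
    define private variable cDelim as character no-undo initial ",".

    method public void Add (input pcItem as character):
        /* only prepend the delimiter once the list already has content */
        cList = cList + (if cList = "" then "" else cDelim) + pcItem.
    end method.

    method public character GetList ():
        return cList.
    end method.

end class.
The loop then reduces to oList:Add(emp.Name), and the export to put unformatted oList:GetList() skip.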
I usually do
ASSIGN dTextList = dTextList + (if dTextList = '' then '' else ',') + emp.name.
I came up with this (well, actually my colleague did):
dTextList = substitute ("&1&3&2", dTextList, emp.Name, min(dTextList,",")).
But it is cool to see the various ways to do this. Thank you for all the responses.
This results in no leading comma (delimiter) and no fiddling with trim/substring/etc.
def var cDelim as char.
def var dTextList as char.
cDelim = ''.
for each emp no-lock:
    dTextList = dTextList + cDelim + emp.Name.
    cDelim = ','.
end.

Removing unwanted commas from a CSV

I'm writing a program in Progress, OpenEdge, ABL, and whatever else it's known as.
I have a CSV file that is delimited by commas. However, there is a "gift message" field, and users enter messages with commas in them, so now my program sees additional fields because of those stray commas.
The CSV fields are not in double quotes, so I CANNOT just use my main method, which is:
/** this next block of code will remove all unwanted commas from the data. **/
if v-line-cnt > 1 then /** we won't run this against the headers, otherwise they will get deleted **/
    assign
        v-data = replace(v-data,'","',"\t")  /** Here is a special technique to replace the comma delim with a tab **/
        v-data = replace(v-data,','," ")     /** now that we removed the comma delim above, we can remove all nuisance commas **/
        v-data = replace(v-data,"\t",'","'). /** all nuisance commas are gone, we turn the tabs back to commas. **/
Any advice?
Edit:
From Progress I can call Linux commands, so I should be able to execute C++/PHP/shell scripts etc. from my Progress program. I look forward to advice; until then I shall look into using external scripts.
You are not providing quite enough data for a perfect answer, but given what you say I think the IMPORT statement should handle this automatically.
In my example here, commaimport.csv is a comma-separated CSV file with quotes around text fields. Integers, logical variables etc. have no quotes. The last field contains a comma in one line:
commaimport.csv
=======================
"Id1", 123, NO, "This is a message"
"Id2", 124, YES, "This is a another message, with a comma"
"Id3", 323, NO, "This is a another message without a comma"
To import this file I define a temp-table matching the file layout and use the IMPORT statement with comma as delimiter:
DEFINE TEMP-TABLE ttImport NO-UNDO
    FIELD field1 AS CHARACTER FORMAT "xxx"
    FIELD field2 AS INTEGER FORMAT "zz9"
    FIELD field3 AS LOGICAL
    FIELD field4 AS CHARACTER FORMAT "x(50)".

INPUT FROM VALUE("c:\temp\commaimport.csv").
REPEAT:
    CREATE ttImport.
    IMPORT DELIMITER "," ttImport.
END.
INPUT CLOSE.

FOR EACH ttImport:
    DISPLAY ttImport.
END.
You don't have to import into a temp-table. You could import into variables instead.
DEFINE VARIABLE c AS CHARACTER NO-UNDO FORMAT "xxx".
DEFINE VARIABLE i AS INTEGER NO-UNDO FORMAT "zz9".
DEFINE VARIABLE l AS LOGICAL NO-UNDO.
DEFINE VARIABLE d AS CHARACTER NO-UNDO FORMAT "x(50)".
INPUT FROM VALUE("c:\temp\commaimport.csv").
REPEAT:
    IMPORT DELIMITER "," c i l d.
    DISP c i l d.
END.
INPUT CLOSE.
This will render basically the same output.
You don't show what your data file looks like, but if the problematic field is the last one and there are no quotes, then your best bet is probably to read each line with IMPORT UNFORMATTED and then split the line into fields using ENTRY(). That way you can treat everything after the nth comma as a single field no matter how many commas the line has.
For example, say your input file has three columns like this:
boris,14.23,12 the avenue
mark,32.10,flat 1, the grange
percy,1.00,Bleak house, Dartmouth
... so that column three is an address which might contain a comma and is not enclosed in quotes so that IMPORT DELIMITER can't help you.
Something like this would work in that case:
/* ...skipping a lot of definitions here ... */
input from "datafile.csv".
repeat:
    import unformatted v-line.
    create tt-thing.
    assign
        tt-thing.name    = entry(1, v-line, ',')
        tt-thing.price   = entry(2, v-line, ',')
        tt-thing.address = entry(3, v-line, ',').
    do v-i = 4 to num-entries(v-line, ','):
        tt-thing.address = tt-thing.address
                           + ','
                           + entry(v-i, v-line, ',').
    end.
end.
input close.

Can't insert newline in MS Word form field using PowerBuilder OLE

I have an application written in Powerbuilder 11.5 that automatically fills in form fields of a Word document (MS Word 2003).
The Word document is protected so only the form fields can be altered.
In the code below you can see I use char(10) + char(13) to insert a newline, however in the saved document all I see is 2 little squares where the characters should be.
I've also tried using "~r~n", but this also just prints 2 squares.
When I fill in the form manually I can insert newlines as much as I want.
Is there anything else I can try? Or does anybody know of a different way to fill in word forms using Powerbuilder?
//1 Shipper
ls_value = ids_form_info.object.shipper_name[1]
if not isnull(ids_form_info.object.shipper_address2[1]) then
    ls_value += char(10) + char(13) + ids_form_info.object.shipper_address2[1]
end if
if not isnull(ids_form_info.object.shipper_address4[1]) then
    ls_value += char(10) + char(13) + ids_form_info.object.shipper_address4[1]
end if
if not isnull(ids_form_info.object.shipper_country[1]) then
    ls_value += char(10) + char(13) + ids_form_info.object.shipper_country[1]
end if
if lnv_word.f_inserttextatbookmark( 'shipper', ls_value ) = -1 then return -1
The f_inserttextatbookmark function is as follows:
public function integer f_inserttextatbookmark (string as_bookmark, string as_text, string as_fontname, integer ai_fontsize);
    if isnull(as_text) then return 0
    iole_word = create OLEOBJECT
    iole_word.connectToNewobject( "word.application" )
    iole_word.Documents.open( <string to word doc> )
    iole_word.ActiveDocument.FormFields.Item(as_bookmark).Result = as_text
    return 1
end function
Part of your problem is that carriage return is char(13) and line feed is char(10), so to make a CRLF on Windows and DOS you usually need char(13) + char(10). If these are out of order, many programs will balk. However, "~r~n" should have produced that for you.
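If you follow that advice, the concatenation in the posted code would become, for example (only the order of the two characters changes; everything else is from the question):
ls_value += char(13) + char(10) + ids_form_info.object.shipper_address2[1]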
I have success with (and I'm converting for brevity so it might only be close to correct):
lole_Word.ConnectToNewObject ("Word.Application")
...
lole_Word.Selection.TypeText (ls_StringForWord)
Maybe you can try other Word OLE commands to see if it's something to do with the specific command. (After the definition of the line break, I'm grasping at straws.)
Good luck,
Terry
Sounds like it may be a Unicode/ANSI character conversion thing.
For what it's worth, you could try this:
http://www.rgagnon.com/pbdetails/pb-0263.html
Hope it helps.
I'm not using form fields, but I am able to insert newlines into a Word document from PowerBuilder using TypeText and "~n". Maybe you just need "~n".
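A rough sketch of that route (the document path and variable names are just placeholders):
oleobject lole_word
lole_word = create oleobject
lole_word.ConnectToNewObject("Word.Application")
lole_word.Documents.Open("C:\temp\form.doc")
lole_word.Selection.TypeText("first line" + "~n" + "second line")
lole_word.ActiveDocument.Save()
lole_word.Quit()
lole_word.DisconnectObject()
destroy lole_word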

RowFilter including [ character in search string

I fill a DataSet and allow the user to enter a search string. Instead of hitting the database again, I set the RowFilter to display the selected data. When the user enters a square bracket ("[") I get the error "Error in Like Operator". I know there is a list of characters that need to be prefixed with "\" when they are used in a field name, but how do I prevent RowFilter from interpreting "[" as the beginning of a column name?
Note: I am using a dataset from SQL Server.
So, you are trying to filter using the LIKE clause, where you want the "[" or "]" characters to be interpreted as text to be searched?
From the Visual Studio help on the DataColumn.Expression property:
"If a bracket is in the clause, the bracket characters should be escaped in brackets (for example [[] or []])."
So, you could use code like this:
DataTable dt = new DataTable("t1");
dt.Columns.Add("ID", typeof(int));
dt.Columns.Add("Description", typeof(string));
dt.Rows.Add(new object[] { 1, "pie"});
dt.Rows.Add(new object[] { 2, "cake [mud]" });
string part = "[mud]";
part = part.Replace("[", "\x01");
part = part.Replace("]", "[]]");
part = part.Replace("\x01", "[[]");
string filter = "Description LIKE '*" + part + "*'";
DataView dv = new DataView(dt, filter, null, DataViewRowState.CurrentRows);
MessageBox.Show("Num Rows selected : " + dv.Count.ToString());
Note that a HACK is used. The character \x01 (which I'm assuming won't be in the "part" variable initially) is used to temporarily replace left brackets. After the right brackets are escaped, the temporary "\x01" characters are replaced with the required escape sequence for the left bracket.

EDIFACT macro (readable message structure)

I'm working in the EDI area and would like some help with an EDIFACT macro to make the EDIFACT files more readable.
The message looks like this:
data'data'data'data'
I would like to have the macro converting the structure to:
data'
data'
data'
data'
Please let me know how to do this.
Thanks in advance!
BR
Jonas
If you merely want to view the files in a more readable format, try downloading the Softshare EDI Notepad. It's a fairly good tool just for that purpose; it supports the X12, EDIFACT and TRADACOMS standards, and it's free.
Replacing in VIM (assuming that the standard EDIFACT separators/escape characters for UNOA character set are in use):
:s/\([^?]'\)\(.\)/\1\r\2/g
Breaking down the regex:
\([^?]'\) - search for ' which occurs after any character except ? (the standard escape character) and capture these two characters as the first atom. These are the last two characters of each segment.
\(.\) - Capture any single character following the segment terminator (i.e. don't match if the segment terminator is already at the end of a line)
Then replace all matches on this line with a new line between the segment terminator and the beginning of the next segment.
Otherwise you could end up with this:
...
FTX+AAR+++FORWARDING?: Freight under Vendor?'
s care.'
NAD+BY+9312345123452'
CTA+PD+0001:Terence Trent D?'
Arby'
...
instead of this:
...
FTX+AAR+++FORWARDING?: Freight under Vendor?'s care.'
NAD+BY+9312345123452'
CTA+PD+0001:Terence Trent D?'Arby'
...
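To convert the whole file in one pass rather than just the current line, the same substitution can be run against every line with a range:
:%s/\([^?]'\)\(.\)/\1\r\2/g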
Is this what you are looking for?
Option Explicit
Dim stmOutput: Set stmOutput = CreateObject("ADODB.Stream")
stmOutput.Open
stmOutput.Type = 2 'adTypeText
stmOutput.Charset = "us-ascii"
Dim stm: Set stm = CreateObject("ADODB.Stream")
stm.Type = 1 'adTypeBinary
stm.Open
stm.LoadFromFile "EDIFACT.txt"
stm.Position = 0
stm.Type = 2 'adTypeText
stm.Charset = "us-ascii"
Dim c: c = ""
Do Until stm.EOS
    c = stm.ReadText(1)
    Select Case c
        Case Chr(39)
            stmOutput.WriteText c & vbCrLf
        Case Else
            stmOutput.WriteText c
    End Select
Loop
stm.Close
Set stm = Nothing
stmOutput.SaveToFile "EDIFACT.with-CRLF.txt"
stmOutput.Close
Set stmOutput = Nothing
WScript.Echo "Done."