I am new to OpenEdge and I am initially trying to export a single table to an XML file.
My final aim is to export three tables to an XML file.
I have tried exporting in a simple delimited format and that works.
I have tried the following.
For txt:
OUTPUT TO c:\temp\file.txt.
FOR EACH cGrSIRVATNBR:
EXPORT DELIMITER ";" cGrSIRVATNBR.
END.
OUTPUT CLOSE.
For xml:
cGrSIRVATNBR:WRITE-XML("FILE","c:\temp\tt.xml", TRUE).
For XML, I think it is only supported from 10.2B onwards. That's why I am getting the error (Unable to understand after -- cGrSIRVATNBR:) when using WRITE-XML.
I would appreciate any help.
This works fine for me:
define temp-table ttCust no-undo like customer.
for each customer no-lock where custNum = 1:
create ttCust.
buffer-copy customer to ttCust.
end.
temp-table ttCust:write-xml( "file", "cust.xml", true ).
You cannot directly write a DB table to XML. You have to copy the records that you want into a temp-table first. If you need all three tables in a single XML file, you can put one temp-table per table into a ProDataSet and call WRITE-XML on the DATASET handle instead.
Related
I am trying to export a table from AnyLogic's database to an Excel file in a parameters variation using this code:
Database myFile = new Database(this, "A DB from Excel", "DataBaseExport.xlsx");
ModelDatabase modelDB = getEngine().getModelDatabase();
modelDB.exportToExternalDB("flowchart_stats_time_in_state_log", myFile.getConnection(), "Sheet 1", true, true);
I am then given an error.
It turns out that the name of the Excel sheet ("Sheet 1") is the culprit. After removing the "1" and creating matching column names in the Excel file, the data can be exported.
Not sure about your issue, but maybe try this instead:
Create a database object as below:
Call this code at runtime:
My high-level purpose is to export Mongo data to BigQuery so I can do data analysis.
I don't want to export as CSV because doing so requires me to specify the fields to export manually.
However, mongoexport to JSON produces typed data like
"registerTimestamp":{"$numberLong":"1429594506335"}
This $numberLong type really messes up my BigQuery import. The error message is like:
Errors:
query: Illegal field name: $numberLong
I can't find a way to export from Mongo without these type wrappers. How do I solve this export-to-BigQuery problem?
I think you can do something like this:
Create a script called command.js containing:
printjson( db.collection.find().toArray() )
Then execute the command as below:
mongo dbname command.js > output.json
This article is the source and will provide more details.
I ended up exporting CSV and taking it from there.
I think you could open the JSON file with vim and then strip the type wrappers with regular expressions like these:
For NumberLong(***), use
:%s/NumberLong(\(.*\))/\1/g
For ObjectId(***), use
:%s/ObjectId(\(.*\))/\1/g
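If you would rather not hand-edit the file, a small script can strip the wrappers from the mongoexport output directly. Below is a minimal Python sketch (the file names are placeholders, and it only handles the $numberLong and $oid wrappers mentioned above) that unwraps the extended-JSON types so BigQuery no longer sees field names starting with $:
import json

# Unwrap MongoDB extended-JSON wrappers such as {"$numberLong": "..."}
# and {"$oid": "..."} so BigQuery does not see illegal "$" field names.
def unwrap(value):
    if isinstance(value, dict):
        if len(value) == 1 and "$numberLong" in value:
            return int(value["$numberLong"])
        if len(value) == 1 and "$oid" in value:
            return value["$oid"]
        return {key: unwrap(val) for key, val in value.items()}
    if isinstance(value, list):
        return [unwrap(item) for item in value]
    return value

# mongoexport writes one JSON document per line ("JSON lines").
# "export.json" and "clean.json" are example file names.
with open("export.json") as src, open("clean.json", "w") as dst:
    for line in src:
        if line.strip():
            dst.write(json.dumps(unwrap(json.loads(line))) + "\n")
The cleaned file can then be loaded into BigQuery as newline-delimited JSON.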
My requirement is to read text files from the content repository in SAP ABAP. I used the SCMS_DOC_READ function module to read an image file and DP_CREATE_URL to create the image URL, but SCMS_DOC_READ is not working for text files.
Can anyone suggest some code, a function module, or a class?
There are two options based on your requirement:
Option 1: Use READ DATASET to read the file (this works for files on the application server):
DATA : FNAME(60) TYPE c VALUE 'myfile.txt',
       TEXT2(5)  TYPE c,
       LENG      TYPE i.
" ENCODING DEFAULT is required for TEXT MODE on Unicode systems
OPEN DATASET FNAME FOR INPUT IN TEXT MODE ENCODING DEFAULT.
DO.
  READ DATASET FNAME INTO TEXT2 LENGTH LENG.
  " leave the loop at end of file before writing stale data
  IF SY-SUBRC <> 0.
    EXIT.
  ENDIF.
  WRITE: / SY-SUBRC, TEXT2.
ENDDO.
CLOSE DATASET FNAME.
Option 2: Use the class CL_ABAP_CONV_IN_CE to read the file.
Refer to this tutorial page for more information on this class.
You can easily find the answer there: http://scn.sap.com/thread/525075
If you want the short answer, you should use this (note: I am not the author of this part):
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    FILENAME            = 'C:\temp\file.txt'   " path to the text file on the frontend PC
    FILETYPE            = 'ASC'
    HAS_FIELD_SEPARATOR = 'X'
  TABLES
    DATA_TAB            = IT.
Note: the internal table structure should be the same as the structure of the text file.
I am using 4GL in Progress OpenEdge 11.3 and I want to write an XML file from an XSD schema file.
Can I generate an XML file from an XML Schema (XSD) with Progress OpenEdge 4GL?
Thanks.
Well, you can use a method called READ-XMLSCHEMA (and its counterpart WRITE-XMLSCHEMA).
These can be applied to both TEMP-TABLES and ProDataSets (depending on the complexity of the XML).
The ProDataSet documentation, found here, contains quite a lot of information about this. There's also a book called Working with XML that can help you.
This is the basic syntax of READ-XMLSCHEMA (when working with datasets):
READ-XMLSCHEMA ( source-type, { file | memptr | handle | longchar },
override-default-mapping [, field-type-mapping [, verify-schema-mode ] ] ).
A basic example would be:
DATASET ds:READ-XMLSCHEMA("file", "c:\temp\file.xsd", FALSE).
However, since you need to work with the actual XML, you will also have to handle data. That data is handled in the TEMP-TABLES contained within the DATASET. It might be easier to start by creating a static ProDataSet that corresponds to the schema and then handle its data whatever way you want.
I'm trying to import a .csv file into phpMyAdmin where several fields are purposefully left blank. I need these fields to register as NULL values and not just be left as blank strings.
I know in the field properties you can select to allow "null" vs. "not null" for each field, but it still doesn't change the cell to a NULL value while importing. After the import I can manually go and check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to NULL values on import?
I've been experiencing similar issues.
If you download a phpMyAdmin CSV file with NULL values, you'll notice that NULL doesn't get encapsulated in quotes. So you'll have lines like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit a CSV file in something like Open Office Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll be looking at doing a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
Ancient question, but in case another MySQL noob like myself comes across it.
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of the creation of the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select "custom" export method, you will see replace NULL with: and the default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem and jmbertucci's answer worked great. I did run into one additional problem. In the case of a row of data like this:
"hello","world",,,,,,
which has multiple null values in a row, doing a search-and-replace with [,, = ,NULL,] as jmbertucci suggested won't work as you intend on the first pass. Instead you'll end up with:
"hello","world",NULL,,NULL,,NULL
You should continue to do the search-and-replace until you end up with 0 occurrences replaced.
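If the repeated passes get tedious, a small script can do the whole conversion in one pass. Here is a minimal Python sketch (the file names and the comma delimiter are only assumptions; match the delimiter to your export, e.g. ";" for the dump shown earlier) that rewrites every blank field as unquoted NULL, which is the form phpMyAdmin itself uses for NULL values, and keeps the other fields quoted:
import csv

# Rewrite blank CSV fields as unquoted NULL in a single pass, so
# consecutive blanks are handled correctly. Non-blank fields stay quoted.
with open("data.csv", newline="") as src, open("data_null.csv", "w") as dst:
    for row in csv.reader(src, delimiter=","):
        fields = []
        for field in row:
            if field == "":
                fields.append("NULL")  # unquoted, so it imports as SQL NULL
            else:
                fields.append('"' + field.replace('"', '""') + '"')
        dst.write(",".join(fields) + "\n")
The resulting data_null.csv can then be imported through phpMyAdmin as usual.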