While exporting a date field to a flat file using JCL, I am having two issues. The query is:
SELECT CHAR(orderdate)
FROM orders
1. The date field comes out prefixed with dots: '..12-10-2001'
2. A null date is exported as '............'
I tried COALESCE(CAST(orderdate AS VARCHAR(10)), 'NULL') and I am getting the same results.
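For reference, a syntactically complete version of that attempt, as a sketch assuming DB2 (the 'NULL' text is only a placeholder literal, and the orderdate_txt alias is hypothetical):

-- COALESCE substitutes the placeholder string when orderdate is NULL
SELECT COALESCE(CAST(orderdate AS VARCHAR(10)), 'NULL') AS orderdate_txt
FROM orders

If the leading dots persist, they may be the two-byte length prefix that some unload utilities write in front of VARCHAR fields; that would be a property of the unload step rather than of the SQL itself.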
So I have a source file containing 10 columns and my target contains 11 columns, where the extra column is of type date. The source file has a name like 'cust20201212', and I wish to extract only the date part and load it into the date-type column of my target table. Is it possible to achieve this using Talend? I just want to extract the date as 2020-12-12 (or 2020-12-01, and so on) and store it in the date column of an Oracle table.
Can we use tExtractRegexFields in this scenario?
First you need to get the filename into the flow, or into a variable. Do you have it in a context variable, or does it come from a tFileList?
If you have a tFileList in your job, you can access it with the global variable:
((String)globalMap.get("tFileList_1_CURRENT_FILE"))
Once you have this filename, you have to parse it to extract the date:
TalendDate.parseDate("yyyyMMdd", StringHandling.LEFT(StringHandling.RIGHT(*PLACE_HERE_FILENAME*, 12), 8))
StringHandling.RIGHT gets the last part of your filename (8 date chars + 4 extension chars = 12 chars).
StringHandling.LEFT then keeps the first 8 chars of that result (e.g. 20201201).
TalendDate.parseDate converts the string representing your date into an actual Date.
Then you can pass this new value on to your Oracle DB.
At the end of importing a .txt file with the help of the wizard, I get a message that some elements were not imported correctly. I have a column in the .txt file which should contain dates, but when I select that column and set its type to Date and Time, Access cannot recognize the values as dates. I'm thinking it's because of the locale difference: I use dates like 1.1.2011, whereas Access uses 1/1/2011.
Where can I change the format?
You can change it in the Advanced section of the Import Wizard.
If that doesn't work, don't import the file; link it instead and specify the date field as text.
Then create a simple select query that uses the linked table as its source. Select all the fields you need.
For the date field, use this expression:
TrueDate: CDate(Replace([YourTextDateField], ".", "/"))
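In SQL view, the whole query might look like this sketch (LinkedOrders, CustomerID, and TextDate are hypothetical names; Replace swaps the dots for slashes so CDate can parse the value):

SELECT CustomerID,
       CDate(Replace([TextDate], ".", "/")) AS TrueDate
FROM LinkedOrders;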
Clean up other fields as well.
Now use this query for further processing of the data.
I have a couple of CSV files. All of my CSV files are nearly identical, but some columns differ from one file to another. For example:
CSV files 1, 2, 3 have these columns:
id name post title cdate mdate path
but CSV files 4 and 5 have these columns:
id name post title ddate mdate fpath
My output should be like this:
id name post title cdate mdate ddate path fpath
How can I achieve this? With my current procedure I can extract the data from the CSV files, but not in the preferred output format.
You need to put each file type in a different folder; let's say files 1, 2, 3 in folder1 and files 4, 5 in folder2.
Now, insert the files from the first folder into your MongoDB, using this job:
tFileList --(iterate)--> tFileInputDelimited --(file_schema)--> tMap ---(DB_schema)--> tMongoDBOutput
Here we use tMap to map the file schema onto the DB schema; the extra columns simply remain blank.
Finally, use a second job, which is the same as the first except that tFileList points to the second folder, the file schema is different, and tMap joins the already-written data with the new set of files on id:
tMongoDBInput
|
|
tFileList --(iterate)--> tFileInputDelimited --(file_schema)--> tMap ---(DB_schema)--> tMongoDBOutput
You can use an OnSubjobOk trigger to link the first and second jobs.
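If it helps to see the intended result in relational terms, the merge behaves roughly like this SQL sketch (files123 and files45 are hypothetical staging tables for the two folders; the full outer join covers ids that appear in only one group):

SELECT COALESCE(a.id, b.id)       AS id,
       COALESCE(a.name, b.name)   AS name,
       COALESCE(a.post, b.post)   AS post,
       COALESCE(a.title, b.title) AS title,
       a.cdate,                   -- only in files 1, 2, 3
       COALESCE(a.mdate, b.mdate) AS mdate,
       b.ddate,                   -- only in files 4, 5
       a.path,                    -- only in files 1, 2, 3
       b.fpath                    -- only in files 4, 5
FROM files123 a
FULL OUTER JOIN files45 b ON a.id = b.id;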
I'm using Hive to create a table, and I am trying to load file content into it.
There's a column of type Date, and the date format in the file is dd/mm/yyyy, for example: 01/12/2013
But when I try to load the data into the table from the file, the values in the Date column are always NULL, as if the date content failed to load.
I put the column content into a txt file and uploaded it to HDFS, so the columns are:
id, name, birthdate
and the corresponding values are:
1, "Joan", 04/05/1989
But the "04/05/1989" seems can't be read into the table, always null.
Please tell me if the format in my txt file is wrong or I need some specific grammar when loading date type data into Hive table.
Thanks!
Hive's DATE data type expects the format YYYY-MM-DD. You need to format the field accordingly.
More details at
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-date
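As a minimal sketch of one common pattern (the table names, the HDFS path, and the comma delimiter are assumptions, and it ignores the quotes and spaces in the sample row): load the raw value as a STRING, then convert it while copying into the final table.

-- staging table keeps the raw dd/MM/yyyy text
CREATE TABLE people_raw (id INT, name STRING, birthdate STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- hypothetical HDFS path
LOAD DATA INPATH '/tmp/people.txt' INTO TABLE people_raw;

CREATE TABLE people (id INT, name STRING, birthdate DATE);

-- unix_timestamp parses dd/MM/yyyy; from_unixtime re-renders it as yyyy-MM-dd
INSERT INTO TABLE people
SELECT id,
       name,
       CAST(from_unixtime(unix_timestamp(birthdate, 'dd/MM/yyyy'), 'yyyy-MM-dd') AS DATE)
FROM people_raw;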
I have a set of pivot tables that use external csv files as their data sources. The csv files originally contained dates in the format dd/mm/yy (e.g. 31/01/13). The pivot tables did not recognise these as dates. I converted the dates in the csv files to dd/mm/yyyy (e.g. 31/01/2013) but these were still not recognised as dates by the pivot tables.
I tried setting up a calculated field =DATEVALUE(date_from_csv), but when it is used in the pivot table (I'm using the Max option to select the most recent date) I get #VALUE! errors.
I have tried converting the csv file to xlsx and also importing the data into the workbook that contains the pivot table, but I can't switch the pivots from the external connection to the internal data. I don't want to rebuild the pivots, as there are a lot of variables and formatting that would take ages to redo.
Any ideas?
The problem was caused by the date column being blank for some rows. I found that if I moved a row that had all its fields filled in to the top (just after the header line), Excel got the formats right, and the pivot tables now work!