How do I filter data in the JDE Data Browser by date range?

I am a new user of JDE and of the JDE Data Browser. As far as I know, the Data Browser only lets us create some filter conditions on our data.
Let's say I wish to get the data within a range of dates (for example, 20/11/2009 to 8/9/2012).
What condition should I put in the JDE Data Browser? Or does the Data Browser not support this?

Neither the JDE Data Browser nor the UTB has that capability. You could achieve this only if you had the field twice in your table. It can be done via SQL, but then you have to deal with data conversion, since JDE stores data differently from the way it displays it (dates are stored in Julian format).
So the answer to your question: you cannot query a date range on one column in the Data Browser.
Workaround: query via SQL, but you will have some data conversion to do (see the sketch below).
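For illustration, a minimal Python sketch of that conversion, assuming the usual JDE Julian layout of (years since 1900) * 1000 + day of year; the table and column names below (F0101, ABUPMJ) are only examples:

    from datetime import date

    def to_jde_julian(d: date) -> int:
        # JDE "Julian" format: (years since 1900) * 1000 + day of year
        return (d.year - 1900) * 1000 + d.timetuple().tm_yday

    start = to_jde_julian(date(2009, 11, 20))   # 109324
    end = to_jde_julian(date(2012, 9, 8))       # 112252

    # Table and column names are only examples (F0101 = Address Book master):
    sql = f"SELECT * FROM PRODDTA.F0101 WHERE ABUPMJ BETWEEN {start} AND {end}"
    print(sql)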
Have a good day.

Related

SAP incremental data load in Azure Data Factory

I'm trying to implement an Extractor pipeline in ADF, with several Copy Data activities (SAP ERP Table sources). To save some processing time, I'd like to have some deltas (incremental load). What's the best way to implement this?
What I'm trying at the moment is just to use the "RFC table options" in each Copy Data activity. However, this seems to be quite limited (only very simple queries are allowed). Also, each SAP ERP table requires a different query. I found three different situations regarding table field formats:
Timestamp in milliseconds (e.g. COVP);
Timestamp in YYYYMMDDHHMMSS (e.g. FAGLFLEXA);
Last change date and last change time in separate fields (e.g. CATSDB)
Has anyone ever tried this? What would you advise?
Thanks!
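Purely as an illustration, here is a rough Python sketch of how a delta filter string for the "RFC table options" might be built for each of the three formats listed above; the field names (TIMESTMP, TIMESTAMP, LAEDA/LAETM) and the exact value formats are assumptions and need to be checked against the real table definitions:

    from datetime import datetime

    # Watermark from the previous successful load (kept e.g. in a control table)
    last_run = datetime(2021, 6, 1, 0, 0, 0)

    stamp = last_run.strftime("%Y%m%d%H%M%S")      # YYYYMMDDHHMMSS
    date_part = last_run.strftime("%Y%m%d")
    time_part = last_run.strftime("%H%M%S")

    # One RFC-table-options expression per format; field names are placeholders.
    rfc_options = {
        "COVP":      f"TIMESTMP GE '{stamp}000'",                       # ms-precision timestamp
        "FAGLFLEXA": f"TIMESTAMP GE '{stamp}'",                         # YYYYMMDDHHMMSS
        "CATSDB":    f"LAEDA GT '{date_part}' OR ( LAEDA EQ '{date_part}' "
                     f"AND LAETM GE '{time_part}' )",                   # separate date + time fields
    }
    for table, option in rfc_options.items():
        print(table, "->", option)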

ELT pipeline for Mongo

I am trying to get my data into Amazon Redshift using Fivetran, but I have some general questions about the ELT/ETL process. My source database is Mongo, but I want to perform deep analysis on the data using a third-party BI tool like Looker, which integrates with SQL. I am new to the ELT/ETL process and was wondering whether it would look like this:
Extract data from Mongo (handled by Fivetran)
Load into Amazon Redshift (handled by Fivetran)
Perform Transformation - This is where my biggest knowledge gap is. I obviously have to convert objects and arrays into compatible SQL types. I can perform a transformation on all objects to extract those to columns and transform all arrays to a table. Is this the right idea? Should I design a MYSQL schema and write all the transformations according to that schema design?
As you state, Fivetran will load your data into Redshift, putting individual fields in columns where it can and putting everything else into VARCHAR columns as JSON. So at that point you basically have a data lake: all your data in an analytical platform, still basically in source format and available for you to do whatever you want with.
Initially, if you don't know much about your data and just want to investigate it, you can probably leave it as it is. Redshift has SQL functions that allow you to query the elements of a JSON structure so there is no need to build additional tables and more ETL just to allow you to investigate your data - especially as these tables may get thrown away once you understand your data and decide what you want to do with it.
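For example (the schema, table, and JSON paths below are made up), Redshift's json_extract_path_text can reach into a VARCHAR column that Fivetran filled with JSON; a minimal Python sketch:

    import psycopg2  # connection details below are placeholders

    sql = """
        SELECT json_extract_path_text(payload, 'customer', 'email') AS email,
               json_extract_path_text(payload, 'status')            AS status
        FROM   mongo_raw.orders
        WHERE  json_extract_path_text(payload, 'status') = 'shipped'
        LIMIT  10;
    """

    with psycopg2.connect(host="my-cluster.example.com", port=5439,
                          dbname="analytics", user="analyst",
                          password="***") as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            for row in cur.fetchall():
                print(row)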
If you have proper reporting requirements then that is the point where you can start to design a schema that will support these requirements (I'm not sure why you suggested a MYSQL schema as MYSQL is a database vendor?). Traditionally an analytical schema would be designed as a Kimball Dimensional model (facts and dimensions) but the type of schema you decide to design will depend on:
The database platform you are using (in your case, Redshift) and the type of structures it works best with e.g. star schema or "flat" tables
The BI tool you are using and how it expects to have data presented to it
For example (and I'm not saying this is a real-world example): if Redshift works OK with star schemas but better with flat tables, and Looker has to have a star schema, then it probably makes more sense to build star schemas in Redshift, as this is a single modelling exercise, rather than model flat tables in Redshift and then have to model star schemas in Looker.
Hope this helps?
It depends on how you need the final stage of your data analysis presented, and what the purpose of your data analysis is. As stated by NickW, assuming you need to integrate your data into a BI tool, the schema should be adapted to the tool's data format requirements.
A MongoDB ETL/ELT process might look like this:
Select Connection: Select the connection you have set up.
Collection Name: Choose the collection, using the [database].[collection] format.
If you're pulling data from your authentication database, only the [collection] name can be used. Example: sample.products.
Extract Method:
All: pull all the data in the collection.
Incremental: pull data by incremental value.
Incremental Attributes: Set the name of the incremental attribute to run by, e.g. UpdateTime.
Incremental Type: Timestamp | Epoch. Choose the type of incremental attribute.
Choose Range:
In Timestamp, choose your date increment range to run by.
In Epoch, choose the value increment range to run by.
If no End Date/Value is entered, the default is the last date/value in the table.
The increment will be managed automatically.
Include End Value: Whether the increment process should include the end value or not.
Interval Chunks: The chunks the data will be pulled in. Split the data by minutes, hours, days, months, or years.
Filter: Filter the data to pull. The filter format is MongoDB Extended JSON.
Limit: Limit the rows to pull.
Auto Mapping: You can choose the set of columns you want to bring, add a new column or leave it as it is.
Converting an Entire Key's Data to a STRING
In cases where the data is not what the target expects, such as key names starting with numbers, or flexible and inconsistent object data, you can convert attributes to STRING format by setting their data type in the mapping section to STRING.
The conversion applies to any value under that key.
Arrays and objects will be converted to JSON strings.
Use cases:
Here are a few filtering examples:
{"account":{"$oid":"1234567890abcde"}, "datasource": "google", "is_deleted": {"$ne": true}}
date(MODIFY_DATE_START_COLUMN) >=date("2020-08-01")
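Putting a few of these settings together, a minimal pymongo sketch of an incremental pull; the connection string, database/collection names, the UpdateTime attribute, and the ObjectId are all placeholders:

    from datetime import datetime
    from bson import ObjectId
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder connection
    coll = client["sample"]["products"]                 # [database].[collection]

    # Incremental range on the chosen attribute (Timestamp type), one chunk
    start, end = datetime(2020, 8, 1), datetime(2020, 9, 1)

    query = {
        "account": ObjectId("0123456789abcdef01234567"),  # placeholder id
        "datasource": "google",
        "is_deleted": {"$ne": True},
        "UpdateTime": {"$gte": start, "$lt": end},         # incremental attribute
    }

    for doc in coll.find(query).limit(1000):               # Limit: rows to pull
        print(doc["_id"])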

Some date values off by a day in JDV

When querying a source model in a VDB, with the source database being Informix 11, the values for a date column are sometimes returned as the prior day. For example, the actual value in Informix is Oct 10, but the value shown when querying the JDV source model is Oct 9. Querying Informix directly returns the correct date. I'm using JDV 6.4.0 with JDK 1.8.0_162 (x64) on Windows 10.
Any ideas? Thanks in advance!
To elaborate on what Ramesh is saying, you need to check the client and server JVM timezones. JDV will attempt to keep date/time calendar fields consistent across the database, server, and client. If the Teiid client is in a different timezone than the server, the client will automatically alter the UTC value of date/time values so that they match what the server would display, which is determined by the server timezone.
When a timestamp value is retrieved from the database we assume that it has already been adjusted by the driver to account for any timezone differences. If that is not the case there is a translator execution property called DatabaseTimeZone that will utilize the JDBC calendar based methods to adjust the retrieved date/time values.
A common issue is a mismatch in daylight saving time handling; it's usually best to have the JDV server in a standard (non-DST) timezone.
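To see why a plain DATE can slip back a day, here is a small, self-contained Python illustration of the same calendar-field shift (the dates and timezones are arbitrary):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # A DATE is materialised as midnight in one timezone ...
    server_midnight = datetime(2018, 10, 10, 0, 0, tzinfo=ZoneInfo("UTC"))

    # ... and the same instant rendered in a zone further west falls on the
    # previous calendar day - the "off by a day" effect described above.
    client_view = server_midnight.astimezone(ZoneInfo("America/Chicago"))
    print(server_midnight.date())   # 2018-10-10
    print(client_view.date())       # 2018-10-09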

Update core data database based on criteria using NSBatchUpdate function

I couldn't find the answer anywhere. I have a database implemented with simple Core Data, and an entity filled with data. Every time I open the app, I need to go through the whole database; one column contains a date. If the date has already passed (compared to today), I need to add one year to it.
After googling, I found that NSBatchUpdate could help here. But how do I go through all the records in the database and update the date value based on that criterion?
Thanks in advance for support.

Importing geospatial data into mongodb

I have found a source for geo data that represents a country, and its constituents. I am building a service where I would like to have a user select a country and then based on that country select an area they would like results from. I am trying to download the data and import it into MongoDB but I am unsure which file format is best to download, and what tools I will need to convert the data to import it. The file options are ESRI file geodatabase, Shapefile, R file, Google Earth (.kmz), Geopackage, and ESRI personal geodatabase. Which one do I choose and what tools do I need for the next step? Is this even the correct approach to having the collection of countries and their states/territories?
Here is the open source data
gadm.org
MongoDB uses a subset of the GeoJSON format for its geospatial data. I would look at which formats are closest to that. A quick web search turned up this tool for converting KML files to GeoJSON.
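As a rough sketch (the GeoPackage layer and attribute names are guesses based on the GADM downloads and would need checking), you could read the file with GeoPandas and load GeoJSON geometries into MongoDB:

    import json
    import geopandas as gpd            # reads Shapefile / GeoPackage / KML
    from pymongo import MongoClient

    # Layer and attribute names are assumptions - inspect the file first.
    gdf = gpd.read_file("gadm41_USA.gpkg", layer="ADM_ADM_1")
    gdf = gdf.to_crs(epsg=4326)        # MongoDB expects WGS84 lon/lat

    features = json.loads(gdf.to_json())["features"]
    docs = [
        {
            "name": f["properties"].get("NAME_1"),
            "country": f["properties"].get("COUNTRY"),
            "geometry": f["geometry"],   # already GeoJSON
        }
        for f in features
    ]

    coll = MongoClient("mongodb://localhost:27017")["geo"]["admin_areas"]
    coll.insert_many(docs)
    coll.create_index([("geometry", "2dsphere")])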