Acumatica IN error: unit conversion is missing - import

I am trying to import open Sales Orders through the Acumatica Import Scenario. I have an Excel export and I am manually assigning certain fields (including the UOM). I didn't have any problems importing single-line Sales Orders, but when I have multiple line items, I get this error:
IN error: unit conversion is missing
The UOM EACH is assigned on both lines, and the item is the same on each line of the SO.
I wouldn't think this would be an issue when two lines contain the same item. I might be missing something else, but I'm not sure what.
Can anyone help?
http://i.stack.imgur.com/klZsb.jpg

You have an issue in your Import Scenario; it is hard to determine from here what exactly is wrong, so please review your scenario.

I got this error message when my import scenario somehow provided a non-existent InventoryCD. As a result, Acumatica wasn't able to initialize the UOM column and gave me the error message mentioned above.

Related

OROCRM Community importing Leads Error- Regions

I have been troubleshooting this issue for a while and cannot import a CSV file for Leads into OROCRM. I am using the default install with no custom regions.
When I import with Address 1 Country ISO2 code populated as US, I keep getting the error "addresses[0]: Custom region can be used only for countries without predefined regions".
If I leave Address 1 Country ISO2 code blank, I get addresses[0].country: This value should not be blank.
Any help would be greatly appreciated.
When I enter a lead manually, I have no issues. I exported the newly created lead, updated some data, and got the same error as above. You would think it would assign the default country, since the United States is the default.

Case-insensitive column names break the Data Preview mode in the Data Flow of Data Factory

I have a csv file in my ADLS:
a,A,b
1,1,1
2,2,2
3,3,3
When I load this data into a delimited text Dataset in ADF with first row as header, the data preview seems correct (see picture below). The schema has the names a, A and b for the columns.
However, when I use this dataset in a Mapping Data Flow, the Data Preview mode breaks. The second column name (A) is seen as a duplicate and no preview can be loaded.
All other functionality in the Data Flow keeps working fine; it is only the Data Preview tab that gives an error. All subsequent transformation nodes also give this error in their Data Preview.
Moreover, if the data contains two exactly identical column names (e.g. a, a, b), the Dataset recognizes the columns as duplicates and puts a "1" and "2" after each name. It is only when the names are case-sensitively unequal but case-insensitively equal that the Dataset raises no error while the Data Flow does.
Is this a known error? Is it possible to change a specific column name in the dataset before loading it into the Data Flow? Or is there just something I'm missing?
I tested it and got the error in the source Data Preview:
I asked Azure support for help and they are testing it now. Please wait for my update.
Update:
I sent Azure Support the test.csv file. They tested it and replied to me: if you insist on using "first row as header", Data Factory cannot work around the error; the solution is to re-edit the csv file. Even Azure SQL Database doesn't support creating a table with the same column name twice - column names are case-insensitive.
For example, this code is not supported:
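The unsupported statement itself was not preserved in the thread; presumably it was a CREATE TABLE whose column names differ only in case. The same case-insensitive duplicate check can be demonstrated with SQLite from Python - a sketch of the behavior, not Azure SQL itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
error = None
try:
    # Column names that differ only in case count as duplicates.
    con.execute("CREATE TABLE t (a INTEGER, A INTEGER, b INTEGER)")
except sqlite3.OperationalError as exc:
    error = str(exc)

print(error)  # duplicate column name: A
```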
Here's the full email message:
Hi Leon,
Good morning! Thanks for your information.
I have tested the sample file you shared with me and reproduced the issue. The data preview is alright by default when I connect to your sample file.
But I noticed during the troubleshooting session that a, A, b are the column names, so you have checked "first row as header" in your source connection. Please confirm this is intended and you want to use a, A, b as column headers. If so, an error is expected, because there is no transform for "A" in the schema.
Hope you can understand that the column name doesn't influence the data transform, and it's okay to change it to make sure no errors block the data flow.
There are two tips for you to remove the block: one is to change the column name in your source csv directly, or you can click the Import schema button (in the screenshot below) in the Schema tab and choose a sample file to redefine the schema, which also allows you to change the column name.
Hope this helps.
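If re-editing the csv by hand each time is impractical, the case-colliding headers can be deduplicated in a small preprocessing step before the file reaches the Data Flow, following the same suffixing scheme the Dataset applies to exact duplicates. A sketch using Python's csv module (the inline sample stands in for the real file):

```python
import csv
import io
from collections import Counter

raw = "a,A,b\n1,1,1\n2,2,2\n3,3,3\n"
rows = list(csv.reader(io.StringIO(raw)))
header, data = rows[0], rows[1:]

# Suffix every header whose lowercase form occurs more than once.
totals = Counter(h.lower() for h in header)
seen = Counter()
new_header = []
for h in header:
    low = h.lower()
    if totals[low] > 1:
        seen[low] += 1
        new_header.append(f"{h}_{seen[low]}")
    else:
        new_header.append(h)

print(new_header)  # ['a_1', 'A_2', 'b']

# Write the repaired file back out for the Dataset to pick up.
out = io.StringIO()
csv.writer(out).writerows([new_header] + data)
```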

How can I import a son model with a relation to a father model?

I have an application with 2 Google Drive Tables (FatherM & SonM models)
with a many-to-one relation between them.
I'm able to export the data to a spreadsheet.
In the SonM model, the export automatically creates an extra column named after the relation (FatherM), containing the keys of the father records.
When I import the single-model SonM spreadsheet, I get an error:
V:1 Field names "FatherM" in the spreadsheet can't be found in the corresponding model.
Yes, it doesn't exist in the model, but it is created by the relation.
How can I import the SonM data?
The difference between the "import single model" and "import all models and relations" modes is that "import single model" doesn't import relations; instead it expects that all columns in your spreadsheet are fields of the model. You can try to use "import all models and relations" mode with a spreadsheet where all tabs other than SonM contain no data.
Thanks very much for your answer
I’m now using the all models to import relation too.
Here are some comments. If you have any information to give me, don't hesitate to tell me.
My goal is to export all the models from a test environment to a spreadsheet, add or modify rows manually, and then later import all the data into a prod environment.
1) I've exported all the data OK.
2) I've re-imported all the data:
First remark: if one of the spreadsheet tabs is empty (only one line with the column titles),
I get an error at import: Cannot read property "length" from null. 0 records imported
I deleted all the tabs that don't have data, and the error no longer occurs.
Second remark: I get another import message from the Google import:
Value error at cell "V:6": Can't import association: "Invit - Event", because record key is not defined. 0 records imported
Event is my father model and Invit is my son model (for one Event I have many Invitations).
About V:6: in the Invit tab, V is the relation column to the Event. (It contains the keys that link to the main _Key of the Event model; the name of this column is the name of the relation I created.)
The first 5 lines of this tab are invitations I made manually using my App Maker application (each of these lines has a _Key value in column A); lines 6 and after are invitations I added manually (coming from another tool; these are old invitations I need to import).
On lines 6 and after, column A (_Key) is empty.
The cell value in V:6 is an Event _Key, and it exists. So I don't understand the import error message from Google. (Do you understand this message?)
Third remark:
I've just done this test:
Create a new son record with a relation using the App Maker tool
Export all the data
Re-import exactly the same data
And I get this error:
Drive Table internal error. Record not found. 0 associations imported
Do you know where I can find information about importing relations? On this page https://developers.google.com/appmaker/models/import-export there is nothing about relations.
Thanks
Stéphane

SQL Update Record Failed because derived or constant field

I'm very new to SQL statements for updating and managing records.
I have a very simple request; I have looked into it online, but it doesn't fully make sense to me as I am new.
My query is
Update [Data].[viewAppFieldList]
Set level = 308
Where fieldid = 23456
When I run it, of course, I get the error that it failed because it contains a derived or constant field.
Can someone explain why it is doing that? My assumption is that it is because the field is applicable in more than one place.
Secondly, how do I go about changing things so this field can be updated?
Thank you!
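The error usually means that the column being updated (or another column of `viewAppFieldList`) is computed inside the view definition, so the engine cannot map the UPDATE back to a stored column. The usual way out is to update the underlying base table instead of the view. A minimal sketch of the idea in Python with SQLite (the table and column layout here is invented, since the real schema behind the view isn't shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE AppField (fieldid INTEGER PRIMARY KEY, level INTEGER);
    INSERT INTO AppField VALUES (23456, 100);
    -- The view adds a derived column, so it cannot absorb UPDATEs.
    CREATE VIEW viewAppFieldList AS
        SELECT fieldid, level, level * 10 AS derived_level
        FROM AppField;
""")

# Update the base table; the view reflects the change on the next read.
con.execute("UPDATE AppField SET level = 308 WHERE fieldid = 23456")
row = con.execute(
    "SELECT level, derived_level FROM viewAppFieldList WHERE fieldid = 23456"
).fetchone()
print(row)  # (308, 3080)
```

In SQL Server you would look up the view definition (for example with `sp_helptext`) to find which base table actually stores the column, then target that table in the UPDATE.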

SAPUI5 search help shows duplicate lines

Using SAP Gateway, I import a search help into the model of an SEGW project.
This creates an entity, an entity set and an implementation.
Debugging in backend and frontend shows that the search help works correctly and the JSON result contains the expected values.
But the search help UI control doesn't show all values, and some or all lines shown in the control are duplicates.
When you import a search help into the model of an SEGW project, you are asked which of the search help fields are key fields.
You have to mark the fields that together uniquely identify each line of the search result.
You get the described behavior if you don't mark all necessary key fields.
Example: you build a search help for purchase order positions.
If you mark only the purchase order number as a key field, you get the described problem.
If you mark both the purchase order number and the position number as key fields, everything works as desired.
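The mechanism behind the duplicates can be sketched outside of SAP: the OData client model caches entities by their declared key fields, so two result lines whose key values coincide are treated as the same entity, and the control renders that one entity for both rows. A rough Python analogy (the field names are invented for illustration, not the actual OData implementation):

```python
rows = [
    {"PoNumber": "4500000001", "PoItem": "00010", "Text": "Bolts"},
    {"PoNumber": "4500000001", "PoItem": "00020", "Text": "Nuts"},
]

def cache_by_keys(rows, key_fields):
    # Entities whose key values coincide collapse onto one cache entry.
    return {tuple(r[f] for f in key_fields): r for r in rows}

# Key = PO number only: both lines share one key, one entry survives
# and is shown for every row -> duplicate lines in the control.
print(len(cache_by_keys(rows, ["PoNumber"])))            # 1
# Key = PO number + position number: each line keeps its own entry.
print(len(cache_by_keys(rows, ["PoNumber", "PoItem"])))  # 2
```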
So delete the entity and entity set generated from the search help import in SEGW, start transaction SEGW again(!), import the search help again, and mark all the fields necessary to identify a search result line.
Marking too many fields as key fields doesn't give wrong results.
But the JSON result then contains more data than necessary, which can make the call slower than necessary and consume more bandwidth.