Bulk product import fails: SQLSTATE[HY000]: General error: 1366 Incorrect string value

I need to bulk-import product descriptions. My CSV file has 1,200 records; 800 import successfully and the rest fail with an error like:
SQLSTATE[HY000]: General error: 1366 Incorrect string value:
'\xA0Ideal...' for column 'value' at row 1

Save your file using Notepad++ (or whatever editor you like) and select UTF-8 encoding.
You are surely using some other format that is NOT UTF-8: the \xA0 byte in the error is a non-breaking space in Windows-1252/Latin-1, which is not a valid UTF-8 sequence.
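If you'd rather fix the file programmatically before re-running the import, re-encoding it is straightforward. A minimal Python sketch, assuming the source file is named products.csv and is in Windows-1252 (both the file names and the source encoding are assumptions to verify):

# Hypothetical file names; the source encoding (cp1252) is an assumption --
# 0xA0 is a non-breaking space there, which matches the MySQL error above.
SRC = "products.csv"
DST = "products_utf8.csv"

with open(SRC, "r", encoding="cp1252") as src, \
        open(DST, "w", encoding="utf-8", newline="") as dst:
    for line in src:
        dst.write(line)

If the source encoding guess is wrong, the open() call will raise a UnicodeDecodeError rather than silently producing garbage, which is a useful way to narrow down what the file actually contains.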


How to check if the file contains certain values before reading in Talend Studio

Hello, Talend Studio beginner here and first-time poster. I am using Talend 8.0 and have a text file to ingest into a database with the following content:
H2||ID||portfolio||manager||name
D||5||8001-1101||48||John Doe
D||6||8001-1102||50||John Doe
D||7||8002-1101||20||Jane Doe
F3||||||||
where the delimiter is a double pipe (||)
ID, portfolio, manager and name, with their associated records, are the data I'd like to ingest. The first column, with "H2", "D" and "F3", holds the header, detail and footer indicators respectively. These indicators are not supposed to be ingested, but their presence needs to be checked when the file is read into Talend Studio.
I need to check whether these three indicators are present in the file. If any of them is missing, the file should not be ingested and a message should be output. If the indicators do exist, the data is ingested, but only the columns "ID", "portfolio", "manager" and "name".
I tried using the following components:
These read the table in its entirety, including the indicator column. I then use a tMap with the filter
row1.Header.contains("D")
which keeps rows that have the "D" indicator. I'd appreciate it if there is a better way to do this.
Use row1.Header.equals("D") || row1.Header.equals("H2") || row1.Header.equals("F3") to filter on header in ("D", "H2", "F3"). Note that it must be OR (||), not AND (&&), since any single row carries only one of the three indicators.
If you want the rejected rows, add another output to the tMap and set its "Catch output reject" option to true.
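If you prefer to validate the file before Talend reads it at all, a small pre-check script outside Talend can verify that all three indicators are present and extract only the detail rows. A minimal Python sketch, assuming the file is named input.txt (a hypothetical name) and uses the || delimiter shown above:

REQUIRED = {"H2", "D", "F3"}  # header, detail and footer indicators

def load_details(path):
    # Returns the detail rows (ID, portfolio, manager, name);
    # raises if any of the three indicators is missing from the file.
    seen, details = set(), []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("||")
            seen.add(fields[0])
            if fields[0] == "D":
                details.append(fields[1:5])  # ID, portfolio, manager, name
    missing = REQUIRED - seen
    if missing:
        raise ValueError("File rejected, missing indicators: %s" % sorted(missing))
    return details

print(load_details("input.txt"))

With the sample file above, this returns the three detail rows and would raise (so nothing is ingested) if, say, the F3 footer line were absent.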

How can I extract 7 million records from an Oracle database to a CSV file

I have already tried the export option in SQL Developer; it is very time-consuming.
I need to know if there is a quicker way to extract the data to a CSV file.
You can try Toad for Oracle: in the Data Grid view, right-click and choose Export Data Set. You can start with a million rows at a time, using WHERE ROWNUM < 1000001.
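If scripting is an option, streaming the rows out with the python-oracledb driver and a large fetch size is usually much faster than a GUI export. A minimal sketch; the credentials, DSN, table name and output file are all placeholders to replace with your own:

import csv
import oracledb  # pip install oracledb

# Placeholder connection details -- replace with your own.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()
cur.arraysize = 10000  # fetch in large batches to cut network round trips

cur.execute("SELECT * FROM my_big_table")  # hypothetical table name
with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(col[0] for col in cur.description)  # header row
    while True:
        rows = cur.fetchmany()
        if not rows:
            break
        writer.writerows(rows)

Because fetchmany() streams one batch at a time, memory stays bounded and 7 million rows pass through without ever being loaded at once.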

Data conversion failed in SQL Server Import and Export Wizard

I am using the Import and Export Wizard and imported a large CSV file. I get the following error.
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data
conversion for column "firms" returned status value 2 and status text "The
value could not be converted because of a potential loss of data.".
(SQL Server Import and Export Wizard)
Upon importing, I use the Advanced tab and make all of the adjustments. As for the field in question, I set it as numeric(8,0). I have since gone through this process multiple times and tried precisions of 7, 8, 9, 10, and 11, to no avail. I imported the CSV into Excel and looked at the respective column, firms. It shows no entry with more than 5 characters. I thought about making it DT_STR, but I will eventually need to manipulate that column by averaging it. I have also searched for spaces or strange characters and found none.
Any other ideas?
1) Try changing the numeric precision to numeric(30,20) in both the source and the destination table.
2) Change the data type to DT_STR/DT_WSTR and adjust the output column width while importing; it will then run fine. This happened to me as well while loading a large CSV file of approximately 5 GB. After the load, use the TRY_CONVERT function to convert the column back to numeric and check which values became NULL during conversion; that will show you the root cause.
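To locate the exact rows the wizard chokes on, it can also help to scan the column outside of Excel, since Excel may silently reformat values when opening a CSV. A minimal Python sketch, assuming the file is named data.csv and the column header is firms (both are assumptions to adjust):

import csv
from decimal import Decimal, InvalidOperation

# Hypothetical file and column names -- adjust to your CSV.
with open("data.csv", newline="") as f:
    for lineno, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
        value = row["firms"].strip()
        try:
            Decimal(value)
        except InvalidOperation:
            print("line %d: unconvertible value %r" % (lineno, value))

Empty fields, thousands separators and stray non-digit characters all fail the Decimal() conversion, so this prints every row that a numeric destination column would reject.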

SAS PROC IMPORT not creating OUT dataset as commanded

Situation: I'm importing an .xlsx file with PROC IMPORT and want to send the data OUT to a new Netezza database table.
My issue: SAS appears to run fine, but the log shows that a completely different table name was created, with a libref that I'm not using (and that libref is cleared).
LIBNAME abc sasionza server=server database=db port=123 user=user pass=pass;
PROC IMPORT
OUT = abc.DesiredTableName
DATAFILE= "my/excelfile/file.xlsx"
DBMS=xlsx
REPLACE;
SHEET="Sheet1";
GETNAMES=YES;
RUN;
This "runs" just fine, or so it appears to. I check the log and I see this:
NOTE: The import data set has 11 observations and 7 variables.
NOTE: xyz.ATableCreatedDaysAgoInAnotherProgram data set was successfully created.
NOTE: PROCEDURE IMPORT used (Total process time):
real time 0.55 seconds
cpu time 0.02 seconds
I thought, hmm, that is weird. The libref xyz is actually cleared, so I couldn't possibly be using it, and ATableCreatedDaysAgoInAnotherProgram is a table name used in a completely different SAS E-Guide program I have going on.
It sounded like a memory or cache issue, so I closed all instances of SAS E-Guide and fired up a new one. I created a new program that contains only my desired lines (the code listed above).
It runs, and I get the following log as a result:
NOTE: The import data set has 11 observations and 7 variables.
NOTE: WORK._PRODSAVAIL data set was successfully created.
NOTE: PROCEDURE IMPORT used (Total process time):
real time 0.55 seconds
cpu time 0.02 seconds
I will note that this is the first time I've tried to use PROC IMPORT to send something directly to a Netezza table. Up until now, I've always imported files into WORK and worked with them for a bit before inserting them into a database table. I thought this might be a SAS limitation I wasn't aware of, but the SAS documentation for PROC IMPORT (https://v8doc.sas.com/sashtml/proc/z0308090.htm) says that you can specify a two-level name in the OUT= statement, so I feel this should work. And if it can't work, SAS should error out instead of randomly logging a table name that my code never creates.
Summary (tl;dr): Can you PROC IMPORT directly into a Netezza database table using a libref? And if you can't, why does my code execute and produce log output that isn't even related to what I'm doing?
Thanks, everyone!
The Solution: A column in the .xlsx file being imported had a space in its name. Simply removing the space from the column name and saving the change to the .xlsx file allowed the PROC IMPORT code above to execute flawlessly, with the desired results imported into the named Netezza table.
NOTE: This fixed my problem, but it does not explain why the SAS log showed notes for code that wasn't actually being executed.
It sounds like you should report the missing ERROR message to SAS.
To make sure that your SAS/Netezza tables do not end up with variable names containing spaces, change the setting of the VALIDVARNAME option before running your program. That way PROC IMPORT will convert the column headings in the XLSX file into valid variable names.
options validvarname=v7;
libname out ...... ;
proc import out=out.table replace ...

XLSREAD unable to read file in MATLAB R2012

I have an Excel file which consists of roughly 10,000 rows and has a size of around 800 KB.
When I try to import the data into MATLAB, either with the GUI import tool or using XLSREAD, I get the following message:
Could not open the spreadsheet. MATLAB reported the following error:
XLSREAD unable to read sheet "Sheet1"
File contains unexpected record length. Try saving as Excel 98
I tried saving as Excel 98, but it didn't help. The funny thing is, I can import other Excel files which are bigger than 10,000 rows and 800 KB in size.
Ideas? =) My Excel file shouldn't contain anything special, just columns of numeric data with headers consisting of text.
UPDATE: It seems this only happens when I use MATLAB on Ubuntu 12.10. When I tried it on Windows XP, it worked just fine.
I know some time has passed, but I had the same problem with Ubuntu 16.04 and MATLAB R2016a. In my case, deleting the columns didn't work.
My solution was to change the Excel file from .xls to .xlsx and try xlsread again (with the path changed accordingly, of course).
Please don't ask me why it works when it's saved in a Microsoft format.
Try removing any empty columns/rows you have in your sheet, and also explicitly delete several columns/rows after your data. MATLAB seems to have a problem with "empty" columns/rows.