Incorrect data population issue in numeric field in PF in AS400 [duplicate] - db2

This question already has an answer here:
What do hyphens signify in Db2 for i query results?
(1 answer)
Closed 3 months ago.
I have a PF (physical file) in DB2 on AS400 that is showing a ++++ sign in a column defined as numeric with length 3.
I have tried the ABS, ABSVAL, ROUND, TRUNCATE, REPLACE, and CHAR BIFs on this column, but none of them shows me what this ++++ actually is. Because of this ++++ sign I cannot insert any data into this row, which in turn stops anything from being inserted after this row.
If possible, I am looking to remove this ++++ incorrect data from the file.
I shall be grateful for any help/guidance.

Thank you @jmarkmurphy and @Charles for your thoughtful inputs; I found my solution in both of your suggestions.
I have tried to summarize the issue for future readers below.
Apparently the ++++ sign (or - sign) is DB2's way of showing corrupt data. Such data can arise during transmission between two systems, or from improper handling of a decimal data error by an operator.
However, I have researched this a lot, and there is no way to see what value this ++++ or - actually holds.
Still, just to satisfy our curiosity, we can, as recommended by @jmarkmurphy, use the hex() function on the column. Notice that the hex value for ++++ is 404040: x'40' maps to the @ sign in the ASCII table, but on the AS400 it is the EBCDIC blank, so this 3-digit numeric field actually contains spaces, which is not valid numeric data.
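A minimal sketch of that check (the library, file, and column names here are made up):
SELECT HEX(NUMCOL) FROM MYLIB.MYPF
For the corrupt row this returns 404040 instead of the hex of valid zoned-decimal digits.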
Reference for hex-to-ASCII conversion:
https://www.freecodecamp.org/news/ascii-table-hex-to-ascii-value-character-code-chart-2/
The only practical way to deal with such corrupt data is to isolate the affected rows and remove them. Thanks to @Charles for this link:
What do hyphens signify in Db2 for i query results?
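As a hedged sketch of that isolate-and-remove step, again with made-up names, and assuming (as here) that the corrupt bytes are x'404040': compare the raw bytes with HEX() rather than the numeric value, which cannot be read:
SELECT RRN(t) AS rrn, HEX(NUMCOL) AS hexval
FROM MYLIB.MYPF t
WHERE HEX(NUMCOL) = '404040'
DELETE FROM MYLIB.MYPF
WHERE HEX(NUMCOL) = '404040'
RRN() returns the relative record number, which also helps locate the row with non-SQL tools.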

Related

In PostgreSQL, is it possible to have a default format for real columns?

In PostgreSQL, I have a column with people's heights in meters. If the height is, say, 1.75 m, it shows properly, but if the height is 1.70 m, it shows as 1.7. I would like to have this formatted to two decimal places, showing as 1.70, without formatting it in each and every SQL call. Can I specify this in the table creation? Or a stored procedure, or something? I've seen a few things about timestamps, but not for real fields. Knowing how to use a comma as the decimal separator (1,70) would be a plus.
Basically, presentation and "cosmetics" are the job of the application, not the database.
Having a default number of decimal places for floats would also create a problem, because the data returned by the database would not be the actual data in the column. If a SELECT returned 1.75 and you then searched for that value, you might not find it, because the actual stored value was not 1.75 but 1.7499999999; it was only rounded for display.
Potential solutions:
If you want to store a specified number of digits, use NUMERIC. This solves the 1.7499999999 problem above: with NUMERIC, a SELECT returns the actual contents of the column (see the sketch after this list).
In your app, if you use an ORM, use a Decimal (or similar) type for the column with the appropriate settings so it displays the way you want.
Or create a view with the format applied to the column; in that case, if you want the trailing zero, the type will be text rather than float, and it will not be searchable unless you create an extra index on it.
Or use a generated column with the number formatted as you want, which may be easier than a view.
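As an illustration of the NUMERIC suggestion, a minimal sketch (table and column names are made up):
CREATE TABLE person (
    name     text,
    height_m numeric(3,2)  -- scale 2: exactly two decimal places are stored
);
INSERT INTO person VALUES ('Ann', 1.70), ('Bob', 1.75);
SELECT name, height_m FROM person;  -- 1.70 comes back as 1.70, not 1.7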

Getting NULL Value in Stored Procedure TEXT Column

I am using the query below to get the SP definition, but in the TEXT column I get a NULL value in IBM Data Studio, even though I am able to CALL the SP.
SELECT PROCNAME, TEXT FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
Please Help
You have confirmed that syscat.procedures.language is SQL and that your query tool is able to display a substr() of the text.
The workaround depends on the length(text) of the row of interest:
SELECT PROCNAME, substr(TEXT,1, 1024) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
You may need to adjust the length of the substr extract depending on the length of the text and your configuration, for example substr(TEXT, 1, 2048), or whatever higher length your query tool can cope with.
You can find the length of the text column with LENGTH(TEXT) for the row of interest.
You can also CAST the CLOB to CHAR or VARCHAR at a length that fits within their limits and within whatever query-tool limitations you have, as sketched below.
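For example (the lengths here are arbitrary choices; pick values your tool can handle):
SELECT PROCNAME, LENGTH(TEXT) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
SELECT PROCNAME, CAST(TEXT AS VARCHAR(32000)) AS PROCTEXT FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'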
Another option is to use a different query tool that can work with CLOB.
Are you using the latest version of Data Studio with the latest fixes? It sounds like you might have an invalid UTF-8 character in your SP, or, since you are using SUBSTR and SUBSTRING, you may be breaking a multi-byte character in two.
You could try setting
-Ddb2.jcc.charsetDecoderEncoder=3
in your eclipse.ini to get Java to use a replacement character rather than replacing the invalid string with null.
See this tech note
https://www-01.ibm.com/support/docview.wss?uid=swg21684365
Otherwise, do raise this with IBM Support.

Import Flat File via SSMS to SQL Server fails

When importing a seemingly valid flat file (csv, text, etc.) into a SQL Server database using the SSMS Import Flat File option, the following error appears:
Microsoft SQL Server Management Studio
Error inserting data into table. (Microsoft.SqlServer.Import.Wizard)
Error inserting data into table. (Microsoft.SqlServer.Prose.Import)
Object reference not set to an instance of an object. (Microsoft.SqlServer.Prose.Import)
The target table may contain rows that imported just fine. The first row that is not imported appears to have no formatting errors.
What's going wrong?
Check the following:
there are no blank lines at the end of the file (leaving the last line's line terminator intact) - this seems to be the most common issue
there are no unexpected blank columns
there are no badly escaped quotes
It looks like the import process loads lines in chunks. This means that the lines following the last successfully loaded chunk may appear to have no errors. You need to look at the subsequent lines that are part of the failing chunk to find the offending line(s).
This cost me hours of hair pulling while dealing with large files. Hopefully this saves someone some time.
If the file you're importing is already open, SSMS will throw this error. Close the file and try again.
When you create your flat file, if any of your columns contain text (varchar) values, DO NOT make the file comma (",") delimited. Instead, use the vertical bar ("|") or some other character that you are SURE cannot appear in those values; commas are super common inside nvarchar fields.
I had this issue, and none of the recommendations from the other answers helped me!
I hope this saves someone some time; it took me hours to figure out!
None of the other suggestions worked for me, but this did:
When you import a flat file, SSMS gives you a brief summary of the data types within each column. Whenever you see an nvarchar in a column that should be int or double, change it to int or double, and change all remaining nvarchars to nvarchar(max). This worked for me.
I've been working with csv data for a long time. I ran into similar problems when I first started this job, but as a novice I couldn't extract a precise fault from the exceptions.
Here are a few things you should check before importing anything:
Your csv file must not be open in any software, such as Excel.
Your csv file's cells should not include comma or quotation characters.
There are no unnecessary blanks at the end of your data.
No reserved term is used as data. In Excel, open your file and save it as a new file.
After considering all the suggestions, if anyone is still having issues, check the length of the data type for your columns. It took me hours to figure this out, but increasing the nvarchar length from (50) to (100) worked for me.
One thing that worked for me: you can change the error range to 1 under "Modify Columns". You then get an error message naming the specific problematic line in your file instead of "ran out of memory".
I fixed these errors by playing around with the data types. For instance, I changed tinyint to smallint and smallint to int, and increased my nvarchar() lengths to reasonable values or else set them to nvarchar(MAX). Since most real-life data has missing values, I also allowed missing values in all columns. Everything then worked, with a warning message.

Converting / Casting an nVarChar with Comma Separator to Decimal

I am supporting an ETL process that transforms flat-file inputs into a SqlServer database table. The code is almost 100% T-SQL and runs inside the DB. I do not own the code and cannot change the workflow. I can only help configure the "translation" SQL that takes the file data and converts it to table data (more on this later).
Now that the disclaimers are out of the way...
One of our file providers recently changed how they represent a monetary amount from '12345.67' to '12,345.67'. Our SQL that transforms the value looks like SELECT FLOOR( CAST([inputValue] AS DECIMAL(24,10))) and no longer works; the comma breaks the cast.
Given that I have to store the final value as the DECIMAL(24,10) datatype (yes, I realize the FLOOR wipes out all post-decimal-point precision; the designer was not in sync with the customer), what can I do to cast this string efficiently?
Thank you for your ideas.
try using REPLACE (Transact-SQL):
SELECT REPLACE('12,345.67',',','')
OUTPUT:
12345.67
so it would be:
SELECT FLOOR(CAST(REPLACE([inputValue], ',', '') AS DECIMAL(24,10)))
This works for me:
DECLARE @foo NVARCHAR(100)
SET @foo = '12,345.67'
SELECT FLOOR(CAST(REPLACE(@foo, ',', '') AS DECIMAL(24,10)))
Note that this is probably only valid for collations/cultures where the comma is not the decimal separator (in Spanish locales, for example, it is).
While not necessarily the best approach for my situation, I wanted to leave a potential solution for future use that we uncovered while researching this problem.
It appears that the SQL Server datatype MONEY can be used as a direct cast target for strings with a comma separating the non-decimal portion. So, where SELECT CAST('12,345.56' AS DECIMAL(24,10)) fails, SELECT CAST('12,345.56' AS MONEY) succeeds.
One caveat is that the MONEY datatype has a precision of 4 decimal places and requires an explicit cast to get it to DECIMAL, should you need that, as sketched below.
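A sketch of that two-step cast, plugged into the existing transform:
SELECT FLOOR(CAST(CAST('12,345.67' AS MONEY) AS DECIMAL(24,10)))
The inner cast to MONEY tolerates the comma; the outer cast restores the DECIMAL(24,10) type the table expects.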
SELECT FLOOR (CAST(REPLACE([inputValue], ',', '') AS DECIMAL(24,10)))

Filemaker: making queries of large data more efficient

OK, I have a Master table of shipments and a separate Charges table. There are millions of records in each, and the data came into FileMaker from a legacy system, so all the fields are defined as Text even though they may really be Dates, Numbers, etc.
There's a date field in the Charges table. I want to create a number field to represent just the year. I can use the Middle function to parse the field and get just the year in a calculation field. But wouldn't it be faster to have the year as a literal number field, especially since I'm going to be filtering and sorting on it? So how do I turn this calculation into a stored value? I've tried just changing the calculation field to Number, but it just renders blanks.
There's something wrong with your calculation; it should not turn blank just because the field type is different. I.e.:
Middle("10-12-2010", 7, 4)
should suffice, provided the calculation result is set to Number. You may also wrap it in GetAsNumber(...), but really there's no difference as long as the field type is right.
If you have FileMaker Advanced, try setting up your calc in the Data Viewer (Tools -> Data Viewer) rather than in Define Fields; this is faster, and once you like the result you can transfer it into a field or do a replace. But from the searching/sorting standpoint there's no difference between a (stored) calculation and a regular field, so replacing is pointless and actually more dangerous, as there's no way to undo a wrong replace.
Here's what I was looking for, from
http://help.filemaker.com/app/answers/detail/a_id/3366/~/converting-unstored-calculation-fields-to-store-data :
Basically, instead of using a calculation field, you create an EMPTY Number, Date, or Text field and use Replace Field Contents from the Records menu, putting your calculation (or reference, or both) there.
Not dissing FileMaker at all, but millions of records means FileMaker is probably the wrong choice here. Your system will be slow, slow, slow. FileMaker is great for workgroups, and there is no way to develop a database app faster, but one thing FileMaker is not good at is handling huge numbers of records.
BTW, Mikhail Edoshin is exactly right.