"Number is not compatible with column definition or is not available for a not nullable column" - oracle-sqldeveloper

I have a problem when importing a txt file into an Oracle table.
For some reason, SQL Developer doesn't allow decimals to be imported, as the image shows. I believe the data type is correct, as this worked for me before.
Please help, and thanks a lot.

Please check that the decimal separator on your system has not changed since the last time this worked. To test this, replace all dots (.) with commas (,) in the text file and check whether the error goes away.
I have reproduced your error on my own system, where comma is the decimal separator, using a file that uses dot as the decimal separator.
I am not sure if the scale, -127, will cause any issues for you. NUMBER columns defined without precision and scale get the scale -127 in the SQL Developer import engine. I had never noticed this before, but it is present in version 20.2.0.175.
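If you want to check or override the separator on the database side rather than edit the file, Oracle exposes it as NLS_NUMERIC_CHARACTERS. A minimal sketch (illustrative; whether the import wizard honors the session value can also depend on SQL Developer's own NLS preferences under Tools > Preferences > Database > NLS):

-- Show the session's current decimal and group separators:
SELECT value
  FROM nls_session_parameters
 WHERE parameter = 'NLS_NUMERIC_CHARACTERS';

-- Force dot as the decimal separator and comma as the group separator:
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,';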


Why can't Google Dataprep handle the encoding in my log files?

We receive big log files each month. Before loading them into Google BigQuery they need to be converted from fixed width to delimited. I found a good article on how to do that in Google Dataprep. However, there seems to be something wrong with the encoding.
Each time a Swedish character appears in the log file, the Split function seems to add another space. This shifts the rest of the columns, as can be seen in the attached screenshot.
I can't determine the correct encoding of the log files, but I know they are being created by fairly old Windows servers in Poland.
Can anyone advise on how to solve this?
Screenshot of the issue in Google Dataprep.
What is the exact recipe you are using? Do you use (split every x)?
When I used an ISO Latin-1 text in a test case and ingested it as ISO 8859-1, the output was as expected and only the display was off.
Can you try the same?
Would it be possible to share an example input file with one or two rows?
As a workaround you can use a regex split, which should work.
It's unfortunately a bit more complex, because you would have to use multiple regex splits. Here's an example for the first two splits of 10 characters each: split on /.{10}/, then split on //.
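If Dataprep keeps misaligning the columns, preprocessing the file outside Dataprep is another option: decode it explicitly, then split on character positions rather than bytes. A minimal Python sketch, assuming cp1250 (Central European Windows) as a guess for old Polish servers; the file names and field widths are placeholders:

import csv

WIDTHS = [10, 10, 8]    # example field widths, not the real layout
ENCODING = "cp1250"     # assumption; try "iso-8859-2" if this guess is wrong

with open("log.txt", encoding=ENCODING) as src, \
     open("log.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for line in src:
        fields, pos = [], 0
        for width in WIDTHS:
            # Slice by decoded characters, not bytes, so multi-byte
            # characters cannot shift the following columns.
            fields.append(line[pos:pos + width].strip())
            pos += width
        writer.writerow(fields)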

How to store subscript and superscript values in Progress OpenEdge?

Is there a way to store subscript and superscript values in the Progress database, for example chemical symbols and formulas such as C₂H₅OH, and is it possible to display them?
I tried copying from Word and pasting into fill-in string fields, but it doesn't format correctly: subscripted values are not recognized, and the text is displayed as C2H5OH.
After some testing I've come this far:
1) You need to start your session with the startup parameter -cpinternal utf-8, i.e.
prowin32.exe -cpinternal utf-8
Depending on your needs you might also have to set -cpstream utf-8 and possibly -cpcoll basic (or something else that matches your needs).
When I did this I had some strange crashes - but that might be because I edited a file saved in another codepage?
2) You need to get the data into your system (perhaps you already have it?).
I used Word and information found here and further explained here. The subscript in Word is just a font setting (not Unicode), so don't let that fool you (copy-pasting from your question gives exactly the same result). Basically, you type the hexadecimal value of subscript two (2082) in Word and then press Alt + X.
If you want to enter the actual data in a Progress-based GUI, I haven't been successful so far; see the ABL sketch after this list for producing the characters in code instead. Perhaps you could look at changing registry values as described in the links and continue along that path. I don't want to do that for just basic testing...
3) You will need a font with decent support for these characters. Some fonts don't support them at all!
Screenshots of the characters rendered in Segoe UI, the default system font (possibly MS Sans Serif), and Arial.
4) Database? I'm unsure whether you will need CLOB fields to store this in your database. Most likely you won't.
Hope this is enough to at least get you started!
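Regarding step 2, here is a minimal ABL sketch for producing the characters in code rather than pasting them. It assumes the session was started with -cpinternal utf-8 as in step 1, so that CHR() can be given Unicode code points (U+2082 = 8322 is subscript two, U+2085 = 8325 is subscript five); treat that CHR() behavior as an assumption to verify on your version:

/* Build "C₂H₅OH" from Unicode subscript digits.              */
/* Assumes -cpinternal utf-8 so CHR(n) takes code points.     */
DEFINE VARIABLE cFormula AS CHARACTER NO-UNDO.

ASSIGN cFormula = "C" + CHR(8322) + "H" + CHR(8325) + "OH".

/* Whether this renders correctly still depends on the font,  */
/* as noted in step 3.                                        */
MESSAGE cFormula VIEW-AS ALERT-BOX.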

Fortran: how can double precision variable read and hold string content from an input file

I'm converting a rather large old fixed-format code written in Fortran 77 to free format. Within the code I frequently encounter read statements like
DOUBLE PRECISION :: VARIABLE
read(1,10) VARIABLE
10 format(2A10)
However, what it reads from the input file is in fact a line of text. The code runs perfectly fine, but it crashes when one tries to read VARIABLE from a namelist instead of a fixed-format input file.
How is this possible in Fortran? Is there any reference where I can find more information about this?
Any help is greatly appreciated.
This comes from the days before F77, when doubles and integers were used for storing characters. Judging from the format statement, this is probably from a CDC machine, which could store ten six-bit characters in each word. Double precision was two words, so it was two lots of 10 characters. If you change the code to
CHARACTER(LEN=20) VARIABLE
READ(1,10) VARIABLE
10 FORMAT(A20)
it should work. There isn't a lot of information around on CDC compilers. I've never tried using a namelist with one, so I can't really comment on that. Try http://bitsavers.trailing-edge.com/pdf/cdc/cyber/cyber_70/chippewa/Chippewa_Fortran-Run_Apr66.pdf
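For the namelist case mentioned in the question, the same character declaration should work there too. A minimal free-form sketch (the file and group names are illustrative):

program read_label
  implicit none
  character(len=20) :: variable
  namelist /input/ variable

  ! File name is illustrative.
  open(unit=1, file='input.nml', status='old')
  read(1, nml=input)
  close(1)
  print *, variable
end program read_label

with input.nml containing, for example:

&input
  variable = 'SOME HEADER TEXT'
/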

NPOI number wrong format read

I have an xlsx file and try to read numbers from it and put them in another file. The problem is that some numbers are read incorrectly, and I have no idea why. For example:
Number in Excel | Number read
-----------------------------
139,8 | 1,398E+16
2,2 | 2,2E+16
The interesting thing is that this problem happens only with some numbers. The formatting for all numbers is the same. NPOI reads the exact number from Excel, not the formatted one, so I checked the values, but they are all the same as the formatted ones.
OK, I guess I found the problem. Now I just need to find a solution. I extracted the xlsx file and checked the real values stored in the cells. The problem is that when I have the value 139.80000000000001, it is read as 1,398E+16, so I guess NPOI interprets the formatting wrongly: it thinks that the dot (.) separates thousands, while it doesn't.
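If that is the cause, it is a culture problem rather than a formatting problem: parsing the raw stored value under a culture whose decimal separator is a comma treats the dot as a thousands separator. A minimal C# sketch of the pitfall itself, independent of NPOI (the culture name is illustrative):

using System;
using System.Globalization;

class Demo
{
    static void Main()
    {
        // The raw value as stored inside the xlsx XML:
        string raw = "139.80000000000001";

        // Under de-DE ("," is the decimal separator, "." groups
        // thousands) the dot is swallowed as a group separator,
        // giving roughly 1,398E+16:
        double wrong = double.Parse(raw, new CultureInfo("de-DE"));

        // Under the invariant culture it parses as expected: 139.8.
        double right = double.Parse(raw, CultureInfo.InvariantCulture);

        Console.WriteLine("{0:E3} vs {1}", wrong, right);
    }
}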
Just for the record: I updated from the alpha to the beta, and it worked. Now I get the exact value that is in the cell.
The beta can be found here.
Looks like this is a known issue, and there's a planned fix in an upcoming NPOI 2.0 beta 1 release:
RELEASE NOTES
...
fix decimal seperated by comma instead of dot
It looks to be a bug in NPOI 2.0 alpha. Please try NPOI 2.0 beta 1 if it still exists, we will plan to fix it in 2.0 final release

Bad MySQL import, now we have garbage showing in place of utf-8 chars

We restored from a backup in a different format to a new MySQL structure (which is set up correctly for UTF-8 support). We have weird characters showing in the browser, but we're not sure what they're called, so we can't find a master list of what they translate to.
I have noticed that they do, in fact, correlate to a specific character. For example:
â„¢ always translates to ™
â€” always translates to —
â€¢ always translates to •
I referenced this post, which got me started, but it is far from a complete list. Either I'm not searching for the correct name, or a "master list" of these bad-to-good conversions doesn't exist.
Reference:
Detecting utf8 broken characters in MySQL
Also, when searching via a MySQL query, if I search for â, MySQL always treats it as an "a". Is there any way to tweak my MySQL queries so that they are more literal? We don't use internationalization much, so I can safely assume any field containing the â character is a problematic entry that needs to be remedied by the "fixit" script we're building.
Instead of designing a "fixit" script to go through and replace this data, I think it would be better to fix the issue directly. It seems the data was originally stored in a format other than UTF-8, so when you brought it into the table that was set up for UTF-8, the text was garbled. If you have the opportunity, go back to your original backup to determine the format the data was stored in. If you can't do that, you will probably need a bit of trial and error to figure out which format the data is in. Once you know that, however, conversion is easy. Read the following article's section on repairing:
http://www.istognosis.com/en/mysql/35-garbled-data-set-utf8-characters-to-mysql-
Basically, you set the column to BINARY and then set it to the original charset. That should make the text appear properly (a good check that you are using the correct charset). Once that is done, set the column to UTF-8. This converts the data properly and will correct the problems you are currently experiencing.
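As a concrete sketch of those three steps (the table name, column definition, and the latin1 guess are all placeholders; substitute whatever charset the verification step confirms):

-- 1) Drop the declared charset by going through a binary type:
ALTER TABLE articles MODIFY body VARBINARY(255);

-- 2) Reinterpret the bytes as the original charset and verify
--    the text now displays correctly:
ALTER TABLE articles MODIFY body VARCHAR(255) CHARACTER SET latin1;

-- 3) Convert to UTF-8:
ALTER TABLE articles MODIFY body VARCHAR(255) CHARACTER SET utf8;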