Hi, I have an input BigDecimal(15,12) value in my source, and in the output I expect it as a string. For example, my source file has the value 0.000000000000, which I convert to a string in a tMap using:
"String.valueOf(column name)"
With this I get the output 0E-12, but the expected output is 0.000000000000. Can anyone provide a solution?
Maybe this will help:
new BigDecimal("String value").setScale(12, RoundingMode.HALF_DOWN).toPlainString();
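A quick standalone check of why this happens (a minimal sketch; the hard-coded BigDecimal stands in for your tMap input column):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class PlainStringDemo {
    public static void main(String[] args) {
        // the source value, BigDecimal(15,12)
        BigDecimal value = new BigDecimal("0.000000000000");

        // String.valueOf(value) delegates to BigDecimal.toString(), which
        // switches to scientific notation for this scale and yields "0E-12"
        System.out.println(String.valueOf(value));

        // toPlainString() never uses an exponent, so this prints "0.000000000000"
        System.out.println(value.setScale(12, RoundingMode.HALF_DOWN).toPlainString());
    }
}

In the tMap expression itself, that would look like row1.myColumn.setScale(12, java.math.RoundingMode.HALF_DOWN).toPlainString(), where row1.myColumn is a placeholder for your actual input column.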
I'm trying to write a Dataset to a text file.
Example:
datasets
.write
.text(path)
What I intend is to write "some\text" (the String the Dataset contains).
For Scala to interpret this String, the value has to be written with an escaped backslash, like this:
val text: String = "some\\text"
Of course, when testing in Scala, it prints the correct value ("some\text").
But when I write this Dataset with spark.write, it comes out in the file as "some\\text".
Reading the internal code, I only found an escape option for CSV writing.
Is there any way to solve this problem?
Thanks
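One minimal way to narrow this down (a sketch, assuming Spark with Scala; the output path is a placeholder): the text data source writes each row verbatim, so checking the in-memory string first tells you whether the extra backslash comes from the writer or was already in the data.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("textWriteCheck").getOrCreate()
import spark.implicits._

// "some\\text" in source code is the 9-character string some\text in memory
val ds = Seq("some\\text").toDS()

// inspect what is actually in memory before blaming the writer
ds.collect().foreach(s => println(s + " (length " + s.length + ")"))

// write it out; each row becomes one line in the output file, unescaped
ds.write.text("/tmp/text-write-check")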
I've got a data flow with a CSV file as source. The column NewPositive is a string and contains numbers formatted in European style with a dot as thousands separator, e.g. 1.019 meaning 1019.
If I use the function toInteger to convert my NewPositive column to an int via toInteger(NewPositive,'#.###','de'), I only get the thousands digit, e.g. 1 for 1.019, and not the rest. Why? For testing I tried creating a constant column: toInteger('1.019','#.###','de'), and it gives 1019 as expected. So why does the function not work for my column? The column is trimmed, and comparing the first value with the equality function, equals('1.019',NewPositive), returns true.
Please note: I know it's very easy to create a workaround with toInteger(replace(NewPositive,'.','')), but I want to learn how to use the toInteger function with the locale and format parameters.
Here is sample data:
Dato;NewPositive
2021-08-20;1.234
2021-08-21;1.789
I was able to repro this, and it looks like a bug to me. I have reported it to the ADF team and will let you know once I hear back from them. You already have a workaround, so please go ahead with that to unblock yourself.
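Until the fix lands, the workaround from the question as a derived column expression (NewPositive is the source column; it simply strips the thousands separator before the cast):

toInteger(replace(NewPositive, '.', ''))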
I'm trying to find a possible way to convert PSS/E native .raw files to Pandapower format.
My objective is to take advantage of the network plotting capabilities that are available in Pandapower.
For that, I first have to load my grid data into Pandapower, which means somehow bridging the gap between PSS/E .raw and Pandapower.
Literature says that a possible way of doing this is by using the 'psse2mpc' function available in Matpower.
I've tried to use it but I get the following error message:
(quote)
>> psse2mpc('RED1523.raw')
Reading file 'RED1523.raw' ............................................. done.
Splitting into individual lines ...error: regexp: the input string is invalid UTF-8
error: called from
psse_read at line 60 column 9
psse2mpc at line 68 column 21
(unquote)
I was informed that maybe I should save my .raw file (natively generated with PSS/E v33) in an older .raw format (corresponding to previous PSS/E versions).
I've tried this as well but still have the same error message.
Apart from this error, which so far keeps me from reaching my objective, I've been unable to guess the Pandapower "equivalent .raw" structure. Does anybody know what this input structure looks like in Pandapower?
If I knew how Pandapower needs to receive the input data, I could even try to write a tailor-made Python script that converts my .raw file into whatever Pandapower requires.
If somebody could help me get out of this labyrinth I would be most grateful!
Thanks.
Eneko.
You need to check your .raw file to fill in the other inputs of the psse2mpc function. For instance, if I have the file case39.raw and I want to convert it to Matpower format as case39mpc.m, then I must enter something like this:
psse2mpc('case39.raw', 'case39mpc.m', '1', '29')
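Regarding the Pandapower side of the question: a pandapower network is not an "equivalent .raw" file but a container of pandas DataFrames (net.bus, net.line, net.trafo, net.gen, ...), and its converter submodule can load a Matpower case saved as a .mat file. So once psse2mpc succeeds, one possible route is the following sketch (file and variable names are placeholders; it assumes the case was saved in Matlab/Octave with save('case39mpc.mat', 'mpc')):

import pandapower as pp
import pandapower.converter as pc

# load the Matpower case struct from the .mat file
# (casename_mpc_file is the variable name the struct was saved under)
net = pc.from_mpc("case39mpc.mat", f_hz=50, casename_mpc_file="mpc")

# the element tables show exactly what input structure pandapower expects
print(net.bus.head())
print(net.line.head())

# sanity check: run a power flow on the converted network
pp.runpp(net)
print(net.res_bus.head())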
I am new to using Talend.
The first table is fed into tHashOutput1. This part works fine.
tHashOutput1 is fed into tHashOutput2 and tHashOutput3.
tMap_3 is fed from user input. When I try to feed tHashInput3 into the tMap, I am not allowed to do this. What is wrong here?
Please make sure that tHashInput3 has the same schema as tHashOutput1; as you can see, there is a small yellow warning on tHashInput3 that does not exist on tHashInput2. Also, there is no output from tMap3, so you can't see whether it works.
I have a dictionary of PySpark RDDs and am trying to convert them to data frames, save them as variables, and then join them. When I attempt to convert one of my RDDs to a data frame, I get the following error:
File "./spark-1.3.1/python/pyspark/sql/types.py",
line 986, in _verify_type
"length of fields (%d)" % (len(obj), len(dataType.fields)))
ValueError: Length of object (52) does not match with length of fields (7)
Does anyone know what this exactly means or can help me with a work around?
I agree - we need to see more code - obfuscated data is fine.
You are using Spark SQL, it seems (sql types) - mapped onto what? HDFS/text?
From the error, it would appear that your schema is incorrect, leading to an error when creating the DataFrame.
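For illustration, this is the situation _verify_type complains about (a sketch against the Spark 1.3-era SQLContext API; names and data are made up): the schema has 7 fields, so every row passed to createDataFrame must have exactly 7 values.

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import StructType, StructField, StringType

sc = SparkContext("local", "schema-check")
sqlContext = SQLContext(sc)

# a schema with 7 fields -> every row must have exactly 7 values
schema = StructType([StructField("c%d" % i, StringType(), True) for i in range(7)])

bad_rdd = sc.parallelize([tuple(["x"] * 52)])   # 52 values per row
good_rdd = sc.parallelize([tuple(["x"] * 7)])   # 7 values per row

# this raises: ValueError: Length of object (52) does not match
# with length of fields (7)
# sqlContext.createDataFrame(bad_rdd, schema).collect()

# matching row width and schema width works
sqlContext.createDataFrame(good_rdd, schema).show()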
This was due to passing an incorrect RDD, sorry everyone. I was passing an RDD that didn't fit the schema in the code I was using.