I'm trying to run a query using JDBC, but I'm having difficulty binding values into the prepared statement. Here is a sample of what I was doing:
String queryString = "... WHERE location <# box '((?, ?),(?, ?))' ..."
PreparedStatement ps = this.connection.prepareStatement(queryString);
ps.setDouble(1, x1);
ps.setDouble(2, y1);
ps.setDouble(3, x2);
ps.setDouble(4, y2);
ps.executeUpdate();
Which gives me this error:
org.postgresql.util.PSQLException: The column index is out of range: 1, number of columns: 0.
I suspect the driver treats everything inside the single quotes as a string literal, so it doesn't see the ? characters as parameters to bind.
Does anyone know how I could fix this? Or rather what else I should be looking to do?
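One possible workaround (a sketch only, keeping your <# operator as written): build the box from numeric parameters using PostgreSQL's point(x, y) and box(point, point) constructor functions, so the ? placeholders sit outside any quoted literal and the driver can see them.
String queryString = "... WHERE location <# box(point(?, ?), point(?, ?)) ...";
PreparedStatement ps = this.connection.prepareStatement(queryString);
// the four parameters are now real placeholders, not text inside a string literal
ps.setDouble(1, x1);
ps.setDouble(2, y1);
ps.setDouble(3, x2);
ps.setDouble(4, y2);
ps.executeUpdate();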
I have an MS SQL table which contains a list of files that are stored in an ADLS Gen2 account. All files have the same schema and structure.
I have concatenated the results of the table into a string.
mystring = ""
for index, row in files.iterrows():
    mystring += "'" + row["path"] + "',"
mystring = mystring[:-1]
print(mystring)
OUTPUT
'abfss://[file]#[container].dfs.core.windows.net/ARCHIVE/2021/08/26/003156/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/08/30/002554/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/09/02/003115/file.parquet'
I am now attempting to pass the string using
sdf = spark.read.parquet(mystring)
however I am getting the error
IllegalArgumentException: java.net.URISyntaxException: Illegal character in scheme name at index 0: 'abfss://[file]#[container].dfs.core.windows.net/ARCHIVE/2021/08/26/003156/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/08/30/002554/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/09/02/003115/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/09/24/003516/file.parquet','abfss:/[file]#[container].dfs.core.windows.net/ARCHIVE/2021/10/07/002659/file.parquet'
When I manually copy and paste mystring into read.parquet, the code executes with no errors.
Maybe I'm going down a rabbit hole, but some feedback would be much appreciated.
After reproducing this on my end, I was able to get it working by collecting the paths into a list and passing them to spark.read.parquet, rather than one quoted, comma-joined string:
paths = []
for index, row in files.iterrows():
    paths.append(row["path"])

# unpack the list so each path is passed as a separate argument
df = spark.read.parquet(*paths)
NOTE: Make sure all the files have the same schema.
I'm trying to understand the indexOf expression (function) in Azure Data Factory.
Example
This example finds the starting index value for the "world" substring in the "hello world" string:
indexOf('hello world', 'world')
And returns this result: 6
I'm confused by what is meant by the 'index value' and how the example arrived at the result 6.
Also, using the above example, can someone let me know what would be the answer for the following expression?
#if(greater(indexof(string(pipeline().parameters.Config),'FilenameMask'),0),pipeline().parameters.Config.FilenameMask,'')
where the Config parameter holds a value such as:
{"FilenameMask":"accounts*."}
('Config' represents a field in a SQL database.)
Per the docs:
Return the starting position or index value for a substring. This function is not case-sensitive, and indexes start with the number 0.
hello world
01234567890
      ^
      +--- "world" found starting at position 6
Regarding the second part of your question, here's the expression rewritten for a bit of clarity:
#if( greater(indexof(string(pipeline().parameters.Config),'FilenameMask'),0)
,pipeline().parameters.Config.FilenameMask
,'')
which can be read as follows:
if the index of the string "FilenameMask" within x is greater than 0 then
return x.FilenameMask
else
return an empty string
where x is pipeline().parameters.Config, which is the value of your "Config" column from the database table. It will hold values such as
{"sparkConfig":{"header":"true"},"FilenameMask":"cashsales*."}
and
{"FilenameMask":"accounts*."}
The ADF expression can also be read as follows:
if the JSON in the Config column contains a "FilenameMask" key then
return the value of the FilenameMask key
else
return an empty string
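Concretely, for the sample value {"FilenameMask":"accounts*."}, indexof finds 'FilenameMask' at position 2 (assuming string() serializes the object exactly as shown), which is greater than 0, so the expression evaluates to accounts*. (the trailing dot is part of the mask). A rough sketch of the same logic in Java (illustration only, not ADF syntax; the value lookup is hard-coded where ADF would read pipeline().parameters.Config.FilenameMask):
String config = "{\"FilenameMask\":\"accounts*.\"}";   // string(pipeline().parameters.Config)
int idx = config.indexOf("FilenameMask");              // 2 here, because indexes are 0-based
String result;
if (idx > 0) {
    result = "accounts*.";                             // ADF would return Config.FilenameMask here
} else {
    result = "";                                       // key not found: empty string
}
System.out.println(result);                            // prints accounts*.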
I am not able to convert my string to a double. I am getting the below error:
conversion from string to double is not valid
I have tried the approaches below, but I keep getting the same error. I am using an Assign activity in UiPath with intVal defined as a Double, and row.Item("Tax Result") retrieves the value from Excel.
intval = Val(row.Item("Tax Result").ToString.Trim)
intVal = Double.Parse(row.Item("Tax Result").ToString.Trim , double)
intVal = cDbl(row.Item("Tax Result").ToString.Trim)
Of the three attempts above, the first one returns 0, while the other two give me the error
"conversion from string to double is not valid"
The Tax Result column in Excel stores values like 5.2, 19.8, 98.87. I want to sum all of these values as part of my requirement.
First, you don't need the .Item after row; it should just be row("Tax Result").
Ideally you should use Double.TryParse() to avoid any exceptions if the value can't be converted to a Double.
This can be achieved with an If activity: put the Double.TryParse check in the condition. If it succeeds, convert the string with Double.Parse(row("Tax Result").ToString.Trim); on failure, assign intVal to 0, or add whatever handling you need for values that can't be converted.
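Java has no Double.TryParse, so the sketch below (illustration only, not UiPath/VB.NET code) uses a try/catch guard to show the same idea: parse each cell when possible, fall back to 0 otherwise, and keep a running sum as the question requires. The sample values are the ones from the question.
double total = 0.0;
for (String cell : new String[] {"5.2", "19.8", "98.87"}) {
    double value;
    try {
        value = Double.parseDouble(cell.trim());   // parse succeeds for well-formed numbers
    } catch (NumberFormatException e) {
        value = 0.0;                               // mirrors assigning 0 when the cell can't be converted
    }
    total += value;
}
System.out.println(total);                         // approximately 123.87
In UiPath itself the equivalent check is a Double.TryParse call in the If activity's condition, as described above.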
I have created a .txt file named combitable1.txt in
C:\Users\Yamaha R6\Desktop\FileOpenModelica
I want to "load" data of this file into combitable1D in OpenModelica. If you see the image, under the voice "table", I have wrote :
loadResource("C:/Users/Yamaha R6/Desktop/FileOpenModelica/combitable1.txt")
but when I simulate the model, the following error occurs:
15:51:20 Translation Error
Class loadResource not found in scope (looking for a function or record)
What can I do?
You don't need to use the loadResource function in this case. You can leave the table parameter at its default:
parameter Real table[:, :] = fill(0.0, 0, 2)
"Table matrix (grid = first column; e.g., table=[0,2])"
Your file formatting should be like the following, assuming a text file myFile.txt:
#1
double myTable(200000,2)
0.000000 0.110519
0.001000 0.316248
0.002000 0.505827
0.003000 0.703204
0.004000 0.894942
0.005000 1.072796
... ...
With the following inputs to the Modelica.Blocks.Sources.CombiTimeTable:
parameter String fileName = "C:\\SomeLocation\\myFile.txt";
parameter String tableName = "myTable";
The fields do not have the right values.
table: leave empty
tableName: "tab1" (might be able to skip quotes)
fileName: use loadResource, but give the fully qualified Modelica name:
ModelicaServices.ExternalReferences.loadResource("c:/users....");
(Technically, loadResource is intended more for cases like ModelicaServices.ExternalReferences.loadResource("modelica://A/combiTable.txt").)
I am trying to update rows for a table using this query:
UPDATE point
SET ftp_base = ftp://ftp.geonet.org.nz/strong/processed/Proc/2007/02_Final/2001-02-04_191426/Vol3/data/20070204_191426_KFHS.v3a
WHERE evt_id = '1121';
It is giving me the error "syntax error at or near SET".
point is the name of a built-in datatype, so it is safest to enclose it in double quotes:
UPDATE "point"
SET ftp_base = 'your value goes here'
WHERE evt_id = 1121
Don't forget the single quotes around the character values, and do not put them around numbers.