I have created a transformation in Pentaho Kettle that pulls data from a Mongo collection via the MongoDB Input step. The problem I am facing is that the two named parameters I created in the same transformation are not being replaced in the Mongo Query Expression tab. Below is my Mongo query expression:
{$and:[{'key1':{'$in':['${para1}']}},{'key2':{'$in':['${para2}']}}]}
Below are the two options I have tried from the command line:
./pan.sh -file='/dir../pull_data.ktr' -param:para1=hello -param:para2=world -Level=Basic > /dir../etl.log
./pan.sh -file='/dir../pull_data.ktr' -param:"para1=hello" -param:"para2=world" -Level=Basic > /dir../etl.log
I am using Mac OS X and Pentaho Kettle (CE 5.2). Is it possible to use named parameters in the query expression of the same transformation?
Use ./pan.sh -file='/dir../pull_data.ktr' -listparam to make sure you have declared your parameters in the transformation settings.
Variable substitution inside the JSON query should work, as far as I can see in the source code.
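As a sanity check on what the substituted expression should look like, here is a minimal Python sketch: Kettle's ${name} placeholder syntax happens to match Python's string.Template, so we can preview the expansion outside of Kettle (the literal $and/$in dollars are escaped as $$ for Template's sake; the parameter values are the ones from the pan.sh invocation above).

```python
from string import Template

# The query expression from the transformation; $$ escapes the literal
# dollars in the Mongo operators $and and $in.
expr = "{$$and:[{'key1':{'$$in':['${para1}']}},{'key2':{'$$in':['${para2}']}}]}"

# Preview what Kettle's variable substitution should produce.
expanded = Template(expr).substitute(para1="hello", para2="world")
print(expanded)
```

If the query that reaches MongoDB does not look like this expanded form, the parameters were never declared or never passed.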
I execute a query using the Python script below, and the destination table gets populated with 2,564,691 rows. When I run the same query in the Google BigQuery console, it returns 17,379,353 rows (the query is identical). I was wondering whether there is some issue with the script below. I am not sure whether --replace in bq query replaces the previous result set instead of appending to it.
Any help would be appreciated.
import time

# Date stamp for the destination table suffix, e.g. 20240101
dateToday = time.strftime("%Y/%m/%d")
dateToday1 = dateToday.replace('/', '')

# Raw string so the Windows path's backslashes are not treated as escapes
commandStr = r"type C:\Users\query.txt | bq query --allow_large_results --replace --destination_table table:dataset1_%s -n 1" % dateToday1
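For reference, a minimal sketch of the same invocation built as a helper, assuming the Windows type command and the documented bq query flags: --replace overwrites the destination table on every run, while --append_table would append to it instead, which is one plausible source of a row-count difference across repeated runs.

```python
import time

def build_bq_command(query_file, table, append=False):
    """Build the shell command that pipes a saved query into bq.

    append=False reproduces the original --replace behaviour, which
    overwrites the destination table on every run; append=True uses
    --append_table to add to the existing rows instead.
    """
    stamp = time.strftime("%Y%m%d")
    mode = "--append_table" if append else "--replace"
    return (r"type %s | bq query --allow_large_results %s "
            r"--destination_table %s_%s -n 1" % (query_file, mode, table, stamp))

print(build_bq_command(r"C:\Users\query.txt", "table:dataset1"))
```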
In the web UI you can use the Query History option to navigate to the respective queries.
After you locate them, you can expand the respective entries and see exactly which query was executed.
I am quite sure that just by comparing the query texts you will see the source of the "discrepancy" right away!
Added:
In Query History you can see not only the query text but also all the configuration properties that were used for the respective query, such as the write preference. So even if the query text is the same, you can spot a difference in configuration that will give you a clue.
I am testing a web application in Selenium IDE. I want to access a single row of a table, but I am unable to select it. Please tell me which command and target I should write.
Your question is ambiguous.
However, I assume you are trying to fetch the data/text available in a specific row of a table.
You can use the storeText command for this purpose. The syntax is as follows:
command: storeText
Target: locator
value: variable_name
I would suggest using XPath as the locator. In the above command, variable_name refers to the variable that will store the text fetched by the command.
To select a single row of the table, you need to know how to write an XPath expression for the required row.
To access the data in each row of the table, you can use the storeText command within a while loop and index the XPath (element locator). As you store the text in the variable on each iteration, you can use the echo command to write it to the log. The required data can then be extracted from the log using grep on Linux.
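The indexed-XPath idea can be sketched outside the browser as well. A minimal Python example using the standard library's xml.etree (which supports simple positional XPath predicates) against a made-up table, selecting one row by position the same way an indexed locator like //table/tr[2] would:

```python
import xml.etree.ElementTree as ET

# A made-up table standing in for the page under test.
html = """
<table id="results">
  <tr><td>alpha</td><td>1</td></tr>
  <tr><td>beta</td><td>2</td></tr>
  <tr><td>gamma</td><td>3</td></tr>
</table>
"""

root = ET.fromstring(html)

# Positional predicate: ./tr[2] selects the second row, just as an
# indexed XPath locator does in Selenium.
row = root.find("./tr[2]")
print([td.text for td in row])
```

In a loop you would increment the index in the locator string on each iteration, which is exactly what the while-loop-plus-storeText approach above does.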
I'm relatively new to DB2 for IBMi and am wondering the methods of how to properly cleanse data for a dynamically generated query in PHP.
For example if writing a PHP class which handles all database interactions one would have to pass table names and such, some of which cannot be passed in using db2_bind_param(). Does db2_prepare() cleanse the structured query on its own? Or is it possible a malformed query can be "executed" within a db2_prepare() call? I know there is db2_execute() but the db is doing something in db2_prepare() and I'm not sure what (just syntax validation?).
I know that if the passed values are in no way affected by user input there shouldn't be much of an issue, but if one wanted to cleanse data before using it in a query (without using db2_prepare()/db2_execute()), what is the checklist for DB2? The only thing I can find is to escape single quotes by prefixing them with another single quote. Is that really all there is to watch out for?
There is no magic "cleansing" happening when you call db2_prepare() -- it will simply attempt to compile the string you pass as a single SQL statement. If it is not a valid DB2 SQL statement, the error will be returned. Same with db2_exec(), only it will do in one call what db2_prepare() and db2_execute() do separately.
EDIT (to address further questions from the OP).
Execution of every SQL statement has three stages:
Compilation (or preparation), when the statement is parsed, syntactically and semantically analyzed, the user's privileges are determined, and the statement execution plan is created.
Parameter binding -- an optional step that is only necessary when the statement contains parameter markers. At this stage each parameter data type is verified to match what the statement text expects based on the preparation.
Execution proper, when the query plan generated at step 1 is performed by the database engine, optionally using the parameter (variable) values provided at step 2. The statement results, if any, are then returned to the client.
db2_prepare(), db2_bind_param(), and db2_execute() correspond to steps 1, 2 and 3 respectively. db2_exec() combines steps 1 and 3, skipping step 2 and assuming the absence of parameter markers.
Now, speaking about parameter safety, the binding step ensures that the supplied parameter values correspond to the expected data type constraints. For example, in the query containing something like ...WHERE MyIntCol = ?, if I attempt to bind a character value to that parameter it will generate an error.
If instead I were to use db2_exec() and compose a statement like so:
$stmt = "SELECT * FROM MyTab WHERE MyIntCol=" . $parm;
I could easily pass something like "0 or 1=1" as the value of $parm, which would produce a perfectly valid SQL statement that only then will be successfully parsed, prepared and executed by db2_exec().
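The same contrast can be demonstrated with any database that supports parameter markers. Here is a minimal Python sketch using the standard library's sqlite3, purely as an analogy for db2_prepare()/db2_bind_param()/db2_execute() rather than the DB2 driver itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTab (MyIntCol INTEGER, secret TEXT)")
conn.execute("INSERT INTO MyTab VALUES (1, 'top secret')")

parm = "0 OR 1=1"  # hostile input

# String concatenation: the hostile input becomes part of the statement
# text, the WHERE clause becomes "MyIntCol=0 OR 1=1", and every row leaks.
leaked = conn.execute(
    "SELECT * FROM MyTab WHERE MyIntCol=" + parm).fetchall()

# Parameter binding: the whole string is treated as a single value to
# compare against MyIntCol, so it matches nothing.
safe = conn.execute(
    "SELECT * FROM MyTab WHERE MyIntCol=?", (parm,)).fetchall()

print(len(leaked), len(safe))
```

The point is the one made above: binding keeps data out of the statement text entirely, which is why it protects you where quoting checklists cannot.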
I'm trying to update a column on SQL Server 2008 R2 using the HASHBYTES function. Here's a simplified version of the command:
UPDATE [tbl] SET [checksum] = HASHBYTES('MD5',[field1])
The problem is that it writes strange characters like this to all the fields:
"˜Iý¸¶C"KéS©c"
However, if I do a select (using the same fields):
select HASHBYTES('MD5',[field1]) from [tbl];
It returns a correct string:
0x9849FDB80C17B64322DA094BE963A963
Does anyone know why it would do this? I've tried it on a test database and the update command works as expected, but it doesn't work on our production server.
The reason you are getting this is that HASHBYTES returns a binary data type, which is not text.
Using the built-in function fn_varbintohexstr, you can convert the binary data into text as follows:
UPDATE [tbl] SET [checksum] = master.dbo.fn_varbintohexstr(HASHBYTES('MD5',[field1]))
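The binary-versus-text distinction is easy to see outside SQL Server too. A minimal Python sketch (with a made-up input value standing in for [field1]) showing that an MD5 digest is 16 raw bytes, and that producing a readable string requires an explicit hex conversion, which is essentially what fn_varbintohexstr does:

```python
import hashlib

value = b"example"  # stand-in for the contents of [field1]

digest = hashlib.md5(value).digest()    # 16 raw bytes, not text
hex_form = "0x" + digest.hex().upper()  # readable form, like fn_varbintohexstr

print(len(digest))
print(hex_form)
```

Storing the raw digest into a text column is what produces the garbage characters; storing the hex string (or, better, keeping the column varbinary) avoids the problem.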
I am using Rogue/Lift Mongo Record to query MongoDB. I am trying to build different queries according to the sort field name, so I have the name of the field I want to sort the results by as a string.
I have tried to use Record.fieldByName in orderAsc:
...query.orderAsc (elem => elem.fieldByName(columnName).open_!)
but I get a "no type parameter for orderAsc" error.
How can I make it work? Honestly, all the type-level programming in Rogue is quite difficult to follow.
Thanks
The problem is that you cannot easily generate a query dynamically with Rogue. As a solution, I used Lift Mongo DB directly, which allows the use of strings (without compile-time checking) for this kind of operation requiring dynamic sorting.