I am testing a web application in Selenium IDE. I want to access a single row of a table, but I am unable to select it. Please tell me what command and target I should write.
Your question is ambiguous, but I am assuming that you are trying to fetch the data/text available in a specific row of a table.
You can use the storeText command for this purpose. The syntax is as follows:
Command: storeText
Target: locator
Value: variable_name
I would suggest using XPath as the locator. In the above command, variable_name refers to the variable that will store the text fetched by the command.
To select a single row of the table, you need to know how to write the XPath for the required row.
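For example, assuming a table with id="results" (a hypothetical locator; adjust it to your page), the XPath for its third row would be //table[@id='results']/tbody/tr[3], giving:

Command: storeText
Target: xpath=//table[@id='results']/tbody/tr[3]
Value: rowText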
Now, to access the data in each row of the table, you can use the storeText command inside a while loop and index the XPath (element locator). As you store the text in the variable on each iteration, you can use the echo command to write it to the log. The required data can then be extracted from the log with grep on Linux.
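A minimal sketch of that loop, assuming a Selenium IDE build with flow-control support (while/endWhile, e.g. via the flow-control add-on), the hypothetical id="results" table above, and 5 rows; each line below is command | target | value:

store     | 1                                                 | i
while     | ${i} <= 5                                         |
storeText | xpath=//table[@id='results']/tbody/tr[${i}]/td[1] | rowText
echo      | ${rowText}                                        |
storeEval | ${i} + 1                                          | i
endWhile  |                                                   |

Each echo writes the current row's text to the log, where grep can pick it up.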
I am performing an ETL job via Pentaho 7.1.
The job is to populate a table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields to the stream fields (screenshot: Mapped Fields).
My table PRO_T_TICKETS has its schema (column names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL job?
I duplicated the TABLE_OUTPUT step for PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with a lowercase schema and populated the data into it.
But I want this data to be uploaded into the table PRO_T_TICKETS only, and with the UPPERCASE schema if possible.
I am attaching the whole job here along with the error thrown by Pentaho (screenshot: Pentaho Error). I have also tried my query with double quotes added to the column names, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced in the left panel and enable Force to upper case, Force to lower case, or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log (the line starting with INSERT INTO "public"."PRO_T_TICKETS"("OID"...) into your SQL developer tool and change the connection's advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion in previous steps, and try with one field rather than all 25.
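As background (this is standard PostgreSQL behavior, not specific to Pentaho): PostgreSQL folds unquoted identifiers to lowercase, while double-quoted identifiers keep their exact case. That is why an INSERT generated without quoting cannot find a table created with uppercase names:

INSERT INTO PRO_T_TICKETS (...)   -- unquoted: resolves to pro_t_tickets
INSERT INTO "PRO_T_TICKETS" (...) -- quoted: resolves to PRO_T_TICKETS exactly

The Advanced options above control which of these forms Pentaho generates.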
Just as a complement: it worked for me following the tips from AlainD, using specific configurations that I'd like to share with you. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
I did the following steps to make it work:
In the Table Input (MySQL), the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the Table Output (PostgreSQL), I typed everything in uppercase (schema, table name, and columns) and also checked "Specify database fields" (clicking "Get fields").
In the target DB connection (PostgreSQL), I set the options (in the "Advanced" section) "Quote all in database" and "Preserve case of reserved words".
PS: Ah, the last option is because I found out there was one more problem with my fields: there was a column called "Admin" (yes guys, they created a camel-case column using a reserved word!), and for that reason I had to set "Preserve case of reserved words" and type it as Admin (without quotes and in camel case) in the Table Output.
I am pretty new to Pentaho, so my question might sound very novice.
I have written a transformation in which I am using a CSV file input step and a Table Input step.
Steps I followed:
Initially, I created a parameter in the transformation properties. The parameter birthdate doesn't have any default value set.
I have used this parameter in the PostgreSQL query in the Table Input step in the following manner:
select * from person where EXTRACT(YEAR FROM birthdate) > ${birthdate};
I am reading the CSV file using the CSV file input step. How do I assign the birthdate value present in my CSV file to the parameter I created in the transformation?
(OR)
Could you guide me through the process of assigning the CSV field value directly to the SQL query used in the Table Input step, without the use of a parameter?
TL;DR: I recommend using a Database Join step, as in my third suggestion below.
See the last image for reference.
First idea - Using Table Input as originally asked
Well, you don't need any parameter for that, unless you are going to provide the value for that parameter when you ask the transformation to run. If you need to read the data from a CSV, you can do it with this approach.
First, read your CSV and make sure the rows are OK.
After that, use a Select Values step to keep only the columns to be used as parameters.
In the Table Input, use a placeholder (?) to mark where the data goes, and set it to run once for each row it receives from the source step (see the sketch below).
Just keep in mind that the order of the columns received by the Table Input (the columns coming out of Select Values) is the order in which they are bound to the placeholders (?). This is not a problem in your case, which uses only one placeholder, but keep it in mind as you ramp up with Pentaho.
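As a sketch under the question's own schema, the Table Input query would then be:

SELECT * FROM person WHERE EXTRACT(YEAR FROM birthdate) > ?

with "Insert data from step" pointing at the Select Values step and "Execute for each row?" checked, so the single incoming birthdate field feeds the single placeholder.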
Second idea - Using a Database Lookup
This is another approach. You cannot customize the query sent to the database, but you may get better performance because you can set the "Enable cache" flag. If you don't need a function in your WHERE clause, this option is really recommended.
Third idea - Using a Database Join
This is my recommended approach if you need a function in your WHERE clause. It looks a lot like the Table Input approach, but you can skip the Select Values step and choose which columns to use, repeat the same column several times, and enable an "outer join" flag that also returns the rows for which the query found no result (see the sketch below).
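A sketch of the Database Join configuration for the same case (the query shape is unchanged; the parameter is now picked straight from the incoming CSV stream):

SELECT * FROM person WHERE EXTRACT(YEAR FROM birthdate) > ?

with the CSV's birthdate column listed as the parameter field and the "Outer join?" flag enabled if you also want to keep CSV rows for which the query returns nothing.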
Pro tip: if the transformation runs too slowly, try using multiple copies of the step (documentation here) and, obviously, make sure the table has the appropriate indexes in place.
Yes, there is a way of assigning the value directly without using a parameter. Do as follows.
Use a "Block this step until steps finish" step to hold the Table Input step until the CSV file input step completes.
Following is how you configure each step.
Note:
The Postgres query should be: select * from person where EXTRACT(YEAR FROM birthdate) > ?::integer
Check "Execute for each row" and "Replace variables in script" in the Table Input step.
Select only the birthdate column in the CSV file input step.
I execute a query using the Python script below, and the destination table gets populated with 2,564,691 rows. When I run the same query in the Google BigQuery console, it returns 17,379,353 rows (the query is as-is). I was wondering whether there is some issue with the script below. I am not sure whether --replace in bq query replaces the past result set instead of appending to it.
Any help would be appreciated.
import os
import time

dateToday = time.strftime("%Y/%m/%d")
dateToday1 = dateToday.replace('/', '')
# Raw string: the "\U" in "C:\Users" would otherwise be an invalid escape
commandStr = r"type C:\Users\query.txt | bq query --allow_large_results --replace --destination_table table:dataset1_%s -n 1" % (dateToday1)
os.system(commandStr)  # run the shell pipeline
In the BigQuery Web UI you can use the Query History option to navigate to the respective queries.
Once you locate them, you can expand the respective entries and see exactly what query was executed.
I am more than sure that just by comparing the query texts you will see the source of the "discrepancy" right away!
Added: In Query History you can see not only the query text but also all the configuration properties that were used for the respective query, such as Write Preference. So even if the query text is the same, you may see a difference in configuration that will give you a clue.
I am using a Java-based program, and I am writing a simple SELECT query inside it to retrieve data from a PostgreSQL database. The data comes with a header, which breaks the rest of my code.
How do I get rid of all column headings in an SQL query? I just want to print out the raw data without any headings.
I am using the Building Controls Virtual Test Bed (BCVTB) to connect my database to EnergyPlus. BCVTB has a database actor in which you can write a query, receive data, and send it to your other simulation program. I decided to use PostgreSQL; however, when I write SELECT * FROM mydb, it returns the data with the column names (header). I just want the raw data without the header. What should I do?
PostgreSQL does not send table headings the way a CSV file does. The protocol (as used via JDBC) sends only the rows. The driver does request a description of the rows that includes the column names, but it is not part of the result set rows, unlike the "header first" convention of CSV.
Whatever is happening must be a consequence of the BCVTB tools you are using, and I suggest pursuing it on that side of things.
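To illustrate the point (a minimal Python sketch using psycopg2 with hypothetical connection details; the actual fix belongs on the BCVTB side), the driver hands back only data rows, and the column names sit in separate metadata that you are free to ignore:

import psycopg2

# Hypothetical connection details
conn = psycopg2.connect(dbname="mydb", user="user", password="pass")
cur = conn.cursor()
cur.execute("SELECT * FROM mytable")  # hypothetical table
for row in cur:   # iteration yields data rows only; there is no header row
    print(row)
# Column names are metadata, kept apart from the rows:
print([col.name for col in cur.description])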
I want to execute a subjob if the number of rows previously processed is greater than N. To do this, I'm using the following configuration:
tFixedFlowInput has some rows.
tAggregateRow uses the count function and outputs one row with the number.
tSetGlobalVar then stores this value in a global variable that I can check in the Run If connector (in this case, (Integer)globalMap.get("tSetGlobalVar_1") > 3).
tMsgBox then shows if the condition is true.
What I would like is to do the same in a more elegant way, using the minimum number of components. I would like to connect tAggregateRow (or even tFixedFlowInput) directly to tMsgBox with the Run If connector, but I haven't found a way to refer to the number of rows previously processed without using the output's row2.count variable.
How could I do something like this?
What should I put in the If condition to refer to the result of the tAggregateRow operation without connecting it to another otherwise-unnecessary component, as shown at the beginning?
For any Talend component, look under the Outline tab at the bottom of the left-side workspace pane. It lists the properties available via global variables for that component. Some properties, like the count of records inserted by output components, are only available once the component has finished executing (After).
For your case, you can try directly using ((Integer)globalMap.get("tFixedFlowInput_1_NB_LINE")), which gives the number of lines produced by tFixedFlowInput (available after it finishes).
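For example (a sketch reusing the component names above; adjust the _1 suffixes to match your job), the whole chain can collapse to tFixedFlowInput_1 -> (Run If) -> tMsgBox_1 with the If condition:

((Integer)globalMap.get("tFixedFlowInput_1_NB_LINE")) > 3

which drops the tAggregateRow and tSetGlobalVar steps entirely.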