Watir-Webdriver limited functionality issue - watir-webdriver

When I run the code below, everything works except for the last line. I do not get an error, but that line of code is not working. How do I fix this problem?
require "watir-webdriver"
test = "test"
ia1 = "http://" + test + "mypurefleet.pureenergyservices.com/"
e = Watir::Browser.new :chrome
e.goto(ia1)
e.frame(:name => "content").text_field(:name => "txtPwd").set "name"
When I run the code in IRB I get the following response.
[14.628][INFO]: waiting for pending navigations...
[14.628][INFO]: done waiting for pending navigations
[14.658][INFO]: waiting for pending navigations...
[14.659][INFO]: done waiting for pending navigations
[14.659][INFO]: sending WebDriver response: 200 {
"sessionId": "65f50d0fd6ce5acad36bf310db8a7ef",
"status": 7,
"value": {
"message": "no such element\n (Session info: chrome=27.0.1453.110)\n (Dr
iver info: chromedriver=2.0,platform=Windows NT 6.1 SP1 x86)"
}
}
[14.667][INFO]: received WebDriver request: POST /session/65f50d0fd6ce5acad36bf310db8a7ef/element {
"using": "xpath",
"value": ".//frame[#name='content']"
}
[14.669][INFO]: waiting for pending navigations...
[14.670][INFO]: done waiting for pending navigations
[14.688][INFO]: waiting for pending navigations...
[14.688][INFO]: done waiting for pending navigations
[14.689][INFO]: sending WebDriver response: 200 {
"sessionId": "65f50d0fd6ce5acad36bf310db8a7ef",
"status": 0,
"value": {
"ELEMENT": "0.1280214337166388:1"
}
}
...
[15.000][INFO]: waiting for pending navigations...
[15.001][INFO]: done waiting for pending navigations
[15.025][INFO]: waiting for pending navigations...
[15.025][INFO]: done waiting for pending navigations
[15.026][INFO]: sending WebDriver response: 200 {
"sessionId": "65f50d0fd6ce5acad36bf310db8a7ef",
"status": 0,
"value": null
}
[15.032][INFO]: received WebDriver request: POST /session/65f50d0fd6ce5acad36bf310db8a7ef/element/0.24873712522143:1/value {
"value": [ "name" ]
}
[15.033][INFO]: waiting for pending navigations...
[15.034][INFO]: done waiting for pending navigations
[15.092][INFO]: waiting for pending navigations...
[15.093][INFO]: done waiting for pending navigations
[15.093][INFO]: sending WebDriver response: 200 {
"sessionId": "65f50d0fd6ce5acad36bf310db8a7ef",
"status": 0,
"value": null
}
=> nil
irb(main):007:0>

Works fine here on mac:
MEDBEDbs-iMac:~ medbedb$ irb
1.9.3p392 :001 > require "watir-webdriver"
=> true
1.9.3p392 :002 > test = "test"
=> "test"
1.9.3p392 :003 > ia1 = "http://" + test + "mypurefleet.pureenergyservices.com/"
=> "http://testmypurefleet.pureenergyservices.com/"
1.9.3p392 :004 > e = Watir::Browser.new :chrome
=> #<Watir::Browser:0x..fbf7bb9bed066d83c url="about:blank" title="about:blank">
1.9.3p392 :005 > e.goto(ia1)
=> "http://testmypurefleet.pureenergyservices.com/"
1.9.3p392 :006 > e.frame(:name => "content").text_field(:name => "txtPwd").set "name"
=> {}
1.9.3p392 :007 > e.frame(:name => "content").text_fields.each { |p| puts p.html }
<input type="text" name="txtUser" size="20" maxlength="60" title="User ID">
<input type="password" name="txtPwd" size="20" maxlength="40" title="Password">
=> [#<Watir::TextField:0x..f95bb2b33ebcdbb2e located=true selector={:element=>(webdriver element)}>, #<Watir::TextField:0x130f931c73c6f6f8 located=true selector={:element=>(webdriver element)}>]
Try updating the chromedriver, and also try this:
gem update --system
gem update
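If updating doesn't help, here is a minimal sketch (frame and field names taken from the question) that waits for the field inside the frame and then reads the value back, so it's obvious whether the set actually took effect:

require "watir-webdriver"

browser = Watir::Browser.new :chrome
browser.goto "http://testmypurefleet.pureenergyservices.com/"

# Wait for the field inside the frame before typing, then read the value back
pwd = browser.frame(:name => "content").text_field(:name => "txtPwd")
pwd.wait_until_present
pwd.set "name"
puts pwd.value   # should print "name" if the set worked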

Related

Spark Job SUBMITTED but not RUNNING after submit via REST API

Following the instructions on this website, I'm trying to submit a job to Spark via the REST API /v1/submissions.
I tried to submit the SparkPi example:
$ ./create.sh
{
"action" : "CreateSubmissionResponse",
"message" : "Driver successfully submitted as driver-20211212044718-0003",
"serverSparkVersion" : "3.1.2",
"submissionId" : "driver-20211212044718-0003",
"success" : true
}
$ ./status.sh driver-20211212044718-0003
{
"action" : "SubmissionStatusResponse",
"driverState" : "SUBMITTED",
"serverSparkVersion" : "3.1.2",
"submissionId" : "driver-20211212044718-0003",
"success" : true
}
create.sh:
curl -X POST http://172.17.197.143:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "appResource": "/home/ruc/spark-3.1.2/examples/jars/spark-examples_2.12-3.1.2.jar",
  "sparkProperties": {
    "spark.master": "spark://172.17.197.143:7077",
    "spark.driver.memory": "1g",
    "spark.driver.cores": "1",
    "spark.app.name": "REST API - PI",
    "spark.jars": "/home/ruc/spark-3.1.2/examples/jars/spark-examples_2.12-3.1.2.jar",
    "spark.driver.supervise": "true"
  },
  "clientSparkVersion": "3.1.2",
  "mainClass": "org.apache.spark.examples.SparkPi",
  "action": "CreateSubmissionRequest",
  "environmentVariables": {
    "SPARK_ENV_LOADED": "1"
  },
  "appArgs": [
    "400"
  ]
}'
status.sh:
export DRIVER_ID=$1
curl http://172.17.197.143:6066/v1/submissions/status/$DRIVER_ID
But when I try to get the status of the job (even after a few minutes), I get "SUBMITTED" rather than "RUNNING" or "FINISHED".
Then I looked at the log and found:
21/12/12 04:47:18 INFO master.Master: Driver submitted org.apache.spark.deploy.worker.DriverWrapper
21/12/12 04:47:18 WARN master.Master: Driver driver-20211212044718-0003 requires more resource than any of Workers could have.
# ...
21/12/12 04:49:02 WARN master.Master: Driver driver-20211212044718-0003 requires more resource than any of Workers could have.
However, in my spark-env.sh, I have
export SPARK_WORKER_MEMORY=10g
export SPARK_WORKER_CORES=2
I have no idea what happened. How can I make it run normally?
Since you've checked the resources and you have enough, it might be a network issue: the executors may not be able to connect back to the driver program. Allow traffic on both the master and the workers.
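As a rough sketch of what that means in practice (the ports are the ones from the question; the driver's own ports are random by default, so traffic between the hosts should be open in both directions):

# From each worker, confirm the master's RPC and REST ports are reachable
nc -zv 172.17.197.143 7077
nc -zv 172.17.197.143 6066

# If a host firewall is blocking them, opening the ports might look like this
# (ufw shown only as an example; use whatever firewall tooling your hosts have)
sudo ufw allow 7077/tcp
sudo ufw allow 6066/tcp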

OpsManager mongodb deployment issue adding PLAIN auth

I'm trying to enable PLAIN authentication security on a MongoDB replica set shard managed with Ops Manager, following their documentation https://docs.opsmanager.mongodb.com/v4.0/tutorial/enable-ldap-authentication-for-group/ .
The issue I'm facing is that the automation agent fails to get the mongos status while restarting after enabling security. Please see the error output below:
<mongos_5> [09:18:19.711] Failed to compute states :
<mongos_5> [09:18:19.711] Error calling ComputeState : <mongos_5> [09:18:19.632] Error getting current config from running mongo using conn params = mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error getting pid for mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{serverStatus 1} {locks false} {recordStats false}]) :
result={"$clusterTime":{"clusterTime":6808443558471663617,"signature": {"hash":"e44BxV30B7dTpampo4VZsVuio7E=","keyId":6808441655801151517}},"code":13,"codeName":"Unauthorized",
"errmsg":"command serverStatus requires authentication","ok":0,"operationTime":6808443558471663617} connection=&{mongos01:27017 (local=false) 2 true 0xc4207b21a0 2020-03-26 09:18:19.627337419 +0000 UTC 0xc4207bdef0 <nil> }
identityUsed= : command serverStatus requires authentication
I noticed that even though Ops Manager is not able to get the status, security was enabled successfully and the PLAIN authentication mechanism works, but the status hangs at
Start the process ... Start MongoDB process
I tried this over the API following the mongodb-labs repo https://github.com/mongodb-labs/mms-api-examples/blob/master/automation/api_usage_example/configs/security_ldap_cluster.json and also manually following the MongoDB docs, but every time I face the same error.
In the end, I enabled LDAP (PLAIN) only for mongo in the mongo config file (see the Ops Manager API snippet call example below) and avoided enabling it in Ops Manager for the agents as well.
{
  "args2_6": {
    "net": {
      "port": 28001
    },
    "replication": {
      "replSetName": "rs0"
    },
    "storage": {
      "dbPath": "/data/mongo"
    },
    "systemLog": {
      "destination": "file",
      "path": "/data/mongo/mongodb.log"
    },
    "security": {
      "authorization": "enabled"
    },
    "setParameter": {
      "saslauthdPath": "",
      "authenticationMechanisms": "PLAIN,MONGO-CR,SCRAM-SHA-256"
    }
  }, ...
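For context, pushing a config like this to Ops Manager would look roughly like the call below; the host, group ID, user, and API key are placeholders, and I'm assuming the same automationConfig endpoint the linked mongodb-labs examples drive:

# Hypothetical example; adjust host, GROUP_ID and credentials for your deployment
curl --user "user@example.com:API_KEY" --digest \
  --header "Content-Type: application/json" \
  --request PUT \
  --data @security_ldap_cluster.json \
  "https://opsmanager.example.com/api/public/v1.0/groups/${GROUP_ID}/automationConfig"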

Gatling is freezing when I use jsonPath

I have Gatling version 3.0.0-RC4.
I have the following Gatling code
object Signup {
  val feeder = csv("phones.csv").circular
  var signup = tryMax(2) {
    exec(
      http("Get details")
        .get("/v2/dummy/2")
        .check(status.is(200))
    )
      .feed(feeder)
      .exec(
        http("Signup")
          .post("/v2/user/customer")
          .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
          .header(HttpHeaderNames.Accept, HttpHeaderValues.ApplicationJson)
          .header("Set-Cookie", "id=2")
          .body(StringBody("""{
            "email": "rm@gmail.com",
            "phone_number": "${phoneNumber}",
            "first_name": "Rishi",
            "last_name": "Mukherjee",
            "pin": "1234",
          }""")).asJson
          .check(
            jsonPath("$.otp_data.otp_uuid").saveAs("lastResponse")))
  }.exitHereIfFailed
}
If I replace the line jsonPath("$.otp_data.otp_uuid").saveAs("OTPUUID") with status.is(200), the code runs just fine. But with this line, when I run the program, it freezes and keeps showing the following:
================================================================================
2018-10-12 17:36:11 5s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=1 KO=0 )
> Get details (OK=1 KO=0 )
---- Signup -------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 1 / done: 0
================================================================================
The problem is, I do not get any error or anything that would help me debug. In fact, this also happens when I run the included example AdvancedSimulationStep03.scala. What could the issue be, or is there anything I may be missing?
So, I had JDK 11. The docs say I should have JDK 8. I switched to JDK 8 and that fixed it.
Hopefully this helps anyone else who gets stuck.
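For anyone checking the same thing, the switch looks roughly like this (the java_home call is macOS-specific; other platforms set JAVA_HOME differently):

# Check which JDK the launcher will pick up
java -version

# On macOS, pointing JAVA_HOME at an installed JDK 8 looks roughly like this
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should now report 1.8.x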

Jenkins Pipeline using Groovy_Terraform plan job

I am creating a pipeline using Jenkins and trying to add logic so that it proceeds with apply only if the terraform plan is successful. Therefore I need to get the return value (0/1/2) from the sh terraform plan command, but I am getting the error below:
+ gt
+ echo 2
2
C:/Users/Smi/.jenkins/workspace/Pipe_Groovy@tmp/durable-33bd46fb/script.sh: line 2: gt: command not found
+ status
C:/Users/Smi/.jenkins/workspace/Pipe_Groovy@tmp/durable-33bd46fb/script.sh: line 2: status: command not found
[Pipeline] }
Below is the code:
sh "terraform init"
sh "terraform get"
sh "set +e; terraform plan -out=plan.out -detailed-exitcode; echo \$? > status"
def exitCode = readFile('status').trim()
def apply = false
echo "Terraform Plan Exit Code: ${exitCode}"
if (exitCode == "0") {
currentBuild.result = 'SUCCESS'
}
if (exitCode == "1") {
slackSend channel: '#ci', color: '#0080ff', message: "Plan Failed: ${env.JOB_NAME} - ${env.BUILD_NUMBER} ()"
currentBuild.result = 'FAILURE'
}
if (exitCode == "2") {
stash name: "plan", includes: "plan.out"
slackSend channel: '#ci', color: 'good', message: "Plan Awaiting Approval: ${env.JOB_NAME} - ${env.BUILD_NUMBER} ()"
try {
input message: 'Apply Plan?', ok: 'Apply'
apply = true
} catch (err) {
slackSend channel: '#ci', color: 'warning', message: "Plan Discarded: ${env.JOB_NAME} - ${env.BUILD_NUMBER} ()"
apply = false
currentBuild.result = 'UNSTABLE'
}
}
Please advise.
The shell is trying to find a command named gt and failing, so your job fails at this step. You probably wanted to use > instead.
There is a way to get the exit code of a shell command without redirecting it into a file and then reading it:
def status = sh(returnStatus: true, script: "set +e; terraform plan -out=plan.out -detailed-exitcode")
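A rough sketch of wiring that return value into the rest of the pipeline (reusing the stash/input steps from the question; note the status comes back as an integer, not a string):

def exitCode = sh(returnStatus: true, script: "terraform plan -out=plan.out -detailed-exitcode")
echo "Terraform Plan Exit Code: ${exitCode}"
if (exitCode == 2) {
    // changes detected: stash the plan, wait for approval, then apply it
    stash name: "plan", includes: "plan.out"
    input message: 'Apply Plan?', ok: 'Apply'
    sh "terraform apply plan.out"
} else if (exitCode == 1) {
    error "terraform plan failed"
}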
Do something as follows:
#!/bin/bash -xe
set +e
/terraform plan -var-file="${env}-custom.tfvars" -refresh=false -detailed-exitcode
PLAN_ECODE=$?
echo ${PLAN_ECODE}
set -e
if [ "${PLAN_ECODE}" -eq 1 ]; then
echo "ERROR!!! something is wrong planning!"
exit 1
elif [ "${PLAN_ECODE}" -eq 0 ]; then
echo "No changes!"
elif [ "${PLAN_ECODE}" -eq 2 ]; then
echo "Applying in progress!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
/terraform apply -auto-approve -var-file="${env}-custom.tfvars"
echo "$i has been updated (check logs above for details)" >> /tmp/logs.txt
fi

can't accept new chunks because there are still 1 deletes from previous migration

I have a MongoDB production cluster running 2.6.11 with 20 replica sets. I am getting a disk space issue because the majority of chunks are stored in one replica set. When I check the log, I can see that the chunk move failed because of "deletes from previous migration":
2015-12-28T17:13:32.164+0000 [conn6504] about to log metadata event: { _id: "db1-2015-12-28T17:13:32-56816dbc6b0464b0a5801db8", server: "db1", clientAddr: "xx.xx.xx.11:50077", time: new Date(1451322812164), what: "moveChunk.start", ns: "emailing_nQafExtB.reports", details: { min: { email: "xxxxxxx" }, max: { email: "xxxxxxx" }, from: "shard16", to: "shard22" } }
2015-12-28T17:13:32.675+0000 [conn6504] about to log metadata event: { _id: "db1-2015-12-28T17:13:32-56816dbc6b0464b0a5801db9", server: "db1", clientAddr: "xx.xx.xx.11:50077", time: new Date(1451322812675), what: "moveChunk.from", ns: "emailing_nQafExtB.reports", details: { min: { email: "xxxxxxx" }, max: { email: "xxxxxxx" }, step 1 of 6: 3, step 2 of 6: 314, note: "aborted", errmsg: "moveChunk failed to engage TO-shard in the data transfer: can't accept new chunks because there are still 1 deletes from previous migration" } }
I followed the answer from this question, but it doesn't work for me. I ran the stepDown command on one primary and then on every primary in my cluster. I did the same with the cleanUpOrphaned command; a rough sketch of those steps is below.
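This is only a sketch of the commands already described, run from the mongo shell directly against each shard's primary (not through mongos); the namespace is the one from the log lines above, and the loop pattern for cleanupOrphaned follows the MongoDB docs.

// Repeat until there are no more orphaned ranges to clean
var adminDb = db.getSiblingDB("admin");
var nextKey = {};
while (nextKey != null) {
    var res = adminDb.runCommand({ cleanupOrphaned: "emailing_nQafExtB.reports", startingFromKey: nextKey });
    printjson(res);
    nextKey = res.stoppedAtKey;
}

// Forcing an election on the current primary, as also tried above
// rs.stepDown();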
Has somebody run into this problem?
Thanks in advance for any insights.