Selenium looping through Jenkins and permission denied in CLI - command-line

After struggling to get proper test suites working, I'm now pretty disappointed that, while following this tutorial as closely as possible (pretty straightforward, right?): Setting up Selenium server on a headless Jenkins CI build machine, Jenkins keeps looping on the current build.
So I decided to run a Selenium build by hand on the CI machine, and got this:
user#machine:/var/log$ export DISPLAY=":99" && java -jar /var/lib/selenium/selenium- server.jar -browserSessionReuse -htmlSuite *firefox http://staging.site.com /var/lib/jenkins/jobs/project/workspace/tests/selenium/testsuite.html /var/lib/jenkins/jobs/project/workspace/logs/selenium.html
24 janv. 2012 19:27:56 org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
19:27:59.927 INFO - Java: Sun Microsystems Inc. 20.0-b11
19:27:59.929 INFO - OS: Linux 3.0.0-14-generic amd64
19:27:59.951 INFO - v2.17.0, with Core v2.17.0. Built from revision 15540
19:27:59.958 INFO - Will recycle browser sessions when possible.
19:28:00.143 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
19:28:00.144 INFO - Version Jetty/5.1.x
19:28:00.145 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
19:28:00.147 INFO - Started HttpContext[/selenium-server,/selenium-server]
19:28:00.147 INFO - Started HttpContext[/,/]
19:28:00.183 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#16ba8602
19:28:00.184 INFO - Started HttpContext[/wd,/wd]
19:28:00.199 INFO - Started SocketListener on 0.0.0.0:4444
19:28:00.199 INFO - Started org.openqa.jetty.jetty.Server#6f7a29a1
HTML suite exception seen:
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:900)
at org.openqa.selenium.server.SeleniumServer.runHtmlSuite(SeleniumServer.java:603)
at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:287)
at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:245)
at org.openqa.grid.selenium.GridLauncher.main(GridLauncher.java:54)
19:28:00.218 INFO - Shutting down...
19:28:00.220 INFO - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=4444]
While understanding the output isn't that hard, figuring out what to do to fix this issue is.
Any chance you have already faced this kind of thing? Thanks

I only just got past these problems myself, but I was able to run your command when I pointed it at my .jar, test suite and report file. I'm thinking that perhaps the location of your files under
/var/lib/selenium
could be part of the problem. Try putting them somewhere your user has permission, perhaps under
/home/USERNAME/selenium
Other than that, the only thing I can say is make sure your .jar, test suite and report file are valid.
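More specifically, the stack trace shows createNewFile failing on the report file, so it is worth checking whether the account that runs the build can actually create /var/lib/jenkins/jobs/project/workspace/logs/selenium.html. A minimal sketch, assuming the build runs as the jenkins user (adjust the user and paths to your setup):
# Can the build user create the report file? (the "jenkins" user is an assumption)
sudo -u jenkins touch /var/lib/jenkins/jobs/project/workspace/logs/selenium.html \
  && echo "writable" || echo "permission problem"
# If that fails, inspect and, if appropriate, fix ownership of the directories involved.
ls -ld /var/lib/selenium /var/lib/jenkins/jobs/project/workspace/logs
sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs/project/workspace/logs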
Also (I assume this is a copy-and-paste error into Stack Overflow), this part of your command is incorrect:
/var/lib/selenium/selenium- server.jar
You are not getting the error I would expect from an incorrect jar location, so I assume something was lost when you pasted it into Stack Overflow.

Related

TeamCity fails to fetch files from Azure DevOps

I am very new to TeamCity and want to learn how to automatically build and deploy applications from online TFS. My project has Windows and web apps developed in .NET only.
I have no code on my machine and am using online TFS directly as the source.
The VCS root settings work fine and the connection test succeeds.
In the build steps, I am using Build as the first step, using MSBuild. When I click RUN it starts the process and shows the first step as UPDATING SOURCES. In the code checkout directory it creates only directories; there is not a single source code file, even though I can see all the code files in TFS online. After this step it shows the error:
Failed to start MSBuild.exe. Failed to find project file at path:
C:\TeamCity\buildAgent\work\740b9db587af8795\ProjectName.sln
ProjectName.sln file exists in TFS at https://ProjectName.visualstudio.com/DefaultCollection/$/Main/MainBranch
I don't understand what I am missing. I googled a lot about this but wasn't able to work out what is wrong here. What extra steps do I need to take to get this working?
Appreciate your help.
FYI: TeamCity version 2018.1.3 (build 58658). The TeamCity server and build agent are on the same machine: Windows 10.
Edited:
Below are the logs from the RUN process.
Teamcity-Activities.log
[2019-01-07 19:44:23,906] INFO - s.buildServer.ACTIVITIES.AUDIT - build_type_edit_settings: "MLD / Main {id=Mld_Main, internal id=bt2}" build configuration settings were edited ("version before: 45, version after: 46") by "'admin'(AB) {id=1}" with comment "runners of 'Main' build configuration were updated"
[2019-01-07 19:44:34,460] INFO - s.buildServer.ACTIVITIES.AUDIT - build_add_to_queue: Build BUILD_PROMOTION{id=52} was added to queue by "'admin'(AB) {id=1}"
[2019-01-07 19:44:34,466] INFO - tbrains.buildServer.ACTIVITIES - Build added to queue; Queued build {Build promotion {promotion id=52, configuration={id=Mld_Main, internal id=bt2}, queued}, triggered by "'admin'(AB) {id=1}" (##userId='1' type='user')}
[2019-01-07 19:44:34,539] INFO - tbrains.buildServer.ACTIVITIES - Build started; MLD / Main {id=Mld_Main, internal id=bt2} #25 {promotion id=52, agent="Agent-Name" {id=1}, triggered by "'admin'(AB) {id=1}" (##userId='1' type='user'). Started 2019-01-07 19:44:34.514, running}
[2019-01-07 19:55:38,847] INFO - tbrains.buildServer.ACTIVITIES - Finished build MLD / Main {id=Mld_Main, internal id=bt2} #25 {promotion id=52, agent="Agent-Name" {id=1}, triggered by "'admin'(AB) {id=1}" (##userId='1' type='user'). Started 2019-01-07 19:44:41.748, finished. Status "FAILURE 'Cannot start build runner; exit code -42 (Step: Build (MSBuild))'"}
Teamcity-tfs.log
[2019-01-07 19:42:03,434] INFO - .vcs.tfs.java.TfsJavaWebRunner - Starting TFS out of process application
[2019-01-07 19:42:03,439] INFO - .vcs.tfs.java.TfsJavaWebRunner - TFS out of process application has been started
[2019-01-07 19:42:08,312] INFO - .vcs.tfs.java.TfsJavaWebRunner - Web server started at http://localhost:64729/api/commands
[2019-01-07 19:42:08,312] INFO - gers.vcs.tfs.TfsTimeoutWatcher - TFS out of process app idle timer has started
[2019-01-07 19:42:25,586] INFO - .vcs.tfs.java.TfsJavaWebRunner - TFS java web command has finished: TestConnection $/Main/Feature -s="https://ProjectName.visualstudio.com/" -p="*******", completed in 22.140 second(s)
[2019-01-07 19:43:00,661] INFO - .vcs.tfs.java.TfsJavaWebRunner - TFS java web command has finished: GetCurrentVersion $/Main/Feature -s="https://ProjectName.visualstudio.com/" -p="*******", completed in 1.607 second(s)
[2019-01-07 19:44:35,894] INFO - .vcs.tfs.java.TfsJavaWebRunner - TFS java web command has finished: GetCurrentVersion $/Main/Feature 31529 -s="https://ProjectName.visualstudio.com/" -p="*******", completed in 1.293 second(s)
Finally got the issue resolved!
The token generated for TFVC access did not have sufficient permissions. I changed the access token to have Full Access and it started working; I can now see all the code files in the checkout directory. Thanks Jesse for your comments.
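For anyone hitting the same wall, one quick way to check whether a PAT has the scopes TFVC needs, independently of TeamCity, is to call the TFVC REST API with it directly. This is only a rough sketch; the api-version value and the exact collection URL are assumptions based on the URLs in the question:
# Hypothetical check: use the PAT as the basic-auth password and ask for the .sln path.
# A 401/403 response points at missing token scopes rather than a TeamCity problem.
curl -s -u :YOUR_PAT_HERE \
  "https://ProjectName.visualstudio.com/DefaultCollection/_apis/tfvc/items?path=$/Main/MainBranch/ProjectName.sln&api-version=4.1"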

Service gnocchi-api not found

I've been installing Ceilometer for OpenStack Pike on Ubuntu 16.04 LTS using this install guide.
Everything went OK up to the moment when I tried to restart gnocchi-api and got this message:
Failed to start gnocchi-api.service: Unit gnocchi-api.service not found.
I checked /etc/init.d and there is no gnocchi-api script (although gnocchi-metricd is there, and it's working properly). I tried reinstalling the gnocchi-api package, but it didn't help. When I start gnocchi-api normally from the command line it works, although it prints a bunch of warnings (but I think they are common).
I'm looking for a way to make it run normally, as a service and using the conf file.
2017-11-27 20:01:40.593 6059 INFO gnocchi.rest.app [-] WSGI config used: /usr/lib/python2.7/dist-packages/gnocchi/rest/api-paste.ini
2017-11-27 20:01:40.753 6059 WARNING keystonemiddleware._common.config [-] The option "__file__" in conf is not known to auth_token
2017-11-27 20:01:40.759 6059 WARNING keystonemiddleware._common.config [-] The option "configkey" in conf is not known to auth_token
2017-11-27 20:01:40.760 6059 WARNING keystonemiddleware._common.config [-] The option "here" in conf is not known to auth_token
2017-11-27 20:01:40.762 6059 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
2017-11-27 20:01:40.768 6059 WARNING keystonemiddleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.1.1:8000/
DANGER! For testing only, do not use in production
apt-get currently pulls version 3.1.9 of gnocchi-api. If you manually install gnocchi-api 3.1.2, the service file is very much there in it.
service gnocchi-api start works fine with that version.
But I am not sure whether the functionality is OK or whether this is an intentional change in 3.1.9. Still to check these.
The same happens with the latest version on Ubuntu 16.04 / gnocchi version 4.2.0.
Confirmed bug as of now: https://bugs.launchpad.net/ceilometer/+bug/1750933
gnocchi-api.service unit cannot be started as it has not been created.
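Until the packaging bug is fixed, one interim workaround is to create the missing unit yourself. This is only a sketch: the /usr/bin/gnocchi-api path and the gnocchi service user are assumptions, and the wrapper still runs the test WSGI server shown in the warnings above, so it is not a production-grade deployment:
# Write a minimal systemd unit for the gnocchi-api wrapper (path and user are assumptions).
sudo tee /etc/systemd/system/gnocchi-api.service > /dev/null <<'EOF'
[Unit]
Description=Gnocchi API (interim unit, test WSGI server)
After=network.target

[Service]
User=gnocchi
ExecStart=/usr/bin/gnocchi-api
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl start gnocchi-api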

PhantomJS Karma runner connects but doesn't execute tests in C9.io

I followed the directions on the Karma website for running Karma in the Cloud9 IDE (http://karma-runner.github.io/0.10/plus/cloud9.html).
My Karma config correctly contains:
hostname: process.env.IP,
port: process.env.PORT,
runnerPort: 0,
My terminal output:
Running "karma:test" (karma) task
INFO [karma]: Karma v0.10.10 server started at http://0.0.0.0:8080/
INFO [launcher]: Starting browser PhantomJS
WARN [launcher]: PhantomJS have not captured in 60000 ms, killing.
INFO [launcher]: Trying to start PhantomJS again.
I am able to see the words "Karma starting..." at
http://..c9.io/
However, I receive the following error logged in my console:
Mixed Content: The page at 'https://test-raptoria.c9.io/?
_c9_id=livepreview24&_c9_host=https://ide.c9.io' was loaded
over HTTPS, but requested an insecure XMLHttpRequest
endpoint http://test-raptoria.c9.io/socket.io/
EIO=3&transport=polling&t=1426358810471-0'. This
request has been blocked; the content must be served over HTTPS.
Any ideas on how to fix this? It seems like the requests are being blocked.
I've shared my workspace here:
https://ide.c9.io/raptoria/test
Thank you!
Could you try changing the socket.io request to HTTPS? If that is not possible, try loading test-raptoria.c9.io over HTTP instead of HTTPS.
I downloaded my entire workspace from test (without the data folder and dependencies) and created a new one called simple. I ran 'grunt test' again and it worked. I must've corrupted my old workspace somehow.
Problem solved!
Thanks for helping me look into this.

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration arguments for MaxTemperatureDriver (source code on page 157) to
inputfile outputdir foo (deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
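To be clear, what I'm after is essentially the equivalent of the following plain-java invocation with the local job runner, just driven from Eclipse's Run dialog instead. This is shown in Unix shell syntax for brevity; the HADOOP_HOME layout, classpath and input/output paths are placeholders, not my actual setup:
# Sketch only: run the Tool-based driver directly, forcing the local runner and the local file system.
export HADOOP_HOME=/path/to/hadoop-2.4.0
export HADOOP_CP="$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/yarn/lib/*"

# "bin" stands in for the compiled classes directory Eclipse produces.
java -cp "bin:$HADOOP_CP" mark.MaxTemperatureDriver \
  -D mapreduce.framework.name=local -fs file:/// -jt local \
  inputfile outputdir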
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem

JBoss server log is showing an error

I am calling a script within the main script to start the JBoss server after releasing the build on the server. It successfully starts JBoss, but it shows the output below in the server/log/server.log file and on the console, which hangs.
To run the next build I need to kill this manually, which is not appropriate.
05:04:17,373 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-0.0.0.0-8209
05:04:17,451 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 2m:38s:444ms
05:04:20,912 WARN [PropertyMessageResources] Resource MessageResources_en_US.properties Not Found.
05:04:20,913 WARN [PropertyMessageResources] Resource MessageResources_en.properties Not Found.
Help would be highly appreciated.
Thanks.
By default, when you start your JBoss server, it is not started as a background process; the console just sits there showing the logs once the server is started. That must be why your script seems to hang: in reality it is just waiting for console output from the server.
To start JBoss as a background process, replace the line in your startup script where you invoke run.sh with:
nohup /path/to/jboss_home/jboss/bin/run.sh -b0.0.0.0 > /tmp/logs/jbosslogs.log &
This should start JBoss in the background and redirect all startup logs to the jbosslogs.log file. Since it is a background process, it will not hang at all.
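If the build script also needs to know when the server is actually up before it moves on, one option is to poll the redirected log for the "Started in" marker that appears in the output above. This is just a sketch, assuming the same log path as the nohup command and the JBoss 5.1 startup message format:
# Start JBoss in the background, capturing stdout and stderr.
nohup /path/to/jboss_home/jboss/bin/run.sh -b0.0.0.0 > /tmp/logs/jbosslogs.log 2>&1 &

# Poll for up to ~5 minutes for the "Started in" line before the build continues.
for i in $(seq 1 60); do
    grep -q "Started in" /tmp/logs/jbosslogs.log && break
    sleep 5
done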