First Cypress test always fails no matter what in CircleCI - docker-compose

I have a CircleCI config in which I am running machine executor, starting up my application using docker-compose, then running E2E tests in Cypress, that are not in docker, against this app. Cypress basically spins up a headless browser and tries to hit the specified url.
Now, no matter what, my first test always fails. I created 10 identical tests that just hit the root URL and click a button, and made them run first. The first one always fails.
The error is
CypressError: cy.visit() failed trying to load:
http://localhost:3000/
Which basically means there was no response, or the response was a 4xx/5xx error.
I thought the app might not be ready yet, so I added a sleep before starting the tests. I set it to 9 minutes (at 10 minutes the job times out on CircleCI). The first test failed. I ratcheted it down to 2 minutes. The first test still failed.
Again, to be clear, the first 10 tests are identical, so it's not test-specific.
Update
I have crossposted this to the CircleCI forum.

I think your server isn't ready.
Before running the Cypress tests, you have to wait for the server to come up, and you shouldn't use sleep for that.
You can use wait-on or start-server-and-test.
You can check this doc: https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
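For example, a minimal sketch (the port and the script names are assumptions, not taken from the question):
npx wait-on http://localhost:3000 && npx cypress run
or, with start-server-and-test wired into package.json scripts:
"test:e2e": "start-server-and-test start http://localhost:3000 cy:run"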

So TJ Jiang was close with his answer. Unfortunately wait-on doesn't work with CircleCI.
So I wrote my own:
timeout 540 bash -c 'while [ $(curl -s -o /dev/null -w %{http_code} localhost:3000) != "200" ]; do sleep 15; done' || false
This polls the URL every 15 seconds for up to 9 minutes until it receives an HTTP status code of 200, otherwise it exits with a failure.
To be specific, I wrote a Makefile target that uses the line above (sketched below); I don't know how well just pasting it into config.yml would work.
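A rough sketch of how that might look; the target name and the config.yml wiring are illustrative, not the actual files:
# Makefile (the recipe line must be tab-indented; $$ makes make pass a literal $ to the shell)
wait-for-app:
	timeout 540 bash -c 'while [ $$(curl -s -o /dev/null -w "%{http_code}" localhost:3000) != "200" ]; do sleep 15; done' || false
# config.yml steps
      - run:
          name: Wait for the app to respond
          command: make wait-for-app
      - run:
          name: Run Cypress tests
          command: npx cypress run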
So running this as a step prior to running the Cypress tests does in fact make all 5 identical tests pass.
It now polls for about 3.5 minutes before getting a 200 and going through. I had tested with sleeps of up to 9 minutes and always got the same result, so I don't know why this works better than sleep, but it clearly does.

I had the same issue. I solved it with server/route/wait: I first start the server, then create a route alias, wait for it to call the API endpoint (which triggers the app to fill the DOM), and then do the rest (cy.get/click).
Example before:
cy.get('myCssSelector')
After server/route/wait:
cy.server()
cy.route('GET', '*/project/*').as('getProject')
cy.wait('@getProject')
  .then(resp => {
    expect(resp.status).to.eq(201)
  })
cy.get('myCssSelector')

Related

uwsgi: detect and empty full queue

I have a python app behind supervisor and uwsgi.
At a certain point, my app stopped answering the queries, with this message in the logs:
Tue Sep 6 11:06:53 2022 - *** uWSGI listen queue of socket "127.0.0.1:8200" (fd: 3) full !!! (101/100) ***
In my use case:
if a query is not answered within 1 s, the answer does not matter anymore; the client app will automatically redo the request
restarting the whole uwsgi stack takes around half an hour
Thus I would rather lose a few requests than restart.
QUESTIONS:
Is it possible to detect a full listen queue from inside the Python app?
Is it possible to clear the queue from inside it?
Precision: this question is not about fixing the underlying issue. I'm working on that separately. It is only about knowing if this particular workaround is possible and how to implement it.
I'm using uwsgi 2.0.20. Looking at the queue framework does not help, since the uwsgi module has no attribute such as queue_slot. Is the doc outdated?
EDIT
I can reproduce the error with this simple bash script:
#!/bin/bash
# fire a burst of POST requests (one every 0.2 s) without waiting for the responses
for i in {0..200}
do
  echo "Number: $i"
  sleep 0.2
  curl -X POST "http://localhost:1103/my_app" &
done
(my app accepts POST, not GET)

RobotFrameWork: Is there a way of checking the report.html although the run paused?

Situation: VisualStudioCode (Browser library) runs a couple of .robot files (manually started)
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the output.xml, log.html and report.html files).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created while you run the tests, so when you break the run you will probably not have all the resources you need.
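If an output.xml was written, a typical invocation looks roughly like this (file names are the Robot Framework defaults):
rebot --log log.html --report report.html output.xml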
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached the test is stopped automatically and you should get all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes

Travis Build fails after 49 min even when logging output for all jobs every 1-2 min

I have a build for an Ionic project and its E2E testing with SauceLabs. The build is timing out after 49 min 17 sec (about 50 min). All of my jobs are running well and logging output frequently, at least every 1-2 min. The timeout happens consistently at 50 min.
My build meets all the requirements mentioned here to not suffer a timeout. Also, there is no timeout for the build as a whole, as mentioned in the docs. So the build shouldn't time out the way it does. Any resolutions for this issue?
Here are some of the logs:
https://travis-ci.org/magician03/moodlemobile2/builds/241500777
https://travis-ci.org/magician03/moodlemobile2/builds/241414546
https://travis-ci.org/magician03/moodlemobile2/builds/241401570
Your build ends with this message:
The job exceeded the maximum time limit for jobs, and has been terminated.
This is the expected behaviour. There is a limit of 50 minutes, as explained here and here:
Build Timeouts
It is very common for test suites or build scripts to hang. Travis CI has specific time limits for each job, and will stop the build and add an error message to the build log in the following situations:
A job produces no log output for 10 minutes
A job on travis-ci.org takes longer than 50 minutes
A job running on OS X infrastructure takes longer than 50 minutes (applies to travis-ci.org and travis-ci.com)
A job on Linux infrastructure on travis-ci.com takes longer than 120 minutes
Some common reasons why builds might hang:
Waiting for keyboard input or another kind of human interaction
Concurrency issues (deadlocks, livelocks and so on)
Installation of native extensions that take a very long time to compile
There is no timeout for a build; a build will run as long as all the jobs do, as long as each job does not time out.
Your build doesn't complete in time because of a specific issue in your build. I would ask another question focused on your code and language (node_js) rather than on this limit.
I develop native apps, so I can't help much on this topic, but I found this ticket:
It seems that they updated Node.js to 6.x and tested it using Travis CI, it failed, and currently they don't use Travis CI, so I would ask MoodleHQ directly in their forums.
jleyva Juan Leyva added a comment - 03/Nov/16 6:05 PM Dani, can you enable in your Travis account your moodlemobile2 repository so we can see if Travis is working with the new dependencies? I already changed the tracker fields so Travis is aware of the branch (but it first requires you to enable your forked moodlemobile2 repo)
jleyva Juan Leyva added a comment - 03/Nov/16 7:31 PM Builds are failing: https://travis-ci.org/dpalou/moodlemobile2/builds/172896611 Protractor or Jasmine or whatever is not working with this dependency set
You can also check related issues and compare; this configuration works using:
node_modules/.bin/protractor e2e-tests/protractor.conf.js --directConnect
In protractor.conf.js, change chromeOnly to directConnect.
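A minimal sketch of that change in protractor.conf.js (the specs pattern and other options are placeholders, not from the project):
// protractor.conf.js
exports.config = {
  // chromeOnly: true,               // old flag, replaced by directConnect
  directConnect: true,               // talk to the browser driver directly
  specs: ['e2e-tests/**/*.spec.js'], // placeholder pattern
};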

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
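Roughly, that <Perl> section looks like this (paths and import options here are illustrative, not the real config):
<Perl>
    # -db sets the coverage database location, -silent suppresses startup output
    use Devel::Cover (-db => '/tmp/cover_db', -silent => 1);
    use lib '/var/www/myapp/lib';
</Perl>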
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. But the coverage database in the end was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?

crontab with wget - why is it running twice?

I have a PHP script which runs from a webservice and inserts into the DB.
crontab -e
......other cron tasks above.......
...
..
..
# Run test script.php at 1610
10 16 * * * /usr/bin/wget -q -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php
Apparently, at 16:10, this script runs twice!
16:10:01 and 16:25:02
Is something wrong, and does it have to do with using wget?
Or did I set the schedule in the cron job wrongly?
When I run http://localhost/project/script.php from the browser, it only runs once.
Any idea regarding this problem?
I've tested; there are no other users running the same job... I suspect the way wget works.
As my script needs at least 20 minutes to complete without sending back a response (it pulls a lot of data from webservices and saves it to the DB), I suspect a default timeout or retry in wget is causing this problem.
The wget docs give a default read timeout of 900 seconds, or 15 minutes:
if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted
This is why you were seeing the script called again 15 minutes later. You can specify a longer read timeout by adding the parameter with an appropriate number of seconds:
--read-timeout=1800
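Applied to the cron line from the question, that would look roughly like this (same paths as the original entry):
10 16 * * * /usr/bin/wget -q --read-timeout=1800 -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php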
I think I solved my own question.
My PHP takes some time to run; I guess wget retries or times out after some default time.
I solved it by using /usr/bin/php instead.
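For example, a cron entry along these lines (the filesystem path to script.php is an assumption; the original entry only shows the URL):
10 16 * * * /usr/bin/php /var/www/project/script.php >> /home/username/my_cronjobs/logs/cron_adhoc 2>&1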
Which user's crontab is this?
Check whether there is another user for whom you set up the same cron job at a different time and forgot about it.