TestLink cannot import more than 1131 test cases

My TestLink environment is:
Linux Ubuntu 14.04.4
Apache 2.4.7
MySQL 5.6
PHP 5
TestLink 1.9.17
I tried to import more than 2000 test cases, but only 1131 of them were imported successfully, and I get a white page in the right frame of the import page. Also, when I deleted the test suite, only the first three sub test suites were deleted; the fourth sub test suite was not.
I tried changing max_input_vars = 1000 to max_input_vars = 10000 in php.ini, but the problem stayed the same.

I configured /etc/php5/apache2/php.ini as follows, and the import succeeded:
max_input_vars = 10000      ; was 1000
memory_limit = 256M         ; was 128M
post_max_size = 12M
upload_max_filesize = 10M
Then restart Apache:
sudo /etc/init.d/apache2 restart
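If raising the PHP limits is not possible on a given host, an alternative is to split the export into smaller files before importing. A minimal sketch, assuming the export is an XML file with a <testcases> root and repeated <testcase> children (the file names and chunk size are examples, not from the original post):

import xml.etree.ElementTree as ET  # standard library

CHUNK = 500  # test cases per output file; tune to stay under max_input_vars

tree = ET.parse("testcases.xml")  # the original TestLink export
root = tree.getroot()
cases = list(root.findall("testcase"))

for i in range(0, len(cases), CHUNK):
    # Copy the root tag/attributes so each part is a valid import file
    part = ET.Element(root.tag, root.attrib)
    part.extend(cases[i:i + CHUNK])
    ET.ElementTree(part).write("testcases_part%d.xml" % (i // CHUNK + 1),
                               encoding="utf-8", xml_declaration=True)

Each resulting file can then be imported separately.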

Related

Unable to configure a new agent: Failed to create CoreCLR, HRESULT: 0x80004005

We had some 12 agents (vsts-agent-linux-x64-2.188.4) running on one Azure VM (Ubuntu 20.04.2 LTS) as processes (./config.sh && screen ./run.sh). All was well.
I had to run a command related to the /tmp folder, but it kept showing busy, and we suspected that our agents might be using /tmp. Unfortunately, instead of any cleaner way of stopping the agents, we killed all processes on this VM manually, including the agents'.
After the /tmp-related command ran successfully, I tried running screen ./run.sh from one of the agent directories, and I got an error:
Failed to create CoreCLR, HRESULT: 0x80004005
I also tried:
.agent2/run.sh and I got the error:
ldd: ./bin/libcoreclr.so: No such file or directory
ldd: ./bin/System.Security.Cryptography.Native.OpenSsl.so: No such file or directory
ldd: ./bin/System.IO.Compression.Native.so: No such file or directory
ldd: ./bin/System.Net.Http.Native.so: No such file or directory
Failed to create CoreCLR, HRESULT: 0x80004005
I even downloaded a new .tar for the agent and ran a fresh ./config, but I get the same error on ./config as well.
Is there a solution to this? Please help.
Running export COMPlus_EnableDiagnostics=0 and then ./config from the agent directory worked!
I had this issue when running as the non-privileged user specified in the systemd file, but running as the root user worked fine.
Finally I used:
strace -f -o trace.log /<executable path>/<executable name>
which led me to:
9183 mknod("/tmp/clr-debug-pipe-9183-8112345738-in", S_IFIFO|0700) = -1 EACCES (Permission denied)
This caused me to compare the /tmp directory between the working and non-working boxes.
[<not-working-hostname>]$ ll /
drwxrwxr-x 7 root root 93 Jan 5 21:37 tmp
[<working-hostname>]$ ll /
drwxrwxrwt 7 root root 93 Jan 5 21:59 tmp
(Note the r-x vs rwt)
Fix:
[<hostname>]# chmod 1777 /tmp
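To double-check the result, a small sketch in Python that verifies the /tmp mode and sticky bit (nothing agent-specific is assumed):

import os, stat

st = os.stat("/tmp")
print(oct(stat.S_IMODE(st.st_mode)))    # expect 0o1777 after the fix
print(bool(st.st_mode & stat.S_ISVTX))  # True when the sticky bit is set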

Selenium webdriver: Unknown error: cannot determine loading status [duplicate]

I'm using InstaPy, which uses Python and Selenium. I start the script via cron, and from time to time it crashes. It's really irregular; sometimes it runs through fine. I've posted on the GitHub repo as well but didn't get an answer there, so I'm asking here now in case someone has an idea why.
It's a DigitalOcean Ubuntu server, and I'm running Chrome in headless mode. The driver versions are visible in the log. Here are the error messages:
ERROR [2018-12-10 09:53:54] [user] Error occurred while deleting cookies from web browser!
b'Message: invalid session id\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\n'
Traceback (most recent call last):
File "/root/InstaPy/instapy/util.py", line 1410, in smart_run
yield
File "./my_config.py", line 43, in <module>
session.follow_user_followers(['xxxx','xxxx','xxxx','xxxx'], amount=100, randomize=True, interact=True)
File "/root/InstaPy/instapy/instapy.py", line 2907, in follow_user_followers
self.logfolder)
File "/root/InstaPy/instapy/unfollow_util.py", line 883, in get_given_user_followers
channel, jumps, logger, logfolder)
File "/root/InstaPy/instapy/unfollow_util.py", line 722, in get_users_through_dialog
person_list = dialog_username_extractor(buttons)
File "/root/InstaPy/instapy/unfollow_util.py", line 747, in dialog_username_extractor
person_list.append(person.find_element_by_xpath("../../../*")
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 351, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash
from unknown error: cannot determine loading status
from tab crashed
(Session info: headless chrome=70.0.3538.110)
(Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/InstaPy/instapy/instapy.py", line 3845, in end
self.browser.delete_all_cookies()
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 878, in delete_all_cookies
self.execute(Command.DELETE_ALL_COOKIES)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: chrome not reachable
(Session info: headless chrome=71.0.3578.80)
(Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)
Any idea what the reason could be and how to solve it?
Thanks for the inputs. The guys from http://treestones.ch/ helped me out.
Though you see the error as:
Error occurred while deleting cookies from web browser!
b'Message: invalid session id\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\n'
The main exception is:
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash
from unknown error: cannot determine loading status
from tab crashed
Your code trials would have given us some clues about what is going wrong.
Solution
There are diverse solutions to this issue. However, as per UnknownError: session deleted because of page crash from tab crashed, this issue can be solved by either of the following solutions:
Add the following chrome_options:
chrome_options.add_argument('--no-sandbox')
Chrome seems to crash in Docker containers on certain pages when /dev/shm is too small, so you may have to increase the /dev/shm size.
An example:
sudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512M tmpfs /dev/shm
It also works if you use the -v /dev/shm:/dev/shm option to share the host's /dev/shm.
Another way to make it work is to add the chrome_options argument --disable-dev-shm-usage. This forces Chrome to use the /tmp directory instead. It may slow down execution, though, since disk will be used instead of memory.
chrome_options.add_argument('--disable-dev-shm-usage')
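Putting the flags together, a minimal sketch of the driver setup (assuming a reasonably recent Selenium with chromedriver on the PATH):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless')               # as in the question's setup
chrome_options.add_argument('--no-sandbox')             # avoid sandbox-related crashes
chrome_options.add_argument('--disable-dev-shm-usage')  # use /tmp instead of a small /dev/shm

driver = webdriver.Chrome(options=chrome_options)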
from tab crashed
from tab crashed was a work in progress with the Chromium team for quite some time, relating to Linux always attempting to use /dev/shm for non-executable memory. Here are the references:
Linux: Chrome/Chromium SIGBUS/Aw, Snap! on small /dev/shm
Chrome crashes/fails to load when /dev/shm is too small, and location can't be overridden
As per Comment61#Issue 736452, the fix seems to have landed with Chrome v65.0.3299.6.
Reference
You can find a couple of relevant discussions in:
org.openqa.selenium.SessionNotCreatedException: session not created exception from tab crashed error when executing from Jenkins CI server
In case someone is facing this problem with Docker containers:
use the flag --shm-size=2g when creating the container, and the error is gone.
This flag makes the container use the host's shared memory.
Example
$ docker run -d --net gridNet2020 --shm-size="2g" -e SE_OPTS="-browser applicationName=zChromeNodePdf30,browserName=chrome,maxInstances=1,version=78.0_debug_pdf" -e HUB_HOST=selenium-hub-3.141.59 -P -p 5700:5555 --name zChromeNodePdf30 -v /var/lib/docker/sharedFolder:/home/seluser/Downloads selenium/node-chrome:3.141.59-xenon
Source: https://github.com/SeleniumHQ/docker-selenium
I was getting the following error on my Ubuntu server:
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash from tab crashed
(Session info: headless chrome=86.0.4240.111)
(Driver info: chromedriver=2.41.578700 (2f1ed5f9343c13f73144538f15c00b370eda6706),platform=Linux 5.4.0-1029-aws x86_64)
It turned out that the cause of the error was insufficient disk space on the server, and the solution was to extend my disk space. You can check this question for more information.
We need to specify the shm size separately, with --shm-size=2g.
In the case of Docker Compose, the following config works fine for me:
services:
  chrome:
    image: selenium/node-chrome:4.0.0-rc-1-prerelease-20210823
    shm_size: 2gb
Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed
(Session info: headless chrome=95.0.4638.69)
This error occurred because there was not enough waiting time for the web pages to load.
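One way to give pages that time is an explicit wait instead of a fixed sleep. A minimal sketch, assuming an existing driver; the locator (By.ID, 'content') is an example, not from the original code:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Block for up to 30 s until a known element appears, instead of assuming the page is ready
wait = WebDriverWait(driver, 30)
element = wait.until(EC.presence_of_element_located((By.ID, 'content')))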
The answers above solved my issue, but since I needed to run it from a docker-compose.yml, I used the following configuration, which calls my regular, unchanged Dockerfile.
docker-compose.yml
version: '1.0'
services:
  my_app:
    build:
      context: .
      # when building
      shm_size: 1gb
    # when running
    shm_size: 1gb
Dockerfile (Selenium on Ubuntu, under WSL)
FROM python:3.10
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
# install selenium
RUN pip install selenium==3.8.0
# install and prepare the app
COPY ./requirements.txt ./
# COPY . /app
RUN pip3 install -r requirements.txt
RUN apt-get install -y libnss3
ENV APP_DIR=/app/my_app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
# COPY . ${APP_DIR} #not needed since we are mapping the volume in docker-compose
CMD [ "my_app.py" ]
ENTRYPOINT [ "python" ]
This happened to me while trying to open a new web page with the same driver in Chromium. It worked fine on my local machine, where I use Chrome.
What did not work:
driver = webdriver.Chrome(options=options)
driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
driver.execute_cdp_cmd('Network.setUserAgentOverride', {
"userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
driver.get('url1')
# Do operations with url1
driver.get('url2')
# Do operations with url2 -> did not work and crashed
Below is the solution I am using, which works for me: re-initializing the driver.
def setup_driver():
    global driver
    driver = webdriver.Chrome(options=options)
    driver.maximize_window()
    driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
    driver.execute_cdp_cmd('Network.setUserAgentOverride', {
        "userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})

setup_driver()
driver.get('url1')
# Do operations with url1
driver.close()

setup_driver()
driver.get('url2')
# Do operations with url2
driver.close()
I'm not sure whether this is the only possible cause and solution, but after thoroughly investigating this error, which I encountered every now and then, I found the following evidence:
In the log of the Selenium Grid nodes (which you can show by executing sudo docker logs <container-id> on the Docker host) I found many errors reading: [SEVERE]: bind() failed: Cannot assign requested address (99). From what I read, this error usually means that there are no available ports.
When listing the processes running inside a node (sudo docker exec -it <container-id> bash and then ps aux), I found more than 300 chromedriver processes (you can count them using ps aux | grep driver | wc -l).
When running locally, I know that the chromedriver process is normally started when you create an instance of ChromeDriver and terminated when you call driver.Quit() (I work in C#, not Python). Therefore I concluded that some tests don't call driver.Quit().
The conclusion
In my case, I found that even though we had a call to driver.Quit() in the [TearDown] method (we use NUnit), we had some more code before that line that could throw an exception. When one of these preceding lines threw, the line that calls driver.Quit() was not reached, and therefore over time we were "leaking" chromedriver processes on the Selenium Grid nodes. These orphan processes caused a leak of available ports (and probably also memory), which in turn caused the browser's page to crash.
The solution
Given the above conclusion, the solution was pretty straightforward. We had to wrap the code that precedes driver.Quit() in a try/finally, and put the call to driver.Quit() in the finally clause, like this:
[TearDown]
public void MyTearDown()
{
try
{
// Perform any tear down code you like, like saving screenshots, page source, etc.
}
finally
{
_driver?.Quit();
}
}
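Since the question is in Python, the equivalent pattern there would be something like this (a hedged sketch; save_artifacts is a hypothetical placeholder for whatever teardown work you do):

def teardown():
    try:
        # Perform any teardown work you like: screenshots, page source, etc.
        save_artifacts()  # hypothetical helper, not from the original post
    finally:
        # Always release the chromedriver process, even if the teardown work throws
        if driver is not None:
            driver.quit()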
I was having the same problem. I checked the log to find the point in my script where the bug happened, added some wait, i.e. time.sleep(2), just before it, and my problem was fixed.

Concourse CI Windows Worker

I'm trying to set up a Concourse CI environment with a Windows 7 worker.
I have one machine running Ubuntu Server (16.04) hosting my TSA server and one worker (to support git resources), and a second one running Windows 7 hosting a worker.
Everything seems to work fine as:
I can login into the web ui
the fly -t my_concourseci workers command returns:
name containers platform tags team state version
ubuntu 1 linux none none running 1.1
windows7 0 windows none none running 1.1
the fly -t my_concourseci execute -c test.yml command returns:
executing build 146
initializing
running echo Hello World!
Hello World!
with the following content in the test.yml file:
platform: windows

run:
  path: echo
  args: [ "Hello World!" ]
Nevertheless, when I add an input to my task:
platform: windows

inputs:
- name: concourse

run:
  path: echo
  args: [ "Hello World!" ]
I get the following error:
executing build 148
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5698k 0 5698k 0 0 1948k 0 --:--:-- 0:00:02 --:--:-- 1949k
initializing
failed to stream in to volume
errored
If I look at the Windows worker log, this error comes up:
{"timestamp":"1500642862.643555164",
"source":"baggageclaim",
"message":"baggageclaim.api.volume-server.stream-in.bad-stream-payload",
"log_level":1,
"data":{"error":"tar extract failed (exit status 2). output: \"\\ngzip: stdin: not in gzip format\\n/usr/bin/tar: Child returned status 1\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\"",
"session":"2.1.8730",
"volume":"15bf1fc6-0727-4962-6c84-18446e54ab96"}
}
Any ideas about what can cause a not in gzip format error, given that the exact same task runs fine on a linux platform?
platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

inputs:
- name: concourse

run:
  path: echo
  args: [ "Hello World!" ]
----- STDOUT
executing build 149
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5699k 0 5699k 0 0 1917k 0 --:--:-- 0:00:02 --:--:-- 1918k
initializing
Pulling busybox@sha256:2605a2c4875ce5eb27a9f7403263190cd1af31e48a2044d400320548356251c4...
sha256:2605a2c4875ce5eb27a9f7403263190cd1af31e48a2044d400320548356251c4: Pulling from library/busybox
9e87eff13613: Pulling fs layer
9e87eff13613: Verifying Checksum
9e87eff13613: Download complete
9e87eff13613: Pull complete
Digest: sha256:2605a2c4875ce5eb27a9f7403263190cd1af31e48a2044d400320548356251c4
Status: Downloaded newer image for busybox@sha256:2605a2c4875ce5eb27a9f7403263190cd1af31e48a2044d400320548356251c4
Successfully pulled busybox@sha256:2605a2c4875ce5eb27a9f7403263190cd1af31e48a2044d400320548356251c4.
running echo Hello World!
succeeded
Thanks.
Launch PowerShell as Administrator and run concourse_worker.exe from there. That worked for me.
I am running Windows Server 2016 Base on AWS, based on the AMI ami-e1876a98.
Thanks for your answers, they helped me solve my problem.
I was starting my Concourse worker within an MSYS 1.0 environment. The thing is that MSYS puts tar and gunzip binaries on the $PATH. When I started the Concourse worker inside PowerShell or cmd.exe without any MSYS unix-like binaries on the $PATH, it worked like a charm!
Note: Be sure to have no MSYS binaries in the Windows $PATH environment variable for this to work; in particular, check that Git-Bash tools are not added to your Windows $PATH.
Thanks again.
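For reference, a quick way to check which tar and gzip binaries the worker would pick up; a minimal sketch assuming Python is available on the Windows box:

import shutil

# These should resolve to Windows binaries (or None), not to an MSYS/Git-Bash directory
print(shutil.which("tar"))
print(shutil.which("gzip"))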

ZFS recovery with missing device

What I wanted:
zpool add poolname cache nvme-partition
What I did:
zpool add poolname nvme-partition
zpool export poolname
(the NVMe device is no longer available)
zpool import shows poolname, but the pool cannot be imported.
Any idea how to recover the 10 TB of my pool without the 10 GB of my nvme-partition?

MongoDB fatal error: runtime: out of memory

I was trying to import a large collection into MongoDB, but every time I try to import:
-rw-r--r-- 1 root root 3491368960 Jul 15 06:15 activity-june.json
mongoimport --db analytics --collection reports < activity-june.json
it gives me an error like this:
fatal error: runtime: out of memory
goroutine 37 [running]:
runtime.throw(0xcba857)
/usr/local/go/src/pkg/runtime/panic.c:520 +0x69 fp=0x7fd4984fea68 sp=0x7fd4984fea50
runtime.SysMap(0xc308100000, 0x100000000, 0x42b700, 0xcd7eb8)
/usr/local/go/src/pkg/runtime/mem_linux.c:147 +0x93 fp=0x7fd4984fea98 sp=0x7fd4984fea68
runtime.MHeap_SysAlloc(0xce3ea0, 0x100000000)
/usr/local/go/src/pkg/runtime/malloc.goc:616 +0x15b fp=0x7fd4984feaf0 sp=0x7fd4984fea98
MHeap_Grow(0xce3ea0, 0x80000)
/usr/local/go/src/pkg/runtime/mheap.c:319 +0x5d fp=0x7fd4984feb30 sp=0x7fd4984feaf0
MHeap_AllocLocked(0xce3ea0, 0x80000, 0x0)...
Here's my swap space:
-bash-4.1# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 4128764 133416 -1
And here's my last query on the database; it's not inserting anything:
> db.reports.find().count();
7
Kindly check the following:
1) Your computer should be 64-bit; on a 32-bit system you cannot import more than 2GB of data.
2) Each of your documents should not exceed 16MB.
http://docs.mongodb.org/manual/reference/program/mongoimport/
Also check this post:
https://forum.syncthing.net/t/fatal-error-runtime-out-of-memory/2190
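If the limits above are not the problem, one workaround for a file this large is to split it and import it in chunks. A minimal sketch, assuming the dump is newline-delimited JSON, one document per line; the file names and chunk size are examples:

CHUNK_BYTES = 500 * 1024 * 1024  # ~500 MB per part

with open("activity-june.json") as src:
    part, size, out = 0, 0, None
    for line in src:
        # Start a new part file whenever the current one reaches the size limit
        if out is None or size >= CHUNK_BYTES:
            if out:
                out.close()
            part += 1
            size = 0
            out = open("activity-june.part%d.json" % part, "w")
        out.write(line)
        size += len(line)
    if out:
        out.close()

Each part can then be loaded separately, e.g. mongoimport --db analytics --collection reports < activity-june.part1.json.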