Unable to configure a new agent: Failed to create CoreCLR, HRESULT: 0x80004005 - azure-devops

We had about 12 agents (vsts-agent-linux-x64-2.188.4) running on one Azure VM (Ubuntu 20.04.2 LTS) as processes (./config.sh && screen ./run.sh). All was well.
I had to run a command related to the /tmp folder, but it kept reporting that /tmp was busy, and we suspected our agents might be using it. Unfortunately, instead of stopping the agents in a cleaner way, we killed all processes on this VM manually, including the agents'.
After the /tmp-related command ran successfully, I tried running screen ./run.sh from one of the agent directories and got an error:
Failed to create CoreCLR, HRESULT: 0x80004005
I also tried:
.agent2/run.sh, which gave the error:
ldd: ./bin/libcoreclr.so: No such file or directory
ldd: ./bin/System.Security.Cryptography.Native.OpenSsl.so: No such file or directory
ldd: ./bin/System.IO.Compression.Native.so: No such file or directory
ldd: ./bin/System.Net.Http.Native.so: No such file or directory
Failed to create CoreCLR, HRESULT: 0x80004005
I even downloaded a new .tar for the agent and ran a fresh ./config.sh, but I get the same error there as well.
Is there a solution to this? Please help.

Running export COMPlus_EnableDiagnostics=0 and then ./config.sh from the agent directory worked!
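For reference, a minimal sketch of that workaround, assuming the agent lives in ~/agent2 (substitute your own agent directory):
# COMPlus_EnableDiagnostics=0 stops CoreCLR from creating its diagnostics pipes under /tmp
export COMPlus_EnableDiagnostics=0
cd ~/agent2          # hypothetical path; use your agent directory
./config.sh          # reconfigure the agent
screen ./run.sh      # start it as before
Note that this only sidesteps the /tmp access; the answer below fixes the underlying /tmp permissions.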

I had this issue when running as the non-privileged user specified in the systemd unit file, but running as root worked fine.
I finally used:
strace -f -o trace.log /<executable path>/<executable name>
Which led me to:
9183 mknod("/tmp/clr-debug-pipe-9183-8112345738-in", S_IFIFO|0700) = -1 EACCES (Permission denied)
This caused me to compare the /tmp directory between working and non-working boxes.
[<not-working-hostname>]$ ll /
drwxrwxr-x 7 root root 93 Jan 5 21:37 tmp
[<working-hostname>]$ ll /
drwxrwxrwt 7 root root 93 Jan 5 21:59 tmp
(Note the r-x vs rwt: the non-working box is missing world-write and the sticky bit.)
Fix:
[<hostname>]# chmod 1777 /tmp
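To confirm /tmp is back to the default mode (world-writable plus the sticky bit), a quick check along these lines should match the working host above:
# 1777 = rwx for everyone plus the sticky bit (t), the usual mode for /tmp
stat -c '%A %a %n' /tmp    # expect: drwxrwxrwt 1777 /tmp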

Related

Output from Eclipse CDT on Linux have incorrect permission

I'm having problems with files created from within Eclipse running on Linux.
All files (as far as I know) only get owner read and write permissions. Build output is the same but with the execute permission added.
The "test" file is created in Eclipse by File > New, and the "test1" file is created in command line by "touch" by the same user in the same folder.
-rw-------. 1 user stdgroup 0 Mar 15 10:22 test
-rw-r--r--. 1 user stdgroup 0 Mar 15 10:23 test1
My umask is set to 0022, so the output from the "touched" file is correct.
I'm running Eclipse Oxygen.2 Release 4.7.2 on Linux RedHatEnterpriseServer 7.3.
Does anyone know if there is a setting in Eclipse that can cause this, or have any other ideas as to why this happens?
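For comparison, this is the umask arithmetic the command-line result follows (nothing Eclipse-specific, just the POSIX default of 0666 masked by the umask):
umask                 # prints 0022
touch test1
ls -l test1           # 0666 & ~0022 = 0644, i.e. -rw-r--r--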

What information is located at /var/lib/mongo/dbname.ns?

May I know what kind of information is located in this directory? When I tried to find out, I ran the following two commands on a Linux box and got the following result:
cd /var/lib/mongo/
ls
collection-29102--3481430970472628260.wt
index-19537--3481430970472628260.wt
_mdb_catalog.wt
collection-2910--3481430970472628260.wt
index-19539--3481430970472628260.wt
mongod.lock
collection-29104--3481430970472628260.wt
index-19541--3481430970472628260.wt
sizeStorer.wt
collection-29106--3481430970472628260.wt
index-19543--3481430970472628260.wt
storage.bson

SELinux - File Contexts Look Good, But SELinux Won't Allow Write

I am trying to learn SELinux. As a sandbox for experimenting, I have a vsftpd server running on CentOS. I want anonymous users to be able to place files in /var/ftp/incoming. From a remote machine the user can log in successfully but could not place a file on the remote vsftpd server:
$ftp mysql_server
Connected to mysql_server (192.168.1.31).
220 Welcome to blah FTP service.
Name (mysql_server:root): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put atd
local: atd remote: atd
227 Entering Passive Mode (192,168,1,31,19,161).
553 Could not create file.
ftp>
On the VSFTPD server, aureport -a report shows:
[root@mysql_server ftp]# aureport -a
AVC Report
========================================================
# date time comm subj syscall class permission obj event
========================================================
4. 04/08/2013 13:30:36 vsftpd unconfined_u:system_r:ftpd_t:s0-s0:c0.c1023 21 dir write system_u:object_r:public_content_t:s0 denied 28
5. 04/08/2013 13:34:57 vsftpd unconfined_u:system_r:ftpd_t:s0-s0:c0.c1023 2 dir write system_u:object_r:public_content_t:s0 denied 47
I checked the directory and the file contexts look good, so I don't understand why SELinux won't allow vsftpd to write to the incoming directory:
[root@mysql_server ftp]# ls -Z
drwx-wx---. root ftp system_u:object_r:public_content_t:s0 incoming
drwxr-xr-x. root root system_u:object_r:public_content_t:s0 pub
[root@mysql_server ftp]#
You need to run the following commands to allow uploading and editing files under SELinux:
setsebool -P allow_ftpd_full_access on
setsebool -P ftp_home_dir on
Your SELinux type is not correct. Use 'public_content_rw_t' instead of 'public_content_t'. Read more at http://beginlinux.com/blog/2008/11/vsftpd-and-selinux-on-centos/
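If you go the relabeling route, a sketch along these lines should do it (the /var/ftp/incoming path is from the question; on CentOS, semanage comes from the policycoreutils-python package):
# allow anonymous FTP writes and label the upload directory as writable content
setsebool -P allow_ftpd_anon_write on
semanage fcontext -a -t public_content_rw_t "/var/ftp/incoming(/.*)?"
restorecon -Rv /var/ftp/incoming
ls -Zd /var/ftp/incoming    # should now show public_content_rw_t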

Can't get launchd to work at Startup/Shutdown on OS X Lion

I followed some online guides trying to get some headless VMs to start/suspend automatically at boot/shutdown on my Mac. I can't get it to work at all. This is my first time trying to get scripts to run at startup/shutdown, so it could be that I'm just missing something very basic; if that's the case, I apologize.
These are the steps I followed:
Created a directory /Library/StartupItems/HeadlessVM
Created two files within that directory:
-rwxr--r-- 1 root wheel 242 Feb 19 19:05 HeadlessVM
-rw-r--r-- 1 root wheel 188 Feb 20 12:42 StartupParameters.plist
Contents for HeadlessVM
$ cat HeadlessVM
#!/bin/sh
. /etc/rc.common
StartService ()
{
ConsoleMessage "Starting HeadlessVM"
/usr/local/bin/runvmheadless
}
StopService ()
{
ConsoleMessage "Suspending HeadlessVM"
/usr/local/bin/suspendvmheadless
}
RunService "$1"
Contents for StartupParameters.plist
$ cat StartupParameters.plist
{
Description = "Runs/Suspends Virtual Machine Headless on OS X Startup/Shutdown";
Provides = ("HeadlessVM");
Uses = ("Disks");
OrderPreference = ("Late");
}
Created my script files that will perform both actions:
-rwxr-xr-x# 1 xxxxxxx admin 164 Feb 19 01:06 runvmheadless
-rwxr-xr-x# 1 xxxxxxx admin 160 Feb 19 01:19 suspendvmheadless
Contents for runvmheadless
$ cat runvmheadless
#!/bin/bash
"/Applications/VMware Fusion.app/Contents/Library/vmrun" -T fusion start "/Volumes/Archive/Virtual Machines/vm.vmwarevm/vm.vmx" nogui
Contents for suspendvmheadless
$ cat suspendvmheadless
#!/bin/bash
"/Applications/VMware Fusion.app/Contents/Library/vmrun" -T fusion suspend "/Volumes/StaticData/Virtual Machines/vm.vmwarevm/vm.vmx"
My troubleshooting so far:
If I run the scripts from the terminal, they work as intended.
If I run sudo /sbin/SystemStarter (start or stop) "HeadlessVM" it also works.
In Console I only see the following when I reboot; no clue what's wrong, though.
2/20/12 12:11:09.128 PM SystemStarter: Runs/Suspends Virtual Machine Headless on OS X Startup/Shutdown (100) did not complete successfully
I appreciate any help. Thank you.
I found what was wrong. The code above is fine; the problem is that my scripts were trying to read data from an encrypted secondary disk which was not available at boot time.
I used this to work around the problem: https://github.com/jridgewell/Unlock
Thanks

Capifony and directory owners

When I cap deploy my Symfony2 project and then log into my server, I see that the dev front controller (app_dev.php) runs OK but the prod version (app.php) does not.
The error is
[Tue Jan 03 14:31:48 2012] [error] [client xxx.xxx.xxx.xxx] PHP Fatal error: Uncaught exception 'RuntimeException' with message 'Failed to write cache file "/var/www/example/prod/releases/20120103202539/app/cache/prod/classes.php".' in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache:1079\nStack trace:\n#0 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(1017): Symfony\\Component\\ClassLoader\\ClassCollectionLoader::writeCacheFile('/var/www/example/p...', '<?php ????name...')\n#1 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(682): Symfony\\Component\\ClassLoader\\ClassCollectionLoader::load(Array, '/var/www/example/p...', 'classes', false, false, '.php')\n#2 /var/www/example/prod/releases/20120103202539/web/app.php(10): Symfony\\Component\\HttpKernel\\Kernel->loadClassCache()\n#3 {main}\n thrown in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache on line 1079
Looking at the recently deployed cache directory I see:
drwxrwxrwx 4 root root 4096 Jan 3 14:28 .
drwxrwxr-x 5 root root 4096 Jan 3 14:28 ..
drwxr-xr-x 6 www-data www-data 4096 Jan 3 14:28 dev
drwxrwxr-x 7 root root 4096 Jan 3 14:28 prod
I can fix the issue with chown -R www-data.www-data prod/ but I wondered if I can stop this from happening in the first place? And why do the directories have different owners?
This happens because your web server is running as a user who is not able to write to the just-created cache/prod directory.
There are two solutions that I know and use. First, add extra commands to the Capfile to run after deployment. The Capfile will look like this:
load 'deploy' if respond_to?(:namespace) # cap2 differentiator
Dir['vendor/bundles/*/*/recipes/*.rb'].each { |bundle| load(bundle) }
load Gem.find_files('symfony2.rb').last.to_s
after "deploy:finalize_update" do
run "sudo chown -R www-data:www-data #{latest_release}/#{cache_path}"
run "sudo chown -R www-data:www-data #{latest_release}/#{log_path}"
run "sudo chmod -R 777 #{latest_release}/#{cache_path}"
end
load 'app/config/deploy'
The second solution is more elegant. In deploy.rb you specify the correct user, one who can write to the cache, and make sure that you don't use sudo:
set :user, "anton"
set :use_sudo, false
In the latest version of capifony, they've added the option to set writable directories.
Here's the official article which explains what I've written below: http://capifony.org/cookbook/set-permissions.html
You have to deploy using sudo (not a good practice, but it gets the job done)
set :use_sudo, false
# To prompt the sudo password
default_run_options[:pty] = true
and tell capifony to make the cache and logs folders writable:
set :writable_dirs, ["app/cache", "app/logs"]
set :webserver_user, "www-data"
set :permission_method, :acl
(you have to install acl on your machine, or use :chown instead of :acl)
EDIT:
I've just realized that this is not enough; the "set_permissions" task is not called automatically, so you have to run it explicitly:
cap deploy:set_permissions
Or add this line in your deploy.rb :
before "deploy:restart", "deploy:set_permissions"
I solved this problem by adding the cache folder to the shared folders.
set :shared_children, [app_path + "/cache", app_path + "/logs", web_path + "/uploads", "vendor"]
This way the directory is not recreated each time during deployment, so there is no problem with permissions.
Yes, there's no need to recreate the cache after every deploy; this solution is logical and pragmatic.
The second solution from Anton works if your cache folder permissions are already correct in the development environment.