Cmt migration tool validation error - openvz

I am using cmt (https://github.com/marcosnils/cmt) for container migration, and I have a problem with validation:
# cmt validate --src `pwd` --dst walid@192.168.1.12
2016/01/11 17:31:53 Error criu does not exist in dst
I am sure the patched version of criu (https://github.com/marcosnils/criu) is installed on both servers.
I even tried it the other way around, with the same result.

The error turned out to occur because all the tools need sudo permission.
To remove the password prompt, add this to the end of /etc/sudoers:
'walid ALL=(ALL) NOPASSWD: ALL'
After this, the validation succeeds.
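Rather than editing /etc/sudoers directly, a safer variant (a sketch, assuming the username "walid" from the question and a standard sudo install with an /etc/sudoers.d include directory) is to use a drop-in file and syntax-check it with visudo before relying on it:

```shell
# Drop-in sudoers rule for the migration user (sketch; "walid" is the
# username from the question, the filename is arbitrary)
echo 'walid ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/walid-cmt
sudo chmod 0440 /etc/sudoers.d/walid-cmt
# visudo refuses files with syntax errors, so a typo cannot lock you out
sudo visudo -cf /etc/sudoers.d/walid-cmt
```

A broken /etc/sudoers can disable sudo entirely, which is why the check step matters.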


DSpace 7.2 - import of data from 5.8 has taken over a month

Inherited an old system (5.8), running on an EOL box, which has been having real issues post-Log4j and after some necessary network changes. We finally got permission to move to a new version, so we are importing our library collection into 7.2. We have 36 GB in our assetstore.
/opt/dspace/bin/dspace packager -r -a -k -t AIP -o skipIfParentMissing=true -e <ADMIN_EMAIL> -i /0 /opt/AIP/fac_site.zip
The current metadata entry count is 1530880.
However, this process has been running for about seven weeks now!
Is this normal?
Is there any way we can see how much longer it will take? (management is nervous, understandably, as the current live version is very fragile)
Is there any way to expedite this?
Thanks very much for any assistance that can be offered.
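One rough way to gauge progress (an assumption on my part, not an official DSpace progress metric: the packager ingests items incrementally, so the metadatavalue table should grow as it runs) is to count its rows periodically and compare successive counts against the expected total:

```shell
# Rough progress check (sketch; assumes the default "dspace" database name
# and a local PostgreSQL backend — adjust for your setup)
sudo -u postgres psql -d dspace -c "SELECT count(*) FROM metadatavalue;"
```

Sampling this count an hour apart gives a crude ingest rate, from which a remaining-time estimate can be extrapolated.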

New Sphinx version attempts a non-existent connection

I recently upgraded sphinx to version 2.2.11 on Ubuntu.
Then I started getting daily emails where a process attempts to connect and generates this error:
ERROR: index 'test1stemmed': sql_connect: Access denied for user 'test'@'localhost'
ERROR: index 'test1': sql_connect: Access denied for user 'test'@'localhost'
The email's subject line contains what I assume is the command at the root of the problem:
. /etc/default/sphinxsearch && if [ "$START" = "yes" ] && [ -x /usr/bin/indexer ]; then /usr/bin/indexer --quiet --rotate --all; fi
So /etc/default/sphinxsearch does have the START variable set to "yes", but what /usr/bin/indexer is doing is total gibberish to me: such a user never existed on the system, AFAIK.
It would be interesting to know how this process got generated, but more importantly
How can this process be safely stopped?
I've seen that happen; it comes from the Sphinx install package. Whoever set up that package created a cron task that runs that indexer --all command, which just tries to reindex every index (once a day, IIRC). The package maintainer thought they were being helpful :)
From https://packages.ubuntu.com/bionic/ppc64el/sphinxsearch/filelist
it looks like it might be in
/etc/cron.d/sphinxsearch
You could remove that cron task if you don't want it.
Presumably you already have some other process for actually updating your real 'live' indexes (either dedicated cron tasks, RT indexes, or whatever).
Also, it seems you still have these 'test' indexes in your sphinx.conf, maybe left over from the initial installation. I don't think installing a new package would overwrite sphinx.conf to add them later.
You may want to clear them out of your sphinx.conf if you don't use them; it could simplify the file.
(Although you probably still want to get rid of the --all cron, which just blindly reindexes everything daily!)
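Concretely, either of the following stops the daily reindex (a sketch, assuming the Ubuntu sphinxsearch package layout referenced above — check the paths on your system first):

```shell
# Option 1: keep the cron file but stop it from firing, by switching the
# START flag it checks in /etc/default/sphinxsearch
sudo sed -i 's/^START=yes/START=no/' /etc/default/sphinxsearch

# Option 2: remove the packaged cron entry outright (a package upgrade
# may reinstall it, so option 1 is the more durable choice)
sudo rm -f /etc/cron.d/sphinxsearch
```

Note that option 1 also prevents the init script from starting searchd on some package versions, so option 2 is preferable if you rely on the packaged service.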

Upgrading postgres 9.5 to 11

So I've been tasked with upgrading our Postgres server to version 11; however, all the guides I've found either don't work for me or are incomplete.
I have tried two methods and had to roll back all changes:
https://www.hutsky.cz/blog/2019/02/upgrade-postgresql-from-9-3-to-11/
With this method, not only are the dependency checks and upgrade commands exactly the same, but none of the commands work for me; I keep getting this error:
"You must identify the directory where the new cluster binaries reside.
Please use the -B command-line option or the PGBINNEW environment variable.
Failure, exiting"
And I've been unable to find any fix for this.
I also tried the delete-old method:
https://techcyclist.com/postgres/upgrading-postgres-to-the-latest-version-on-centos-7-server/
but in this method he deletes the old Postgres completely, including the config files. Our config files were written by the previous sysadmin; I simply don't have the time to study them well enough to redo them for the new version, and I can't risk just replacing the new config file with the old one.
If anyone has done such an assignment and is willing to help, I would much appreciate it.
I used:
yum install postgresql11 postgresql11-contrib postgresql11-devel postgresql11-libs postgresql11-server
to install the new Postgres 11, and:
/usr/pgsql-11/bin/initdb -D /var/lib/pgsql/11/data
to init it, installing a few dependencies in between.
Afterwards, the upgrade command:
/usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-9.3/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/9.3/data/ --new-datadir=/var/lib/pgsql/11/data/ --check
gave the errors described above.
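The "You must identify the directory where the new cluster binaries reside" message is pg_upgrade saying it never received a usable --new-bindir, which commonly happens when the long command is split across lines without continuation backslashes, so the later flags are silently dropped. A sketch of the invocation with explicit continuations (paths taken from the question; pg_upgrade should be run as the postgres user with both clusters stopped):

```shell
# --check performs a dry run only; drop it to do the real upgrade
sudo -u postgres /usr/pgsql-11/bin/pg_upgrade \
    --old-bindir=/usr/pgsql-9.3/bin \
    --new-bindir=/usr/pgsql-11/bin \
    --old-datadir=/var/lib/pgsql/9.3/data \
    --new-datadir=/var/lib/pgsql/11/data \
    --check
```

pg_upgrade leaves the old cluster's data and config files untouched, which also addresses the worry about losing the previous sysadmin's configuration: the old postgresql.conf and pg_hba.conf can be diffed against the new cluster's files afterwards.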

Rails spec test error, no password supplied

I have an existing project, with specs created many years ago; no one at our company has succeeded in running them. I have updated the files to fit the latest Rails version, but when I run bundle exec rspec spec/models/promo_code_spec.rb,
I get
/Users/mmiller/.rvm/gems/ruby-2.1.6@global/gems/activerecord-4.1.9/lib/active_record/connection_adapters/postgresql_adapter.rb:888:in `initialize': fe_sendauth: no password supplied (PG::ConnectionBad)
I have followed the steps in this post:
fe_sendauth: no password supplied
but still getting the same error.
Any advice on how to resolve this issue?
Change this in your pg_hba.conf,
local all all trust
to,
local all all md5
and create a superuser in PostgreSQL with a username and password, and add that username and password to your database.yml file.
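A sketch of the role-creation step (the role name is a placeholder, not from the question; --pwprompt asks for the password interactively):

```shell
# Create a superuser role for the app; put the same username/password
# under the test: section of config/database.yml afterwards
sudo -u postgres createuser --superuser --pwprompt myapp_user
```

A superuser is convenient for local test runs, though a role with just CREATEDB rights is the more restrained choice.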
Welp, in my case, as a complete newcomer to Rails testing of any kind, I got this error when I wanted to use RSpec and had forgotten to create a test database. Make sure your test DB has been assigned its own name in database.yml.
One helpful SO post mentioned this command line to get things initialized:
bundle exec rake db:reset RAILS_ENV=test
Then on to joy...

restore complete filesystem to default security context

I'm an SELinux newbie and had to change the security context of a Mercurial repo and config file on a CentOS box to get it served by httpd.
Accidentally, I issued "chcon -Rv --type=httpd_sys_script_exec_t /", which I could only stop after masses of files and directories had already been modified.
I read about restorecon to restore something to its default context, but it doesn't work for me; I get "permission denied".
What can I do to restore the whole filesystem to its SELinux defaults?
You could try doing a fixfiles relabel to get things back in order. Otherwise, you could edit /etc/selinux/config and set the system to no longer enforce SELinux. Good luck!
You could do either of the following to fix this:
1. fixfiles: create a file /.autorelabel and reboot the system.
2. restorecon -f file
Usually the contexts file will be /etc/selinux/targeted/contexts/files/file_contexts.
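As a concrete sketch of the two options above (run as root — the "permission denied" from restorecon usually just means it was run as an unprivileged user):

```shell
# Option 1: relabel the entire filesystem at next boot; init/systemd
# detects the marker file and runs a full fixfiles-style relabel
sudo touch /.autorelabel && sudo reboot

# Option 2: restore default contexts recursively, without a reboot
# (-R recurse, -v show each change; can take a long time on /)
sudo restorecon -Rv /
```

The boot-time relabel is the safer choice after a mistake this broad, since it runs before services start using mislabeled files.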