How do I force-reinstall a package with Ansible?

I'm using Ansible to deploy .deb packages from a custom repository.
Sometimes a developer forgets to bump the package version, so the repository ends up serving a new package under the old version number. Because the version looks unchanged, the package manager won't upgrade it, so I would like to always reinstall the package. How do I do that?
There is the force=yes option for the apt module. The Ansible documentation says:
If yes, force installs/removes.
But that seems to be about force-accepting any warnings. At least when I turn it off, Ansible blocks with a warning about an untrusted source. (Both the repository and the servers are on the same intranet, so that should not be an issue.)
I could use this:
- name: force-reinstall myservice
  shell: apt-get --reinstall install myservice
But this way I cannot use the other options of the apt module, and Ansible blocks on warnings just the same.
Is there a way to always reinstall a package and avoid blocking on any interactivity?

The proper way is definitely to use the correct version number. But if you don't want to enforce that, then the easiest workaround is to first remove the package and then install it again. That is effectively the same as reinstalling.
- name: remove package
  apt: name=package_name state=absent
- name: install package
  apt: name=package_name state=present update_cache=yes

Unfortunately I don't see any possibility for a "reinstall" with the apt package module.
The only option is the one already mentioned in the question, via the shell module or, better, the command module.
- name: Reinstall of package by command.
  command: "apt --reinstall install package"
With the apt module you only have the possibility to do an uninstall (state=absent) followed by a new install (state=present). I don't see a problem with Ansible's idempotent approach if you guard this with an appropriate condition via when. For example:
- name: Reinstall of package by uninstall and new install.
  apt:
    name: package
    state: "{{ item }}"
  with_items: ['absent', 'present']
  when: reinstall_is_required
But note: an uninstall followed by a new install can be very risky depending on the package, and it is not the same as a reinstall via apt --reinstall install package.
I faced the same problem of needing the option of a reinstall.
I had installed an application and changed the user it runs as in the application's config. A reinstall changes the owners of all files and folders accordingly.
Uninstall and new install: the uninstall removes all config files (including the one I changed), and the new install creates the package's default config files.
Reinstall via apt: the config files remain unchanged (they are not overwritten by the package defaults), and the application runs with the customized config.
For this reason, the reinstall option of apt is the only option for me.

PyPI install_requires direct links

I have a Python library (https://github.com/jcrozum/PyStableMotifs) that I want to publish on PyPI. It depends on another library (https://github.com/hklarner/PyBoolNet) that I do not control and that is only available on GitHub, and in particular, it is not available on PyPI. My setup.py looks like this:
from setuptools import setup

setup(
    ... <other metadata> ...,
    install_requires=[
        'PyBoolNet @ git+https://github.com/hklarner/PyBoolNet@2.3.0',
        ... <other packages> ...
    ]
)
Running pip install git+https://github.com/jcrozum/PyStableMotifs works perfectly, but I can't upload this to PyPI because of the following error from twine:
Invalid value for requires_dist. Error: Can't have direct dependency: 'PyBoolNet @ git+https://github.com/hklarner/PyBoolNet@2.3.0'
My understanding is that direct links are forbidden by PyPI for security reasons. Nonetheless, PyBoolNet is a hard requirement for PyStableMotifs. What do I do? Give up on PyPI?
I just want pip install PyStableMotifs to work for my users. Ideally, this command should install the dependencies and I should not have to maintain two versions of setup.py.
Failing that, I have considered creating a "dummy" package on PyPI directing users to install using the command pip install git+https://github.com/jcrozum/PyStableMotifs. Is this a bad idea (or even possible)?
Are there already established best practices for this situation or other common workarounds?
EDIT:
For now, I have a clunky and totally unsatisfying workaround. I'm keeping two versions: a GitHub version that works perfectly, and a PyPI version with the PyBoolNet requirement removed. If the user tries to import PyStableMotifs without PyBoolNet installed, an error message is shown with install instructions for PyBoolNet. This is far from ideal in my mind, but it will have to do until I can find a better solution or until PyPI fixes this bug (or removes this feature, depending on who you ask).
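For reference, the import guard described above might look something like this (a sketch of the workaround in the EDIT; the placement, message text, and pinned version are illustrative):
# Top of PyStableMotifs/__init__.py (illustrative placement)
try:
    import PyBoolNet  # hard requirement that is not available on PyPI
except ImportError as e:
    raise ImportError(
        "PyStableMotifs requires PyBoolNet, which is not on PyPI. "
        "Install it with: "
        "pip install git+https://github.com/hklarner/PyBoolNet@2.3.0"
    ) from e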
My recommendation would be to get rid of the direct URL in install_requires, and tell your users where they can find that dependency, PyBoolNet, since it is not on PyPI. Don't force them into a specific installation method, but show them an example.
Maybe simply tell your users something like:
This project depends on PyBoolNet, which is not available on PyPI. One place where you can find it is at: https://github.com/hklarner/PyBoolNet.
One way to install PyStableMotifs as well as its dependency PyBoolNet is to run the following command:
python -m pip install 'git+https://github.com/hklarner/PyBoolNet@2.3.0#egg=PyBoolNet' PyStableMotifs
You could additionally prepare a requirements.txt file and tell your users:
Install with the following command:
python -m pip install --requirement https://raw.githubusercontent.com/jcrozum/PyStableMotifs/master/requirements.txt
The content of requirements.txt could be something like:
git+https://github.com/hklarner/PyBoolNet@2.3.0#egg=PyBoolNet
PyStableMotifs
But in the end, you should really let your users choose how to install that dependency. Your project only needs to declare that it depends on that library, not how to install it.

Completely uninstall Eclipse 4.7 version in RHEL 7.4 Maipo

I'm trying to uninstall the current version of Eclipse IDE in my RHEL machine by simply deleting all the files like:
sudo rm -rf ~/.eclipse
sudo rm -rf ~/eclipse-workspace
I also tried
sudo yum remove 'eclipse*'
However, these didn't seem to do the trick.
Any help will be appreciated, thanks!
Applications on Linux systems are most often installed using so-called packages, which are managed by a package management system. In the case of RHEL, packages use the RPM format, and the package manager of choice is a tool called yum.
Both installation and removal of software (packages) should be done using yum, so as to allow the package management system to keep track of all installed files and their current status. Therefore, you shouldn't try to remove software by simply deleting files from the file system. Instead, use the yum command. See the RHEL System Admin Guide for a detailed explanation of how to use yum to search, install, upgrade, and remove packages: Working with Packages.
You have tried the correct command (yum remove <package-name>), but you need to use the correct package name. On RHEL 7.4, the latest version of Eclipse is available as a part of the DevTools channel, and the package name is rh-eclipse47 (see Enabling the Red Hat Developer Tools Repositories). Note that you may have also installed an older version, which would be, for example, rh-eclipse46.
To find out the name of the package you have installed, you can run, for example, the following command:
yum list installed | grep eclipse
There is also the possibility that you installed the software not from an RPM package but manually, e.g. from a .tar.gz file distributed from eclipse.org. If that's the case, you will need to use the uninstaller supplied with that distribution of the software.
Run the following command:
rpm -qa | grep eclipse
This will give a list of installed packages. Remove each of them with the command below:
rpm -e <package-name>
Done!

Install a package to a docker container (managed by dokku)

I have a hard time understanding where the right place is to put code that installs the needed packages for a given docker container managed by dokku.
We have a Scala application and, unfortunately, we need one shell call that depends on the environment. I would like to install the required package into the container using "apt-get install". Right now I am using a custom plugin with a file named "post-release-build". However, I don't have permission to install anything in that phase.
Basically, my script that should be invoked looks like this (based on a dockerfile that is available online):
apt-get update
apt-get install -y build-essential xorg libssl-dev libxrender-dev wget gdebi
wget http://download.gna.org/wkhtmltopdf/0.12/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
gdebi -n wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
echo "-----> wkhtmltox installed!"
Is there a way to make this work? I would also prefer to have such a file somewhere in the application repository so I don't need to set up the environment before pushing the app (in the future).
EDIT:
I have found a plugin that should be capable of installing packages using apt-get (https://github.com/F4-Group/dokku-apt). However, I am a little unlucky because it downloads a package that does not work properly.
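For reference, that plugin is driven by files committed to the app repository; as far as I recall from its README (an assumption to verify against the linked repo), an apt-packages file at the project root simply lists the system packages to install, one per line:
build-essential
xorg
libssl-dev
libxrender-dev
wget
gdebi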
Since just installing with apt-get downloads a package that fails, I investigated dokku more deeply and came up with a new plugin that should install the package for you.
I have created a script, documented how to use it, and licensed it under the MIT license, so feel free to use it. Hopefully it will save you the time I had to spend figuring out what is going on.
URL: https://github.com/mbriskar/dokku-wkhtmltopdf

How to migrate virtualenv

I have a relatively big project with many dependencies that I would like to distribute. Installing the dependencies is a bit of a pain and takes a very long time (pip install runs for quite a while), so I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running there.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths of my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl, I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy the requirements.txt into the new project folder and run:
sudo pip install -r requirements.txt
You should then have all the packages that were previously available in the old virtual environment.
When you create a new virtualenv it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your old virtualenv directory into the new one, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
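A sketch of that approach with a modern pip (pip download and --find-links are current pip options; the thread above discusses the older cache flags):
# On the old machine: download all dependencies into a local directory
pip download -r requirements.txt -d ./pip-cache
# On the new machine (after copying pip-cache over): install offline from it
pip install --no-index --find-links=./pip-cache -r requirements.txt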
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the changes. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which points at the interpreter path.
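Concretely, using the paths from the question (old user sshum, new user dev1; adjust to your own layout):
# bin/activate -- update the hardcoded environment path (was /home/sshum/backend):
VIRTUAL_ENV="/home/dev1/backend"
# bin/pserve -- the first line (shebang) must point at the new interpreter:
#!/home/dev1/backend/bin/python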

How can I trigger a 'yum clean all' from within a yum plugin?

I'm writing a yum plugin that updates the URLs of local repos. When the repo URL changes, I'd like to have yum run a yum clean all to make sure no out-of-date information is cached. I know yum has a hook for running code when yum clean [plugins|all] is requested but is it possible to trigger a clean all from within one of the plugin's other hook functions?
You can do this easily. Yum exposes a library which is consumed by the command-line program. Here is example code for yum clean all:
import sys

# The yum CLI code lives outside the standard Python path.
sys.path.append("/usr/share/yum-cli")
import cli

# YumBaseCli is the same class the yum command-line program drives.
ybc = cli.YumBaseCli()
ybc.cleanCli(["all"])  # equivalent to running "yum clean all"
If you want to do more than "clean all", check the other APIs exposed by the CLI library methods in the /usr/share/yum-cli folder. :)
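To tie this back to the question, here is a minimal sketch of triggering the clean from inside one of the plugin's own hooks. The hook name and requires_api_version follow the standard yum plugin conventions, but whether a second YumBaseCli can safely run inside an already-running yum (locking) is something to verify:
# /usr/lib/yum-plugins/repourlupdate.py (hypothetical plugin file name)
import sys

requires_api_version = '2.3'

def prereposetup_hook(conduit):
    # Plugin-specific logic: detect that a local repo URL has changed.
    url_changed = True  # placeholder for the real check

    if url_changed:
        sys.path.append("/usr/share/yum-cli")
        import cli  # the yum CLI library, as in the snippet above
        cli.YumBaseCli().cleanCli(["all"])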