How do libatlas3, liblapacke, and libopenblas0 interact with each other? - lapack

I'm trying to figure out the interaction between the following library packages in Debian 11 and Ubuntu 20.04:
libatlas3-base
liblapacke
OpenBLAS
libopenblas0-openmp
libopenblas0-pthread
libopenblas0-serial
It looks like the OpenBLAS packages can only be used one at a time because they install to different subdirectories, shown here. How do I select the active one?
/usr/lib/x86_64-linux-gnu/openblas-openmp/libopenblas.so.0
/usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so.0
/usr/lib/x86_64-linux-gnu/openblas-serial/libopenblas.so.0
Once I've selected the active OpenBLAS implementation, will libatlas3 or liblapacke use the active implementation? How can you tell what they are using?

The BLAS and LAPACK libraries are selected with the alternatives system:
~# update-alternatives --config libblas.so.3-x86_64-linux-gnu
~# update-alternatives --config liblapack.so.3-x86_64-linux-gnu
liblapacke.so.3 will use whichever pair of liblapack.so.3 and libblas.so.3 libraries is currently selected above.
libatlas3-base provides liblapack_atlas.so.3, which will always use the ATLAS implementation.
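To answer the second part of the question (what libatlas3 and liblapacke are actually using), here is a short sketch, assuming the Debian/Ubuntu multiarch paths shown above:
# List all registered BLAS/LAPACK providers and the currently selected one
update-alternatives --display libblas.so.3-x86_64-linux-gnu
update-alternatives --display liblapack.so.3-x86_64-linux-gnu
# See which libraries liblapacke.so.3 resolves to at run time
ldd /usr/lib/x86_64-linux-gnu/liblapacke.so.3 | grep -iE 'blas|lapack'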

Related

"community.kubernetes.k8s" can't be resolved in Ansible even if the collection is installed

I want to create some Kubernetes objects using Ansible. The community.kubernetes.k8s module can do this; it is included in the community.kubernetes collection. When I try to create a namespace:
- name: Create ns
  community.kubernetes.k8s:
    api_version: v1
    kind: Namespace
    name: myapp
    state: present
Ansible throws an error that the collection is not installed:
ERROR! couldn't resolve module/action 'community.kubernetes.k8s'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/home/user/ansible-project/ansible/roles/k8s/tasks/main.yml': line 14, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Create ns
^ here
But it is already installed, as ansible-galaxy collection install confirms:
$ ansible-galaxy collection install community.kubernetes
Process install dependency map
Starting collection install process
Skipping 'community.kubernetes' as it is already installed
My installed Ansible version is 2.9.6 on Python 3.8.10; ansible_python_interpreter=/usr/bin/python3 is set and Python 2 is not installed on the workstation (targeted with ansible_connection=local).
What am I doing wrong?
What I've already tried
Using old + new namings
Ansible 2.9+ is required to install collections with ansible-galaxy, so this should work. In the collection documentation I found this notice:
IMPORTANT The community.kubernetes collection is being renamed to kubernetes.core. As of version 2.0.0, the collection has been replaced by deprecated redirects for all content to kubernetes.core. If you are using FQCNs starting with community.kubernetes, please update them to kubernetes.core.
Although this seems confusing, since the Ansible documentation still refers to community.kubernetes.k8s, I tried this too:
- name: Create ns
  kubernetes.core.k8s:
    # ...
And to be sure
$ ansible-galaxy collection install kubernetes.core
Process install dependency map
Starting collection install process
Skipping 'kubernetes.core' as it is already installed
But it still throws the same couldn't resolve module/action 'kubernetes.core.k8s' error. Both directories ~/.ansible/collections/ansible_collections/kubernetes/core/ and ~/.ansible/collections/ansible_collections/community/kubernetes/ exist, so I'd guess that both (old + new naming) should work.
Checking the directory
By calling ansible-galaxy with the -vvv switch, I verified that /home/user/.ansible/collections/ansible_collections is used. It also shows that the install pulls in two collections under the hood: the old community.kubernetes and the new kubernetes.core:
Installing 'community.kubernetes:2.0.0' to '/home/user/.ansible/collections/ansible_collections/community/kubernetes'
Downloading https://galaxy.ansible.com/download/community-kubernetes-2.0.0.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
Installing 'kubernetes.core:2.1.1' to '/home/user/.ansible/collections/ansible_collections/kubernetes/core'
Downloading https://galaxy.ansible.com/download/kubernetes-core-2.1.1.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
which seems even more confusing to me, since the old repo says
This repo hosts the community.kubernetes (a.k.a. kubernetes.core) Ansible Collection.
To me this sounds like they're just changing the name. But as we can see, kubernetes.core has its own repository and version (2.1.1 vs 2.0.0).
To make sure that this directory is used, I added the following to my local ansible.cfg at project scope:
[defaults]
collections_paths = /home/user/.ansible/collections/ansible_collections/
That didn't make any difference.
It turned out that specifying collections_paths works, but without the ansible_collections suffix; Ansible expects the collections directory itself:
collections_paths = /home/user/.ansible/collections/
It's also nice that using ~ as a placeholder for the current user's home directory works, so we can keep it independent of the user like this:
collections_paths = ~/.ansible/collections/
Now my playbook runs fine and the namespace is created:
$ kgns | grep myapp
myapp Active 9m42s
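For reference, a quick sketch for double-checking which paths Ansible actually uses (assuming the project-level ansible.cfg above and the default user collection directory):
# Show the effective collection search path
ansible-config dump | grep -i collections_paths
# Confirm the collections really live under that path
ls ~/.ansible/collections/ansible_collections/community/kubernetes/
ls ~/.ansible/collections/ansible_collections/kubernetes/core/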
Alternatively
It's also possible to install them globally on the entire system by specifying /usr/share/ansible/collections as the target directory (-p switch) to ansible-galaxy:
sudo ansible-galaxy collection install -r requirements.yml -p /usr/share/ansible/collections
Where requirements.yml contains (both, for testing purposes; see the next section):
collections:
  - community.kubernetes
  - kubernetes.core
But as long as there is no good reason to install collections globally, I'd keep them local: pointing collections_paths at the local user's directory in ansible.cfg seems the preferable solution, and we also avoid executing ansible-galaxy with root permissions this way.
Which package to use now?
For testing purposes, I installed both to isolate the cause of my error. Since community.kubernetes is deprecated, I'd prefer kubernetes.core. This means changing the requirements file to
collections:
  - name: kubernetes.core
or alternatively use ansible-galaxy collection install kubernetes.core, but I'd recommend a requirements.yml, which keeps your requirements well documented and makes it easier for others to install them (especially if there is more than one); see the sketch below.
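To wire that up, a short sketch (assuming the requirements.yml shown above sits in the project root):
# Install everything listed in requirements.yml into the user collection directory
ansible-galaxy collection install -r requirements.yml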
In your playbooks/roles, you just have to use kubernetes.core.* instead of community.kubernetes.*. At first glance, it seems that not much has changed yet; it still makes sense to follow the new documentation for kubernetes.core.* to avoid issues caused by outdated documentation.

What is VERSION NUMBER referencing in this postgres CLI command?

What is the version number supposed to represent in this CLI command: pg_ctl -D /usr/local/var/postgres[VERSION NUMBER HERE] start?
VERSION NUMBER HERE usually refers to the major version number (not patch/minor release).
So if you have version 10.3 installed, it should refer to 10.
Note that in 9.6 and earlier, major versions had two digits, followed by a patch version (example: 9.5.4 -- major version is 9.5, minor version is 4).
However, what's more important than the version is which folder actually exists at /usr/local/var: you could have a data directory with no version in its name (i.e., you can do initdb /tmp/foo and all your data will go into foo, and pg_ctl -D /tmp/foo start should get your database started). A good rule of thumb (though there are situations where this fails) is to look for the directory containing postgresql.conf; that directory is what you would pass as the argument to pg_ctl -D <dir> start.
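A minimal sketch of that rule of thumb, assuming the Homebrew-style layout from the question (paths are only illustrative):
# Find candidate data directories by locating postgresql.conf
find /usr/local/var -maxdepth 2 -name postgresql.conf 2>/dev/null
# Start the cluster whose directory contains that file, for example:
pg_ctl -D /usr/local/var/postgres start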
Disclosure: I am an EnterpriseDB (EDB) employee
Including the version number of the Postgres installation in the name of the data directory allows you to have multiple clusters (= instances) running at the same time using the same binaries.
Including the version number in the name of the data directory is not mandatory though, and whether it was done during installation or not depends on which Linux distribution you are using. Every distribution does this slightly differently.
But typically you don't start Postgres through pg_ctl but through the system's service manager, e.g. systemctl or service, depending on the Linux distribution. You can check the service definition to find the exact location of the data directory.
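For example, a rough sketch on a systemd-based distribution (unit names and layouts differ between distributions, so these commands are only illustrative):
# List Postgres-related units and inspect one of them
systemctl list-units 'postgresql*'
# Depending on the packaging, the data directory shows up as a PGDATA setting or a -D argument
systemctl cat postgresql.service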

"Unable to find component name" on myodbc-installer of driver

Trying to follow the directions for installing the MySQL ODBC driver.
When I try to run:
myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"
It says:
[ERROR] SQLInstaller error 6: Unable to find component name
I've found a handful of cases of people reporting this same message, e.g., here and here. Yet there seems to be no resolution.
Noticing the slight variations in the -n name string for the various drivers, I wondered if perhaps the name was something subtly different and the documentation hadn't been updated. But I used a hex editor to look in /usr/local/lib/libmyodbc8w.so and the literal string "MySQL ODBC 8.0 Driver" is in it.
There may be some instances of a name mismatch causing the problem (e.g. in one of the linked-to cases, they use -n "MySQL" instead of the prescribed -n "MySQL ODBC 5.3" from the notes).
However...in my case it was a matter of not using sudo. The error message is not very helpful in indicating that the problem could be a matter of privileges! :-/ But at the very top of the linked instruction page it says (emphasis mine):
To install the driver from a tarball distribution (.tar.gz file), download the latest version of the driver for your operating system and follow these steps, substituting the appropriate file and directory names based on the package you download (some of the steps below might require superuser privileges)
What's going on is that unixODBC has system-wide odbcinst.ini and odbc.ini files. You are not supposed to edit these files directly; instead they are edited via an API that unixODBC provides. That API is called by the MySQL helper utility myodbc-installer:
The error message is delivered by this print_installer_error() routine
...which is called from add_driver() when the routine SQLInstallDriverExW() returns false
(Note: unixODBC on most platforms provides the wide-character version of SQLInstallDriverEx(), but myodbc-installer defines its own SQLInstallDriverExW() shim if it is not available.)
This API apparently doesn't have a way of reporting that it can't get the necessary write privileges on those files (in /usr/local/etc, or perhaps just /etc). So myodbc-installer is just parroting what it got. Sigh.
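In other words, rerunning the registration with superuser privileges fixed it; the result can then be checked with unixODBC's own query tool (a sketch, reusing the driver path from the question):
sudo myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"
# Verify the driver is now registered in the system odbcinst.ini
odbcinst -q -d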

Call to undefined function mysql_connect after upgrade to Moodle 3.6 [duplicate]

This question already has answers here:
Fatal error: Uncaught Error: Call to undefined function mysql_connect()
(9 answers)
Closed 3 months ago.
I have run aptitude install php5-mysql (and restarted MySQL/Apache 2), but I am still getting this error:
Fatal error: Call to undefined function mysql_connect() in /home/validate.php on line 21
phpinfo() says the /etc/php5/apache2/conf.d/pdo_mysql.ini file has been parsed.
In case you are already using PHP 7: the formerly deprecated mysql_* functions were removed entirely, so you should update your code to use the PDO or mysqli_* functions instead.
If that's not possible, as a workaround I created a small PHP include file that recreates the old mysql_* functions using mysqli_* functions: fix_mysql.inc.php
I see that you tagged this with Ubuntu. Most likely the MySQL driver (and possibly MySQL) is not installed. Assuming you have SSH or terminal access and sudo permissions, log into the server and run this:
sudo apt-get install mysql-server mysql-client php5-mysql
If the MySQL packages or the php5-mysql package are already installed, this will update them.
UPDATE
Since this answer still gets the occasional click I am going to update it to include PHP 7. PHP 7 requires a different package for MySQL so you will want to use a different argument for the apt-get command.
# Replace 7.4 with your version of PHP
sudo apt-get install mysql-server mysql-common php7.4 php7.4-mysql
And importantly, mysql_connect() has been deprecated since PHP 5.5.0. Refer to the official documentation here: PHP: mysql_connect()
Well, this is your chance! It looks like PDO is ready; use that instead.
Try checking to see if the PHP MySQL extension module is being loaded:
<?php
phpinfo();
?>
If it's not there, add the following to the php.ini file:
extension=php_mysql.dll
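Alternatively, a quick command-line check (a sketch; on PHP 7 and later the relevant modules are mysqli and pdo_mysql rather than mysql):
# List loaded PHP modules and filter for MySQL-related ones
php -m | grep -i mysql
php -i | grep -i mysql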
If you came here with this problem in the official Docker PHP images, run the command below inside the container:
$ docker-php-ext-install mysql mysqli pdo pdo_mysql
For more information, please refer to the "How to install more PHP extensions" section of the link above (though it's a bit hard to follow).
Or this doc may help you.
https://docs.docker.com/samples/library/php/
I was also stuck with the same problem of undefined mysql_connect(). I tried to make changes in the php.ini file, but it was giving me the same error.
Then I came to this solution, where I changed my code from the deprecated PHP functions to the new ones:
$con = mysqli_connect($host, $user, $password);
mysqli_select_db($con, $dbname);    // select the database
session_start();                    // start the session
$query = mysqli_query($con, $sql);  // run the query after establishing the connection
I hope this helps; this solution is working correctly for me.
EDIT:
If you upgrade from an old PHP version, you need to apt-get install php7.0-mysql
Try:
<?php
phpinfo();
?>
Run the page and search for mysql. If not found, run the following in the shell and restart the Apache server:
sudo apt-get install mysql-server mysql-client php5-mysql
Also make sure you have the following line uncommented somewhere in your apache2.conf (or in your conf.d/php.ini) file; change it from
;extension=php_mysql.so
to
extension=php_mysql.so
In the php.ini file, change this
;extension=php_mysql.dll
to
extension=php_mysql.dll
My guess is your PHP installation wasn't compiled with MySQL support.
Check your configure command (php -i | grep mysql). You should see something like '--with-mysql=shared,/usr'.
You can find complete instructions at http://php.net/manual/en/mysql.installation.php. Although, I would rather go with the solution proposed by #wanovak.
Still, I think you need MySQL support in order to use PDO.
The question is tagged with Ubuntu, but the solution of un-commenting extension=mysqli.dll is specific to Windows. Anyway, first run <?php phpinfo(); ?> and search for mysql* under the Configuration heading. If you don't see it, that means you have not installed or enabled php-mysql. So first install php-mysql:
sudo apt-get install php-mysql
This command will install php-mysql matching the PHP version you already have installed, so no need to worry about the version.
Then comes the Unix-specific part: in the php.ini file, un-comment the line
extension=mysqli.so
and verify that mysqli.so is present in /usr/lib/php/<timestamp_folder>; otherwise point at the full path:
extension=/path/to/mysqli.so
Finally, restart the Apache and MySQL services, and you should now see the mysql section under the Configuration heading on the phpinfo page.
I was getting this error because the project I was working on was developed on PHP 5.6, and after installing PHP 7.1 the project was unable to run on it.
Just for anyone who uses Vagrant with Ubuntu/nginx: in the nginx directory (/etc/nginx/) there is a directory named "sites-available" which contains a file named after the URL configured for the Vagrant machine. In my case it was homestead.app. Within this file there is a line that says something like
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
There you can change the PHP version to the desired one for that particular site.
I googled this but wasn't really able to find a simple answer that said where to look and what to change.
Hope this helps someone.
Thanks.
If you are getting the error
Fatal error: Call to undefined function mysql_connect()
log in to cPanel >> click "Select PHP Version" >> enable the MySQL extension.
For CentOS 7.8 & PHP 7.3
yum install rh-php73-php-mysqlnd
And then restart apache/php.
(Windows mysql config)
Step 1 : Go To Apache Control Panel > Apache > Config > PHP.ini
Step 2 : Search in Notepad (Ctrl+F) for ;extension_dir = "" (it may be commented out with a ;). Replace this line with extension_dir = "C:\php\ext" (note that you need to remove the ; at the beginning of the line).
Step 3 : Search for extension=php_mysql.dll and remove the ; at the beginning.
Step 4 : Save and restart your Apache HTTP Server (on Windows this is usually done via a UI).
That's it :)
If you get errors about missing php_mysql.dll, you'll probably need to download this file from either php.net or pecl.php.net. (Please be cautious about where you get it from.)
More info on PHP: Installation of extensions on Windows - Manual
There must be some syntax error. Copy/paste this code and see if it works:
<?php
$link = mysql_connect('localhost', 'root', '');
if (!$link) {
    die('Could not connect: ' . mysql_error());
}
echo 'Connected successfully';
?>
I had the same error message. It turns out I was using the msql_connect() function instead of mysql_connect().

What does the `--fresh` option do in Brew?

While following installation instructions (e.g., for Caffe on OS X), I ran into the --fresh flag for Homebrew. For example,
brew install --fresh -vd snappy leveldb gflags glog szip lmdb
However, I see no documentation about what --fresh does, and I can't find it in the source code for Homebrew. What does this flag do? (Or what did it use to do?)
I found an old GitHub issue describing the behavior of --fresh.
The flag was meant to ensure packages would be installed without any previously set compile-time options (like --with-python), but it was removed because it didn't do anything:
commit 64744646e9be93dd758ca5cf202c6605accf4deb
Author: Jack Nagel <jacknagel@gmail.com>
Date: Sat Jul 5 19:28:15 2014 -0500
Remove remaining references to "--fresh"
This option was removed in 8cdf4d8ebf439eb9a9ffcaa0e455ced9459e1e41
because it did not do anything.
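So on a current Homebrew the flag can simply be dropped; the rest of the command from the question works without it (the formula names come from the original Caffe instructions and may have changed since):
brew install -vd snappy leveldb gflags glog szip lmdb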