CentOS 8: sdkmanager update tools failed - android-sdk-manager

I use the latest command-line tools:
sudo curl https://dl.google.com/android/repository/commandlinetools-linux-7302050_latest.zip -o android-sdk.zip
sudo yum install unzip
sudo unzip android-sdk.zip -d .
But when I try to update the tools, it fails:
sudo ./cmdline-tools/bin/sdkmanager "tools"
ERROR: JAVA_HOME is not set and no 'java' command could be found in
your PATH. Please set the JAVA_HOME variable in your environment to
match the location of your Java installation.
But I have already installed the JDK and set JAVA_HOME:
sudo tee /etc/profile.d/jdk1.8.0.sh <<EOF
export JAVA_HOME=/opt/jdk1.8.0_261
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
and I can verify PATH and JAVA_HOME:
# echo $PATH
/root/.nvm/versions/node/v16.1.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jdk1.8.0_261/bin:/root/bin
# echo $JAVA_HOME
/opt/jdk1.8.0_261
Can anyone help?

You cannot use sudo here, because of this check in .../cmdline-tools/bin/sdkmanager:
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
    if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
        # IBM's JDK on AIX uses strange locations for the executables
        JAVACMD="$JAVA_HOME/jre/sh/java"
    else
        JAVACMD="$JAVA_HOME/bin/java"
    fi
    if [ ! -x "$JAVACMD" ] ; then
        die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
    fi
else
    JAVACMD="java"
    which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
When I run:
sudo which java
which: no java in (/sbin:/bin:/usr/sbin:/usr/bin)
And the following command (without sudo) works fine:
./cmdline-tools/bin/sdkmanager "tools" --sdk_root=/home/android/android_sdk/sdk
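If root privileges really are required, one option (an untested sketch, reusing the paths from the question above) is to pass the environment through sudo explicitly instead of relying on root's default PATH:

# keep JAVA_HOME and PATH visible to the root process (paths taken from the question)
sudo env JAVA_HOME=/opt/jdk1.8.0_261 PATH="$PATH:/opt/jdk1.8.0_261/bin" \
    ./cmdline-tools/bin/sdkmanager "tools" --sdk_root=/home/android/android_sdk/sdk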

Related

pyenv install with .python-version and .python-virtualenv fails on macOS Big Sur

This is only partly related to #1737.
I have just upgraded to the new macOS Big Sur.
I have installed Xcode Beta 12.3 and configured it with Command Line Tools 12.3 beta.
If I do:
$ CFLAGS="-I$(brew --prefix openssl)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix readline)/include -I$(xcrun --show-sdk-path)/usr/include" LDFLAGS="-L$(brew --prefix openssl)/lib -L$(brew --prefix readline)/lib -L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib" pyenv install --patch 3.8.0 < <(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1)
as per the instructions of this blog: https://dev.to/kojikanao/install-python-3-8-0-via-pyenv-on-bigsur-4oee It works.
However, I started using pyenv after finding a very attractive way of managing many python envs through automatic activation as described in this blog: https://glhuilli.github.io/virtual-environments.html
Since I upgraded, I have not been able to get this to work.
Questions:
1. When I cd into a directory with .python-version and .python-virtualenv, the script prompts me to create a new env with pyenv install. This fails with the ./Modules/pwdmodule.c error. How can I alter the above script in order to create an environment using .python-version and .python-virtualenv? I can obviously provide a different Python version in the script, but what about the name of the virtual environment? How can I include that?
2. I want the new virtual environment's contents to be located in the directory where pyenv is called, and not in /Users/username/.pyenv. How can this be done?
I am sure others are facing similar issues. Will these be fixed eventually? Ideally, I would like to be able to just do pyenv install and be done...
Thanks in advance.
So, about question 1: The answer is that pyenv install will not work at the moment. However, as long as the required Python version is installed in pyenv, the script will work like a charm. So you will have to install it in a different way (not with pyenv install).
Example:
Suppose you are given two files:
.python-version
.python-virtualenv
respectively encapsulating: 3.8.2 and test-venv. Then just run:
CFLAGS="-I$(brew --prefix openssl)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix readline)/include -I$(xcrun --show-sdk-path)/usr/include"
LDFLAGS="-L$(brew --prefix openssl)/lib -L$(brew --prefix readline)/lib -L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib"
pyenv install --patch \$(head -n 1 .python-version) < <(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1)
This should successfully install a pyenv for 3.8.2.
Then just do:
pyenv virtualenv $(head -n 1 .python-virtualenv)
Then if you run:
$ pyenv virtualenvs
3.8.2/envs/test-venv (created from /Users/{your-pc-name}/.pyenv/versions/3.8.2)
test-venv (created from /Users/{your-pc-name}/.pyenv/versions/3.8.2)
you will confirm that the new env has been created.
About question 2: Here is the updated script:
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"

# Automatic venv activation
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
export PYENV_VIRTUALENV_DISABLE_PROMPT=1

# Undo any existing alias for `cd`
unalias cd 2>/dev/null

# Method that verifies all requirements and activates the virtualenv
hasAndSetVirtualenv() {
  # .python-version is mandatory for .python-virtualenv but not vice versa
  if [ -f .python-virtualenv ]; then
    if [ ! -f .python-version ]; then
      echo "To use .python-virtualenv you need a .python-version"
      return 1
    fi
  fi
  # Check if pyenv has the Python version needed.
  # If not (or pyenv not available), exit with code 1 and the respective instructions.
  if [ -f .python-version ]; then
    if [ -z "`which pyenv`" ]; then
      echo "Install pyenv see https://github.com/yyuu/pyenv"
      return 1
    elif [ -n "`pyenv versions 2>&1 | grep 'not installed'`" ]; then
      # Message "not installed" is automatically generated by `pyenv versions`
      echo 'run "pyenv install"'
      return 1
    fi
  fi
  # Create and activate the virtualenv if all conditions above are successful.
  # Also, if the virtualenv is already created, then just activate it.
  if [ -f .python-virtualenv ]; then
    VIRTUALENV_NAME="`cat .python-virtualenv`"
    PYTHON_VERSION="`cat .python-version`"
    MY_ENV=$PYENV_ROOT/versions/$PYTHON_VERSION/envs/$VIRTUALENV_NAME
    ([ -d $MY_ENV ] || virtualenv $MY_ENV -p `which python`) && \
      source $MY_ENV/bin/activate
  fi
}

pythonVirtualenvCd () {
  # move to a folder + run the pyenv + virtualenv script
  cd "$@" && hasAndSetVirtualenv
}

# Every time you move to a folder, run the pyenv + virtualenv script
alias cd="pythonVirtualenvCd"
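A hypothetical usage example of the hook above (the directory and environment names are made up):

# create a project directory that triggers the hook
mkdir -p ~/my-project && cd ~/my-project
echo "3.8.2" > .python-version
echo "test-venv" > .python-virtualenv
# leaving and re-entering the directory runs hasAndSetVirtualenv,
# which creates the virtualenv if needed and activates it
cd .. && cd ~/my-project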

Which profile does sh load

I am trying to load a specific tool (nvm) from within sh.
Installing it as explained in its page works perfectly under bash, and testing it returns the following:
$ bash
$ nvm --version
+ X.XX.X
but if I type
$ sh
$ nvm --version
+ sh: 1: nvm: not found
but that is still expected, since the default installation only modifies .bashrc.
Now I have included the same .bashrc code in my /etc/profile:
export NVM_DIR="/opt/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
and tried again.
$ sh
$ nvm --version
+ sh: 1: nvm: not found
$ echo $NVM_DIR
+ /dir/to/nvm
$ [ -s "$NVM_DIR/nvm.sh" ] && echo "it works?"
+ it works?
$ [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
$ nvm --version
X.XX.X
which got me extremely confused. What exactly is happening? Isn't sh loading /etc/profile, or am I doing something really wrong?
-- Edit after comments --
I also tried to include it in the local profile:
$ cat ~/.profile
+ export NVM_DIR="/opt/nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
$ sh
$ nvm --version
+ sh: 1: nvm: not found
The problem is described in this article (see the blue line for sh).
The solution was to point the ENV variable at /etc/profile.
For example, ENV=/etc/profile sh loads that profile when sh starts. That solved the problem.
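A minimal sketch of making that persistent, assuming a dash-style /bin/sh that reads the file named in ENV when it starts interactively:

# one-off: start sh with ENV pointing at the profile
ENV=/etc/profile sh
nvm --version        # now resolves, because nvm.sh was sourced via $ENV

# persistent alternative (assumption: sh is launched from a login shell that reads ~/.profile)
echo 'export ENV=/etc/profile' >> ~/.profile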

Install MongoDB on Debian Buster

How can I install the latest MongoDB 3.4, or even 3.6?
They provide packages for Ubuntu, but my server runs Debian Buster and I am stuck with MongoDB 3.2.
I don't know if this is a good idea yet, but I just installed it by adding the sid repos and installing using the mongodb-server package. For me this installs version 3.4.18.
I created /etc/apt/sources.list.d/sid.list with:
deb http://deb.debian.org/debian/ sid main
deb-src http://deb.debian.org/debian/ sid main
then did
apt update
apt install mongodb-server
and verified that it's working by connecting with mongo.
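If you go this route, you may want to pin the sid repository low so that only the packages you explicitly request come from it; a sketch (the preferences file name is arbitrary):

# /etc/apt/preferences.d/99-sid (hypothetical file name)
Package: *
Pin: release a=sid
Pin-Priority: 100

With that pin in place, other packages will not be pulled up to sid versions by default; you install what you need explicitly, e.g. apt install -t sid mongodb-server.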
I have found a solution using a build script; the description is here:
https://github.com/patrikx3/docker-debian-testing-mongodb-stable
The description:
A stable builder script for MongoDB and the MongoDB Tools on Debian Stretch / Buster / Bullseye / Testing. What it does, exactly:
It is basically a build of the latest MongoDB for Debian.
The current version is the r4.0.x build (release).
Warning: ./scripts/build-server.sh removes all mongodb* apt packages, and /etc/systemd/system/mongodb-server.service is replaced.
It installs the required apt dependencies, generates the systemd service, and enables it.
Check whether the build works (the build steps are below). It runs all the tests, so if they pass, the build really works. If there is an error, you obviously will not deploy it on your server. If building and testing succeed, it installs the binaries as follows, and you are done.
The server build script, build-server.sh, is as follows:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo/wiki/Build-Mongodb-From-Source
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
#echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.2.0"
echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run via the root 'sudo' command or using 'sudo -i'."
    exit 1
fi
# require mongo branch
#if [ -z "${1}" ]; then
# echo "First argument must be the MONGODB_BRANCH for example 'v4.1'"
# exit 1
#fi
#MONGODB_BRANCH="${1}"
# require mongo release
#if [ -z "${2}" ]; then
# echo "The second argument must be the MONGODB_RELEASE for example 'r4.1.0'"
# exit 1
#fi
#MONGODB_RELEASE="${2}"
# require mongo release
if [ -z "${1}" ]; then
    echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
    exit 1
fi
MONGODB_RELEASE="${1}"
# delete all mongo other programs, we self compile
apt remove --purge mongo*
# the required packages for debian
apt -y install libboost-filesystem-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev build-essential gcc python scons git glibc-source libssl-dev python-pip libffi-dev python-dev libcurl4-openssl-dev #libcurl-dev
pip install -U pip pyyaml typing
# generate build directory variable
BUILD=$DIR/../build
# delete previous build directory
rm -rf $BUILD/mongo
# generate new build directory
mkdir -p $BUILD
# the mongodb.conf and systemd services files in a directory variable
ROOT_FS=$DIR/../artifacts/root-filesystem
# find out how many cores we have and we use that many
if [ -z "$CORES" ]; then
    CORES=$(grep -c ^processor /proc/cpuinfo)
fi
echo Using $CORES cores
# go to the build directory
pushd $BUILD
# clone the mongo by branch
#git clone -b ${MONGODB_BRANCH} https://github.com/mongodb/mongo.git
# clone the mongo by branch
git clone https://github.com/mongodb/mongo.git
# the mongo directory is a variables
MONGO=$BUILD/mongo
# go to the mongo directory
pushd $MONGO
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
# hack to old version python pip cryptography from 1.7.2 to use the latest
sed -i 's#cryptography == 1.7.2#\#cryptography == 1.7.2#g' buildscripts/requirements.txt
# this is only because 4.0.12 uses 1.7.2 and
# https://github.com/pyca/cryptography/issues/4193#issuecomment-381236459
# support minimum latest (2.2)
pip install cryptography
# install the python requirements
#pip install -r etc/pip/dev-requirements.txt
pip install -r buildscripts/requirements.txt
# somewhere in the build it says if we install this, it is faster to build
pip2 install --user regex
# build everything
scons all --disable-warnings-as-errors -j $CORES --ssl
# install the mongo programs all
scons install --disable-warnings-as-errors -j $CORES --prefix /usr
# create a copy of the old config
#TIMESTAMP=$(($(date +%s%N)/1000000))
#cp /etc/mongodb.conf /etc/mongodb.conf.$TIMESTAMP.save
# copy the mongodb.conf configured and the systemd service file
# dangerous!!! removed
# cp -avr $ROOT_FS/. /
MONGODB_SERVICE=etc/systemd/system/mongodb-server.service
cp $ROOT_FS/$MONGODB_SERVICE /$MONGODB_SERVICE
chown root:root /$MONGODB_SERVICE
chmod o-rwx /$MONGODB_SERVICE
# generate mongodb user and group
useradd mongodb -d /var/lib/mongodb -s /bin/false || true
# create the required mongodb database directory and add safety
mkdir -p /var/lib/mongodb
chmod o-rwx -R /var/lib/mongodb
chown -R mongodb:mongodb /var/lib/mongodb
# create the required mongodb log directory and add safety
mkdir -p /var/log/mongodb
chmod o-rwx -R /var/log/mongodb
chown -R mongodb:mongodb /var/log/mongodb
# create the required run socket directory and add safety
mkdir -p /run/mongodb
chmod o-rwx -R /run/mongodb
chown -R mongodb:mongodb /run/mongodb
# add safety to the mongodb config file
chmod o-rwx /etc/mongodb.conf || true
chown mongodb:mongodb /etc/mongodb.conf || true
# reload systemd services
systemctl daemon-reload
# enable the mongodb-server
systemctl enable mongodb-server
# start the mongodb-server
#service mongodb-server start
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo
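Once build-server.sh finishes, a quick sanity check might look like this (assuming the service file from the repository's root-filesystem was installed as above):

# start the freshly installed service and confirm the server answers
sudo systemctl start mongodb-server
systemctl status mongodb-server --no-pager
mongo --eval 'db.version()'   # should print the release you built, e.g. 4.0.12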
The tools build script, build-tools.sh, is as follows:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo-tools
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
echo "Works like command: sudo ./scripts/build-tools.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run via the root 'sudo' command or using 'sudo -i'."
    exit 1
fi
# require mongo release
if [ -z "${1}" ]; then
    echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
    exit 1
fi
MONGODB_RELEASE="${1}"
## delete all mongo other programs, we self compile
##apt remove --purge mongo*
## the required packages for debian
##apt -y install gcc python scons git glibc-source libssl-dev python-pip
apt -y install golang libpcap-dev
export GOROOT=$(go env GOROOT)
# generate build directory variable
BUILD=$DIR/../build/src/github.com/mongodb/
# delete previous build directory
rm -rf $BUILD/mongo-tools
# generate new build directory
mkdir -p $BUILD
# find out how many cores we have and we use that many
CORES=$(grep -c ^processor /proc/cpuinfo)
# go to the build directory
pushd $BUILD
# clone the mongo by branch
git clone https://github.com/mongodb/mongo-tools
# the mongo directory is a variables
MONGO_TOOLS=$BUILD/mongo-tools
# go to the mongo directory
pushd $MONGO_TOOLS
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
bash ./build.sh
chown root:adm -R ./bin
chmod o-rwx -R ./bin
chmod ug+rx ./bin/*
cp -r ./bin/. /usr/bin
# for PROGRAM in bsondump mongodump mongoexport mongofiles mongoimport mongoreplay mongorestore mongostat mongotop
# do
# go build -o bin/${PROGRAM} -tags "ssl sasl" ${PROGRAM}/main/${PROGRAM}.go
# done
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo-tools

mupdf-tools doesn't install mudraw

When I try to install mupdf-tools, it doesn't install mudraw.
If I type "dpkg -L mupdf-tools" in a terminal, I get the following output:
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/mupdf-tools
/usr/share/doc/mupdf-tools/README
/usr/share/doc/mupdf-tools/copyright
/usr/share/doc/mupdf-tools/changelog.Debian.gz
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/mutool.1.gz
/usr/bin
/usr/bin/mutool
As can be seen, mudraw doesn't appear in the list.
How can I fix this so that I will have mudraw?
I'm using Ubuntu.
You haven't said which version of Ubuntu or which version of the mupdf-tools package you're using.
Ubuntu 15.10 (Wily) contains mupdf-tools package 1.7-1, and that contains mudraw:
# dpkg -l mupdf-tools
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-============================-===================-===================-=============================================================
ii mupdf-tools 1.7-1 i386 command line tools for the MuPDF viewer
# dpkg -L mupdf-tools
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/mupdf-tools
/usr/share/doc/mupdf-tools/README
/usr/share/doc/mupdf-tools/changelog.Debian.gz
/usr/share/doc/mupdf-tools/copyright
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/mudraw.1.gz
/usr/share/man/man1/mutool.1.gz
/usr/bin
/usr/bin/mutool
/usr/bin/mudraw
For mupdf 1.8 or later, mudraw is now invoked via 'mutool draw' and there is no separate mudraw binary.
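If you only need the rendering functionality, you can call mutool draw directly; a short example (file names and options are placeholders):

# render pages 1-3 of input.pdf to PNG files at 150 dpi
mutool draw -r 150 -o page-%d.png input.pdf 1-3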
Because of a compatibility issue, I wanted to keep the mudraw command temporarily on Ubuntu 17.04.
My solution: create a shell script named "mudraw" that runs "mutool draw":
$ echo '#!/bin/sh' >> mudraw
$ echo 'mutool draw "$@"' >> mudraw
$ chmod 755 mudraw
$ sudo mv mudraw /usr/bin
$ mudraw -v

rpm build fails to make build root dir

I am working on making an RPM for a small program used within our enterprise. The %build section of the RPM process works; I'm having trouble with the %install section. I've referenced this article's response, and I believe I am properly referring to the target location with respect to %{_buildroot}.
The program is to be installed as a system service, so after this RPM is generated, my next step will be to include the script that gets installed to the init.d location and run that install. One step at a time, though.
The build errors are as follows (omitting everything but %install):
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.eUDaCK
+ umask 022
+ cd /home/packager/rpmbuild/BUILD
+ '[' /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 '!=' / ']'
+ rm -rf /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
++ dirname /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
+ mkdir -p /home/packager/rpmbuild/BUILDROOT
+ mkdir /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
+ cd o2arbitord-1.0
+ LANG=C
+ export LANG
+ unset DISPLAY
+ install -m 555 /home/packager/rpmbuild/BUILD/o2arbitord-1.0/o2arbitord /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin
install: cannot create regular file `/home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.eUDaCK (%install)
Now, my rpmbuild directory does not have the directory /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin. While I know that's part of the problem, the rpmbuild process isn't making the directory /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 either. What I don't understand about that one is: why? Looking at the script output above you can clearly see the line: mkdir /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64. So, why isn't the directory made?
How does the line BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX) differ from whatever the definition of %{_buildroot} is? I thought that was the definition, but it appears to be something different.
For reference, here is my spec file:
Name: o2arbitord
Version: 1.0
Release: 1%{?dist}
Summary: a daemon
Group: Applications/System
License: GPL
URL: http://My.site
Source0: %{name}-%{version}.tar.gz
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildArch: x86_64
BuildRequires: libusb1-devel
#Requires:
%description
%prep
%setup -q
%build
make -f o2arbitord.mk
%install
install -m 555 %{_builddir}/%{name}-%{version}/%{name} %{buildroot}%{_sbindir}
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root,-)
/usr/sbin/o2arbitord
%changelog
You are attempting to install a file into a directory that doesn't exist (yet).
RPM only creates the %{buildroot} for you automatically. Anything under that you need to create yourself.
So when you run
install -m 555 %{_builddir}/%{name}-%{version}/%{name} %{buildroot}%{_sbindir}
where %{buildroot}%{_sbindir} expands to /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin RPM has only created /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 for you already.
You need to create the /usr/sbin part of that path and then copy the file into it.
You can do that with either
%{__mkdir_p} '%{buildroot}%{_sbindir}'
or
%{__install} -d '%{buildroot}%{_sbindir}'
Where
$ rpm -E '__mkdir_p = %{__mkdir_p}'
__mkdir_p = /bin/mkdir -p
$ rpm -E '__install = %{__install}'
__install = /usr/bin/install
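Putting that together, a minimal sketch of a corrected %install section for the spec above (one way to do it, not the only one) would be:

%install
rm -rf %{buildroot}
%{__mkdir_p} '%{buildroot}%{_sbindir}'
%{__install} -m 555 %{_builddir}/%{name}-%{version}/%{name} '%{buildroot}%{_sbindir}/'

With the directory created first, the install line no longer fails, and the %files entry /usr/sbin/o2arbitord matches the installed path.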