yarn kbn bootstrap failed while trying to prepare kibana dev env - plugins

I am trying to generate a Kibana plugin, and for that I downloaded the .zip file from GitHub.
However, while preparing the Kibana development environment, the yarn kbn bootstrap command failed with the error below:
ERROR UNHANDLED ERROR
ERROR Error: Command failed with exit code 128: git merge-base HEAD FETCH_HEAD
fatal: Not a valid object name HEAD
Note:
I already ran sudo git init.
Steps followed so far
sudo wget https://github.com/elastic/kibana/archive/refs/tags/v7.17.0.zip
sudo apt-get install unzip
sudo unzip v7.17.0.zip
sudo mv kibana-7.17.0 kibana
sudo chmod -R 777 kibana
sudo yarn add require-in-the-middle
sudo yarn add symbol-observable
sudo yarn add source-map-support
sudo yarn add lodash
sudo git init
sudo yarn kbn bootstrap (failed with the error below)
ERROR UNHANDLED ERROR
ERROR Error: Command failed with exit code 128: git merge-base HEAD FETCH_HEAD
fatal: Not a valid object name HEAD
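A likely reading of this error (an assumption on my part, not something stated in the post): a directory created by unzipping a release archive and then running sudo git init is a repository with no commits, so HEAD does not point at any object and git merge-base HEAD FETCH_HEAD fails with exit code 128. A minimal sketch of a setup that gives bootstrap a real git history, assuming the v7.17.0 tag is still the target:
# clone the tagged release instead of unzipping the archive (sketch)
git clone --depth 1 --branch v7.17.0 https://github.com/elastic/kibana.git kibana
cd kibana
yarn kbn bootstrap
Alternatively, committing the unzipped sources once (git add -A && git commit -m "import v7.17.0") should at least give HEAD something to resolve to, though I have not verified that bootstrap needs nothing more.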

Related

Setting LD_LIBRARY_PATH to miniconda lib dir in docker ubuntu:20.04 breaks libp11-kit

If I run the following commands in an ubuntu:20.04 Docker image (docker run -it --rm ubuntu:20.04 bash):
apt update
apt upgrade -y
apt install -y wget
wget https://repo.anaconda.com/miniconda/Miniconda3-py310_23.1.0-1-Linux-x86_64.sh
bash Miniconda3-py310_23.1.0-1-Linux-x86_64.sh
source /root/.bashrc
export LD_LIBRARY_PATH=/root/miniconda3/lib:$LD_LIBRARY_PATH
then it breaks libp11-kit. For instance when running apt install vim:
/usr/lib/apt/methods/http: symbol lookup error: /lib/x86_64-linux-gnu/libp11-kit.so.0: undefined symbol: ffi_type_pointer, version LIBFFI_BASE_7.0
E: Method http has died unexpectedly!
E: Sub-process http returned an error code (127)
E: Method /usr/lib/apt/methods/http did not start correctly
I tried adding other directories to LD_LIBRARY_PATH (/usr/lib/, /lib/x86_64-linux-gnu/) without success.
Possibly related to "conda-build fails to recognise libraries"?
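One hedged interpretation of the undefined symbol ffi_type_pointer, version LIBFFI_BASE_7.0 message: miniconda ships its own libffi in /root/miniconda3/lib, and once that directory is on LD_LIBRARY_PATH it shadows the system libffi that libp11-kit was linked against. A small workaround sketch is to drop the variable just for apt, leaving conda usable otherwise:
# run apt with LD_LIBRARY_PATH unset for this one command (workaround sketch)
env -u LD_LIBRARY_PATH apt install -y vim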

Error: Error: Failed to download metadata for repo 'advanced-virtualization': Cannot prepare internal mirrorlist: No URLs in mirrorlist

I am using a RHEL 9.1 machine where I am trying to install the package 'python3-distro', but I am getting this error:
Error: Failed to download metadata for repo 'advanced-virtualization': Cannot prepare internal mirrorlist: No URLs in mirrorlist
Can anyone please help me resolve this issue?
I was expecting 'sudo dnf install python3-distro' to work successfully.
I tried running the commands below:
sudo sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/*
sudo sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/*
As a result, after a reboot I am getting the error below:
Error: Failed to download metadata for repo 'advanced-virtualization': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Output of ls /etc/yum.repos.d/:
$ ls /etc/yum.repos.d/
advanced-virtualization.repo  nfv-openvswitch.repo  redhat.repo
ceph-pacific.repo             rdo-release.repo
messaging.repo                rdo-testing.repo
Any help would be appreciated.
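For what it's worth, the failing repo looks like a third-party (CentOS/RDO-era) repo whose mirrorlist no longer resolves on RHEL 9. One way to at least get the dnf transaction through, assuming the repo id matches the file name advanced-virtualization, is to skip that repo for this install, or disable it outright (the second command needs dnf-plugins-core):
sudo dnf --disablerepo=advanced-virtualization install python3-distro
sudo dnf config-manager --set-disabled advanced-virtualization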

Gitlab-runner failed to remove permission denied

I'm setting up a CI/CD pipeline with GitLab. I've installed gitlab-runner on a DigitalOcean Ubuntu 18.04 droplet and gave the gitlab-runner user permissions in /etc/sudoers:
gitlab-runner ALL=(ALL:ALL) ALL
The first commit to the associated repository correctly builds the docker-compose stack (the app itself is Django + Postgres), but subsequent commits are not able to clean up previous builds and fail:
Running with gitlab-runner 12.8.0 (1b659122)
on ubuntu-s-4vcpu-8gb-fra1-01 52WypZsE
Using Shell executor...
00:00
Running on ubuntu-s-4vcpu-8gb-fra1-01...
00:00
Fetching changes with git depth set to 50...
00:01
Reinitialized existing Git repository in /home/gitlab-runner/builds/52WypZsE/0/lorePieri/djangocicd/.git/
From https://gitlab.com/lorePieri/djangocicd
* [new ref] refs/pipelines/120533457 -> refs/pipelines/120533457
0072002..bd28ba4 develop -> origin/develop
Checking out bd28ba46 as develop...
warning: failed to remove app/staticfiles/admin/img/selector-icons.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/search.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-alert.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/tooltag-arrowright.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-unknown-alt.svg: Permission denied
This is the relevant portion of the .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
stages:
  - test
  - deploy_staging
  - deploy_production
step-test:
  stage: test
  before_script:
    - export DYNAMIC_ENV_VAR=DEVELOP
  only:
    - develop
  tags:
    - develop
  script:
    - echo running tests in $DYNAMIC_ENV_VAR
    - sudo apt-get install -y python-pip
    - sudo pip install docker-compose
    - sudo docker image prune -f
    - sudo docker-compose -f docker-compose.yml build --no-cache
    - sudo docker-compose -f docker-compose.yml up -d
    - echo do tests now
    - sudo docker-compose exec -T web python3 -m coverage run --source='.' manage.py test
...
What I've tried:
usermod -aG docker gitlab-runner
sudo service docker restart
The best solution for me was adding
pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
into /etc/gitlab-runner/config.toml
Even if a previous job left behind files you don't have permission for, this sets the correct ownership before the runner cleans up the working directory and clones the repo.
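For reference, pre_clone_script is a per-runner setting, so it goes inside the [[runners]] block of config.toml. A minimal sketch (the name and url here are placeholders taken from the job log, not your actual config):
[[runners]]
  name = "ubuntu-s-4vcpu-8gb-fra1-01"
  url = "https://gitlab.com/"
  executor = "shell"
  pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."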
I would recommend setting GIT_STRATEGY to none in the affected job.
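For clarity, that is a per-job variable in .gitlab-ci.yml. A minimal sketch using the step-test job from the question (with GIT_STRATEGY: none the runner skips cloning and cleaning the checkout entirely, so make sure the job doesn't actually need the sources):
step-test:
  variables:
    GIT_STRATEGY: none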
I have had the exact same problem, so I will explain how it was resolved in detail.
Try finding your config.toml file and run the gitlab-runner command with root privileges, since "permission denied" is a very common error on UNIX-based operating systems.
After finding the location of config.toml, pass it:
sudo gitlab-runner run --config <absolute_location_of_config_toml>
P.S. You can find all config.toml files easily using the locate config.toml command. Make sure mlocate is installed first by executing sudo apt-get install mlocate.
After facing the permission denied error, I tried using sudo gitlab-runner run instead of gitlab-runner, but it has its own problem:
ERROR: Failed to load config stat /etc/gitlab-runner/config.toml: no such
file or directory builds=0
while executing gitlab-runner without root permissions doesn't have this config file problem. (When run as root, gitlab-runner reads /etc/gitlab-runner/config.toml; when run as a normal user it reads ~/.gitlab-runner/config.toml, which is why only the sudo invocation complains.)
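A small sketch of working around that mismatch, assuming your existing config really lives in the non-root user's home directory:
sudo gitlab-runner run --config /home/gitlab-runner/.gitlab-runner/config.toml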
I tried the ways and solutions that @Grumbanks and @vlad-Mazurkov mentioned, but they didn't work properly for me.
It may be because you write files into the cloned codebase. What I do is simply create another directory outside of the gitlab-runner directory:
WORKSPACE_DIR="/home/abcd_USER/a/b"
rm -rf $WORKSPACE_DIR
mkdir -p $WORKSPACE_DIR
cd $WORKSPACE_DIR
ls -la
git clone ..................
and do whatever is needed there.
I never faced the issue again.

How to fix "mongod: command not found" error in AWS Cloud9

I want to install MongoDB on my AWS Cloud9 server. I followed the instructions on the Cloud9 community page, but the command to run the MongoDB server from the Cloud9 command line, i.e.
$ ./mongod
returns ./mongod: line 1: mongod: command not found.
Please help me fix this.
I've tried searching for it on YouTube, but that didn't help.
$ sudo yum install -y mongodb-org
Loaded plugins: priorities, update-motd, upgrade-helper
1062 packages excluded due to repository priority protections
No package mongodb-org available.
Error: Nothing to do
$ mkdir data
$ echo 'mongod --bind_ip=$IP --dbpath=data --nojournal --rest "$@"' > mongod
$ chmod a+x mongod
$ ./mongod
./mongod: line 1: mongod: command not found
We can start MongoDB by running the mongod script in your project root:
./mongod
From the error given below, it is apparent that the MongoDB repo isn't configured in your yum package manager.
vocstartsoft:~ $ sudo yum install -y mongodb-org
Loaded plugins: priorities, update-motd, upgrade-helper
1062 packages excluded due to repository priority protections
No package mongodb-org available.
Error: Nothing to do
Create /etc/yum.repos.d/mongodb-org-4.0.repo and write the following in it.
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
Or add the repo directly from the .repo file,
yum-config-manager --add-repo https://repo.mongodb.org/yum/amazon/mongodb-org.repo
Then run,
sudo yum install -y mongodb-org
Reference
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/
Either try this: sudo apt install mongodb-clients
Or consider the below process:
At the terminal you'll see: ubuntu:~/environment $
Enter touch mongodb-org-3.6.repo into the terminal
Now open the mongodb-org-3.6.repo file in your code editor (select it from the left-hand file menu) and paste the following into it then save the file:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Now run the following in your terminal:
sudo mv mongodb-org-3.6.repo /etc/yum.repos.d
sudo yum install -y mongodb-org
If the second set of commands does not work, try:
sudo apt install mongodb-clients
Close the mongodb-org-3.6.repo file and press Close tab when prompted
Change directories back into ~ by entering cd into the terminal, then enter the following commands. The prompt should look like ubuntu:~ $.
sudo mkdir -p /data/db
echo 'mongod --dbpath=data --nojournal' > mongod
chmod a+x mongod
Now test mongod with ./mongod
Remember, you must first enter cd to change directories into root ~ before running ./mongod
Don't forget to shut down ./mongod with ctrl + c each time you're done working
If this error pops up while using the mongod command:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
Then use the code:
sudo chmod -R go+w /data/db

Minikube installation failing within script

I am installing Minikube on Ubuntu 16.04 LTS (instructions below). It works fine when I run each command manually. However, if I put these commands in a script file install.sh, it fails at the last step with an error:
Error
Starting VM...
E0710 20:42:00.618251 20443 start.go:168] Error starting host: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'').
Retrying.
E0710 20:42:00.618595 20443 start.go:174] Error starting host: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Instructions
sudo apt-get -y update
sudo apt-get -y upgrade
#Make sure no prior copy of minikube exists.
sudo rm -rf .minikube/
#Install minikube. Make sure to check for latest version (e.g. current version is 0.28.0)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/$MINIKUBE_VERSION/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
#Install kvm2
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && chmod +x docker-machine-driver-kvm2 && sudo mv docker-machine-driver-kvm2 /usr/bin/
sudo apt install -y libvirt-bin qemu-kvm
sudo usermod -a -G libvirtd $(whoami)
#Check to ensure libvirtd service is running.
systemctl status libvirtd
minikube start --vm-driver kvm2
Also, when the script fails, if I re-run the following commands, minikube works fine. I just don't know why it fails when run from within the script.
sudo rm -rf .minikube/
minikube start --vm-driver kvm2
If you're not running this script for the first time, sudo rm -rf .minikube/ will not be enough.
You should also run the command below:
minikube delete
And, just in case, add a shebang to the top of the script:
#!/bin/bash
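Putting both suggestions together, the top of install.sh might look something like this (a sketch, not the full script from the question; the || true keeps the script going on the very first run, when there is no cluster to delete yet):
#!/bin/bash
# remove any previous cluster and local minikube state before reinstalling
minikube delete || true
sudo rm -rf .minikube/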