chef env command line utility is only showing output for --help instead of prompting for license acceptance

When I try to accept the chef license from the command line, as part of my chef workstation setup, I just see the output for the --help flag
user in ~ > chef env --chef-license accept
Prints environment variables used by Chef Workstation
Usage:
chef env [flags]
Flags:
-h, --help help for env
Global Flags:
--chef-license ACCEPTANCE Accept product license, where ACCEPTANCE is one of 'accept', 'accept-no-persist', or 'accept-silent'
-c, --config CONFIG_FILE_PATH Read configuration from CONFIG_FILE_PATH
-d, --debug Enable debug output when available
-v, --version Show Chef Workstation version information
I have no idea why this could be happening. I'm using a zsh shell on macOS, if that makes any difference.

Related

Run cloud-init cloud-config yaml file

How do I run, for development purposes, a cloud-init YAML file that would normally be run via user-data?
I know how to re-run cloud-init, but I want to develop a complicated cloud-init file, and it is rather difficult to keep building new instances to test it.
Sorry to say, you're going to have to run it on a new clean instance (or at least a snapshot of one). Even if you did manually go back and start at different steps, there are potentially side effects.
I think you'll find that if you get used to managing local VMs, you can debug your scripts fairly quickly.
The quickest path for iterating on user-data input to cloud-init is probably via lxd. You can quickly set up lxd on a vm host or a bare metal system. Once set up, launches are very quick.
$ cat ud.yaml
#cloud-config
runcmd:
- "read up idle < /proc/uptime; echo Up $up seconds | tee /run/runcmd.log"
$ lxc launch ubuntu-daily:bionic ud-test "--config=user.user-data=$(cat ud.yaml)"
Creating ud-test
Starting ud-test
$ lxc exec ud-test cat /run/runcmd.log
Up 8.05 seconds
$ lxc stop ud-test
$ lxc delete ud-test
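If you iterate on ud.yaml often, a small wrapper script keeps the loop tight. A minimal sketch reusing the names above (cloud-init status --wait blocks until cloud-init finishes inside the container):
#!/bin/bash
# throw away any previous test container, relaunch with current user-data, show result
set -e
lxc delete --force ud-test 2>/dev/null || true
lxc launch ubuntu-daily:bionic ud-test "--config=user.user-data=$(cat ud.yaml)"
lxc exec ud-test -- cloud-init status --wait
lxc exec ud-test -- cat /run/runcmd.log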
You might be able to get away with just running cloud-init clean and then re-running it.
I'm experimenting with cloud-init and using an Ubuntu box with KVM as a virtualization lab. I made a simple Makefile to build the cloud-init image and launch it in a KVM instance.
You can see my code here:
https://github.com/brennancheung/playbooks/blob/master/cloud-init-lab/Makefile
all: clean build run

INSTANCE_NAME := "vm"
CLOUD_IMAGE_FILE = "bionic-server-cloudimg-amd64.img"
CLOUD_IMAGE_BASE_URL := "http://cloud-images.ubuntu.com/bionic/current"
CLOUD_IMAGE_URL := "$(CLOUD_IMAGE_BASE_URL)/$(CLOUD_IMAGE_FILE)"

download:
	wget $(CLOUD_IMAGE_URL)

clean:
	@echo "Removing build artifacts"
	-@rm -f config.img 2>/dev/null
	-@virsh destroy $(INSTANCE_NAME) 2>/dev/null || true
	-@virsh undefine $(INSTANCE_NAME) 2>/dev/null || true
	-@rm -f $(INSTANCE_NAME).img

build:
	@echo "Building cloud config drive"
	cloud-localds config.img config.yaml
	cp $(CLOUD_IMAGE_FILE) $(INSTANCE_NAME).img

run:
	@echo "Spawning instance $(INSTANCE_NAME)"
	virt-install \
		--name $(INSTANCE_NAME) \
		--memory 8192 \
		--disk ./$(INSTANCE_NAME).img,device=disk,bus=virtio \
		--disk ./config.img,device=cdrom \
		--os-type linux \
		--os-variant ubuntu18.04 \
		--virt-type kvm \
		--graphics none \
		--network bridge=br0
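Note that the build target assumes a config.yaml next to the Makefile, which the snippet above doesn't show. A minimal hedged example of such a cloud-config file (the user name and key are placeholders, not from the repo):
#cloud-config
hostname: vm
users:
  - name: demo                # illustrative user
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...       # replace with your own public key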
I am not sure why this answer is not here... maybe it is not applicable to earlier versions.
All I do to re-run cloud-init for dev testing (especially when testing user-data changes) is:
1 - change the config file or files, usually only /etc/cloud/cloud.cfg
2 - run clean:
cloud-init clean -l
(-l cleans the logs as well)
3 - re-run cloud-init:
cloud-init init
Of course, this has its limitations: depending on the settings you test, cloud-init clean is not going to revert the previous changes, but you may be able to figure out ways around that. For example, I am testing the creation of new users, so every time I change something in the settings for a user and want to test it, I create a new user.
All of this is a quick in-development test; if you need to truly verify your changes, you need a new instance.
Re-running all of cloud-init without a system reboot isn't a recommended approach, because some parts of cloud-init run within the systemd generator timeframe to detect new datasource types. That said, the following commands will let you accomplish this without a reboot.
cloud-init supports a clean subcommand to remove all semaphore files and allow cloud-init to re-run all config modules again. Beware that this means SSH host keys are regenerated and .ssh config files re-written, so it could impact your ability to get back into the VM.
To clean all semaphores so cloud-init modules will all re-run on next boot:
sudo cloud-init clean --logs
cloud-init typically runs multiple boot stages in sequence due to systemd service dependencies. If you want to repeat that process without a reboot, you can run the following four commands (a combined sketch follows the list):
Detect local datasource (cloud platform) and obtain user-data:
sudo cloud-init init --local
Detect any datasources and user-data which require network up and run cloud_init_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final
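Putting it together, a minimal sketch that chains the clean step and the four stages (same SSH host-key caveats as above):
#!/bin/bash
# re-run every cloud-init boot stage in order, without rebooting
set -e
sudo cloud-init clean --logs            # remove semaphores and old logs
sudo cloud-init init --local            # stage 1: local datasource and user-data
sudo cloud-init init                    # stage 2: network datasources, cloud_init_modules
sudo cloud-init modules --mode=config   # stage 3: cloud_config_modules
sudo cloud-init modules --mode=final    # stage 4: cloud_final_modules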

How can the terminal in Jupyter automatically run bash instead of sh

I love the terminal feature, and it works very well for our use case, where I would like students to do some work directly from a terminal so they experience that environment. The shell that launches automatically is sh and does not pick up all of my bash defaults. I can type "bash" and everything works perfectly. How can I make "bash" the default?
Jupyter uses the environment variable $SHELL to decide which shell to launch. If you are running jupyter using init then this will be set to dash on Ubuntu systems. My solution is to export SHELL=/bin/bash in the script that launches jupyter.
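For example, a minimal sketch of such a launch script (the port, and the assumption that jupyter is on PATH, are illustrative):
#!/bin/bash
# make Jupyter's terminals spawn bash instead of the init-provided sh
export SHELL=/bin/bash
exec jupyter notebook --no-browser --port=8888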
I have tried the ultimate way of switching the system-wide SHELL environment variable, by adding the following line to the file /etc/environment:
SHELL=/bin/bash
This works in an Ubuntu environment: from then on, the SHELL variable always points to /bin/bash instead of /bin/sh in the Terminal after a reboot.
However, setting up a CRON job to launch jupyter notebook at system startup triggered the same issue in the notebook's Terminal.
It turns out that I need to include the variable setting and a sourcing statement for a Bash init file like ~/.bashrc in the CRON job entry, as follows (via the command crontab -e):
@reboot source /home/USERNAME/.bashrc && \
export SHELL=/bin/bash && \
/SOMEWHERE/jupyter notebook --port=8888
This way, I can log in to the Ubuntu server via a remote web browser (http://server-ip-address:8888/) and the Jupyter notebook's Terminal defaults to Bash, the same as in the local environment.
You can add this to your jupyter_notebook_config.py
c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}
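If your installation is built on the newer Jupyter Server, the analogous setting (worth verifying against your version's documentation) uses the ServerApp key instead:
c.ServerApp.terminado_settings = {'shell_command': ['/bin/bash']}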
With Jupyter running on Ubuntu 15.10, the Jupyter shell will default to /bin/sh, which is a symlink to /bin/dash.
sudo rm /bin/sh
sudo ln -s /bin/bash /bin/sh
That fix got the Jupyter terminal booting into bash for me.
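As an aside, replacing /bin/sh by hand is fairly invasive; on Debian and Ubuntu the packaging system can manage that symlink for you (answer "No" when asked whether dash should be the default system shell):
sudo dpkg-reconfigure dash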

Ocamlfind command not found

I'm running into an issue installing a package that relies on ocamlfind: I'm getting an ocamlfind: command not found error when running make.
I have installed ocamlfind with the OCaml package manager and have tried reinstalling using "opam reinstall ocamlfind".
I have also tried the 'eval opam config env' command to see if it updates my bin.
Has anyone run into a similar issue, or does anyone know what this might be caused by?
The output when running make:
make
ocamlfind ocamlc -pp "camlp4o -I lib/dcg -I lib/ipp pa_dcg.cmo pa_ipp.cmo" -w usy -thread -I lib -I lib/dcg -I lib/ipp -c semantics.ml
/bin/sh: ocamlfind: command not found
The output when trying ocamlfind directly:
ocamlfind
-bash: ocamlfind: command not found
ocamlfind is installed:
opam install ocamlfind
[NOTE] Package ocamlfind is already installed (current version is 1.5.5).
and when running the eval command
eval 'opam config env'
CAML_LD_LIBRARY_PATH="/home/centos/.opam/system/lib/stublibs:/usr/lib64/ocaml/stub libs"; export CAML_LD_LIBRARY_PATH;
MANPATH="/home/centos/.opam/system/man:"; export MANPATH;
PERL5LIB="/home/centos/.opam/system/lib/perl5"; export PERL5LIB;
OCAML_TOPLEVEL_PATH="/home/centos/.opam/system/lib/toplevel"; export OCAML_TOPLEVEL_PATH;
PATH="/home/centos/.opam/system/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/centos/.local/bin:/home/centos/bin"; export PATH;
I'm on a server running CentOS 7.
This command
eval 'opam config env'
is almost assuredly a typo and was supposed to be
eval `opam config env`
though using $(...) instead is the modern equivalent and avoids this sort of font-face confusion (backticks and quotes are easy to mistake for each other):
eval $(opam config env)
That being said, that just sets the environment variables in the current shell session (and exports them for use by processes run from this shell session).
As such, it needs to be run in every shell session that needs those variables set (including each shell the makefile spawns, if the environment that runs make doesn't already have them set and exported).
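Concretely, a minimal sketch of the fix in a fresh shell session (the which line is only there to confirm the PATH change took effect):
eval $(opam config env)   # on opam 2.x: eval $(opam env)
which ocamlfind           # should now print something like ~/.opam/system/bin/ocamlfind
make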
try
sudo apt-get install ocaml-findlib
(that is the Debian/Ubuntu package; on the asker's CentOS 7 the equivalent would presumably be yum install ocaml-findlib, if the package is available in the enabled repos)

AWS EC2 and rvm ssh

I have created a user (deployer) for my AWS EC2 VPS.
When I log in with:
ssh -i ~/.ssh/aws/*...*.pem ubuntu@ec2*...*.amazonaws.com
the command rvm use 2.0.0 works correctly:
ubuntu@ip-***:~$ rvm list
rvm rubies
=* ruby-2.0.0-p247 [ x86_64 ]
# => - current
# =* - current && default
# * - default
ubuntu@ip-***:~$ rvm which
ubuntu@ip-***:~$
But when I use su - deployer I get:
deployer@ip***:/home/ubuntu$ rvm
The program 'rvm' is currently not installed. You can install it by typing:
sudo apt-get install ruby-rvm
I would like to understand how to correctly write the command for ssh login.
I have tried:
ssh -i ~/.ssh/aws/*.pem ubuntu@ec2***.amazonaws.com -t 'bash --login -c "rvm"'
but received "Connection to ec2-*.amazonaws.com closed".
On my local machine rvm functions correctly. I have added
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
into my ~/.bash_profile
I have spent 3-5 hours studying stackoverflow topics related to this issue, but still do not understand what I am doing wrong.
Any help will be highly appreciated! Thanks in advance!
I've run into this problem before, and there are two ways to solve it.
The first way is to log in directly as the deployer user on the instance. This might mean having to create an ssh keypair for that user (see ssh-keygen -t rsa). Then you can log in with ssh deployer@ec2.instance.address, and RVM will be loaded directly in the deployer user's shell. A sketch of this setup follows below.
The second way is to use the dash when you su to the deployer user account. Without the dash you keep your own environment (your own bashrc) instead of loading that particular user's login environment.
So instead of sudo su deployer
you need to use:
su - deployer
It will ensure you get a login shell, so deployer's ~/.bash_profile (and therefore RVM) is loaded.
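A minimal sketch of the first option, direct login as deployer (the paths, and reusing the ubuntu user's authorized key, are assumptions for illustration):
# run as the ubuntu user on the instance
sudo mkdir -p /home/deployer/.ssh
sudo cp ~/.ssh/authorized_keys /home/deployer/.ssh/authorized_keys
sudo chown -R deployer:deployer /home/deployer/.ssh
sudo chmod 700 /home/deployer/.ssh
sudo chmod 600 /home/deployer/.ssh/authorized_keys
# then, from your workstation:
#   ssh -i ~/.ssh/aws/your-key.pem deployer@ec2-...amazonaws.com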

java_home environment variable in linux not found

I'm trying to add JAVA_HOME on a Linux machine (CentOS 5.8).
I'm adding this part to set JAVA_HOME and PATH for all users on my machine:
vi /etc/profile
export JAVA_HOME=/opt/jdkx.x.x_xx
export PATH=$PATH:$JAVA_HOME/bin
After setting it up, I try to verify it by using the echo command:
echo $JAVA_HOME
but it does not give me any path. Is there something wrong with my configuration?
This method will persist across OS updates:
echo "export JAVA_HOME=/usr/java/default/" > /etc/profile.d/java_home.sh
If you have more than one version of Java installed, there may be trouble.
The JAVA_HOME path differs between machines because we often install different JDK versions, and possibly in different places. At one point I tried to find a general way; here is the result.
Firstly, query the installed JDK package name: rpm -qa | grep java
My result is: java-1.6.0-openjdk
Secondly, query where that package was installed:
rpm -ql java-1.6.0-openjdk
Most files are under /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/
Go there to confirm it is a real JDK directory.
Thirdly, execute export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/ in the terminal, or add it to /etc/bashrc for all users.
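A hedged general-purpose alternative is to derive JAVA_HOME from whichever java binary is on the PATH, letting readlink -f resolve the /etc/alternatives symlink chain (on older OpenJDK layouts the result may point at the JRE rather than the full JDK):
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
echo $JAVA_HOME    # e.g. /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64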
echo $SHELL will tell you which shell you are using. Most likely /bin/bash.
Assuming bash, /etc/profile is only read for a login shell (bash --login), not just a new interactive shell.
i.e. if you
sh1% vi /etc/profile
sh1% bash            # /etc/profile not read
sh2% echo $JAVA_HOME
sh2%
sh1% vi /etc/profile
sh1% bash --login    # /etc/profile should be read
sh2% echo $JAVA_HOME
/opt/blah/blah/blah
sh2%