vtctlclient: command not found - kubernetes

I am trying to run Vitess on Minikube and I'm going through the 'Getting Started' steps found here: http://vitess.io/getting-started/#set-up-google-compute-engine-container-engine-and-cloud-tools
I have installed everything I need to, including 'vtctlclient', and I have verified that all the correct directories were created when I did this.
However, there is a script in my directory '/go/src/github.com/youtube/vitess/examples/kubernetes' called 'kvtctl.sh', which uses kubectl to discover the pod name, set up the tunnel, and then run 'vtctlclient'. When I run this script, this is what is returned:
'Starting port forwarding to vtctld...
./kvtctl.sh: line 29: vtctlclient: command not found'
I am totally lost as to why the vtctlclient command is not found because I just installed it using Go.
Any help on this matter would be much appreciated.

Maybe the Go install directory is not in your PATH. Have you tried running vtctlclient manually (just like kvtctl.sh does)?
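For example, something along these lines should confirm it (just a sketch; it assumes you installed vtctlclient with 'go get' and that GOPATH is set):
export PATH=$PATH:$GOPATH/bin    # 'go get'/'go install' put binaries in $GOPATH/bin
which vtctlclient                # should now print the path to the binary
vtctlclient                      # running it directly confirms the shell can find it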
PS: You may want to join our Vitess Slack channel where you may get more prompt answers for your questions. Let me know if you need an invite.

Related

Following Kubernetes using Katacoda

I am trying to follow the Kubernetes tutorial, but I am a bit lost on the first steps when trying to use Katacoda. When I try to open the minikube dashboard, I encounter this error:
failed to open browser: exec: "xdg-open": executable file not found in $PATH
and the dashboard itself remains unavailable when I try to open it through host 1.
Later steps like running hello-world work fine, and I am able to run it locally using my own minikube instance, but I am a bit confused by this issue. Can I debug it somehow to access the dashboard during the course? This is particularly worrying because I am a bit afraid that I might encounter the same or a similar issue during a potential exam that also runs online.
Founder of Katacoda here. When running locally, xdg provides the wrapper for opening processes on your local machine, and installing the package would resolve the issue. As Katacoda runs everything within a sandbox, we cannot launch processes directly on your machine.
We have added an override for xdg-open that displays a friendly error message to users. They'll now be prompted to use the Preview Port link provided. The output is now:
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening %s in your default browser...
Minikube Dashboard is not supported via the interactive terminal experience.
Please click the 'Preview Port 30000' link above to access the dashboard.
This will now exit. Please continue with the rest of the tutorial.
X failed to open browser: exit status 1
Looks like this command works:
apt install xdg-utils
I have been following the same tutorial in Katacoda and had the same issue. In my case, these commands helped me solve the problem:
apt-get update
apt install xdg-utils

Why won't my Telescope app start with Upstart?

I've followed instructions online to set up a Telescope instance on my DigitalOcean droplet, but it won't start with Upstart.
I'm able to run the server successfully manually, but the Upstart task doesn't fire when the server boots. I'm sure I should be looking at a log file somewhere to discover the problem, but I'm not sure where.
I've looked for the location of the Upstart logs, but I'm not having any luck. According to accounts online, either you have to add something to your script to make it log, or it just does it automatically, but neither of those seems to be the case for me.
When I try to search for help on Upstart, I'm also seeing people saying I should be using systemd instead, but I can't figure out how to install it on CentOS 6.5.
Can anyone help me figure a way out of this labyrinth?
I use Ubuntu Server 14.04, and my Upstart logs are located in /var/log/upstart.
The log usually contains stdout from the job, and it should help you understand what's wrong.
My guess is that when the server boots and tries to run your job, MongoDB is not yet ready so it fails silently.
Try installing the specific MongoDB version that Meteor is using at the moment (2.4.9) using these docs:
http://docs.mongodb.org/v2.4/tutorial/install-mongodb-on-ubuntu/
The most important thing is to get Upstart support for MongoDB; this will allow us to catch the mongod launch as an event.
You can then use this syntax in your upstart script:
start on started mongodb
This will make your node app start when mongo is ready.
I've created a gist with the scripts I wrote to set up a server ready for Meteor app deployment; it's a bit messy and probably specific to Ubuntu, but it might help you.
https://gist.github.com/saimeunt/4ace7975b12df06ee0b7
I'm also using demeteorizer and forever, which are two great tools you should probably check out.

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind the corporate proxy, I receive this error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
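For reference, the proxy entries in .boto look roughly like this (the host, port, and credentials are placeholders; drop the user/pass lines if your proxy doesn't require authentication):
[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = your_proxy_username
proxy_pass = your_proxy_password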
A change was just merged into GitHub yesterday that fixes some of the proxy support. Please try it out, or specifically, overwrite this file with your current copy:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it will use the proxy settings specified in .boto for the first POST and then switch back to attempting to route directly to Google instead of through my proxy server.
Then if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, it seems to work for the initial request again, which suggests some form of caching taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that it's hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.

Using MPI with two RaspberryPi

I am trying to make a 'dual core' Raspberry Pi cluster for a project I am working on. I followed this tutorial by Simon Cox. Unfortunately, I could not get the two RasPis to talk to each other (this was using Hydra as the process manager).
After looking more carefully at the MPICH installer's guide, which can be found here, I tried to use -phrase to pass the passphrase I had created, but I could not find it among the Hydra options. So I re-installed with smpd, and after many compilation attempts I configured with:
./configure --prefix=/home/pi/mpich-install --with-pm=smpd --with-pmi=smpd
I also had to install libssl-dev to get the MD5 support that smpd requires. I also exported the path that the mpiexec and mpicc commands are in. After setting the passphrase, I copied the image to a second SD card and put it in a second RasPi. I then set up the passphrase using ssh-keygen.
I was able to run the cpi program on the master Pi and the slave Pi individually but when I tried to run multiple processes on both at the same time I got the error
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(392).................:
MPID_Init(139)........................: channel initialization failed
MPIDI_CH3_Init(38)....................:
MPID_nem_init(196)....................:
MPIDI_CH3I_Seg_commit(366)............:
MPIU_SHMW_Hnd_deserialize(324)........:
MPIU_SHMW_Seg_open(863)...............:
MPIU_SHMW_Seg_create_attach_templ(637): open failed - No such file or directory
Can someone please suggest how I can either fix this problem or get the RaspberryPis to communicate using MPICH?
Thanks
E.Lee
If anyone else has this problem, make sure your hosts don't have the same name!
You can change it by following this tutorial: http://raspi.tv/2012/how-to-change-the-name-of-your-raspberry-pi-new-hostname
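In short, on Raspbian it comes down to something like this (the new hostname is just an example):
sudo nano /etc/hostname    # replace 'raspberrypi' with a unique name, e.g. pi-node1
sudo nano /etc/hosts       # update the 127.0.1.1 entry to match the new name
sudo reboot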

Mapping CentOS NFS to another CentOS Server

CentOS 5.5
I have a web application running on a server, and it needs access to another CentOS server's file system on the same network (via private IP). After a bunch of googling, it looks like mounting the drive via NFS is a good way to go, but I'm not finding any good step-by-step instructions on how to go about it. I've read the man docs on the mount command and some docs on the CentOS wiki as well, but I feel like I'm missing something. Here is what I'm trying:
mount -t nfs my.ip.address:/somePath /somePath/mount
I keep getting a 'no route to host' error, but I can ping the server just fine. I'm guessing that I am missing a port I need to open or something, but again, I can't find information that makes sense to a non-sysadmin like myself.
Thanks for any help.
I ran across this, followed it step by step, and now I'm up and running!
http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
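For anyone else hitting this, the rough shape of the setup on CentOS 5 looks something like the following (the export path and IPs are placeholders taken from the question). A 'no route to host' error while ping works is often the server's iptables firewall rejecting the portmap/NFS ports rather than an actual routing problem.
# On the NFS server: export the directory in /etc/exports
/somePath    web.server.private.ip(rw,sync,no_root_squash)

# Start the services and re-read the exports
service portmap start
service nfs start
chkconfig portmap on
chkconfig nfs on
exportfs -ra

# On the web application server (the client):
mount -t nfs my.ip.address:/somePath /somePath/mount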