I am experimenting with and testing HashiCorp Vault. I am starting a Vault server in dev mode from the macOS terminal:
vault server -dev
Accordingly, I get some data if I do
vault status
However, if I want to turn the server off and start it again, I have to restart the computer. Closing the macOS shell does not do the trick, as I assume the server keeps running in memory. I have searched extensively for a simple command to stop the dev server but found none.
Help is much appreciated.
If you Ctrl+C the process, Vault will terminate and you will lose all the data you stored in it. No need to restart the computer.
If that doesn't work, you can kill it with pkill -9 vault
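If you want to see whether a dev server is still hanging around before killing it, something like this should work on macOS (a quick sketch, not anything required by Vault itself):
pgrep -l vault   ## list any running vault processes
pkill vault      ## try a graceful SIGTERM first
pkill -9 vault   ## only if it still refuses to exit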
Related
We have implemented the HashiCorp open source Vault as a single node with a Consul backend.
We need help implementing a 3-node Vault cluster for HA in a single datacenter, as well as in a multi-datacenter setup.
Could you please help me with this?
The Vault Deployment guide has more on this.
https://learn.hashicorp.com/vault/operations/ops-deployment-guide#help-and-reference
Combine it with this guide: https://learn.hashicorp.com/vault/operations/ops-vault-ha-consul
I shall assume, just based on the fact that you've already gotten a single node up with a Consul backend, that you already know a little about scripting, Git, configuring new SSH connections, installing software, and Virtual Machines.
Also, these topics are hard to explain well here, and there are much better resources for them elsewhere.
Further, if you get stuck with the prerequisites, the tools to install, or downloading the code, please have a look at the resources on the internet.
If you get an error with Vault itself working improperly, though, open a GitHub issue ASAP.
Anyway, with that out of the way, the short answer is this:
Step 1: Set up 3 Consul servers, each with references to each other.
Step 2: Set up 3 Vault servers, each of them independent, but each with a reference to a Consul address as its storage backend (see the config sketch below).
Step 3: Initialize the Cluster with your brand new Vault API.
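As a rough illustration of step 2, each Vault server's config could look something like the sketch below. The file path, IPs, and disabled TLS are assumptions for a lab setup, not something taken from the repo:
## Illustrative lab config only, not from the repo
cat > /etc/vault/vault.hcl <<'EOF'
storage "consul" {
  address = "127.0.0.1:8500"   ## local Consul agent that joins the 3-node Consul cluster
  path    = "vault/"
}
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1              ## fine for a lab; do not disable TLS in production
}
EOF
vault server -config=/etc/vault/vault.hcl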
Now for the long answer.
Prerequisites
OS-Specific Prerequisites
MacOS: OSX 10.13 or later
Windows: Windows must have Powershell 3.0 or later. If you're on Windows 7, I recommend Windows Management Framework 4.0, because it's easier to install
Vagrant
Set up Vagrant, because it will take care of all of the networking and resource setup for the underlying infrastructure used here.
The Vagrant Getting Started guide takes about 30 minutes once you have Vagrant and VirtualBox installed: https://www.vagrantup.com/intro/getting-started/index.html
Install Tools
Make sure you have Git installed
Install the latest version of Vagrant (NOTE: WINDOWS 7 AND WINDOWS 8 REQUIRE POWERSHELL >= 3)
Install the latest version of VMware or VirtualBox
Download the Code for this demonstration
Related Vendor Documentation Link: https://help.github.com/articles/cloning-a-repository
git clone https://github.com/v6/super-duper-vault-train.git
Use this Code to Make a Vault Cluster
Related Vagrant Vendor Documentation Link: https://www.vagrantup.com/intro/index.html#why-vagrant-
cd super-duper-vault-train
vagrant up ## NOTE: You may have to wait a while for this, and there will be some "connection retry" errors for a long time before a successful connection occurs, because the VM is booting. Make sure you have the latest version, and try the Vagrant getting started guide, too
vagrant status
vagrant ssh instance5
After you do this, you'll see your command prompt change to show vagrant@instance5.
You can also vagrant ssh to other VMs listed in the output of vagrant status.
You can now use Vault or Consul from within the VM for which you ran vagrant ssh.
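For example, once you're inside one of the VMs, you can point the CLI at the local node and poke at it (the address below is an assumption based on the IPs listed in the next section; adjust it to the box you SSH'd into):
export VAULT_ADDR=http://192.168.13.35:8200   ## assumed address of the node you SSH'd into
vault status      ## will report sealed/uninitialized until you run the init step below
consul members    ## should list all three Consul agents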
Vault
Explore the Vault Cluster
ps -ef | grep vault ## Check the Vault process (run while inside a Vagrant-managed Instance)
ps -ef | grep consul ## Check the Consul process (run while inside a Vagrant-managed Instance)
vault version ## Output should be Vault v0.10.2 ('3ee0802ed08cb7f4046c2151ec4671a076b76166')
consul version ## Output should show Consul Agent version and Raft Protocol version
The Vagrant boxes have the following IP addresses:
192.168.13.35
192.168.13.36
192.168.13.37
Vault is on port 8200.
Consul is on port 8500.
Click the Links
http://192.168.13.35:8200 (Vault)
http://192.168.13.35:8500 (Consul)
http://192.168.13.36:8200 (Vault)
http://192.168.13.36:8500 (Consul)
http://192.168.13.37:8200 (Vault)
http://192.168.13.37:8500 (Consul)
Start Vault Data
Related Vendor Documentation Link: https://www.vaultproject.io/api/system/init.html
Start Vault.
Run this command on one of the Vagrant-managed VMs, or somewhere on your computer that has curl installed.
curl -s --request PUT -d '{"secret_shares": 3,"secret_threshold": 2}' http://192.168.13.35:8200/v1/sys/init
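If you want to keep the generated unseal keys and initial root token handy for the next step, you can capture the response. This is just a convenience sketch and assumes jq is installed on the machine running curl:
curl -s --request PUT -d '{"secret_shares": 3,"secret_threshold": 2}' http://192.168.13.35:8200/v1/sys/init | tee init.json | jq '.keys, .root_token'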
Unseal Vault
Related Vendor Documentation Link: https://www.vaultproject.io/api/system/unseal.html
This will unseal the Vault at 192.168.13.35:8200. You can use the same process for 192.168.13.36:8200 and 192.168.13.37:8200.
Use one of your unseal keys to replace the value abcd12345678..., and run this on the Vagrant-managed VM.
curl --request PUT --data '{"key":"abcd12345678..."}' http://192.168.13.35:8200/v1/sys/unseal
Run that curl command again, but use a different value for "key": replace efgh910111213... with a different key than the one you used in the previous step, taken from the keys you received when calling the v1/sys/init endpoint.
curl --request PUT --data '{"key":"efgh910111213..."}' http://192.168.13.35:8200/v1/sys/unseal
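To confirm the node actually unsealed after the second key, you can query the seal-status endpoint (not part of the original guide, just a quick check):
curl -s http://192.168.13.35:8200/v1/sys/seal-status
## "sealed": false in the response means this node is ready; repeat the unseal steps on .36 and .37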
Non-Vagrant
Please refer to the file PRODUCTION_INSTALLATION.md in this repository.
Codified Vault Policies and Configuration
To provision Vault via its API, please refer to the provision_vault folder. It has data and scripts. The data folder's tree corresponds to the HashiCorp Vault API endpoints, similar to the following:
https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration#layout-and-design
You can use the codified Vault policies and configuration with your initial root token, after initializing and unsealing Vault, to configure Vault quickly via its API. The .json files inside each folder correspond to the payloads to send to Vault via its API, but there may also be .hcl, .sample, and .sh files for convenience's sake.
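For example, pushing one policy payload might look like the sketch below; the token variable, file path, and policy name are hypothetical placeholders, not files guaranteed to exist in the repo:
export VAULT_TOKEN=<your initial root token>    ## placeholder from the init response
## policy name and file path below are hypothetical examples
curl --header "X-Vault-Token: $VAULT_TOKEN" --request PUT --data @provision_vault/data/sys/policy/example-policy.json http://192.168.13.35:8200/v1/sys/policy/example-policy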
HashiCorp has written some guidance on how to get started with setting up an HA Vault and Consul configuration:
https://www.vaultproject.io/guides/operations/vault-ha-consul.html
I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, it does not show any templates in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select a template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "no".
When I check the management server logs, it seems there is an error when CloudStack tries to mount secondary storage for use. The segment below from the cloudstack-management.log file describes this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":
{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException:
GetRootDir for nfs://172.16.10.2/export/secondary failed due to
com.cloud.utils.exception.CloudRuntimeException: Unable to mount
172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6
due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying this out in the future, here it goes:
I hope you successfully added a host, as described in the Quick Install Guide, before you changed your IP to static, since that step auto-configures VLANs for the different traffic types and creates two bridges (generally named 'cloud' or 'cloudbr'). CloudStack uses the secondary storage system VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. If that is not the issue, try manually mounting the NFS server's mount points in the SSVM shell. You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the secondary storage system VM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help you much, take a look at the secondary storage troubleshooting docs for CloudStack.
I would further recommend, if anyone in the future runs into similar issues: check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you are able to open a console session to it from the browser. If that works, go on to run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the link-local IP address of the SSVM (shown in the SSVM's details) and then execute the script.
If it says it cannot communicate with the management server on port 8250, check the iptables rules of the management server and make sure traffic is allowed on port 8250. A quick way to check is nc -v <mngmnt-server-ip> 8250. A simple search will show you how to open port 8250 in your iptables rules if it is not open.
Next, you mentioned you used CentOS 6.8, which probably ships an older NFS version, so run exportfs -a on your NFS server to make sure all the NFS shares are properly exported without errors. I would also recommend waiting for the built-in "CentOS 5.5 no GUI KVM" template to finish downloading and show a Ready status of "Yes" before you start importing your own templates and ISOs to run on VMs.
Finally, if ssvm-check.sh shows everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes need the cloud service started manually with service cloud start even after the restart command. Restarting the cloud service in the SSVM re-triggers the download of all remaining templates and ISOs. Side note: the system VMs use a Debian kernel, if you want to do some more troubleshooting. Hope this helps.
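To tie those checks together, here is a rough sequence you might run; the key path, port, and secondary storage address come from this thread, while everything in angle brackets is a placeholder you need to fill in:
## From the management server: SSH to the SSVM over its link-local address
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<link-local IP of SSVM>
## Inside the SSVM:
/usr/local/cloud/systemvm/ssvm-check.sh              ## full health check
nc -v <management-server-ip> 8250                    ## can we reach the management server?
mkdir -p /tmp/secstorage-test
mount -t nfs 172.16.10.2:/export/secondary /tmp/secstorage-test   ## manual NFS mount test
## Back on the NFS/management server:
exportfs -a                                          ## re-export all NFS shares
iptables -L -n | grep 8250                           ## confirm traffic on 8250 is allowed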
I've followed instructions online to set up a Telescope instance on my DigitalOcean droplet, but it won't start with Upstart.
I'm able to run the server successfully manually, but the Upstart task doesn't fire when the server boots. I'm sure I should be looking at a log file somewhere to discover the problem, but I'm not sure where.
I've looked for the location of the Upstart logs, but I'm not having any luck. According to accounts online, either you have to add something to your script to make it log, or it just logs by default, but neither of those seems to be the case for me.
When I try to search for help on Upstart, I'm also seeing people saying I should be using systemd instead, but I can't figure out how to install it on CentOS 6.5.
Can anyone help me figure a way out of this labyrinth?
I use Ubuntu server 14.04, and my upstart logs are located in /var/log/upstart
The log usually contains stdout from the job, and it should help you understand what's wrong.
My guess is that when the server boots and tries to run your job, MongoDB is not yet ready so it fails silently.
Try installing the specific MongoDB version that Meteor is using at the moment (2.4.9) using these docs:
http://docs.mongodb.org/v2.4/tutorial/install-mongodb-on-ubuntu/
The most important thing is to get upstart support for MongoDB, this will allow us to catch mongod launch as an event.
You can then use this syntax in your Upstart script:
start on started mongodb
This will make your node app start when mongo is ready.
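For reference, a minimal Upstart job using that trigger could look like the sketch below. The job name, app path, and environment values are assumptions about a typical demeteorized bundle, not your exact setup; on Ubuntu 14.04 the job's stdout/stderr end up in /var/log/upstart/telescope.log automatically:
sudo tee /etc/init/telescope.conf > /dev/null <<'EOF'
description "Telescope Meteor app (hypothetical example)"
start on started mongodb
stop on shutdown
respawn
env PORT=3000
env MONGO_URL=mongodb://127.0.0.1:27017/telescope
env ROOT_URL=http://example.com
chdir /home/telescope/bundle
exec node main.js
EOF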
I've created a gist with the scripts I wrote to setup a server ready for Meteor app deployment, it's a bit messy and probably specific to Ubuntu but it might help you.
https://gist.github.com/saimeunt/4ace7975b12df06ee0b7
I'm also using demeteorizer and forever, which are two great tools you should probably check out.
I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run, so how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot you pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client for this if you want to pull your code over SSH (although you could also do it through HTTPS).
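As a concrete illustration of the pull model, an EC2 user-data script might look something like this; the repository URL, paths, and helper scripts are placeholders, not a tested deployment:
#!/bin/bash
## Hypothetical user-data: runs once at first boot, no interactive SSH session needed
yum install -y git                                    ## or apt-get install -y git on Debian/Ubuntu
git clone https://github.com/example/my-web-app.git /opt/my-web-app
cd /opt/my-web-app
./scripts/setup.sh     ## install dependencies, configure monitoring/agents
./scripts/start.sh     ## launch the app binary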
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server will pull all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client would then perform all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times if there's something wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one (even better if this whole process is automated).
Hope this helps.
I need to download files from GCS to a local machine. I tried gsutil config and download on my machine. On my Windows 8 64-bit machine, everything worked fine. However, when I try to set up the same thing on another dedicated machine, which is Windows Vista 32-bit, entering the authentication code just shows Failure.
python gsutil.py config -b pops up the browser with the request URL; I get the authentication code in the browser... pasting it gives Failure.
I am new to Python and could not trace back the problem. Does gsutil have any limitation on the number of machines we can configure for the same login/project?
Appreciate any help in debugging this.
Also, is it possible to move GCS data into Google Cloud SQL? I am assuming that running a script on Google App Engine that reads from GCS, parses the data, and inserts it into Google Cloud SQL is possible. Are there any documentation/tools to do this?
Thanks
Dhurka