Can't create working meteor.js project on a vagrant box - mongodb

I cannot start up a new Meteor application on a Vagrant Linux box (running on a Mac). It fails every time with an 'unspecified uncaught exception' in Mongo. I have tried a bunch of things to get this going, but even with the simplest set-up I cannot get the project running. I would be grateful for any suggestions.
My steps are:
create a completely clean Vagrant box ("ubuntu/trusty64");
install Meteor on the new box (curl https://install.meteor.com/ | sh);
choose a location to create the project;
create a new Meteor project (meteor create app);
start up the project (cd app; meteor)
I know that the permissions on the vagrant shared folder are quirky, so for step #3 above I have tried putting the project:
in the shared guest/host folder, /vagrant,
in a subdirectory of the Vagrant home folder (/home/vagrant),
in a subdirectory of / (with permissions set to vagrant:vagrant), and
in a subdirectory of / with permissions set to root:root, the project created with sudo meteor create app and run with sudo meteor
In all cases, I see this error:
=> Started proxy.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Can't start Mongo server.
MongoDB had an unspecified uncaught exception.
This can be caused by MongoDB being unable to write to a local database.
Check that you have permissions to write to .meteor/local. MongoDB does
not support filesystems like NFS that do not allow file locking.
I cannot tell whether this is a Vagrant issue (though I think not, given what I've tried) or a Meteor issue, but I suspect it is Meteor (or one of its many dependencies). I doubt it is a permissions issue, since it failed even when running as root. I've tried building Meteor from scratch, and the build fails; I've also tried creating the project with --release 0.9.0 and --release 0.9.2-rc1, and the download is simply killed without explanation.

(1) After step 2 'install Meteor on the new box (curl https://install.meteor.com/ | sh)'
user$ cd /vagrant
user:/vagrant$ meteor create myApp
You should see the myApp folder on your Mac host (the same folder for the vagrantfile)
(2) Inside the myApp folder you will see the default .meteor folder; make a folder called local if it is not there
user:/vagrant$ cd myApp/.meteor
user:/vagrant/myApp/.meteor$ mkdir local
(3) Create the same folder structure in /home/vagrant
user:/vagrant/myApp/.meteor$ cd ~
~$mkdir -p myApp/.meteor/local
(4) Bind-mount /home/vagrant/myApp/.meteor/local onto /vagrant/myApp/.meteor/local
sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/
or make it permanent:
echo "sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/" >> ~/.bashrc && source ~/.bashrc
(5) Now you can start the meteor
~$cd /vagrant/myApp
user:/vagrant/myApp$meteor
The reason I mount the local folder rather than the whole .meteor folder is that you can still edit the files inside the .meteor folder on your Mac host. You can replace myApp with whatever name you want.
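Putting steps (2)-(4) together, here is a rough one-shot version you could re-run after each vagrant up (an untested sketch, assuming the same myApp name and paths as above):
# recreate the directories and the bind mount after each boot, then start Meteor
mkdir -p /vagrant/myApp/.meteor/local /home/vagrant/myApp/.meteor/local
sudo mount --bind /home/vagrant/myApp/.meteor/local /vagrant/myApp/.meteor/local
cd /vagrant/myApp && meteor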
Hope this helps.

I'm working with a Windows host, but maybe this will apply to your situation as well.
The only folder which causes the issue is .meteor/local. If you relocate this with a symlink so that it lives outside the shared /vagrant folder, you should be able to run the Meteor app okay.
But, to put a symlink in the shared folder you need to enable symlinks in the VM... which requires starting Vagrant as an admin.
I put together a Vagrantfile with some scripts and instructions here:
https://github.com/ElectronVector/vagrant-meteor
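Once symlink creation is allowed on the share, the relocation itself could look roughly like this, run inside the VM (the app name app and the target folder are placeholders of mine, not taken from the linked repo):
# move .meteor/local off the shared folder and leave a symlink behind
mkdir -p /home/vagrant/app-local
rm -rf /vagrant/app/.meteor/local
ln -s /home/vagrant/app-local /vagrant/app/.meteor/local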

I ran into similar issues trying to run Meteor on Windows. It seems that MongoDB is not able to write inside the /vagrant folder. I solved this by running
sudo mount --bind /home/vagrant/meteorapp/.meteor/ /vagrant/meteorapp/.meteor/
(got that from https://gist.github.com/gabrielhpugliese/5855677)

Here is an answer that solved my problem, launching a Meteor project from a shared folder on a Debian VMware virtual machine (running on Windows).
The issue is that MongoDB can't create its data files inside a shared folder, so in this case just use an existing MongoDB instance for the Meteor project:
export MONGO_URL=mongodb://localhost:27017/your_db
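A minimal sketch of that on an Ubuntu guest (the package name and database name are assumptions; the distro package starts a mongod on localhost:27017 with its data files outside the shared folder):
# install a system mongod whose data lives outside the share, then point Meteor at it
sudo apt-get install -y mongodb-server
export MONGO_URL=mongodb://localhost:27017/your_db
cd /vagrant/app && meteor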

Doing
vagrant reload --provision
solved my problem.
I think the reason might be that some files had become corrupted or were deleted.


Are there ways to use VS Code plugins in Google Cloud Shell?

I have a few quick-navigation plugins, such as "block travel", that I use all the time. Is there a way to use these in Cloud Shell?
I imagine there are some restrictions, but even some simple editor plugins can be huge timesavers.
While I'm at it: shortcuts like Alt-D to duplicate a line, or transposing lines, seem to be missing, and key remapping is hard to get working, at least within the shell. In general, keyboard shortcuts seem to get trapped by the browser or PWA wrapper. FWIW, I'm using Cloud Shell as a web app on a Chromebook for various secure projects.
I have come up with a solution that covers both aspects of your question
To get Unlimited Persistent Disk:
You can use Google Cloud Storage FUSE
Google Cloud Storage FUSE lets you mount a GCS bucket as a folder on your Linux instance. By doing that you get an "unlimited" persistent disk, and it is super simple to set up since gcsfuse is already installed in Cloud Shell.
1. Create a GCS bucket (you only need to run this once) -- replace BUCKET_NAME with any name:
gsutil mb "gs://BUCKET_NAME/"
2. Create a local directory for mounting -- replace FOLDER_NAME with the chosen directory name:
mkdir /home/[USER]/[FOLDER_NAME]
chmod 777 /home/[USER]/[FOLDER_NAME]
3. Mount the bucket onto the local filesystem (note: you need to re-run this every time Cloud Shell starts)
gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 --debug_gcs [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]
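To check the mount, and to unmount cleanly when you are done (standard FUSE commands, nothing Cloud Shell specific):
# verify the bucket is mounted, then unmount it
df -h /home/[USER]/[FOLDER_NAME]
fusermount -u /home/[USER]/[FOLDER_NAME]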
To use third party plugins in cloud shell:
You can use an environment customization script (.customize_environment), as mentioned in the public documentation. It allows you to install additional packages into your Cloud Shell environment when it starts.
For reference, below are the steps to install VS Code plug-in.
Step 1:
To install the VS Code server, create a script named visual_studio_code.sh, as below, in the root directory of your Cloud Shell Editor workspace.
visual_studio_code.sh file:
# pin the release so the file names below stay consistent (newer releases exist; adjust all three names together)
export VERSION=3.10.2
wget https://github.com/cdr/code-server/releases/download/v$VERSION/code-server-$VERSION-linux-amd64.tar.gz
tar -xvzf code-server-$VERSION-linux-amd64.tar.gz
cd code-server-$VERSION-linux-amd64
Run the script in the shell with the following command:
./visual_studio_code.sh
If you get a permission denied error, then run the following in the shell:
chmod +x visual_studio_code.sh
./visual_studio_code.sh
Step 2:
Make a customization script in the root directory of your Cloud Shell Editor workspace to start the VS Code server on boot, with the commands below:
.customize_environment file:
#!/bin/sh
#.customize_environment runs in the background as root; wait for your user to initialize
sleep 20
sudo -u [USER] /home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Step 3:
To view the Visual Studio Code server on port 9090:
Click on Web Preview > Change Port > 9090
If you get a 404 error, remove '?authuser=0' from the URL.
The Visual Studio Code server will now be running in the browser.
Block travel navigation plugin:
To get the block-travel navigation plugin in Cloud Shell, run the following commands in the shell from the root directory:
wget https://github.com/efatsi/block-travel/archive/refs/tags/v1.0.0.tar.gz
tar xzvf v1.0.0.tar.gz
ls
#You will see block-travel-1.0.0
block-travel-1.0.0/keymaps/block-travel.cson --auth none --port 9090
#You might get Permission denied; if so, follow the next two commands, otherwise go to the web preview on port 9090
chmod +x block-travel-1.0.0/keymaps/block-travel.cson
block-travel-1.0.0/keymaps/block-travel.cson --auth none --port 9090
Open the web preview on port 9090, and you will be able to navigate through files in VS Code using:
Alt+up for block-travel.jumpUp
Alt+shift+up for block-travel.selectUp
Alt+down for block-travel.jumpDown
Alt+shift+down for block-travel.selectDown
WARNING: This should not be considered a long term solution, just a stop gap until this is supported in an easier fashion.
This might not be the greatest idea, but it does seem to work for the Vim extension I tried in my environment. It is probably best to make a request through the in-product feedback to get this officially added, but until then you can follow these steps.
Upload the .vsix package to your $HOME directory.
Unzip the package into the /google/devshell/editor/theia/plugins directory. This action will not persist so you'll want to add the command to the .customize_environment script actions.
e.g.
sudo unzip vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim
Now for the questionable part. You'll want to install the pslist package to make life easy so you have access to the rkill command. You probably also want to add this to the .customize_environment file as well since it also will not persist.
sudo apt install pslist
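For reference, a sketch of what the two non-persistent steps could look like together in .customize_environment (untested; the .vsix path assumes you uploaded it to your home directory as above, with [USER] as a placeholder):
#!/bin/sh
# runs as root at VM startup: reinstall pslist and re-extract the plugin
apt-get install -y pslist
unzip -o /home/[USER]/vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim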
Now we need to get the process ID for the editor. Currently this seems to be spawned by a supervisord command, which also spawns the tmux session, so we're going to grab the process ID of the runuser command it spawns (and filter for the theia one just in case).
ps ax | grep runuser | grep "theia start"
Then we can use rkill to kill that process and all of its children, which will cause supervisord to restart it for us, and the plugin should then be available.
sudo rkill PID_OF_GREP_OUTPUT
I'm not sure of the best way to script the rkill command yet, since I don't know the timing of when the editor comes up relative to the .customize_environment execution, so right now I run this each time I start up a new VM.
If anything goes horribly wrong, you should be able to request a restart of the VM from the menu options and get a fresh one.
Cloud Shell offers a VS Code editor experience through Theia. Did you try the Cloud Code editor in Cloud Shell? It is exposed through the "Open Editor" button at the top right; this will open the Cloud Code editor, which gives you a VS Code experience. You have all the navigation keys that are available in the editor.

MongoDB installation: service failed to start

Installing MongoDB as a service is failing for me. The install gets to the point where it tries to start the service and then fails:
Service 'MongoDB Server' (MongoDB) failed to start. Verify that you have sufficient privileges to start system services
This is on a freshly updated, new install of Windows 2016.
Near-default MongoDB 4.2 Community install:
Install MongoDB as a Service
- Run service as Network Service user.
Directories are not default.
Data Directory : C:\Database\Data
Log Directory : C:\Database\Log
I've granted Network Service full permissions on C:\Database
.net framework 4.6 is installed.
Am I the first person to install MongoDB as a service or something?
Hard to believe someone didn't catch this before.
Update:
Installing to the default directories works. Brutal QA. Any fix to this?
Well, in case someone else comes across this...
One solution is to just install to the default directories, then after the install is done, stop the service, change the cfg to point to the directories you want and copy the files over. Then start it up.
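For example, with the directories from the question, the relevant part of mongod.cfg would end up looking something like this (section names per the MongoDB 4.2 YAML config format):
storage:
  dbPath: C:\Database\Data
systemLog:
  destination: file
  path: C:\Database\Log\mongod.log
  logAppend: true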
Check the path of the MongoDB service: press Win+R, type services.msc, and in the window that opens find MongoDB Server and double-click it. Here is what I see with MongoDB installed to the custom folder C:\mongodb.
You probably need to install it into the default folder or change the path to the executable in Services.
In the latter case, press Win+R, run regedit.exe, go to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MongoDB and change the ImagePath key.
In my case the key was "C:\Program Files\MongoDB\Server\4.2\bin\mongod.exe" --config "C:\Program Files\MongoDB\Server\4.2\bin\mongod.cfg" --service
and I changed it to "C:\mongodb\bin\mongod.exe" --config "C:\mongodb\bin\mongod.cfg" --service
then restarted the service.

How to run MongoDB on AWS

I'm looking for a little direction on how to set up services on AWS. I have an application that is built with Node.js and uses MongoDB (and Mongoose as the ODM). I'm porting everything over to AWS and would like to set up an autoscaling group behind a load balancer. What I am not really understanding, however, is where my MongoDB instance should live. I know that with DynamoDB it can be fairly intuitive to set this up, but since I am not using it, my question is this: where and how should Mongo be set up to work with my app? Should it be on the same EC2 instance as my app, and if so, how does that work with new instances starting and being terminated? Should I set up an instance dedicated only to Mongo? In addition to that, how do I create snapshots and backups of my data?
This is a good document for installing MongoDB on EC2, and managing backups: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
If you aren't comfortable doing all this yourself you might want to also look into MongoLab which is a MongoDB as a Service that can run on AWS.
Your database should definitely live in a separate instance from your app, from all aspects.
A very basic tiered application should consist of the app server cluster in a scaling group behind a load balancer, in a public subnet, and a separate database cluster (recommended in a different subnet which is not publicly accessible) which your app cluster will talk to. Whether to use an ELB for Mongo or not actually depends on your Mongo configuration (replica set).
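As a quick sanity check from the app tier, you can verify that the database tier is reachable before wiring up the app (the private hostname mongo-a.internal is just a made-up example):
# from an app-tier EC2 instance: ping the mongod on the DB tier
mongo --host mongo-a.internal --port 27017 --eval 'db.runCommand({ ping: 1 })'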
In regards to snapshots (assume this will only be relevant for your DB), have a look at this.
You can easily install MongoDB in AWS Cloud9 using the process below.
First create the Cloud9 environment in AWS; at the terminal you'll see this prompt:
ubuntu:~/environment $
Enter touch mongodb-org-3.6.repo into the terminal
Now open the mongodb-org-3.6.repo file in your code editor (select it from the left-hand file menu) and paste the following into it then save the file:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Now run the following in your terminal:
sudo mv mongodb-org-3.6.repo /etc/yum.repos.d
sudo yum install -y mongodb-org
If the second command does not work, try:
sudo apt install mongodb-clients
Close the mongodb-org-3.6.repo file and press Close tab when prompted
Change directories back into your home directory (~) by entering cd in the terminal; the prompt should look like ubuntu:~ $. Then enter the following commands:
sudo mkdir -p /data/db
echo 'mongod --dbpath=data --nojournal' > mongod
chmod a+x mongod
Now test mongod with ./mongod
Remember, you must first enter cd to change into your home directory (~) before running ./mongod
Don't forget to shut down ./mongod with Ctrl+C each time you're done working.
If this error pops up while running mongod:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
Then use the code:
sudo chmod -R go+w /data/db
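To confirm the server is actually accepting connections, you can open a second terminal and run the mongo shell against the default port (assuming you did not change it):
# quick connectivity check from another terminal
mongo --eval 'db.runCommand({ connectionStatus: 1 })'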
Reference

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it locally creates so that we can start editing and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance into a folder called flow-bck on my local machine, at the defined local path. I renamed this folder to match its name on the remote (flow) and then did:
cd flow && npm install
This installed all the needed modules and whatnot for mean-io. Now, the important part: you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
Then in my local app directory flow I ran gulp to start the local version of the app on port 3000. So it loads up and runs just fine. I needed to create a new user as it's obviously not the same database.
Also I know this is basic stuff, but not too long ago I would have forgotten to start mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
Now the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.

Moved Mongo Database to Different Drive: Unable to acquire lock for lockfilepath

I am in the process of moving my mongo data to a different drive. All of the data I want to move is stored in /data/db and I am moving it to a NAS (Network attached storage).
First step:
mongodump -d mydb -c mycollection -o nas/mongo-temp
This created a file tree in mongo-temp/ like so:
dump
`-- mydb
`-- mycollection.bson
1 directory, 1 file
I then stopped the mongod service and created a new /data/db directory:
/etc/init.d/mongod stop
mkdir mongo-temp/data/db
...and changed the dbpath line in /etc/mongodb.conf
dbpath=.../mongo-temp/data/db
I successfully restarted the mongo server using /etc/init.d/mongod start.
When I try to connect:
mongo
MongoDB shell version: 1.6.4
Thu May 3 09:53:23 *** warning: spider monkey build without utf8 support. consider rebuilding with utf8 support
connecting to: test
Thu May 3 09:53:24 Error: couldn't connect to server 127.0.0.1 (anon):1154
exception: connect failed
I've tried to start mongod with the command mongod --dbpath .../mongo-temp/data/db but I get an error that says:
Thu May 3 09:57:26 exception in initAndListen std::exception: Unable to acquire lock for lockfilepath: /home/dlpstats/nas-mnt/mongo-temp/data/db/mongod.lock
Removing the lockfile doesn't help. If I run the mongod command without --dbpath, the server starts fine and I am able to make queries on my old database.
First, you mentioned that you used mongodump to populate the new drive - was this just a method of backing things up, or did you intend that to be the new database files? That is not how it works - mongodump output is not the same as the database files; it needs to be re-imported with mongorestore. If you do a straight data-file copy instead, the transfer will be seamless.
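To make the two options concrete, here is a rough sketch using the paths from the question (the NAS path is the one shown in the lock-file error; adjust as needed):
# Option A: straight data-file copy, done with the old mongod stopped
/etc/init.d/mongod stop
cp -a /data/db/. /home/dlpstats/nas-mnt/mongo-temp/data/db/
# Option B: start a mongod on the new dbpath and re-import the dump
mongod --dbpath /home/dlpstats/nas-mnt/mongo-temp/data/db --fork --logpath /tmp/mongod-new.log
mongorestore -d mydb nas/mongo-temp/mydb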
Then, as well as the permissions suggested by Wes in his answer, a few more things to check:
That you have shut down the old server successfully and completely - it's possible this is a mis-reported error and you are getting it because the new mongod is trying to grab a port that is already in use
You are using version 1.6.4 according to the mongo shell output; my guess is that you installed from the Ubuntu repo for 11.04 or similar, which is not a good option - 1.6 is very old at this point. Use the 10gen repos (http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages) or download the binaries to get a more recent version
Last but not least, when you start the mongod manually, make sure all the arguments are the same, like the port. When you connect via the mongo shell, specify the port you started the mongod on - don't rely on defaults when running into issues like this, be explicit.
I faced this problem, and issuing the following command solved it:
rm /var/lib/mongodb/mongod.lock
And then restart the mongod.
But I'm not sure whether it is a good solution or not.
Check the permissions for the directory and parent directories of mongo-temp. Presumably it's running as the mongodb user?
You need execute permission on the directory (and its parent directories) in order to create files there. Execute permission on a directory lets you traverse into it, which is needed to be able to open a file there for writing.
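A quick way to audit that (a sketch; adjust the user if your mongod runs as something other than mongodb):
# show ownership/permissions of every component of the path, then fix ownership if needed
namei -l /home/dlpstats/nas-mnt/mongo-temp/data/db
sudo chown -R mongodb:mongodb /home/dlpstats/nas-mnt/mongo-temp/data/db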