I am using EclipsePHP on Ubuntu 10.10 and am trying to use Mercurial (hg) to work with a repository located on my network-connected staging server (a Samba share).
When trying to refresh the repository from within Eclipse (really an hg status), I get the following error thrown in my face: abort: Operation not permitted: /media/sharename/myrepository/.hg/.dirstate*.
Whilst trying to find out what's wrong, I went to the network share from a terminal and ran hg status - the same error occurred, so it's not only happening from within Eclipse. I tried to chmod the files from both my computer and the server - chmod 777 /media/sharename/myrepository/ -R - and nothing changed.
But when I accidentally ran sudo hg status from the repo directory, Mercurial started the fireworks and worked like a charm.
What on earth is going wrong with my computer? Why can't I run my hg commands without being root?
You can mount your network drive like this. Open /etc/fstab and add the following line:
//IP_OR_HOSTNAME/DIRECTORY_NAME /MOUNT_DIRECTORY cifs username=sambauser,password=sambapassword,auto,exec,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0
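To test the entry without rebooting, something like this should work (a quick sketch; /media/sharename is the mount point from the question, adjust to yours):
sudo mkdir -p /media/sharename
sudo mount -a
mount | grep cifs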
Hope it works.
chmod will not help you here, I guess. The ownership and permissions on the server side are not replicated to the client (no UNIX extensions on the server), or your UID/GID differ between the two machines. You can override file ownership when mounting via:
mount -t cifs //SERVER/SHARE /MOUNTPOINT -o uid=USERNAME
This is from memory though; check man mount.cifs for details. Alternative networked filesystems like NFS might serve you better in this case, or try sshfs.
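For example (both are hedged sketches; the uid/gid values should match your desktop user, and the sshfs route assumes the sshfs package is installed and you have SSH access to the server):
sudo mount -t cifs //SERVER/SHARE /MOUNTPOINT -o uid=$(id -u),gid=$(id -g)
sshfs USER@SERVER:/path/to/share /MOUNTPOINT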
I'm trying to automate deployment over FTP using Bitbucket Pipelines.
Path is:
/var/www/vhosts/maindomain.com/subdomain.maindomain.com
Tried it with and without the leading forward slash. I also checked the default path when you connect, and it's maindomain.com/subdomain.maindomain.com - tried that too, but same error.
Code looks like this:
image: node:9.8.0
pipelines:
  default:
    - step:
        name: Deployment
        script:
          - apt-get update
          - apt-get install -y ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT dist/*
          - echo Finished uploading /dist files to $FTP_HOST$FTP_SITE_ROOT
But the problem is that ncftpput doesn't accept the upload path no matter what I try. I've been using the one shown in FileZilla after navigating to that folder while connected with the exact same credentials.
How can I track down the right path, or how should I troubleshoot this from here?
I think the problem is that my server only accepts SFTP connections, and I can't set the port to 22 because ncftp does not support SSH. I'm currently looking at lftp as an alternative and will post the syntax here if I figure it out.
Edit: This does not scale well; I will be pursuing different avenues for continuous deployment.
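For anyone trying the same route, a minimal lftp sketch (untested here, and assuming the same pipeline variables as above; mirror -R uploads the local dist/ folder to the remote path):
lftp -u "$FTP_USERNAME","$FTP_PASSWORD" sftp://$FTP_HOST -e "mirror -R dist/ $FTP_SITE_ROOT; quit"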
You don't need to add the full physical path of the FTP site; just pass the path as below:
-R /maindomain.com/subdomain.maindomain.com dist/*
To check the physical path of the site, go to Site -> Manage FTP Site -> Advanced Settings. That's where you find the physical path, which you don't need to include when using the CLI.
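Putting it together with the pipeline above, the upload step would then become something like this (the path shown is the hypothetical one from the question):
ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST /maindomain.com/subdomain.maindomain.com dist/*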
I have an issue that I can't seem to be able to solve...
I want to use GitHub to develop a web app with Joomla locally and push my changes to the server whenever I feel like it.
It works great, but after I log off the server I get an error that persists, even if I raise the memory limits in the config:
fatal: Out of memory, realloc failed
I'm not an expert, since I'm only starting to use GitHub, but these are my steps; maybe you have some advice for me...
(on the 1and1.com server)
I start on the server, installing a fresh Joomla 3.7.4, and copy the .gitignore from github.com/joomla/joomla-cms into my webroot directory, ignoring all core files.
(server)
git init
git config receive.denyCurrentBranch false
cat << EOF >> webrootdir/.git/hooks/post-receive
#!/bin/sh
GIT_WORK_TREE=webrootdir git checkout master -f
EOF
chmod 755 webrootdir/.git/hooks/post-receive
(local computer, cloning into local dir called webroot)
git clone ssh://password#account.1and1-data.host/homepages/11/123456789/htdocs/webroot webroot
(local)
do some work
(local)
git add . && git commit -m "Joomla 3.7.4"
git push
(server)
git checkout -f
I repeat the last two steps (commit/push locally, then checkout on the server) all day long; it works as expected and the files on the server are updated every time.
When I log off the server, or get logged off after some time, the server locks into the error, no matter whether I run a checkout or just git status.
My biggest files are around 250 kB (JPEGs), and I don't manage databases over git; it's just a template folder I'm working on that's being updated, so I can't really figure out what I'm doing wrong :-/
Any advice would be very much appreciated,
Thanks!
I was able to fix the issue with
git config --global core.preloadIndex false
On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates, so that we can start editing and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "Source Code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well I have made some progress. Once you click-to-deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
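If the port isn't open yet, a rule can also be added from the CLI (a sketch; the rule name is arbitrary and this assumes the default network):
gcloud compute firewall-rules create allow-tcp-3000 --allow tcp:3000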
Once you SSH in, you will see the target directory, named whatever you told mean-io to call the application when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, which is available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info.
This copied all the files from the remote instance into a folder called flow-bck at the defined local path. I renamed this folder to match its remote name, flow, and then ran:
cd flow && npm install
This installed all the needed modules and assets for MEAN.io. The important part: you have to kill your remote SSH connection so that you can run the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the MongoDB process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
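In other words, something like this before starting the app (a sketch; mongod defaults to /data/db if you don't pass --dbpath):
mkdir -p ~/data/db
mongod --dbpath ~/data/db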
Now the two things I haven't done yet are editing and deploying a new version based on this... and the nagging question of whether this is all even necessary. It'd be great to find out that this can all be done with a few simple commands.
I cannot start up a new Meteor application on a Vagrant Linux box (running on a Mac). It fails every time with an 'unspecified uncaught exception' in Mongo. I have tried a bunch of things to get this going, but even with the simplest setup I cannot get the project running. I would be grateful for any suggestions.
My steps are:
1. create a completely clean Vagrant box ("ubuntu/trusty64");
2. install Meteor on the new box (curl https://install.meteor.com/ | sh);
3. choose a location to create the project;
4. create a new Meteor project (meteor create app);
5. start up the project (cd app; meteor)
I know that the permissions on the Vagrant shared folder are quirky, so for step #3 above I have tried putting the project:
in the shared guest/host folder, /vagrant,
in a subdirectory of the Vagrant home folder (/home/vagrant),
in a subdirectory of / (with permissions set to vagrant:vagrant), and
in a subdirectory of / with permissions set to root:root, the project created with sudo meteor create app and run with sudo meteor
In all cases, I see this error:
=> Started proxy.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Can't start Mongo server.
MongoDB had an unspecified uncaught exception.
This can be caused by MongoDB being unable to write to a local database.
Check that you have permissions to write to .meteor/local. MongoDB does
not support filesystems like NFS that do not allow file locking.
I cannot tell whether this is a Vagrant issue (though I think not, given what I've tried) or a Meteor issue, but I suspect Meteor (or one of its many dependencies). I doubt it is a permissions issue, since it failed even when running as root. I've tried building Meteor from scratch (the build fails), and I've tried creating the project with --release 0.9.0 and --release 0.9.2-rc1, but the download is simply killed without explanation.
(1) After step 2, 'install Meteor on the new box (curl https://install.meteor.com/ | sh)':
user$ cd /vagrant
user:/vagrant$ meteor create myApp
You should see the myApp folder on your Mac host (in the same folder as the Vagrantfile).
(2) Inside the myApp folder you will see the default .meteor folder; make a folder called local if it is not there:
user:/vagrant$ cd myApp/.meteor
user:/vagrant/myApp/.meteor$ mkdir local
(3) Create the same folder structure in /home/vagrant:
user:/vagrant/myApp/.meteor$ cd ~
~$ mkdir -p myApp/.meteor/local
(4) Bind-mount /home/vagrant/myApp/.meteor/local onto /vagrant/myApp/.meteor/local:
sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/
or make it permanent:
echo "sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/" >> ~/.bashrc && source ~/.bashrc
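Instead of .bashrc, the bind mount could also live in the guest's /etc/fstab (a sketch using the same paths as above; note that /vagrant itself is mounted by Vagrant during boot, so depending on ordering a provisioning script may be more reliable):
/home/vagrant/myApp/.meteor/local /vagrant/myApp/.meteor/local none bind 0 0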
(5) Now you can start Meteor:
~$ cd /vagrant/myApp
user:/vagrant/myApp$ meteor
The reason why I mount the local folder rather than the whole .meteor folder is that you can still edit the files inside the .meteor folder on your Mac host. You can replace myApp with whatever name you want.
Hope this helps.
I'm working with a Windows host, but maybe this will apply to your situation as well.
The only folder which causes the issue is .meteor/local. If you relocate it outside of the shared /vagrant folder via a symlink, you should be able to run the Meteor app okay.
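A sketch of the relocation from inside the VM, assuming the app lives at /vagrant/app (adjust the paths to your project):
mv /vagrant/app/.meteor/local /home/vagrant/meteor-local
ln -s /home/vagrant/meteor-local /vagrant/app/.meteor/local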
But, to put a symlink in the shared folder you need to enable symlinks in the VM... which requires starting Vagrant as an admin.
I put together a Vagrantfile with some scripts and instructions here:
https://github.com/ElectronVector/vagrant-meteor
I ran into similar issues trying to run Meteor on Windows. It seems that MongoDB is not able to write in the /vagrant folder. I solved this by doing:
sudo mount --bind /home/vagrant/meteorapp/.meteor/ /vagrant/meteorapp/.meteor/
(got that from https://gist.github.com/gabrielhpugliese/5855677)
Here is an answer that solved my problem: launching a Meteor project from a shared folder on a Debian VMware virtual machine (running on Windows).
The issue is that MongoDB can't create its data files inside a shared folder, so in this case just point the Meteor project at an existing MongoDB instance:
export MONGO_URL=mongodb://localhost:27017/your_db
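For example, assuming MongoDB from the distro's package repository is installed and running inside the VM (the database name is arbitrary):
sudo apt-get install -y mongodb
export MONGO_URL=mongodb://localhost:27017/meteor
meteor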
Doing
vagrant reload --provision
solved my problem.
I think the reason might be that some files got corrupted or deleted.
I am using HgEclipse from here: http://www.javaforge.com/project/HGE
I have created a new repository on my server to test the plugin. I cloned the repository, added some files, committed, and attempted to push, but received the following error message...
abort: HTTP Error 500: Internal Server Error. Command line:
/home/james/workspace/project:hg -y push http://***#[repository location],
error code: 255
From some Googling I gather that error code 255 has to do with authentication, but the password is correct; otherwise I wouldn't have been able to clone in the first place.
Any help or suggestions would be much appreciated.
Thanks
EDIT:
After updating my system to the latest versions, I now also get this from the command line when pushing (which previously worked):
abort: HTTP Error 500: Permission denied: .hg/store/data/path-to-file.i
Your webserver can't write into the repository. You can either:
change the permissions in the local repo so that the webserver gets write permission there (which means setting up write permissions with chmod for all files and directories under, and including, .hg; you also need to set the sticky bit on all directories), or
give the webserver its own repo, which is owned by the server.
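A sketch of the first option, assuming the webserver runs as www-data and you are inside the repository (the "sticky bit" above is commonly realized as the setgid bit on directories, so new files inherit the group):
chgrp -R www-data .hg
chmod -R g+w .hg
find .hg -type d -exec chmod g+s {} +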
Giving the webserver a repo of its own looks like this:
$ sudo bash
# mkdir /srv/repo-base
# chown www-data /srv/repo-base
# cd /srv/repo-base
# su -c "hg clone /path/to/current/repo web-repo-name" www-data
# vi /etc/apache2/sites-available/$SITE_CONFIG_FILE # change the repo path to /srv/repo-base/web-repo-name
# /etc/init.d/apache2 reload
A drawback of this method is that you need to push via HTTP even on the machine with the webserver, since as a normal user you don't have write permissions to the webserver's repo.
This answered it for me, although it's a different system setup: TortoiseHg.
In the Repository Settings -> Server, I set Allow Push to *
This was on a private network, secured behind a firewall.
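For reference, that setting lands in the repository's .hg/hgrc; a minimal sketch (push_ssl = false is only needed, and only safe, when serving over plain HTTP on a trusted network):
[web]
allow_push = *
push_ssl = false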