Remove old file checkouts from CVS repository - version-control

While checking out some files from our CVS repository the other day I noticed that someone who no longer works here still had the file checked out. It seems he didn't undo his checkouts before he left, so they are now stuck in the repository. His user account and files have already been removed, so there's no way I could just log onto his machine and undo the checkouts.
Is there a way to remove these checkouts on the server?
Thanks

Judging by your mention of fileattr.xml in your comments it sounds like you're using CVSNT. In that case, removing those files from the repository is a feasible, but rather brute-force, approach. If your VS plugin does indeed use CVSNT's reserved edit command to implement the exclusive checkouts, then the following command should remove the edit locks as well:
cvs unedit -u <username> <files...>
(where <username> is the name of the user currently holding the "lock")
This requires that your user account is regarded as an admin by the CVS server, though. There are various ways to do this, the simplest being listing your user name in the CVSROOT/admin file. If I remember correctly, CVSNT also supports repo admins via a special Windows user group. You'd have to check the documentation for specifics.
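As a rough sketch, granting yourself that status is just an ordinary commit to the CVSROOT module (jdoe is a placeholder user name):
cvs checkout CVSROOT
cd CVSROOT
echo jdoe >> admin
cvs add admin    # only if the file doesn't exist yet; it may also need an entry in checkoutlist
cvs commit -m "make jdoe a repository admin"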

Related

three ways to let PHP and a regular user edit the same files

I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine; this is also what the CMS I'm using recommends. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group still means changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here).
Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~
I figured it out! I finally ended up reading about running PHP as CGI instead of as an Apache module, and learned that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
In /etc/php/7.0/fpm/pool.d/www.conf I commented out a listen line and put another below like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions; I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places, twice in /etc/php/7.0/fpm/pool.d/www.conf and then once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific, my thanks go to Keith for a comment on StackExchange).
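For reference, the ownership edits boil down to something like this (the exact surrounding lines may differ on another setup, and anna is of course my own user):
In /etc/php/7.0/fpm/pool.d/www.conf:
user = anna
group = anna
In /usr/lib/tmpfiles.d/php7.0-fpm.conf (the line that creates /run/php at boot, roughly):
d /run/php 0755 anna anna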
And that was it! :-)

GitLab - Cannot push or pull. It seems to be a permission issue

Hope someone will be able to help: I've installed GitLab and for a few days it seemed to work ok (I could push and pull, but only from a client, not from the machine that runs GitLab itself); however, that's no longer the case. I have been working on the server (it's my own server that I've set up for development/learning/personal stuff), but I don't believe I've changed anything that could affect GitLab, so I don't know what to do.
At the moment I can't push or pull from either my local machine (OS X 10.8.3) or my server (Ubuntu 12.04). I've run the test several times and all is green. When I do git config user.name or git config user.email it comes back with my name and email respectively. I've also searched online but couldn't find anyone in exactly the same situation; however, I did try many of the approaches suggested: I've deleted and generated more SSH keys, and changed the config in /home/git/gitlab/config.yml to reflect my setup (I'm running Apache). My GitLab is 5.2 and I've followed the instructions on GitLab's homepage. To make it work with Apache instead of nginx I followed the instructions here. This question seems the closest to describing my problem, but the solution isn't clearly described, so I couldn't follow it. The web interface works fine and I can commit both from my local machine (using sshfs) and from my server. I just can't push or pull. The error I get is:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
I'd appreciate any help. I've been struggling with this for days now and I'm on the brink of giving up on GitLab...
Many thanks
EDIT: On my server I've got three accounts: user1 (the main, first user, with root), user2 (a sudoer that also has admin privileges) and git (which is also a sudoer). After more investigating, I'm pretty sure this is a problem of me messing up the permissions and the SSH key. Can someone point out: when I generate the SSH key, which user should I be logged in as? And on which computer should I generate this key, my server or my Mac? Also, when I tried to push from my server directly (I was physically logged in to the server rather than sshed in via my Mac), GitLab asked for git's password. I then generated a key while logged in as git on the server and added it to GitLab through the web interface, and the same error appeared again. Still not fixed.
The problem in my case was that I had changed the Git credentials on my local machine without realising it (when you set up the repo, the user name and email are set to Git and git@localhost respectively). That's why every time I tried to either push or pull I got the error. Once those were changed back to the correct settings, GitLab started working again. Leaving this here as it might be helpful to someone.
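If you hit the same thing, checking and restoring the values is just a matter of (Git and git@localhost being the values from my setup):
git config user.name
git config user.email
git config user.name "Git"
git config user.email "git@localhost"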

Only show GIT repo's to which user has access with gitweb

I am currently experimenting with setting up a GIT repository server so we can switch from SVN to GIT. I've got almost everything covered, but am left with one issue.
The current setup is as follows:
All developers (and non-developers) have user accounts & the correct groups because the server is a NIS client
All repos are made in /var/git/
All pulling/pushing is done over ssh
This works perfectly so far, and eliminates the need for gitosis or gitolite.
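For illustration, a developer checkout in this setup is just a plain SSH clone along these lines (user, gitserver and project.git are placeholders):
git clone ssh://user@gitserver/var/git/project.git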
Because I would like a browsable overview of the repositories, I've set up gitweb with pathinfo enabled. Because the repos are private I've set up authentication through Perl AuthenNIS and this works, but here I encounter a problem.
Not all developers should have access to all repositories, but gitweb just shows every repository that it (the apache user) can read.
So my question is: is it possible to make gitweb only show the GIT repo's the currently logged in user has access to?
Possible solutions:
1. Further access control through .htaccess. The pathinfo would enable this, but it wouldn't prevent the repos from being accessed through non-pathinfo URLs (e.g. /repo.git/ wouldn't work but /gitweb.cgi?p=repo.git would)
2. Setting up a full gitosis/gitolite environment and integrating it into gitweb (essentially this). I would like to avoid this because the overhead is undesirable
3. Making gitweb run as the authenticated HTTP user. This would fix all the access control problems, but I don't know how to do this
4. gitweb's $export_auth_hook in combination with $cgi->remote_user seems promising, but my understanding of Perl is too limited to use it (the hook would need to verify that the user has permission to access the repo directory before showing/exporting it)
Is there anyone who knows how to make 3 or 4 work or has another solution?
If developers are pushing/pulling from the repository server using ssh under (I presume) their own user names, then perhaps the easiest way to accomplish this is to find a way to run gitweb or git under that user's identity.
For instance, find a way to add an authentication hook before gitweb is executed. Then add a wrapper around gitweb that executes sudo -u $user gitweb.real where $user is the authenticated user name.
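A minimal sketch of such a wrapper, assuming Apache authentication exposes the user name in REMOTE_USER and that sudoers allows the web server account to run the renamed gitweb.real as the developers (names and paths are examples only):
#!/bin/sh
# hypothetical replacement for /usr/lib/cgi-bin/gitweb.cgi:
# re-run the original script as the authenticated user
exec sudo -u "$REMOTE_USER" /usr/lib/cgi-bin/gitweb.real "$@"
You'd pair that with a sudoers rule roughly like: www-data ALL=(ALL) NOPASSWD: /usr/lib/cgi-bin/gitweb.real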
Or, you could just wrap the git command, i.e. have gitweb execute a wrapper which does a sudo -u $user {real-git-path}.
For implementing authentication against NIS/PAM in Apache, have a look at mod_auth_external
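A hedged sketch of what that could look like, using the mod_authnz_external flavour of the module together with the pwauth helper (directive names vary a bit between module versions, so treat this as a starting point only):
DefineExternalAuth pwauth pipe /usr/sbin/pwauth
<Location /gitweb>
    AuthType Basic
    AuthName "Git repositories"
    AuthBasicProvider external
    AuthExternal pwauth
    Require valid-user
</Location>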

Error connecting to online fossil repository after changing password

I set up a fossil repository on a shared hosting account I have. I created a Perl script, fossil.pl, that points to a cloned repository that I put up on the webspace. I set all the correct permissions (755). When I go to fossil.pl I get the web UI. Everything's cool. However, I'm having a problem with pushes and hoping someone could point me to a solution.
When I clone a repository it sets a new password for me (Toby) in the new cloned repository. If I push to this repository online without changing the password it works fine, I can push up changes from my local machine to the online repository. However once I change the password for Toby (to something more easily remembered by me) I get the following error.
Bytes Cards Artifacts Deltas
Send: 1810 9 0 2
Server Error: not authorized to write
fossil: server says: not authorized to write
Anyone know why this is happening? Anyone know how to fix it?
Fossil recently changed the details of how it saves passwords, which impacted the way authentication is done during clone, push, pull and sync.
One result of that change is that the initial password for the first user account created for you by the clone is stored the old way, but changing any password updates it to the new way. To force all password records in a repository to use the new method, use "fossil test-hash-passwords".
I would verify that both copies of fossil are newer than that revision, upgrading both ends as needed.
Note that if upgrading to the most recent versions available, you must do "fossil rebuild" on the server (and locally too for any clones) due to changes in the database schema. Since that is always safe to do, it is wise to do it after any upgrade.
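On the server that boils down to something like this (the repository path is just an example, and the exact argument form may vary by version):
fossil version                                   # check which version each end is running
fossil rebuild /path/to/repo.fossil              # required after upgrading, safe to repeat
fossil test-hash-passwords /path/to/repo.fossil  # re-hash all stored passwords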
Up until recently, users and passwords were never cloned across. It's generally a good idea, when you clone, to make sure the password on your local copy and your remote are identical, and to test it with a sync.
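Something along these lines should line them up and confirm it works (the URL is a placeholder for your fossil.pl address):
fossil user password Toby    # run against both the local clone and the server repository
fossil sync https://example.com/cgi-bin/fossil.pl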

How to force a confirmation step before certain perforce command?

I am new to Perforce.
Is it possible in P4 to have a confirmation step before using certain deletion commands?
E.g.:
deleting a workspace has no confirmation step
(p4 client -d workspace_name)
deleting a label has no confirmation step
(p4 label -d label_name)
I find this dangerous.
Thanks,
Thomas
I'm not sure of the real danger: if Perforce is going to wipe out something that you can't get back, that is generally what the -f flag is for. The one truly dangerous command, p4 obliterate, does require an explicit -y flag before it will do anything.
If you are concerned about modifications to the server metadata (client specs, labels, permissions tables, jobs, etc.), then I strongly recommend you set up a "spec" depot. This creates a special depot in Perforce that version-controls any changes users make to things like label specs, branch specs, client specs, etc. It can be really useful, and it's the first thing I do on any new Perforce installation.
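Setting one up is quick; roughly (the depot name spec is conventional but arbitrary):
p4 depot spec      # opens the depot spec form in your editor
# before saving, set the following fields:
#   Type: spec
#   Map:  spec/...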
It's all in the docs. Try this KB entry for starters.