I have some files under VSS. If I copy an updated version of a file into its location and overwrite it, will VSS update itself / reflect the updated file?
It seems that when I do this, the updated code is not reflected within the file in VSS, and when somebody else pulls down the file they don't get the updates...
Changing a VSS-controlled file locally on your hard drive (by copying/pasting) requires that you:
1. Check out the file first. (If you don't do that and only remove the read-only flag, your changes won't be reflected in the VSS database upon check-in.)
2. Paste your desired copy of the file (overwriting the previously checked-out file).
3. Check in the file, which overwrites the file version in the VSS database.
So, when other users try to get the latest version, they'd get your checked-in version.
I have
A git repository
A folder on this repository
To this folder I upload .SQL files. Each new DDL is a new .SQL file, uploaded to the same folder because this is the place from which a CI/CD process kicks off to act upon the new file. I do change the SQL code now and then, but have no use for the files after a certain point, as each one gets executed against the ultimate database via Liquibase.
The Problem
Over time this folder has accumulated close to 5000 .SQL files and is growing every day
It's getting cumbersome to navigate and find anything in this folder
The CI/CD build out of this folder is taking a lot of time because it zips the entire folder
I want to
Archive/Remove everything more than 3 months old from the main folder
Move the old files to an Archived location so that I can refer to them
Get the file count down to a manageable level
Possibly do the archiving in an automated way without manual intervention
I Do not want to
Delete the files as I have to maintain a history
Change the process, e.g. to maybe have only one SQL file and keep changing it.
I do not want to delete the files as I have to maintain a history
And yet, this is the simplest solution, as you can still list the history of a deleted file.
# filter the deleted file to find one:
git log --diff-filter=D --summary | grep pattern_to_search
# Find the log of a deleted file:
git log --all -- FILEPATH
That means your process would simply:
list all files older than a certain date
add their name to a "catalog" file (in order to query them easily later on)
delete (git rm) them (their history is still there)
For any file present in the "catalog" file, you still can check their log with:
git log -- a/deleted/file
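A minimal sketch of that process in shell, assuming you run it from the repository root; `ddl/` and `catalog.txt` are placeholder names for your folder and catalog file:

```shell
# archive_old_sql CUTOFF_EPOCH:
# For every tracked .sql file under ddl/ whose last commit is older than
# CUTOFF_EPOCH (a Unix timestamp), append its path to catalog.txt and
# remove it with `git rm` -- its history stays in the repository.
archive_old_sql() {
  cutoff=$1
  git ls-files 'ddl/*.sql' | while read -r f; do
    last=$(git log -1 --format=%ct -- "$f")
    if [ -n "$last" ] && [ "$last" -lt "$cutoff" ]; then
      echo "$f" >> catalog.txt
      git rm --quiet "$f"
    fi
  done
}
```

For the "3 months old" rule you could call it as `archive_old_sql "$(date -d '3 months ago' +%s)"` (GNU date syntax) and then commit the result; wired into a scheduled CI job, that covers the "without manual intervention" wish.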
The "created" date is lost after I push my files to git. When I clone my repository the "created" date is the current date. Is that normal?
When you clone a repository, Git does not check files out using the original time specified in your commits. Instead, it creates the files as normal, using the current time.
This is in fact normal, and it also has the nice benefit of working properly with Make, which uses the file time to determine whether a file needs to be rebuilt. Since Git always uses the current time, files Git has checked out will be considered changed, and Make will rebuild any products that depend on them.
Yes that's normal.
As far as file metadata (created, last modified, executable or not, etc.) goes, Git only saves whether the file is executable. The other values, like when it was created, are managed entirely by your filesystem, independently of Git.
When you clone the repository, the files are created at that moment on your filesystem - so the "created" metadata of the file is the current date.
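You can see this for yourself: the index stores a mode of 100644 or 100755 per file and no timestamps at all. A small sketch (run inside any Git repository; the file names are made up):

```shell
# Sketch: show that Git records only the executable bit, not timestamps.
# plain.sh and exec.sh are placeholder names created by the function.
show_git_modes() {
  echo 'echo hi' > plain.sh
  echo 'echo hi' > exec.sh
  chmod +x exec.sh
  git add plain.sh exec.sh
  # 100644 = regular file, 100755 = executable; no creation time anywhere.
  git ls-files --stage plain.sh exec.sh
}
```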
I opened a file (p4 edit) and made few changes to it. I deleted (rm -rf) the directory which contained this file followed by a p4 sync -f to bring back the depot files (in hopes of getting rid of a lot of untracked/generated files in the directory).
However, it only helped me partially. While I was able to get rid of the undesired files, the sync step could not bring back the edited file.
I can see that the file is in opened state with p4 opened, but I can't seem to find a way to bring back this opened file along with my changes.
Any clues?
Edited files are not stored on the server; they are only stored locally. Since you removed the modified file with rm -rf you cannot get it back (unless the file was backed up by another process, such as a netapp .snapshot directory).
The server keeps track of the state of files but the changes are not stored until you submit.
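For completeness, a hedged sketch of getting back to a clean depot copy of the still-opened file (requires the p4 command-line client; the file path is a placeholder argument):

```shell
# Your edits are already lost; this just cleans up the file that still
# shows in `p4 opened` and restores the depot revision.
restore_opened_file() {
  p4 revert "$1"     # discard the open state (re-syncs the have revision)
  p4 sync -f "$1"    # force the head revision back into the workspace
}
```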
I copied a bunch of Java classes into another Java package outside of Perforce by accident and made a bunch of changes to them. I now realise that the revision history of those files has been lost, as I didn't use Perforce to copy the files over.
Example:
original file - dir1/Class1.java
copied file - dir2/Class1.java
The original file still exists.
If I want to restore the revision history of the files what would be the appropriate command to run in order to do this?
You should have branched the file in Perforce rather than copied it outside of Perforce, but that can be remedied.
1. Copy dir2/Class1.java to another location, then delete the original
2. Branch dir1/Class1.java to dir2/Class1.java
3. Check out dir2/Class1.java
4. Copy the backup of the file you made in step 1 to dir2/Class1.java
5. Check in dir2/Class1.java
You will then have your recent modifications to the file dir2/Class1.java in version control and the file will be linked to its original via the branch history.
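Sketched as p4 commands, assuming dir2/Class1.java currently exists only in your workspace (`p4 integrate` is the branch command; the backup path is arbitrary):

```shell
relink_history() {
  cp dir2/Class1.java /tmp/Class1.java.bak        # 1. back up the modified copy...
  rm dir2/Class1.java                             #    ...then delete the original
  p4 integrate dir1/Class1.java dir2/Class1.java  # 2. branch with history
  p4 submit -d "Branch Class1.java into dir2"
  p4 edit dir2/Class1.java                        # 3. check out the branched file
  cp /tmp/Class1.java.bak dir2/Class1.java        # 4. restore the modifications
  p4 submit -d "Reapply local changes"            # 5. check in
}
```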
I have a configuration file in my project which needs to be in the repository (so new developers get it when they checkout the project). Each developer might change some values in the file locally - these changes should not be committed and I don't want them showing in the synchronization menu (I'm using eclipse and subversive if it matters).
Note that I can't just set the svn:ignore property since it only works on files that aren't under version control - but I do want to keep a base version of the file in the repository. How can I avoid the file showing in synchronization without deleting it from repository?
EDIT: A better description - what I actually need is to be able to set a "read-only" property on the config file, so it can't be changed in the repository as long as the property is on. Do you know anything like this?
Thanks
I do this by having a base version of the file checked-in as foo.base, and svn lock this so that it's read-only on checkout. Then have each developer copy this file to their own personal foo file, which is ignored by svn-ignore.
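A sketch of that setup with svn commands (foo.base / foo are this answer's placeholder names; it needs a working copy):

```shell
setup_local_config() {
  svn lock -m "keep the base config read-only" foo.base
  cp foo.base foo                  # each developer's private copy
  svn propset svn:ignore foo .     # hide the unversioned copy from status/sync
}
```

Note that svn:ignore only works because foo itself is never added to version control; the versioned foo.base carries the defaults.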
You can't ignore files which are already under version control. The only way to ignore such files is to first delete them and commit, then set the svn:ignore property and do a second commit.
If you'd like to have a kind of base version, use a template which has a different name.
You can version the template under a different name.
Once you check the file out, you can lock it; once it is locked, others will not be able to commit changes to it in SVN.
My solution is to have a compile-time script create a copy of the original template file if it does not exist. This copy can be put on the ignore list. I feel that locking a file for this purpose abuses the locking feature (it was not created for this).
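The copy-if-missing step of such a script might look like this, reusing the foo.base / foo placeholder names from above:

```shell
# Sketch: create the local config from the versioned template only if it
# does not exist yet, so developer edits are never overwritten.
ensure_local_config() {
  [ -f foo ] || cp foo.base foo
}
```

Because the copy only happens when foo is absent, each developer's local changes survive every subsequent build.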