Design Sync won't revert local changes - checkout

I am using the command
dssc pop -get -unify path_to_file
to locally modify a file. When I try to revert the changes with
dssc cancel -force path_to_file
I get an error "Error: path_to_file - Object does not exist"
The same issue occurs without the -force flag.

Here's something that might help:
If you look at any file under dssc control (ls -l) that you haven't checked out yet, you'll see that the file is actually a link into the vault.
So when you populate a file with dssc pop -get -uni, the tool goes to the vault and fetches a local copy for you.
That sentence also contains the answer to your question: all you need to do is run dssc pop -get -uni one more time. The tool will probably object, recognizing that you've modified the file, and prompt you to add the -force switch if you really want to revert (repopulate) the file.
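For instance, assuming your DesignSync version accepts the -force switch on pop (as that prompt suggests), the repopulate would look something like:
dssc pop -get -uni -force path_to_file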
Hope this does the trick.

`entr`: How is an update detected? noatime troubles? And why doesn't -r work with -d?

I have a script regularly appending to a log file. When I use entr (discovered here) to monitor that log file, and I then touch the log, everything works fine, but when the script appends to the file, entr fails. This may be because I have noatime set in my fstab - but that only stops the updating of the access time not the modify time, so this confuses me.
I've checked and while atime is not updating, ctime (ls -lc) definitely is. Could entr really be depending on atime? I use noatime because I have an SSD. So what should I do? I just stumbled on lazytime. Would that solve the problem?
Since monitoring the log file was not working, I tried entr -cdr on the directory of files that are updated (a new file is created) at the same time as the log (the log is in a different directory). entr recognizes when the directory contents change, but the -r does not work. The entr process just ends, saying "entr: directory altered".
Any ideas on how to fix this, or whether I should just go back to inotify, would be appreciated.
Edit: I have written it with inotify now, and the event reported when the log file is written to is, sensibly enough, "MODIFY."
It turns out that entr does not respond to IN_MODIFY events, but only to these (in Linux):
IN_CLOSE_WRITE|IN_DELETE_SELF|IN_MOVE_SELF|IN_CREATE
Also IN_ATTRIB, but only if the file mode or inode number changes.
On BSD/macOS, it's:
NOTE_DELETE|NOTE_WRITE|NOTE_RENAME|NOTE_TRUNCATE|NOTE_ATTRIB
Also, the option -r has no effect in the context of the -d option. It only works when entr is monitoring files.
See the developer's comments. Also, more info on entr.
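If you want to verify which events your writer actually generates, a quick check with inotifywait (from the inotify-tools package, assuming it is installed) looks something like this; replace the path with your log file:
inotifywait -m -e modify -e close_write -e attrib /path/to/logfile
If you only ever see MODIFY and never CLOSE_WRITE while the script is appending, that explains why entr does not react until the writer closes the file.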

PowerShell: Copy-Item -Recurse -Force is not copying all sub-files

I have a one-liner that is baked into a larger script for some high-level forensics. It is just a simple Copy-Item command that writes the destination folder and its contents back to my server. The code works great, BUT even with the switches:
-Recurse -Force
it is not copying back the file with a .dat extension. As you can probably guess, I need the .dat file for analysis. I am running this from a privileged account. My only thought was that it is a read/write conflict and the host was currently using the file (or another system file). What switch am I missing? The "mode" for the file that will not copy over is -a---. Not hidden, just not copying. Suggestions elsewhere have said to use xcopy/robocopy; if possible I do not want to call another dependency. I'm already using PowerShell for the majority of the script and I'd prefer to stick with it... Any thoughts? Thanks in advance, this one has been tickling my brain for a while...
The only way to copy a file that is in use is to find the locking handle, close it, and then retry the copy operation (handle.exe).
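For illustration only, assuming the Sysinternals Handle utility is available on the machine, locating the lock and then closing it by handle ID and PID would look roughly like this (the handle ID 1A4 and PID 2216 are placeholders taken from the first command's output):
handle.exe ntuser.dat
handle.exe -c 1A4 -p 2216 -y
Be aware that forcibly closing a handle owned by another process can destabilize it, which is part of the impact mentioned below.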
From your question it looks like you are trying to remotely copy user profiles, which include ntuser.dat and other files needed to keep the profile working properly. Even if you did manage to find a way to unload the .dat file(s), you would have to consider the impact that would have on the remote system.
Shadow copies are typically how backup programs copy files that are in use, so your best bet would be to find the latest backup of each remote computer and extract the needed files from the backed-up copies, or wait for the users to log off and then try again.

Update a ClearCase view config spec from the command line when the load rules have changed

I have a base ClearCase snapshot view that is updated automatically overnight from a config spec file, using this command:
cleartool setcs -overwrite -ptime d:\CS.cs
The problem is that the config spec's load rules have changed, and when I run the command it asks for confirmation before updating the load rules:
R:\>cleartool setcs -overwrite -ptime d:\CS.cs
cleartool: Warning: 1 objects were eliminated from the new config spec's load rules:
"\QA\QTP"
Continue, and unload these objects? [no]
So, is there a way to tell ClearCase from the command line to continue with the update automatically, without asking for confirmation?
As mentioned in "Batch Script to Automate a DOS Program with Options", you could write the right answer in a file, and redirect it to your command.
cleartool setcs -overwrite -ptime d:\CS.cs < yes.txt
That way, if the command stops to ask for input, it will get it immediately.
You'll find a similar approach in "how to userinput without typing to a batch file".
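For example, assuming the prompt accepts "yes" as the answer, the file can be created once and reused by the nightly job:
echo yes > yes.txt
cleartool setcs -overwrite -ptime d:\CS.cs < yes.txt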
You should use the "-force" option

Is it possible to search through all of Xcode's logs

Xcode now keeps the logs from previous runs handy, which is great.
Is there a way to search through all of the logs?
My use case: I have seen a particular error but can't remember which run it was in. I need to find the error URL from the logs.
Xcode stores debug logs at
~/Library/Developer/Xcode/DerivedData/<YOURAPP>/Logs/Debug/
The .xcactivitylog files are actually just gz archives. Decompress them:
cd ~/Library/Developer/Xcode/DerivedData/<YOURAPP>/Logs/Debug/
EXT=".xcactivitylog"
for LOG in *.xcactivitylog; do
    NAME=$(basename "$LOG" "$EXT")
    gunzip -c -S "$EXT" "${NAME}${EXT}" > "${NAME}.log"
done
Now you can easily search them using grep, Spotlight, or whatever you prefer.
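For example, to look for the error URL across all the decompressed runs (the pattern is just an illustration; adjust it to whatever you remember of the error):
grep -in "http" *.log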
To add onto @DrummerB's answer: once the files are unzipped, you can do a search with a custom scope from within Xcode. I prefer this to grep or Spotlight.
The folder where these logs are is
~/Library/Developer/Xcode/DerivedData/[YOURAPPID]/Logs/Debug/
You can open, read, and search them in TextWrangler, for example.

How can I resume downloads in Perl?

I have a project that depends on some other binaries being downloaded from the web at install time. For this, what I do is:
if [ -e "src/$file" ]; then   # $file: name of the binary being installed (placeholder)
    :                         # skip that file
else
    wget -P src/ "$url"       # use wget to download the file ($url is a placeholder)
fi
The problem with this approach is that when I interrupt a download in the middle and invoke the script again, the partially downloaded file is also skipped (which is not desired); I also want wget to resume the download of the partially downloaded file.
How should I go about it?
Possible solutions I could think of:
Download the file to a temporary name, say download_tmp, and move it to the original name if successful.
Handle SIG{'INT'} to write proper cleanup code.
But neither of these helps resume the partial file download.
Any insights?
First, I don't understand what this has to do with Perl, since you're using wget to do the downloading... You could use libwww-perl (perldoc LWP) and have more control over the download process.
Then, I second your idea of downloading to a "tmp" filename and moving the file on success.
However, I think you need to go further and verify the integrity of the files. Computing an MD5 or SHA hash is very easy; match the hash of the downloaded file against what you're expecting. You can keep a short file on the server containing the checksum (filename.md5). Determine success only when you have a match.
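A minimal sketch of that check on a Linux-style system with md5sum, assuming the server publishes the checksum in the usual md5sum format ("hash  filename") and that the file names and URL below are illustrative:
wget -q -O filename.md5 "$url.md5"             # fetch the expected checksum
expected=$(cut -d' ' -f1 filename.md5)         # hash is the first field of "hash  filename"
actual=$(md5sum download_tmp | cut -d' ' -f1)  # hash of the file that was actually downloaded
[ "$expected" = "$actual" ] && mv download_tmp filename   # promote the file only on a match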
Note that catching all the signals and generally trying to make the process unkillable, and then expecting it to have worked is bound to fail at one point or another. There could be a network timeout, a crash, power failure, configuration problem on the server ... you should instead assume downloads can fail, because they will, and code so that your process can recover.
Finally, you're not telling us what kind of binaries you're downloading and what you're doing with them. Since you use wget, I'm going to assume you're on Unix; you should consider using RPM+Yum or the like, which handle all of this for you. RPMs are easy to write, really.
Use your first approach (a minimal sketch follows below):
download to "FileName".tmp
move "FileName".tmp to "FileName" (move, not copy!)
once a day, clean out all .tmp files (paranoia rulez)
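Here is a variant of that flow that uses a scratch directory instead of a .tmp suffix, so wget -c can resume the file under its normal name before the final move (directory names and $url are illustrative):
mkdir -p partial/
wget -c -P partial/ "$url" && mv "partial/$(basename "$url")" src/   # move, not copy, only on success
find partial/ -type f -mtime +1 -delete                              # daily cleanup of stale partial downloads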
You could just use wget's -N and -c options and remove the entire "if file exists" logic.
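In shell terms that suggestion reduces the whole download step to a single call (the target directory and $url are illustrative); -c resumes a partially downloaded file, and -N only re-fetches when the server copy is newer:
wget -c -N -P src/ "$url"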