In ClearCase Remote Client, we create new VOBs based on a VOB selection rule. I checked out a couple of files, but when trying to check in, I get the following error:
CRVAP0087E CCRC command 'checkin' failed:
/bin/sh: /vob/cspecs/triggers/scripts/checkin.sh: No such file or directory
ClearCase CM Server: Warning: Trigger "checkin_SomeOtherBranch" has refused to let checkin proceed.
Please note that, as per my VOB selection rule, the remote client should fire the checkin_MyBranch trigger on checkin.
As per this SO post, we can redefine an existing trigger with mktrtype, but since the command line is not available in CCRC, I couldn't try this command to resolve my issue.
Have you come across this situation? I am not entirely clear on what the purpose of triggers is in CCRC.
Thank you for any help.
This would be best debugged on the CCRC server side (which has full access to all the base ClearCase commands, like mktrtype), as in this trigger example for limiting the delete command.
You wouldn't be able to modify it from a client (i.e. from a CCRC web view).
Check, however, that on the CCRC server the path /vob/cspecs/triggers/scripts/checkin.sh exists (and that the VOB cspecs is mounted). It should be available, though, or you would also get an error message about an "interactive session" (see "Non-interactive triggers fail with warning about interactivity using CCRC or CCWeb").
This looks like a custom trigger, put in place on the ClearCase server side. I don't know what its purpose would be.
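A minimal sketch of what that server-side check could look like, assuming you have shell and cleartool access on the CCRC server (the VOB tag /vob/MyVob below is a placeholder for the VOB the checked-out elements live in):

# Is the trigger script actually reachable on the server, and is the cspecs VOB mounted?
ls -l /vob/cspecs/triggers/scripts/checkin.sh
cleartool lsvob /vob/cspecs

# What does the offending trigger actually run, and on which operations?
cleartool describe trtype:checkin_SomeOtherBranch@/vob/MyVob

# If the definition points at the wrong script, an administrator could redefine it
# (hypothetical fix; adjust the script path to wherever the real script lives):
cleartool mktrtype -replace -element -all -preop checkin \
  -execunix /vob/cspecs/triggers/scripts/checkin.sh \
  -c "repoint checkin trigger at a valid script" \
  checkin_SomeOtherBranch@/vob/MyVob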
I am using Eclipse with Team Foundation Server. While saving a file or building the project, the "Saving server item information" operation automatically starts in the background and puts all other processes in a waiting state.
What can be done to prevent this from happening?
If the "Saving server item information" always operate ,this maybe a network problem. To narrow down the issue first check if you can use any TFS features from the plug-in (is "Refresh Server Item Information" the only thing that fails)? If you can't use any TFS features from the plug-in, check that Eclipse's HTTP proxy configuration is correct.
If you can use some TFS features, but the server item information refresh feature is failing, check that you don't have an HTTP proxy, firewall, or NAT device between your client and TFS that is dropping TCP sockets after a short time.
Refreshing/saving information for a large number of files, or from a heavily loaded server, may take a while (perhaps minutes). A network device that "drops" the active TCP socket without notifying the client would also cause this behavior.
Besides that, I also recommend trying the latest TEE (Team Explorer Everywhere) release if the above does not work.
I had (by mistake) a copy of an update site on one server. I had the correct copy on another server, so I replicated it to the first server and deleted the original bad update site DB. I found the entry OSGI_HTTP_DYNAMIC_BUNDLES= in the Notes.ini that was pointing to the update DB that I deleted, so I changed it to the new database name, shut down the server, and then restarted it. I now get the following error:
08/05/2014 12:41:38 PM HTTP JVM: NotesException: Invalid replica id (WFSUpDat.nsf)
where WFSUpDat.nsf is the old (wrong) update site. So Domino is storing this information somewhere else. Can someone give me a pointer as to where that is?
Also if I use the command line
tell http osgi ss com.ibm.xsp.extlib
I get a list of the installed Extension Library bundles. I have the Debug Toolbar 4.01 installed in the update site; what would the command line be to get the same thing, to confirm the version of the toolbar?
Thanks
Make the change in the Server Configuration document in the server's names.nsf, not directly in the Notes.ini.
Should know better, but .......
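For the second question, the same ss filter should work against the toolbar's bundle; the name fragment below is only a guess, so check the exact plug-in ID in your update site database first:

tell http osgi ss debugtoolbar

The ss output lists each matching bundle with its state and version, which is what you need to confirm that 4.01 is the one actually loaded.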
I have a SQL Server Analysis Services (SSAS) cube (developed with BIDS 2012), and I would like to give the users (who access the cube through PowerPivot) the opportunity to process the cube from their local machines.
I found some material on how to create a scheduled job on the server through PowerShell, SQL Agent, or SSIS, but no material on remotely processing the cube. Any advice?
There are several ways to trigger cube processing. The lowest-level method is issuing an XMLA statement to the database containing the cube. To see what this looks like, open SQL Server Management Studio, connect to the AS instance, right-click an AS database, and select "Process". Configure the processing settings, but instead of hitting OK, select "Script" from the top toolbar to have the XMLA process command generated for you. Then leave the dialog with Cancel.
All methods that process a cube end up, in one way or another, sending a command like this to the AS database.
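For reference, the generated script is typically shaped like the sketch below; the database ID and process type are placeholders, so use whatever the scripting dialog produces for your own cube:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <!-- Fully reprocess the AS database that contains the cube -->
    <Process>
      <Object>
        <DatabaseID>MySsasDatabase</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>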
There are several options for triggering cube processing:
In Management Studio, by clicking OK in the above-mentioned dialog.
In PowerShell (see http://technet.microsoft.com/en-us/library/hh510171.aspx); a short sketch follows this list.
In Integration Services, there is an Analysis Services Processing Task (http://msdn.microsoft.com/en-us/library/ms141779.aspx).
You can set up a SQL Server Agent job; job steps could either be a direct XMLA step or an Integration Services step containing the processing task (among possibly other tasks).
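A minimal PowerShell sketch, assuming the SQLASCMDLETS module that ships with the SQL Server PowerShell components is installed, and that the XMLA script saved above sits at C:\scripts\process_cube.xmla (server name and file path are placeholders):

# Load the Analysis Services cmdlets
Import-Module SQLASCMDLETS

# Send the saved XMLA process command to the AS instance
Invoke-ASCmd -Server "MyAsServer" -InputFile "C:\scripts\process_cube.xmla"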
The question, however, is how the setups described above can be made accessible to end users. An important issue here is of course that the user executing the process task needs the permission to process the cube. As you might not want to grant this permission directly, it might make sense to use some form of impersonation when calling it. With Management Studio, and as far as I am aware with PowerShell, this cannot easily be achieved.
Integration Services and Agent jobs offer the possibility of impersonation. Integration Services packages are executed by the dtexec command-line tool (part of the SQL Server client tools). There is also a tool called dtexecui (available as "Execute Package Utility" in a standard SQL Server client tool installation), which lets you configure all settings in a dialog and then execute the package; it can also display the dtexec command line corresponding to your settings.
And to call a SQL Server Agent job, an easy interface is the set of Agent stored procedures (http://msdn.microsoft.com/en-us/library/ms187763.aspx), especially sp_start_job (note that this is asynchronous: you call it, it starts the job and returns; it does not wait for the job to complete), sp_help_jobactivity to query job status, and sp_help_jobhistory for details of past runs.
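A small T-SQL sketch of that pattern (the job name is a placeholder for whatever you call the processing job):

-- Start the Agent job that runs the processing step; returns immediately, the job runs asynchronously
EXEC msdb.dbo.sp_start_job @job_name = N'Process MyCube';

-- Check whether the job is still running
EXEC msdb.dbo.sp_help_jobactivity @job_name = N'Process MyCube';

-- Review the history of completed runs
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'Process MyCube';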
All in all, I think there is no ready-made solution available, but I have mentioned some building blocks that you could use to assemble your own, depending on the preferences in your environment.
I am trying to run the following command against my TFS 2008 server:
TF history /server:MyTFSServer /recursive "$/MyTFSProject/Folder"
When I run it, I get this:
Ignoring the /server option
It then complains about the workspace. The workspace part I get (it is trying to use my current folder to establish the TFS server; where I am running from is not mapped, so it can't connect, and for my needs going to the right folder will not help).
But WHY WHY WHY does it not like my /server option?
I have tried /s, /server and -s. None of them work. I have checked and double checked the spelling of my server name. I have checked to make sure that the tf.exe I am running is the TFS 2008 version.
I am so confused and getting a bit frustrated.
(The sad thing is I had this working last week. I ran several history commands without any issues. I don't have the text from those commands, so I don't know what I did different, but I know it CAN work.)
Any help would be great!
Usually when you get this message it's because the /server parameter is unnecessary - that is, the client has determined your workspace and server information from the path you gave it. This should only happen with local paths, however, not with server paths. Can you confirm that you're only using server paths in your commands?
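For instance, running from any (unmapped) directory with an explicit server path, a command like the following should not produce the "Ignoring the /server option" warning with the TFS 2008 client tools (same server and path as in the question, straight quotes):

tf history /server:MyTFSServer /recursive /noprompt /format:detailed "$/MyTFSProject/Folder"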
I have a (ClearCase) pre-op, non-interactive trigger that needs to evaluate an environment variable value (from the client side) in order to perform some checks.
Is there a way or workaround to pass such an environment variable value from a CCRC client to the trigger, considering that it does not seem to work as it does with dynamic or snapshot views?
Thanks a lot!
According to this IBM article, no. An environment variable that is not defined on the server side might even cause the trigger to be treated as an interactive one.
The trigger script was referencing a user defined environment variable which was set on the client but could not be found on the RWP web server.
In this example the user defined environment variable MYCC_TRIGGER_TMP was set on the client to define an alternate temp directory and is referenced by the trigger script.
However, it was not defined on the RWP web server.
An example of the full error message:
Unable to checkin "<path to file>"
Error: directory for environment variable "MYCC_TRIGGER_TMP" or "TMP" not found
ccweb: Warning: Trigger "ci_pre" has refused to let checkin proceed.
Interactive triggers are not supported in the Web interface.
If the trigger was interactive, it may have failed for that reason.
ccweb: Error: Unable to check in "<path to file>".
The article Writing triggers for the ClearCase Remote Client confirms that, albeit indirectly.
Note: Under certain conditions, pre-op triggers will not work (for example, triggers that require specific ClearCase environment variable evaluation).
CCRC runs as a client process that sends RPC commands to the CCRC server, where they are executed by separate CCRC server processes.
These server processes run under Apache, so the environment variables (EVs) will likely be different from those seen in command shell windows during interactive development.
The server config file (rwp.conf, ccrc.conf) can be modified to add environment variables using the SetEnv command.
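For example, on the CCRC/CM server you could add a line like the following to rwp.conf (or ccrc.conf) and restart the server; the variable name is taken from the IBM example above, and the directory is a placeholder:

# Make the client-side variable visible to trigger scripts run by the CCRC server processes
SetEnv MYCC_TRIGGER_TMP /var/tmp/cc_trigger_tmp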