I am new to Perforce source control and I am trying to build a script that logs in to the server and gets files into the working directory. I can now connect to the server and set the working folder, but I cannot get the client.
This is the code for the connection to the server:
$p4 = new P4;
$p4->SetProg($ScriptName);
$p4->SetVersion($ScriptRevision);
$p4->SetCharset($ENV{P4_CHARSET});
$p4->SetPort ($ENV{P4_IP_AND_PORT});
$p4->SetClient($ENV{P4_CLIENT});
$p4->SetUser($ENV{P4_USER});
$p4->SetPassword($ENV{P4_PWD});
$p4->Connect() or die "Failed to connect to Perforce server\n";
$p4->SetCwd($Working_dir); ## the dir exists
Now I need to create the client. Is this the way?
$p4->FetchClient( $ENV{P4_CLIENT} );
I am getting back an object, but the RunSync command fails with the error:
Client 'xxxxxx' unknown - use 'client' command to create it
My question is: how do I create the client in Perl, and how do I use it in the sync command?
To create the client in P4Perl, you'll need to:
1. Construct the spec data for the new client, either field by field or by starting from a template client.
2. Issue $p4->SaveClient( $clientspec ); with your spec data.
Once you've created the client spec that describes your workspace, tell P4Perl to use it for your commands: $p4->SetClient( 'my-client-name' );
Then you can run your sync command.
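Putting those steps together, a minimal sketch might look like the following (the view mapping and workspace root are illustrative assumptions; adjust them to your depot layout):

```perl
use P4;

my $p4 = new P4;
$p4->SetPort( $ENV{P4_IP_AND_PORT} );
$p4->SetUser( $ENV{P4_USER} );
$p4->Connect() or die "Failed to connect\n";

# FetchClient on a client that does not exist yet returns a
# template spec filled in with server defaults.
my $clientspec = $p4->FetchClient( $ENV{P4_CLIENT} );
$clientspec->{Root} = $Working_dir;                              # workspace root
$clientspec->{View} = [ "//depot/... //$ENV{P4_CLIENT}/..." ];   # illustrative view mapping

# SaveClient is what actually creates the client on the server.
$p4->SaveClient( $clientspec );

# Now tell P4Perl to use that client, and the sync will find it.
$p4->SetClient( $ENV{P4_CLIENT} );
$p4->RunSync();
```

Note that FetchClient alone never creates anything on the server; it only builds a spec in memory, which is why the sync fails with "Client 'xxxxxx' unknown" until SaveClient has been issued.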
Related
I wish to better understand how the Log4cplus SocketAppender works with the logging server that receives the appender's events.
I have read the Log4cplus source code for the logging server and SocketAppender, and I would be glad to have the following clarified:
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
And if this is the case: does it mean that if I want to send log messages to a remote machine, that machine must have the Log4cplus library installed?
I would also like to know: does this Log4cplus logging server run as a service? And does it require special configuration and pre-setup in order to use it?
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
Yes and yes.
does it mean that if I want to send log messages to remote machine, that machine must be installed with the Log4cplus lib?
Well, sort of. If you want to use only SocketAppender, you will have to use the logging server. You could instead use SysLogAppender and send the events to a remote syslog service; obviously, you then need a syslog service running there, configured to accept messages from the network. You could also write your own custom appender that sends the events to whatever server you desire.
I would also like to know- does this Log4cplus logging-server run as a service?
No, it is a simple executable that listens on a socket.
and does it require special configuration and pre-setup in order to use it?
It requires a configuration file so that it knows where to log the received events.
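For reference, such a configuration file is an ordinary log4cplus properties file. A minimal sketch might look like this (the file name, output path and pattern are illustrative assumptions, not taken from the Log4cplus distribution):

```
# server.properties -- tells the logging server where to write received events
log4cplus.rootLogger=ALL, FILE
log4cplus.appender.FILE=log4cplus::FileAppender
log4cplus.appender.FILE.File=remote.log
log4cplus.appender.FILE.layout=log4cplus::PatternLayout
log4cplus.appender.FILE.layout.ConversionPattern=%d{%H:%M:%S} %-5p %c - %m%n
```

The logging server executable is then started with a listening port and this properties file as arguments (the exact binary name and argument order can vary between Log4cplus versions, so check the one shipped with your build).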
I just wanted to share how I used SocketAppender (this setup also works for Docker containers that are on the same network).
/usr/share/elasticsearch/config/log4j2.properties
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.socket.type = Socket
appender.socket.name = socket
appender.socket.port = 601
appender.socket.host = api
appender.socket.reconnectDelayMillis = 10000
appender.socket.layout.type = PatternLayout
appender.socket.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.socket.ref = socket
In the second container I used syslog-ng:
apk add syslog-ng
vi /etc/syslog-ng/syslog-ng.conf
syslog-ng -f /etc/syslog-ng/syslog-ng.conf
/etc/syslog-ng/syslog-ng.conf
#version: 3.13
source s_network {
    network(
        transport(tcp)
        port(601)
    );
};

destination d_network {
    file("/var/log/es_slowlog.log", template("${MSGHDR}${MESSAGE}\n"));
};

log {
    source(s_network);
    destination(d_network);
};
Notice that the #version: line has to correspond to your version of syslog-ng. You can check it by invoking syslog-ng -V.
I'm experiencing some strange behaviour with a ColdFusion 11 server, which (among other things) publishes some web services accessed via both SOAP and HTTP. The server itself is Windows 2012, running IIS. Actual folder config is as follows:
IIS has two websites configured, 'BOB' and 'BOB_Services'. Both have been configured with the CF Server Config tool so that CF handles .cfc, .cfm files. They share a common CFIDE config.
BOB's root is I:/inetpub/BOB
BOB_Services's root is I:/inetpub/BOB_Services
There is a folder mapping configured in CF Admin from '/' to 'I:/inetpub/BOB'. Don't ask me why, no one seems to know.
Normally there is a services.cfc file in BOB_Services ONLY. Yesterday we accidentally copied that same file into the BOB root folder, and all of our SOAP services using BOB_Services\services.cfc started throwing errors. Yet I can query the same web service via HTTP (e.g. using http://bob/services.cfc?method=function1&param1=0 ... etc.) and get a valid result.
This is a reference answer in case anyone else comes across this strange behaviour.
It appears that when BOB_Services/services.cfc is called using HTTP GET, the folder mapping
'/' -> 'I:/inetpub/BOB'
is ignored and the actual file used to process the request is I:/inetpub/BOB_Services/services.cfc.
When a function in BOB_Services/services.cfc is called using a SOAP client, the folder mapping is invoked and the file used to process the request is I:/inetpub/BOB/services.cfc, IF IT EXISTS. If it does not exist, the file I:/inetpub/BOB_Services/services.cfc is used as expected.
This behaviour appears to be entirely repeatable - I can make a SOAP request, get one result, change the mapping, make another request and get the other result.
I have a ClearCase trigger implemented and working on the server, but when trying it on a client it throws the error below. The trigger prevents unreserved checkouts.
error checking out
M:\view_main\xxx\abcd.java
can't execute "C:\Program Files\IBM\RationalSDLC\ClearCase\bin\ccperl //\trigger\trig_reservedonly.pl";
the system cannot find the specified file
Trigger action "-exec "C:\Program Files\IBM\RationalSDLC\ClearCase\bin\ccperl //\trigger\trig_reservedonly.pl""
unable to run: Exec format error
unable to check out "M:\view_main\xxx\abcd.java"
A trigger script must be accessible by all clients.
It is best to declare it at a UNC path (a shared path, \\server\folder\path\to\script).
That way, the script is accessible from any client able to access and read the content of that shared path.
See for instance an example at "Creating a ClearCase trigger to disallow checkins for certain Rose RealTime versions".
I also used that technique in "how to get a notification for every checkin in clearcase for a particular Vob".
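For example, a preop checkout trigger pointing at a script on a share could be created with cleartool mktrtype along these lines (the server name, share path and trigger type name are illustrative assumptions):

```
cleartool mktrtype -element -all -preop checkout ^
  -execwin "ccperl \\server\triggers\trig_reservedonly.pl" ^
  -c "prevent unreserved checkouts" trig_reservedonly
```

Because the -execwin command line is stored in the VOB and re-run on every client, any path in it that only exists on the server (like a local C:\ path) will produce exactly the "cannot find the specified file" error shown above.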
I am running Ubuntu 12.04 and am trying to record HTTP requests with Perl's HTTP::Recorder module. I am following the instructions here: http://metacpan.org/pod/HTTP::Recorder
I have the following perl script running:
#!/usr/bin/perl
use HTTP::Proxy;
use HTTP::Recorder;
my $proxy = HTTP::Proxy->new();
# create a new HTTP::Recorder object
my $agent = new HTTP::Recorder;
# set the log file (optional)
$agent->file("/tmp/myfile");
# set HTTP::Recorder as the agent for the proxy
$proxy->agent( $agent );
# start the proxy
$proxy->start();
And I have changed my Firefox settings so that it uses port 8080 on localhost as the proxy. Here is a snapshot of my settings:
When I try to visit a website with firefox, however, I get the following error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
Not sure what to do. However, when I visit http://http-recorder (where my recorded activity is supposed to show up), I do see that GET requests are being logged. For example, if I try to visit Google:
$agent->get('http://www.google.com');
Edit: I should also mention that Ubuntu is running inside VirtualBox; I am not sure if that is interfering with anything.
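In setups like this, the "Content Encoding Error" often means the origin server replied with a gzip/deflate body that the recording proxy then passed through in a mangled form. A common workaround (a sketch, not verified against this exact environment) is to strip the Accept-Encoding header from outgoing requests so servers reply uncompressed:

```perl
#!/usr/bin/perl
use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;
use HTTP::Recorder;

my $proxy = HTTP::Proxy->new( port => 8080 );

my $agent = new HTTP::Recorder;
$agent->file("/tmp/myfile");
$proxy->agent( $agent );

# Ask origin servers for uncompressed responses so the recorder
# does not choke on gzip/deflate bodies.
$proxy->push_filter(
    request => HTTP::Proxy::HeaderFilter::simple->new(
        sub { $_[1]->remove_header('Accept-Encoding') }
    )
);

$proxy->start();
```

HTTP::Proxy::HeaderFilter::simple ships with the HTTP::Proxy distribution; its callback receives the HTTP::Headers object as the second argument, which is what the filter above edits.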
I am trying to connect to my local queue using the CPAN MQSeries library through a Perl script, in a Solaris environment. When I execute my script it gives reason code 2058, which means a queue manager name error.
I have done the following to analyze this issue, but I am still getting reason code 2058:
1) Stopped and started the queue manager.
2) Checked the queue manager name in my code.
3) Successfully put and got messages on my queue using the amqsput and amqsget sample commands, but it does not work with my script.
Could anybody please help me with this: what kind of environment do I have to set, or what configuration setting am I missing?
my $qm_name = "MQTEST";
my $compCode = MQCC_WARNING;
my $Reason = MQRC_UNEXPECTED_ERROR;
my $Hconn = MQCONN($qm_name,
$compCode,
$Reason,
) || die "Unable to Connect to Queuemanager\n";
Maybe you are running into this issue?
"By default, the MQSeries module will try to dynamically determine
whether or not the localhost has any queue managers installed, and if
so, use the "server" API, otherwise, it will use the "client" API.
This will Do The Right Thing (tm) for most applications, unless you want to connect >directly to a remote queue manager from a host
which is running other queue managers locally. Since the existence of
locally installed queue managers will result in the use of the
"server" API, attempts to connect to the remote queue managers will
fail with a Reason Code of 2058."
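If that is the case, you can take the guesswork away by forcing a client connection explicitly with MQCONNX, passing the channel and listener details yourself. A sketch, assuming a SYSTEM.DEF.SVRCONN channel and a listener at mqhost port 1414 (substitute your own values):

```perl
use MQSeries;

my $compCode = MQCC_WARNING;
my $Reason   = MQRC_UNEXPECTED_ERROR;

# Describe the remote listener explicitly instead of letting the
# module choose between the "server" and "client" APIs.
my $Hconn = MQCONNX(
    "MQTEST",
    {
        'ClientConn' => {
            'ChannelName'    => 'SYSTEM.DEF.SVRCONN',  # assumed SVRCONN channel
            'TransportType'  => 'TCP',
            'ConnectionName' => 'mqhost(1414)',        # assumed host(port)
        },
    },
    $compCode,
    $Reason,
) || die "MQCONNX failed: Reason = $Reason\n";
```

Alternatively, setting the MQSERVER environment variable before running the script (e.g. MQSERVER='SYSTEM.DEF.SVRCONN/TCP/mqhost(1414)') achieves a similar effect for client connections without changing the code.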