gSOAP: namespaces from multiple WSDLs are all present in the request envelope

There are N WSDL schemas:
a.wsdl
b.wsdl
...
n.wsdl
The request I send (whose definition is in a.wsdl) looks like:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:ns1="http://www.example.com/A/"
xmlns:ns2="http://www.example.com/B/" -----> This line is redundant
...
xmlns:nsN="http://www.example.com/N/" -----> This line is redundant
>
<SOAP-ENV:Body>
<ns1:A>
...
</ns1:A></SOAP-ENV:Body></SOAP-ENV:Envelope>
Is there any way to force gSOAP NOT to add the namespaces from all the WSDLs to every request's SOAP-ENV:Envelope?
To generate the source code I use the following commands:
wsdl2h.exe -oSoap.h -s -y -c a.wsdl b.wsdl ... n.wsdl
soapcpp2.exe -C -L -n -x -w -c -d.\0 Soap.h
As a result I get:
soap.nsmap
soapStub.h
soapH.h
soapClient.c
soapC.c
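One approach, assuming the clients can be built separately (the -pA prefix below is a hypothetical naming choice, not from the original build): generate each WSDL on its own, so each resulting .nsmap table only lists that service's namespaces and the envelope only carries the prefixes it declares. A sketch:
rem generate a separate header per WSDL instead of merging them all
wsdl2h.exe -oSoapA.h -s -y -c a.wsdl
rem -pA prefixes the generated files, giving a per-service A.nsmap
soapcpp2.exe -C -L -n -x -w -c -pA -d.\0 SoapA.h
Repeat for b.wsdl through n.wsdl with their own prefixes; at runtime gSOAP emits exactly the namespaces listed in the table the client was compiled with (and soap_set_namespaces() can swap tables per call if needed).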

Related

Trying to remove a line and all following lines using sed, but one line remains

I'm removing this text from the bottom of some files
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
(snipped for brevity). I'm using:
sed -n -i '' '/<?xml version="1.0" encoding="UTF-8"?>/q;p' myfile.txt
The text is removed as expected but not the first line - I thought I was asking 'when you get to this line, remove it and everything following.'
(I seem to get everything removed when I just run this in a Terminal window, but not when saving the file.)
Mac user BTW.
Use this Perl one-liner:
perl -i.bak -pe 'last if m{\Q<?xml version="1.0" encoding="UTF-8"?>\E}; ' in_file
For multiple files:
find /path/to/files ... -exec perl -i.bak -pe 'last if m{\Q<?xml version="1.0" encoding="UTF-8"?>\E}; ' {} \;
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-i.bak : Edit input files in-place (overwrite the input file). Before overwriting, save a backup copy of the original file by appending to its name the extension .bak.
\Q quote (disable) pattern metacharacters until \E
\E end either case modification or quoted section, think vi
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes)
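A quick way to convince yourself of the behaviour (hypothetical file contents):
printf 'keep me\n<?xml version="1.0" encoding="UTF-8"?>\ndrop me\n' > in_file
perl -i.bak -pe 'last if m{\Q<?xml version="1.0" encoding="UTF-8"?>\E};' in_file
cat in_file
# prints only: keep me
Because -p prints each line at the end of the loop, last exits before the matched line is printed, so the marker line and everything after it are dropped.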
This is my solution, which removes all the lines I want to remove (for files in a directory):
for file in *.emlx; do
sed -i '' '/<?xml version="1.0" encoding="UTF-8"?>/,$d' "$file"
done

How to use the rex command with the Splunk REST API (curl as client)

I am trying to extract a new field from the raw data with a regular expression (the rex command). My regular expression works fine in the Splunk web search bar and returns results, but it does not work through the REST API with curl as the client.
I want to extract a field from a CSV data set, train.csv, and name it "numbers".
curl -u admin:password -k https://localhost:8089/services/search/jobs -d search="search source=train.csv|rex field=_raw '^(?:\[^\"\\n\]*\"){2},\\w+,\\d+,\\d+,\\d+,\\d+,\\d+\\.\\d+,(?P<numbers>\[^,\]+)'| top numbers"
Executing this command gives me a sid:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<sid>1548238904.70</sid>
but when I then ask for the results I get an error:
curl -u admin:password -k https://localhost:8089/services/search/jobs/1548238904.70/results
Error in 'rex' command: The regex ''^(?:\[^\n\]*){2}' does not extract anything. It should specify at least one named group. Format: (?<name>...).</msg>
What is a named group, and why does it work fine in the Splunk search bar?
I want the result with "numbers" as a column or new field.
Looking at your SPL itself, it appears that you've got single quotes instead of double quotes around your rex pattern, which is known to cause issues or return no results. (A named group, (?<name>...) or (?P<name>...), is what tells rex which field name to assign the captured text to.)
Try the following approach of escaping double quotes instead:
$ curl -u 'username:password' -k https://dummy.splunk.url/services/search/jobs -d search="| stats count | eval foo=\"bar\" | rex field=foo \"\w(?<named>\w*)\" | table foo named"
When asking for the results back, you should see the following:
$ curl -u 'username:password' -k https://dummy.splunk.url/services/search/jobs/1548868781.39708/results
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 315 100 315 0 0 420 0 --:--:-- --:--:-- --:--:-- 420
<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder>
<field>foo</field>
<field>named</field>
</fieldOrder>
</meta>
<result offset='0'>
<field k='foo'>
<value><text>bar</text></value>
</field>
<field k='named'>
<value><text>ar</text></value>
</field>
</result>
</results>
Presuming the above approach is used, you should end up with a curl command like this:
curl -u admin:password -k https://localhost:8089/services/search/jobs -d search="search source=train.csv|rex field=_raw \"^(?:\[^\"\\n\]*\"){2},\\w+,\\d+,\\d+,\\d+,\\d+,\\d+\\.\\d+,(?P<numbers>\[^,\]+)\"| top numbers"
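If the layered escaping gets unwieldy, a possible alternative (a sketch; query.spl is a hypothetical helper file) is to keep the raw SPL in a file and let curl do the encoding:
# the quoted heredoc keeps the shell from touching the regex
cat > query.spl <<'EOF'
search source=train.csv | rex field=_raw "^(?:[^\"\n]*\"){2},\w+,\d+,\d+,\d+,\d+,\d+\.\d+,(?P<numbers>[^,]+)" | top numbers
EOF
# search@file makes curl URL-encode the file's content as the search parameter
curl -u admin:password -k https://localhost:8089/services/search/jobs --data-urlencode search@query.spl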

How to scan a logfile for XML values and combine them onto one line

I have a logfile with multiple lines like "DEBUG MDM payload:", and on separate lines after each of those is the XML content, which can differ in length but always ends the same way. How can I use tr and sed (or another method) to combine all the XML content onto the same line as "DEBUG MDM payload:"?
2017-01-26T10:54:28.712+0100 - CORE {wff-device-thread-15 : deviceRestExecuteWorkflow.deviceRestExecuteActivity.EventQueueConsumer} device.ios.management|logger [{{Correlation,dhdjwdw-3a44-4b0d-aa52-dwdwdwdwdw}{Uri,PUT /S29112264/ios/mdm2/dwdwdwdw-c7d2-465c-be44-dwdwdwdw}{host,de0-server.name.fqdn}}] - DEBUG MDM payload:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Command</key>
<string>dd2c2a59-2007-4863-9c8c-dgfhgfh666666</string>
<key>command</key>
<dict/>
<key>Status</key>
<string>Approved</string>
<key>UDID</key>
<string>90f0c56287474782748274827487842cc64c2fc4</string>
</dict>
</plist>
Update: The following command prints what I want; I just can't figure out how to take that and alter the file to put it on one line.
sed -n "/\/ios\/mdm2\DEBUG MDM payload/,/<\/plist>/p"
Update 2: OK, a bit further now, but I still have some tabs in some cases. Trying to find a way to remove the tabs next.
sed -n "/DEBUG MDM payload/,/<\/plist>/p" fake_log.txt | tr -d "\r" | tr -d "\n"
Update 3: OK, got it. Now, is there a way to remove the existing lines and insert this newly modified line in their place?
sed -n "/DEBUG MDM payload/,/<\/plist>/p" fake_log.txt | tr -d "\r" | tr -d "\n" | tr -d "\t"
To replace newlines in place, sed is probably not the tool you're looking for. Try using Perl instead. This one-liner should do the trick: with -a, each line's whitespace-separated fields land in @F, so printing "@F" rejoins them with single spaces and drops the newline (and any tabs).
perl -i -a -n -e '$matched = 1 if /DEBUG MDM payload/; if ($matched && /<\/plist>/) { print "@F\n"; $matched = 0; } elsif ($matched) { print "@F "; } else { print "@F\n"; }' log.txt
To do a dry run, just remove the -i option, and the result will go to standard output instead of changing the file in place.

Crawl links of a sitemap.xml through the wget command

I'm trying to crawl all the links in a sitemap.xml to re-cache a website, but the recursive option of wget does not work; all I get in response is:
Remote file exists but does not contain any link -- not retrieving.
Yet the sitemap.xml is definitely full of "http://..." links.
I tried almost every option of wget but nothing worked for me:
wget -r --mirror http://mysite.com/sitemap.xml
Does anyone know how to open all the links inside a website's sitemap.xml?
Thanks,
Dominic
It seems that wget can't parse XML. So, you'll have to extract the links manually. You could do something like this:
wget --quiet http://www.mysite.com/sitemap.xml --output-document - | egrep -o "https?://[^<]+" | wget -i -
I learned this trick here.
While this question is older, Google sent me here.
I finally used xsltproc to parse the sitemap.xml:
sitemap-txt.xsl:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:sitemap="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>
<xsl:template match="/">
<xsl:for-each select="sitemap:urlset/sitemap:url">
<xsl:value-of select="sitemap:loc"/><xsl:text>
</xsl:text>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
Using it (in this case from a cache-prewarming script, so the retrieved pages are not kept ("-o /dev/null") and only some statistics are printed ("-w ....")):
curl -sS http://example.com/sitemap.xml | xsltproc sitemap-txt.xsl - | xargs -n1 -r -P4 curl -sS -o /dev/null -w "%{http_code}\t%{time_total}\t%{url_effective}\n"
(Rewriting this to use wget instead of curl is left as an exercise for the reader ;-) )
What this does is:
Retrieve sitemap.xml.
Parse the sitemap and output the URL list as text (one URL per line).
Use xargs to call curl on each URL, with 4 requests in parallel.
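For completeness, a possible wget variant of the same pipeline (an untested sketch; wget has no counterpart to curl's -w statistics, so those are lost):
wget -qO- http://example.com/sitemap.xml | xsltproc sitemap-txt.xsl - | xargs -n1 -r -P4 wget -q -O /dev/null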
You can use one of the sitemapping tools. Try Slickplan. It has a site crawler option, and by using it you can import the structure of an existing website and create a visual sitemap from it. Then you can export it to the Slickplan XML format, which contains not only links but also SEO metadata, page titles (product names), and a bunch of other helpful data.

Concatenation of awk outputs

I'm using regex to parse nmap output. I want the IP addresses that are up, together with the corresponding open ports. For now I have a very naive method of doing that:
awk '/^Scanning .....................ports]/ {print substr ($2,1,15);}' results.txt
awk '/^[0-9][0-9]/ {print substr($1,1,4);}' results.txt | awk -f awkcode.awk
where awkcode.awk contains the code to extract numbers out of the substring.
The first line prints all the IPs that are up, and the second gives me the ports. My problem is that I want them mapped to each other. Is there any way to do that? Even a sed script would do.
You will probably find using the "Grepable" output format to be easier to parse:
nmap -oG - -v -A 192.168.0.1-254
Sample output:
Host: 192.168.1.1 (foo) Status: Up
Host: 192.168.1.1 (foo) Ports: 22/open/tcp//ssh//OpenSSH 5.1p1 Debian 6ubuntu2 (protocol 2.0)/, 80/open/tcp//http//Apache httpd 2.2.12 ((Ubuntu))/, 139/open/tcp//netbios-ssn//Samba smbd 3.X (workgroup: BAR)/, 445/open/tcp//netbios-ssn//Samba smbd 3.X (workgroup: BAR)/, 7100/open/tcp//font-service//X.Org X Font Server/ Ignored State: closed (995)
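With that format, a small awk script can do the IP-to-port mapping directly. A sketch, written against the sample lines above and not tested against other nmap versions:
nmap -oG - -v 192.168.0.1-254 | awk '
/Ports:/ {
    ip = $2                      # the second whitespace field is the address
    sub(/.*Ports: /, "")         # keep only the comma-separated port list
    n = split($0, entry, ", ")
    for (i = 1; i <= n; i++) {
        split(entry[i], f, "/")  # each entry is port/state/protocol/...
        if (f[2] == "open") print ip, f[1]
    }
}'
This prints one "address port" pair per open port, e.g. "192.168.1.1 22".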
Or if you have an XML parser, use the XML output format:
nmap -oX - -v -A 192.168.0.1-254
Sample output:
<?xml version="1.0" ?>
<?xml-stylesheet href="file:///usr/share/nmap/nmap.xsl" type="text/xsl"?>
<!-- Nmap 5.00 scan initiated Sun Jun 13 08:11:32 2010 as: nmap -oX - -v -A 192.168.1.1-254 -->
<nmaprun scanner="nmap" args="nmap -oX - -v -A 192.168.1.1-254" start="1276434692" startstr="Sun Jun 13 08:11:32 2010" version="5.00" xmloutputversion="1.03">
...
...
<host starttime="1276434692" endtime="1276434775"><status state="up" reason="syn-ack"/>
<address addr="192.168.1.1" addrtype="ipv4" />
<hostnames><hostname name="foo" type="PTR" /></hostnames>
<ports><extraports state="closed" count="995">
<extrareasons reason="conn-refused" count="995"/>
</extraports>
<port protocol="tcp" portid="22"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="ssh" product="OpenSSH" version="5.1p1 Debian 6ubuntu2" extrainfo="protocol 2.0" ostype="Linux" method="probed" conf="10" /><script id="ssh-hostkey" output="1024 1a:2b:4d:5e:6f:00:f1:e2:d3:c4:b5:a6:e2:f3:fe (DSA)
2048 fa:eb:dc:cd:be:af:a0:75:65:8a:52:7d:11:22:33:44 (RSA)" /></port>
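If an XPath tool such as xmlstarlet happens to be installed (an assumption; any XML parser will do), the same host-to-port mapping falls out of the XML output directly, for example:
nmap -oX scan.xml -v -A 192.168.0.1-254
# print "address port" for every open port
xmlstarlet sel -t -m '//host/ports/port[state/@state="open"]' \
    -v 'ancestor::host/address/@addr' -o ' ' -v '@portid' -n scan.xml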