I am working on a LoadRunner WinSock script.
The buffer that should be sent contains the special character "~"; when LoadRunner sends the request, the "~" goes out as "~7e".
Request to be sent - FBE442757F3FA860~1cFFFF0222050017200181
Request that is sent to the application - FBE442757F3FA860~7e1cFFFF0222050017200181
How can we accommodate special characters in LoadRunner WinSock?
See lrs_set_send_buffer() for how you can set the contents of any buffer. I defer to your C programming expertise for constructing the buffer with embedded hex characters.
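A minimal sketch of that approach, assuming the standard LoadRunner WinSock functions lrs_set_send_buffer() and lrs_send(), with a socket already created under the descriptor "socket0" (the descriptor and buffer names are illustrative):

// LoadRunner WinSock Action() sketch: build the buffer in C so the
// tilde is sent as the single byte 0x7E instead of being re-escaped.
Action()
{
    // "\x7e" is the tilde byte. The literal is split in two because a
    // C hex escape is greedy: "\x7e1c" would try to parse 7E1C as one
    // escape instead of the byte 0x7E followed by the text "1c".
    char buf[] = "FBE442757F3FA860\x7e" "1cFFFF0222050017200181";

    // Pass an explicit length (sizeof minus the trailing NUL) so the
    // buffer is sent verbatim, even if it ever contains 0x00 bytes.
    lrs_set_send_buffer("socket0", buf, sizeof(buf) - 1);
    lrs_send("socket0", "buf0", LrsLastArg);

    return 0;
}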
I am trying to implement an FTP server in C. However, as I send bytes to the client, the client automatically modifies the bytes in a certain way.
For example, I notice that the client changes 0D0A to 0A (CR+LF to LF). There are some other mysterious changes as well.
Is there a way that from the server side I can direct the client to not change any bytes it received? Or do I have to modify the bytes that I send in order to adjust to the client's conventions?
The FTP program found on Unix systems has an "ascii" mode (the default) and a "binary" mode, which must be used for binary transfers.
Possibly the issue you are having is due to the use of ASCII mode, which translates line endings (hence CR+LF arriving as LF).
Try using the "binary" command to switch modes.
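Under the hood, the client's "binary" command sends "TYPE I" (image) on the control connection, and "ascii" sends "TYPE A", per RFC 959; in image mode the bytes must pass through untouched. A minimal sketch of handling this on your server side (handle_type and the enum are illustrative names, not from your code):

#include <stdio.h>
#include <string.h>

/* RFC 959 transfer types: "TYPE A" = ASCII (line-ending translation
 * allowed), "TYPE I" = image/binary (bytes sent verbatim). */
typedef enum { TYPE_ASCII, TYPE_IMAGE } xfer_type;

/* Parse the argument of a "TYPE x" control-channel command and reply
 * on the control connection (ctrl). */
static void handle_type(FILE *ctrl, const char *arg, xfer_type *type)
{
    if (strcmp(arg, "I") == 0) {
        *type = TYPE_IMAGE;   /* binary: no byte translation */
        fprintf(ctrl, "200 Type set to I.\r\n");
    } else if (strcmp(arg, "A") == 0) {
        *type = TYPE_ASCII;   /* ASCII: CR LF conventions apply */
        fprintf(ctrl, "200 Type set to A.\r\n");
    } else {
        fprintf(ctrl, "504 Command not implemented for that parameter.\r\n");
    }
}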
I have a mail command that looks something like this:
sh ~/filter.sh ~/build/site_build.err | mail -s "Site updated." me\#site.com
The bash script has a few commands, mostly grep, to filter the stderr. It sends the filtered content to me when our build process has finished.
It used to send the file contents in the message body; however, now it sends them as an attachment, which I do not want.
When I delete most of the text it sends to me in the message body once again, so I've reasoned that this has to do with the message size -- is this correct?
Anyway, how can I prevent Unix from sending the message contents as an attachment, no matter the circumstances?
Thanks!
Still unsure, as many things could cause mail to apparently send its input as an attachment. But in many Linux distributions, mail is an alias for heirloom mailx, which is known to encode its input if it contains any non-printable characters.
As soon as it finds non-standard characters, it adds the following headers to the mail:
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64
and the text of the mail is effectively base64-encoded. It is not a true attachment, but many mail readers treat such mails as an empty body with an unnamed attached file.
If your LANG environment variable declares a locale that can use non-7-bit characters (éèûôüö ...), mail seems to be clever enough to declare an extended charset (ISO-8859-1 for the fr locale) and to do quoted-printable encoding. So even with (at least West European) characters beyond 7-bit ASCII in the message, provided there are no control characters, the mail should be sent normally.
The alternative would be to filter all non-printable characters from the output of filter.sh, and the last-resort solution would be to skip mail and use sendmail directly.
(Ref: an answer of mine to another post, but that one concerned Java.)
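For the filtering alternative, a tiny standalone filter dropped into the pipeline would do. This is just a sketch (the program name strip_nonprint is made up), to be used as sh ~/filter.sh ... | ./strip_nonprint | mail ...:

#include <stdio.h>

/* Copy stdin to stdout, dropping every byte that is not printable
 * 7-bit ASCII, a tab, or a newline, so mail sees plain text and has
 * no reason to switch to base64. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF) {
        if ((c >= 0x20 && c < 0x7f) || c == '\n' || c == '\t')
            putchar(c);
    }
    return 0;
}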
A TCP client sends data byte by byte, so how can the server tell where one message ends and the next one begins?
One way is to designate a special character as a delimiter, but that character could also appear within the message itself, causing confusion.
Is there a better way?
If the message is binary, delimiter-based framing with a special character is not possible, since any byte value can occur in the payload. Tag-Length-Value (TLV) encoding is better suited for this.
For example:
+--------+----------+----------------+
| Tag | Length | Content |
| 0x0001 | 0x000C | "HELLO, WORLD" |
+--------+----------+----------------+
In addition to that, the tag field lets you define more than one message type.
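A minimal encoder for that layout might look like this (the 2-byte big-endian tag and length fields are assumptions read off the table above):

#include <stdint.h>
#include <string.h>

/* Write one TLV record into out: 2-byte big-endian tag, 2-byte
 * big-endian length, then the raw content bytes.
 * Returns the number of bytes written, or 0 if out is too small. */
size_t tlv_encode(uint8_t *out, size_t cap,
                  uint16_t tag, const uint8_t *val, uint16_t len)
{
    if (cap < (size_t)4 + len)
        return 0;
    out[0] = (uint8_t)(tag >> 8);  out[1] = (uint8_t)(tag & 0xff);
    out[2] = (uint8_t)(len >> 8);  out[3] = (uint8_t)(len & 0xff);
    memcpy(out + 4, val, len);
    return 4 + (size_t)len;
}

Encoding tag 0x0001 with the 12-byte content "HELLO, WORLD" reproduces the record in the table.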
One possible way: before sending the actual message, send the number of bytes in it. Once the receiving side has received that many bytes, it knows the next message begins.
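A sketch of the receiving side under that scheme, assuming a blocking POSIX socket and a 4-byte big-endian length prefix (both are choices made for the example, not requirements):

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* recv() may return fewer bytes than requested, so loop until exactly
 * len bytes have arrived. Returns 0 on success, -1 on error/close. */
static int recv_all(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message: a 4-byte big-endian length,
 * then that many payload bytes. */
static int recv_message(int fd, uint8_t *buf, size_t cap, size_t *out_len)
{
    uint8_t hdr[4];
    if (recv_all(fd, hdr, 4) != 0)
        return -1;
    uint32_t len = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16)
                 | ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
    if (len > cap)
        return -1;   /* message larger than the caller's buffer */
    if (recv_all(fd, buf, len) != 0)
        return -1;
    *out_len = len;
    return 0;
}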
Check out the implementation used in networkComms.net, the open source communication framework. In particular, IncomingPacketHandleHandOff() on line 892 here.
It guarantees that the first byte received specifies the size of a packet header (less than 255 bytes). Once enough bytes have been received to rebuild the header, the header can be inspected to determine the remaining size to be received (the data section). If you have more incoming bytes than the expected header and data sections, you look at the very first of the surplus bytes and start over.
Using bookmark characters is what is done at the base level of the network stack, but it must be implemented carefully to avoid further complications.
If you wish to use a character both as the end-of-message marker and as part of the message, you need to use an escape sequence.
For example: use the character '$' to end the message, and '%' to escape,
i.e.
%$ -> $
%% -> %
then use '$' to end the message
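A sketch of that scheme on the sending side (the function name escape_message is illustrative):

#include <string.h>

/* Escape a payload using the scheme above: '%' prefixes both special
 * characters, and a bare '$' terminates the message.
 * dst must have room for 2 * strlen(src) + 2 bytes. */
void escape_message(char *dst, const char *src)
{
    while (*src) {
        if (*src == '$' || *src == '%')
            *dst++ = '%';    /* escape the special character */
        *dst++ = *src++;
    }
    *dst++ = '$';            /* unescaped '$' = end of message */
    *dst = '\0';
}

For instance, the payload "90% of $5" goes on the wire as "90%% of %$5$"; the receiver reverses the mapping and stops at the bare '$'.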
Alternatively, send the number of bytes to be received at the start of the message (or message chunk, if you do not know the length of the complete message at that point).
In the example for Zend_Mail on http://framework.zend.com/manual/en/zend.mail.attachments.html they use ENCODING_8BIT, but searching for what that might be sends me to http://msdn.microsoft.com/en-us/library/ms526992%28EXCHG.10%29.aspx, where (and this sounds logical to me) it is explained that 8-bit encoding does not make sense for email.
Edit:
When I use this encoding for a mail with an attachment, I receive the mail with a corrupted attachment in my mail software (Thunderbird)
In which cases does it make sense to use ENCODING_8BIT?
As everybody said, ENCODING_8BIT represents the Content Transfer Encoding.
Basically, 8BITMIME is used for internationalization. It uses an 8-bit character set and therefore allows you to send any character supported in the UTF-8 charset.
In general, non-MIME mailers send 8-bit data but do not include any MIME headers to mark the message as 8-bit data. MIME mailers should cope with this without any problems. [source]
So basically there is not really a case where it makes sense to use ENCODING_8BIT over another encoding, since email in UTF-8 is the standard today. Also, note that most MTAs (Message Transfer Agents, such as Postfix) automatically force the encoding to 8BITMIME (UTF-8).
Here is a good resource about the 8BITMIME encoding.
The 8BITMIME extension has two effects in practice:
The client will avoid Q-P conversion.
The client may add extra information at the end of a MAIL request: a space followed by either "BODY=7BIT" or "BODY=8BITMIME".
Zend_Mime::ENCODING_8BIT sets the Content-Transfer-Encoding.
The Content-Transfer-Encoding defines methods for representing binary data in ASCII text format.
The use of Zend_Mime::ENCODING_8BIT in the example is a bug.
For sending attachments you should always use Zend_Mime::ENCODING_BASE64.
Not for email, but for attachments. If you take a look at RFC 2045, page 7:
RFC2045
"Binary data" refers to data where any
sequence of octets whatsoever is
allowed.
The book "Designing Embedded Hardware" in the chapter "9.3. Old Faithful: RS-232C" mentions that emails are still sent in 7bit char set because of RS-232C:
It's also not unheard of to see RS-232C systems still using 7-bit data frames (another leftover from the '60s), rather than the more common 8-bit. In fact, this is one of the reasons why you'll still see email being sent on the Internet limited to a 7-bit character set, just in case the packets happen to be routed via a serial connection that supports only 7-bit transmissions.
How can I confirm the observation?
Check out the spec. The original RFC 822, for ARPA Internet Text Messages, explicitly states:
A message consists of header fields and, optionally, a body. The body is simply a sequence of lines containing ASCII characters.
Since ASCII is 7-bit, voila.
Note, however, that there are a whole bunch of additions to that original spec, all the MIME extensions, which allow message header extensions for non-ASCII text.
The Quoted-printable MIME encoding is specifically designed to encode 8-bit data in 7-bit characters. This encoding is widely used to encode email.
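The core rule is small enough to sketch. This simplified encoder (it ignores QP's line-length and trailing-whitespace rules; see RFC 2045 section 6.7 for the full spec) shows how every byte ends up in the 7-bit range:

#include <stdio.h>

/* Emit one byte in simplified quoted-printable form: printable ASCII
 * other than '=' passes through, common whitespace is kept literal,
 * and everything else becomes =XX with two uppercase hex digits. */
static void qp_putc(unsigned char c)
{
    if ((c >= 33 && c <= 126 && c != '=')
        || c == ' ' || c == '\t' || c == '\n')
        putchar(c);
    else
        printf("=%02X", c);
}

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        qp_putc((unsigned char)c);
    return 0;
}

For example, an ISO-8859-1 'é' (byte 0xE9) is emitted as the three ASCII characters "=E9", so the whole stream stays 7-bit clean.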
Note also that the text you quoted says "in case the packets happen to be routed via a serial connection", which is misleading, especially in the context of IP packets. IP packets assume an 8-bit data path and cannot be sent directly over a 7-bit RS-232 link without additional encoding (and then it's not a 7-bit data path anymore, it's 8-bit).
The systems that were restricted to 7 bits were already old when email first became popular. The chances that you will find one today approach zero.
Since certain characters have special meaning to email programs (most notably the end-of-line character), it still makes sense to limit the character set.