I have problems downloading attachments with JavaMail when the file name contains a space and has no extension.
The cause seems to be the content-type of the BodyPart. For the files example.pdf and example I get content-types of APPLICATION/PDF; name=example.pdf and APPLICATION/OCTET-STREAM; name=example, respectively, but for the file example 2 I get just APPLICATION/OCTET-STREAM; with no name parameter. This makes it impossible for me to retrieve the file with JavaMail.
This is very strange to me. Does anybody know why this happens, or a workaround?
Thanks
I don't know what "problems downloading attachments" means exactly. Is there an exception thrown?
If the message is properly formatted, spaces in the filename shouldn't make any difference.
If the message isn't properly formatted (e.g., the "name" parameter isn't quoted when it contains spaces), you may need to set some properties to have JavaMail work around that bug in the sending program. See the javadocs for the javax.mail.internet package for details.
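For reference, those workarounds are enabled through System properties read by the JavaMail MIME parser. A minimal sketch, assuming property names as documented in the javax.mail.internet javadocs; the Session part is only indicated in comments since it needs the JavaMail jar:

```java
public class RelaxedMimeSetup {
    public static void main(String[] args) {
        // Accept Content-Type parameters whose values are not properly
        // quoted, e.g.:  APPLICATION/OCTET-STREAM; name=example 2
        System.setProperty("mail.mime.parameters.strict", "false");

        // Tolerate unencoded filenames containing spaces, as produced
        // by some mailers
        System.setProperty("mail.mime.applefilenames", "true");

        // Any JavaMail Session created after this point picks up the
        // relaxed parsing, e.g.:
        //   Session session = Session.getDefaultInstance(new java.util.Properties());
        //   String name = bodyPart.getFileName(); // should then return "example 2"

        System.out.println(System.getProperty("mail.mime.parameters.strict"));
    }
}
```

The properties must be set before the first Session is created, because the parser reads them lazily.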
I have an OWL file generated by Protege. Some class and property names contain Chinese words like "苹果".
The file is fine when I just open it. However, when I use OntologyGraph to load the OWL file and iterate over its OntologyClass instances with foreach, I get errors.
I want to ask: does dotNetRDF support Chinese? How can I set the encoding in dotNetRDF?
Thanks for answering!
The problem might be with the file encoding, similar to the one reported in this question.
A Protege .owl file is an XML file that should contain a first line that specifies what the file encoding is. If that line is either missing or specifies an incorrect encoding for the file then dotNetRDF will potentially misread the file, leading to errors.
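For reference, a correctly declared file begins with a line like this (assuming the file really is saved as UTF-8):

```xml
<?xml version="1.0" encoding="UTF-8"?>
```

If the file is actually saved in a different encoding (e.g. GB2312 for Chinese text), the declaration must name that encoding instead.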
Alright. I thought this problem had something to do with my Rails app, but it seems to involve the deeper workings of email attachments.
I have to send a CSV file from my Rails app to a warehouse that fulfills orders placed in my store. The warehouse specifies the CSV format, and inconveniently the header line of the CSV file is very long (1000+ characters).
I was getting a line break in the header line of the CSV file when I received the test emails and couldn't figure out what put it there. Some googling finally showed the reason: attached files have a line length limit of around 1000 characters. Why? I don't know. It seems ridiculous, but I still have to send this CSV file somehow.
I tried manually setting the MIME type of the attachment to text/csv, but that didn't help. Does anybody know how to solve this problem?
Some relevant google results : http://www.google.com/search?client=safari&rls=en&q=csv+wrapped+990&ie=UTF-8&oe=UTF-8
update
I've tried encoding the attachment in base64 like so:
attachments['205.csv'] = {:data=> ActiveSupport::Base64.encode64(#string), :encoding => 'base64', :mime_type => 'text/csv'}
That doesn't seem to have made a difference. I'm receiving the email with a me.com account via Sparrow for Mac. I'll try using gmail's web interface.
This seems to be because the SendGrid mail server modifies the attachment content. If you send an attachment with a plain-text MIME type (e.g. text/csv) it will wrap the content every 990 characters, as you observed. I think this is related to RFC 2045/RFC 821:
Content-Transfer-Encoding Header Field
Many media types which could be usefully transported via email are
represented, in their "natural" format, as 8bit character or binary
data. Such data cannot be transmitted over some transfer protocols.
For example, RFC 821 (SMTP) restricts mail messages to 7bit US-ASCII
data with lines no longer than 1000 characters including any trailing
CRLF line separator.
It is necessary, therefore, to define a standard mechanism for
encoding such data into a 7bit short line format. Proper labelling
of unencoded material in less restrictive formats for direct use over
less restrictive transports is also desireable. This document
specifies that such encodings will be indicated by a new "Content-
Transfer-Encoding" header field. This field has not been defined by
any previous standard.
If you send the attachment using base64 encoding instead of the default 7-bit the attachment remains unchanged (no added line breaks):
attachments['file.csv']= { :data=> ActiveSupport::Base64.encode64(#string), :encoding => 'base64' }
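To see why base64 sidesteps the limit: RFC 2045 base64 output is wrapped at 76 characters by the encoder itself, and decoding restores the original bytes exactly, so no line of the decoded content is ever touched. A quick illustration, in Java rather than Ruby, purely to show the mechanics:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MimeWrapDemo {
    public static void main(String[] args) {
        // Simulate a CSV header line far longer than SMTP's ~1000-char limit
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 400; i++) sb.append("column_").append(i).append(',');
        String header = sb.toString();

        // The MIME encoder wraps its own output at 76 characters (RFC 2045),
        // so every transmitted line stays well under the limit
        String encoded = Base64.getMimeEncoder()
                .encodeToString(header.getBytes(StandardCharsets.UTF_8));
        for (String line : encoded.split("\r\n")) {
            if (line.length() > 76) throw new AssertionError("over-long line");
        }

        // Decoding restores the header exactly -- no inserted line breaks
        String decoded = new String(Base64.getMimeDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(decoded.equals(header)); // prints true
    }
}
```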
Could you have newlines in your data that would cause this? Check and see if
csv_for_orders(orders).lines.count == orders.count
If so, a quick/hackish fix might be changing where you call values_for_line_item(item) to values_for_line_item(item).map{|c| c.gsub(/(\r|\n)/, '')} (same for the other line_item calls).
I'm having trouble finding out what's wrong with the JSON string I receive from http://www.hier-bin-ich-koenig.de/json/events so that I can parse it. It doesn't validate, at least not with JSONLint, but I can't tell where the error is. So of course SBJson is unhappy too.
I also don't understand where that [Ô] is coming from. I'd love to know whether it comes from the content itself or from the code that converts the content into JSON. Being able to find where the validation error is would be great.
The exact error sent by the tokeniser is:
JSONValue failed. Error is: Illegal start of token [Ô]
Your page starts with a UTF-16 BOM (byte order mark), followed by a UTF-8 encoded document. You should drop the BOM entirely; a BOM is not recommended for UTF-8.
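If you can't change the server, stripping the BOM from the raw bytes before parsing also works. A sketch in Java; the byte patterns are the same in any language, so in SBJson you would drop the same leading bytes from the NSData before parsing:

```java
import java.util.Arrays;

public class BomStripper {
    // Remove a UTF-8, UTF-16BE, or UTF-16LE byte order mark, if present
    public static byte[] stripBom(byte[] data) {
        if (data.length >= 3
                && data[0] == (byte) 0xEF && data[1] == (byte) 0xBB && data[2] == (byte) 0xBF) {
            return Arrays.copyOfRange(data, 3, data.length);        // UTF-8 BOM
        }
        if (data.length >= 2
                && ((data[0] == (byte) 0xFE && data[1] == (byte) 0xFF)      // UTF-16BE
                 || (data[0] == (byte) 0xFF && data[1] == (byte) 0xFE))) {  // UTF-16LE
            return Arrays.copyOfRange(data, 2, data.length);
        }
        return data; // no BOM, leave untouched
    }

    public static void main(String[] args) {
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, '{', '}'};
        System.out.println(new String(stripBom(withBom))); // prints {}
    }
}
```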
I had the same problem when parsing a JSON string generated by a PHP page. I resolved it with Notepad++:

1. Open the PHP file.
2. Menu -> Encoding -> "Encode in UTF-8 without BOM".
3. Save.

That's it.
I'm working with Perl. I have data saved in the database that contains characters such as “
and I want to escape those characters to avoid a "malformed URI sequence" error on the client side. This error seems to happen in Firefox only. The fix I found while googling is to not use decodeURI, yet I need decodeURI for other characters to display correctly.
Any help? uri_escape does not seem to be enough on the server side.
Thanks in advance.
Details:
In perl I'm doing the following:
print "<div style='display:none;' id='summary_".$note_count."_note'>".uri_escape($summary)."</div>";
and on the JavaScript side I want to read from this div and place its content elsewhere like this:
getObj('summary_div').innerHTML= unescape(decodeURI(note_obj.innerHTML));
where note_obj is the hidden div in which Perl saved the summary.
When I remove decodeURI the problem is solved and I don't get the malformed URI sequence error in JavaScript. Yet I need decodeURI for other characters.
This issue is reproducible in Firefox and IE7.
You can try the CGI module and perform
$uri = CGI::escape($uri);
Maybe it depends on the context in which you try to escape the URI.
This worked fine for me in a CGI context.
After you added the details, I can suggest:
print "<div style='display:none;' id='summary_".$note_count."_note'>".CGI::escape($summary)."</div>";
URL escaping won't help you here -- that's for escaping URLs, not escaping text in HTML. What you really want is to encode the string when you output it. See the Encode.pm built-in library. Make sure that you get your charset statements right in the HTTP headers: "Content-Type: text/html; charset=UTF-8" or something like that.
If you're unlucky, you may also have to decode the string as it comes out of the database. That depends on the database driver and the encoding...
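The "malformed URI sequence" error is typically exactly this mismatch: JavaScript's decodeURI assumes the percent-escapes encode UTF-8 bytes. A small Java sketch of the difference; Perl's uri_escape likewise escapes whatever bytes it is handed:

```java
import java.net.URLEncoder;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        String curlyQuote = "\u201C"; // the “ character

        // Escaped as UTF-8: three bytes, which decodeURI can reverse
        System.out.println(URLEncoder.encode(curlyQuote, "UTF-8"));        // %E2%80%9C

        // Escaped as windows-1252: a lone 0x93 byte -- decodeURI sees an
        // invalid UTF-8 sequence and throws "malformed URI sequence"
        System.out.println(URLEncoder.encode(curlyQuote, "windows-1252")); // %93
    }
}
```

So the fix is to make sure the server escapes UTF-8 bytes (or skips URI escaping entirely and HTML-encodes instead), not to drop decodeURI on the client.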
I am using a Perl script to POST to a Google App Engine application. I post a text file containing some XML using the -F option.
http://www.cpan.org/authors/id/E/EL/ELIJAH/bget-1.1
There is a version 1.2 as well; I already tested it and get the same issue. The post looks something like this:
Host: foo.appspot.com
User-Agent: lwp-request/1.38
Content-Type: text/plain
Content-Length: 202
<XML>
<BLAH>Hello World</BLAH>
</XML>
I have modified the example, so the 202 isn't right; don't worry about that. On to the problem: the Content-Length matches the number of bytes in the file, yet unless I manually increase the Content-Length, not all of the file is sent and a few bytes get truncated. The number of bytes truncated is not the same for files of different sizes. I used the -r option on the script and can see that it is sending the whole file, but on Google App Engine self.request.body shows that not everything is received. I think the solution is to get the right number for Content-Length, and apparently it isn't as simple as the number of bytes in the file, or else the Perl script is mangling it somehow.
Update:
Thanks to erickson for the right answer. I used printf to append characters to the end of the file, and the truncation was always exactly the number of lines in the file. I suppose I could figure out what is being added by iterating through every character on the server side, but it's not worth it. This wasn't even answered over on the Google Groups set up for App Engine!
Is the number of extra bytes you need equal to the number of lines in the file? I ask because it's possible that carriage returns are somehow being introduced but not counted.
I've run into similar problems before.
I assume you're using the length() function to determine the size of the file? If so, it's likely that the data you're posting is UTF-8 encoded instead of ASCII.
To get the correct count you may need to add a "use bytes;" pragma to the top of your script, or wrap the length call in a block:
my $size;
do { use bytes; $size = length($file_data); };
From the perlfunc man page:
"Note the characters: if the EXPR is in Unicode, you will get the number of characters, not the number of bytes."
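The character-vs-byte distinction is easy to demonstrate; here in Java rather than Perl, since the arithmetic is the same in any language:

```java
import java.nio.charset.StandardCharsets;

public class LengthDemo {
    public static void main(String[] args) {
        String s = "caf\u00E9"; // "café": 'é' is one character but two UTF-8 bytes

        System.out.println(s.length());                                // 4 characters
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 5 bytes

        // Content-Length must be the byte count, so using the character
        // count under-reports by one byte per multi-byte character --
        // which matches the truncation described above
    }
}
```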
How are you getting the number of bytes? By looking at the size of the file on the filesystem?
You can use the -s file test operator to get the size of the file.
Or, if you want more detail, you can use File::stat.