% character encoding with URIEncoding setting on Tomcat

Is the % character encoded to '%25' with the URIEncoding="utf-8" setting in server.xml?
I have a file named abc%.wav on the server and try to access it via a hyperlink, but the server responds with 404 Not Found.
When I encoded the file name in the Java class file, it worked fine.
So I am curious whether the % character is encoded by the setting above or not.
Thanks.
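For reference, the connector's URIEncoding attribute only tells Tomcat which character set to use when it decodes the bytes of an incoming, already percent-encoded URI; it does not encode anything itself, so the hyperlink still has to contain %25 for a literal % in the file name. A minimal illustration of the expected form (shown in Python purely as an example; the encoding rule itself is language-agnostic):
from urllib.parse import quote

# A literal '%' in a path segment must be sent as '%25';
# URIEncoding only controls how the server decodes it afterwards.
print(quote("abc%.wav"))  # -> abc%25.wav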

Related

How to specify encoding in Alembic migrations

I am working to add Alembic migrations to a legacy project. I am hoping to execute some raw SQL, including some insert statements containing Unicode text that I want encoded as UTF-8, but I am getting a UnicodeDecodeError. To reproduce this error, I created this example:
def upgrade():
    op.execute("SELECT '𝔥𝔢𝔩𝔩𝔬'")
When I run this migration, I get:
File "virtual_env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1263, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "virtual_env/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
self, multiparams, params, execution_options
File "virtual_env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1462, in _execute_clauseelement
cache_hit=cache_hit,
File "virtual_env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1678, in _execute_context
e, util.text_type(statement), parameters, None, None
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 8: ordinal not in range(128)
I confirmed that my Postgres database encoding is set to UTF-8. I also tried to work out whether there are execution_options parameters for setting the encoding, but I was not able to find any. I also tried abandoning op.execute() and creating my own engine instance with sa.create_engine(url, encoding='utf-8'), which mysteriously still gave me an encoding error and still wanted to use ascii.
Which Alembic, SQLAlchemy or psycopg2 subsystem is expecting ascii, and is there a way to change that expectation, either in Alembic's configuration or for a specific migration?
I'm happy to dig into the internal APIs and use a hacky solution, but I would hope there is a straightforward way to do this that I have just not encountered in a couple of hours of searching the documentation.
Note: this project uses Python 2.7.16, and despite my efforts I do not have authorization to port it to Python 3 yet.
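One workaround worth trying, assuming the failure comes from op.execute() receiving a plain byte string that SQLAlchemy then coerces to text with the default ascii codec under Python 2, is to pass the statement as a unicode literal so that no implicit decode is needed. A sketch of the migration with that change:
# -*- coding: utf-8 -*-
from alembic import op

def upgrade():
    # Passing a unicode object (u"...") means SQLAlchemy does not have to
    # decode a byte string with the ascii codec on Python 2.
    op.execute(u"SELECT '𝔥𝔢𝔩𝔩𝔬'")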

Cannot access FTP directory with CP1250/CP852/UTF-8 encoding

I am trying to read in some files from the following directory structure:
/jc/06 Önéletrajzok/Profession/Előszűrés sablonok név szerint
But for some strange reason I cannot even enter the upper-level directories.
I have already tried PHP, Python 3.6 and Ruby, but without much luck. With PHP and Python I can at least CWD() down to the /jc/06 Önéletrajzok/Profession part.
Here is my Python code for reference:
from ftplib import FTP
ftp = FTP('hostname')
ftp.login('username','pwd')
ftp.cwd('jc') # Just for demonstration purposes as step by step
ftp.cwd('06 Önéletrajzok')
ftp.cwd('Profession')
print(ftp.nlst()[2]) # Which gives: 'ElÅ\x91szűrés sablonok név szerint'
# But when I am trying:
ftp.cwd('ElÅ\x91szűrés sablonok név szerint')
# Or either:
ftp.cwd('Előszűrés sablonok név szerint')
# It gives:
# UnicodeEncodeError: 'latin-1' codec can't encode character '\u0151' in position 6: ordinal not in range(256)
# So I am trying encoding CP1250 or CP852 (for Hungarian)
dir = 'Előszűrés sablonok név szerint'.encode('cp852') # which gives: b'El\x8bsz\xfbr\x82s sablonok n\x82v szerint'
ftp.cwd(dir.decode('utf-8'))
# and it gives the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 2: invalid start byte
So I am starting to give up on this one; I don't know how to access those files. The directory structure was created by Windows laptops accessing a Synology file server.
I have already tried with ftp.encoding = "utf-8" too.
Any ideas?
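One more thing that might be worth trying, assuming the server really stores the names as UTF-8 (the Å\x91 in the listing is exactly what the UTF-8 bytes of ő look like when decoded as Latin-1): set ftp.encoding before any command is sent, and explicitly ask the server to switch to UTF-8 (RFC 2640). A sketch, reusing the host, credentials and directory names from the question:
from ftplib import FTP

ftp = FTP()
ftp.encoding = 'utf-8'   # set before connecting so every command and reply uses UTF-8
ftp.connect('hostname')
ftp.login('username', 'pwd')
try:
    # Some servers only accept UTF-8 path names after an explicit OPTS command.
    ftp.sendcmd('OPTS UTF8 ON')
except Exception:
    pass  # not all servers support OPTS UTF8
ftp.cwd('jc/06 Önéletrajzok/Profession')
ftp.cwd('Előszűrés sablonok név szerint')
print(ftp.nlst())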

Problems with non-"UTF-8" file collection using flume - Spooldir type

My Flume spool directory contains non-UTF-8 files.
So I get a java.nio.charset.MalformedInputException error when I try to collect them.
Changing the encoding option in the .conf file also causes an error.
And I have to use the spooldir source type.
How can I collect non-UTF-8 files?
The encoding of our log files was Latin5 (Turkish).
I fixed it by adding the line below to the conf file:
AGENTNAME.sources.SOURCENAME.inputCharset = ISO-8859-9
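For context, a minimal spooldir source definition with that setting might look like the lines below (the agent and source names follow the placeholders above, while CHANNELNAME and the spool directory path are made up for illustration):
AGENTNAME.sources.SOURCENAME.type = spooldir
AGENTNAME.sources.SOURCENAME.spoolDir = /var/log/app-spool
AGENTNAME.sources.SOURCENAME.inputCharset = ISO-8859-9
AGENTNAME.sources.SOURCENAME.channels = CHANNELNAME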

Start glassfish server on different port

I have created a new domain, ac, and defined a different port in domain.xml, i.e. 11305. But when I try to start the domain, it gives me the following exception:
com.sun.enterprise.universal.xml.MiniXmlParserException: "Xml Parser Error: javax.xml.stream.XMLStreamException: ParseError at
[row,col]:[253,34]
Message: Open quote is expected for attribute "{1}" associated with an element type "port".
Here is the command:
./asadmin start-domain ac
Here is the relevant part of the domain.xml file:
<network-listeners>
<network-listener port=“11305” protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool"></network-listener>
<network-listener port="33389" protocol="http-listener-2" transport="tcp" name="http-listener-2" thread-pool="http-thread-pool"></network-listener>
<network-listener port="33356" protocol="admin-listener" transport="tcp" name="admin-listener" thread-pool="admin-thread-pool"></network-listener>
</network-listeners>
Where am I going wrong?
The XML config is malformed. Check that the port value is quoted and that the quote characters are the right symbol (try copying quotes from other places in the XML).
Update: the first and second quotes on the <network-listener> line with port 11305 in your domain.xml excerpt are non-standard (curly) quotes. Compare the quotes around 11305 and the quotes around http-listener-1: they are different. The quotes around http-listener-1 are the proper ones, and you should use those.
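For illustration, that line with plain ASCII double quotes around the port value (everything else unchanged) should look like this:
<network-listener port="11305" protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool"></network-listener>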

Mule ESB execute script from Groovy with unicode params

I have this little chain of components in my Mule ESB project:
<set-payload value="Получена заявка ##[sessionVars['ticketID']]" doc:name="Set SMS Text"/>
<scripting:transformer doc:name="Send SMS" ignoreBadInput="true">
<scripting:script engine="Groovy"><![CDATA[
def command = ["/tmp/call.sh", message.payload]
def proc = command.execute()
proc.waitFor()
]]></scripting:script>
</scripting:transformer>
And /tmp/call.sh listing:
#!/bin/bash
echo $# > /tmp/call.out
When a message passes through the Mule chain, in /tmp/call.out I see "Џолучена заЯвка #4041" instead of the expected "Получена заявка #4041" ("Получена заявка" is Russian for "Request received"), i.e. there is a problem with the output of Unicode characters, while ASCII characters come through fine.
When I check /tmp/groovy.out with a hex editor I see that all the Russian characters are 1 byte long (in Unicode they must be 2 bytes long), i.e. the output of my Groovy component is not Unicode.
There is no problem with Unicode output to the Mule log when I use the Echo and Logger components. The SMTP component also works perfectly: I successfully receive emails in Unicode from Mule.
Can you help me with passing Unicode arguments to a command called from Groovy in Mule ESB?
Solved by selecting UTF-8 encoding in the Run configuration options (menu Run -> Run Configurations...). By default it was MacCyrillic encoding.