PostgreSQL - Searching a text field that contains special characters - postgresql

I'm trying to search for the following text in a field called th.description. The text is
Previous ticket, Priority: STANDARD --> RELOCATE
Searching using '%RELOCATE%' at the beginning found everything I needed except the text above. I then started researching, and the best suggestion I could find used square brackets, something like th.description LIKE '[a-zA-Z0-9]RELOCATE%'. That doesn't work either. What am I missing, or what should I be doing?
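For what it's worth, a sketch of the two usual options in PostgreSQL, using the th table and description column from the question: square-bracket character classes are SQL Server LIKE syntax, not PostgreSQL's. PostgreSQL's LIKE only knows % and _, and character classes belong in a POSIX regex with the ~ operator.

-- LIKE: % and _ are the only wildcards, so the '-->' needs no escaping
SELECT * FROM th WHERE th.description LIKE '%RELOCATE%';

-- POSIX regex: ~ (or ~* for case-insensitive) supports character classes
SELECT * FROM th WHERE th.description ~ 'STANDARD --> RELOCATE';

If the first query does not match the row, the stored value likely differs from what is displayed (trailing whitespace, a non-breaking space, or similar), which is worth checking with length() or a look at the raw bytes.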

Related

postgresql fulltext returning wrong results

I'm using a PostgreSQL full text tsvector column.
But I found a problem:
When I search for "calça", the results contain the following:
1- calça red
2- calça blue
3- calçado red
Why is "calçado" being returned when I search for "calça"?
Is there any configuration so I can solve this?
Thanks.
It isn't just a matter that one string contains the other. The Portuguese stemmer thinks this is the way they should be stemmed. If you turn the longer word into 'calçadot', for example, it no longer stems it, because (presumably) 'adot' is not recognized as a Portuguese suffix which ought to be removed the way 'ado' is.
If you don't want stemming at all, then you could change the config to 'simple', which doesn't stem. But at that point, maybe you don't want full text search at all, and could just use LIKE instead with a pg_trgm index.
If it is just this particular word that you don't want stemmed, I think you can set up a synonym dictionary that maps calçado to itself, which will bypass stemming.
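A quick way to see the stemming in action, and what switching to 'simple' changes (the pg_trgm table and column names below are made up for illustration):

-- the Portuguese stemmer reduces both words to the same lexeme
SELECT to_tsvector('portuguese', 'calça calçado');

-- 'simple' keeps words as typed, so 'calça' no longer matches 'calçado'
SELECT to_tsvector('simple', 'calça calçado');

-- the LIKE + pg_trgm alternative (hypothetical table products, column name)
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX products_name_trgm ON products USING gin (name gin_trgm_ops);
SELECT * FROM products WHERE name LIKE '%calça%';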

Postgres 9.3 Search word fragments with full text search

I am trying to find the appropriate syntax within full text search to search for fragments. I know that something like this:
document @@ to_tsquery('2161:*') will return anything that starts with 2161, but if my token is ABC021613 it does not return that item. What is the syntax to wildcard both before and after the 2161?
This is because to_tsquery (https://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-PARSING-QUERIES) supports prefix matching, but not "suffix"...
I believe it is answered well here: https://stackoverflow.com/a/13072165/5315974
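One common route, and the gist of that linked answer, is the pg_trgm extension, which lets an ordinary unanchored LIKE/ILIKE stay indexable; a minimal sketch, with a hypothetical table docs and column token:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- a trigram GIN index accelerates LIKE/ILIKE even with leading wildcards
CREATE INDEX docs_token_trgm ON docs USING gin (token gin_trgm_ops);

-- matches 2161 anywhere in the token, including ABC021613
SELECT * FROM docs WHERE token ILIKE '%2161%';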

Copy formatted text to clipboard

A simple HTML page has FORMATTED text - nothing fancy - line breaks and italics.
I want to have a button that takes this formatted text and copies it to the clipboard, formatted (it is planned to be pasted into some LibreOffice document later).
Couldn't find how to do it.
I tried ZeroClipboard, and a suggestion to parse the text, replacing "<br>"-s with "\r\n". That indeed does the trick for line breaks, but what about italics?... Any means to get this functionality?...
When you create an italic tag, the browser is responsible for formatting the document and showing the text properly. If you want to copy the text, you should either get the text already parsed and rendered by the browser, or parse the text yourself, like you did with the line breaks. For italics, when you find an <i>...</i> tag you must create the adequate text, that is, text in italics. How depends on the language you are using, but I'm sure it can be done.
OK,
Turns out that ZeroClipboard had this functionality (of rendering the HTML text upon paste), but it has been disabled.
However, the version that supports it can be found at: https://github.com/botcheddevil/ZeroClipboard
Note:
You may find that in this version, creating the client, binding the flash to a component, and handling the events are rather different from the documentation of the current version of ZeroClipboard (https://github.com/zeroclipboard/zeroclipboard/blob/master/docs/instructions.md).
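Not part of the original answers, but for readers today: the Flash-based ZeroClipboard approach has a plain-browser equivalent in the asynchronous Clipboard API, which can carry a text/html flavor that LibreOffice pastes formatted. A sketch, assuming the markup lives in an element with id "formatted" and the button has id "copyBtn" (both names are made up):

document.getElementById('copyBtn').addEventListener('click', async () => {
  const el = document.getElementById('formatted');
  // offer both an HTML flavor (keeps <br>, <i>, etc.) and a plain fallback
  const item = new ClipboardItem({
    'text/html': new Blob([el.innerHTML], { type: 'text/html' }),
    'text/plain': new Blob([el.innerText], { type: 'text/plain' })
  });
  // clipboard writes must happen in response to a user gesture
  await navigator.clipboard.write([item]);
});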

Apostrophe issue in RTF

I have a function within a custom CRM web application (old VB.Net circa 2003) that takes a set of fields from a database and merges them with placeholders in a set of RTF-based template documents. These generate merged letters and documentation. The code essentially loops through each line of the RTF template file and replaces any instances of the placeholder values with text from a database record. The issue I'm having is that users have pasted a certain type of apostrophe into the web app (and therefore into the database) that is not rendering correctly in the resulting RTF file. It is rendering like this - ’.
I need a way to spot this invalid apostrophe in the code and replace it with a valid one. Unfortunately, when I paste the invalid apostrophe into the Visual Studio editor it gets converted into the correct one, so I need another way to express this invalid apostrophe's value. Unfortunately I do not know a great deal about Unicode and other encodings, so I'm calling out for help with this.
Any ideas?
If you really just want to figure out what the character is, you might want to try pasting it into a text editor like UltraEdit. It has a hex mode that you can flip to to see the actual underlying bytes.
In order to do the replace once you've figured out the character, you'd do something like this in VB (the pasted character is almost certainly U+2019, the right single quotation mark; ChrW takes a decimal code, so that is 8217):
text = text.Replace(ChrW(8217), "'"c)
Note that you might not be able to figure it out easily using the text editor, because it might also get mangled by the paste from the clipboard. You might want to print some debug output of the character values from code; you can use the AscW function to do that.
I can't help but think that it may actually simply be a case of specifying the correct encoding to use when you write out the stream though. Assuming you're using a StreamWriter you can specify it on the constructor. I'm guessing you actually want ASCII given your requirement.
oWriter = New System.IO.StreamWriter(path, False, System.Text.Encoding.ASCII)
It looks like you probably want to encode characters out of the 8-bit range (>255) as RTF escapes.
You can do that using \uNNNN (for example \u8217 for the right single quote, followed by a fallback character) according to the Wikipedia article on the Rich Text Format.
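Pulling the pieces of this answer together, a sketch of the replace-and-write step; the function names and path handling are made up, and the RTF \u escape variant is there for when you want to keep the character rather than flatten it to ASCII:

' Replace the pasted right single quotation mark (U+2019 = 8217)
' with a plain ASCII apostrophe
Function CleanApostrophes(ByVal text As String) As String
    Return text.Replace(ChrW(8217), "'"c)
End Function

' Or keep it, as the RTF unicode escape \u8217 with "'" as the
' fallback for readers that do not understand \u
Function EscapeApostrophes(ByVal text As String) As String
    Return text.Replace(ChrW(8217).ToString(), "\u8217'")
End Function

Sub WriteRtf(ByVal path As String, ByVal content As String)
    ' ASCII encoding makes any remaining non-ASCII characters obvious
    Dim oWriter As New System.IO.StreamWriter(path, False, System.Text.Encoding.ASCII)
    oWriter.Write(CleanApostrophes(content))
    oWriter.Close()
End Sub

Note that Encoding.ASCII silently turns any remaining character above 127 into "?", so the replacement (or escaping) has to happen before the write.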

Japanese characters in a latex \section{} cause an error

I am working on getting Japanese documents created with latex. I have installed the latest version of texlive-2008 which includes CJK.
In my document I have the following:
\documentclass{class}
\usepackage{CJK}
\begin{document}
\begin{CJK*}{UTF8}{min}
\title{[Japanese Characters here 1]}
\maketitle
\section{[Japanese Characters here 2]}
[Japanese Characters here 3]
\end{CJK*}
\end{document}
In the above code there are 3 locations Japanese characters are used.
1 and 3 work fine, whereas 2, which contains Japanese characters in a \section{}, fails with the following error.
! Argument of \@sect has an extra }.
After some research it turns out this error manifests when you've put a fragile command inside a moving argument (a moving argument because the section title can be moved, to a contents page for example).
Does anyone know how to get this to work, and why LaTeX thinks Japanese characters are "fragile"?
Sorry to post this as an answer rather than a comment to your answer; I don't have enough rep yet to comment. (EDIT: Now I have enough rep to comment, but I'm not sorry anymore. Thanks Will.)
Your solution of replacing
\section{[Japanese Text]}
with
\section{\texorpdfstring{[Japanese Text]}{}}
suggests that you're using the hyperref package. When you use the hyperref package, any sort of not-totally-boring text (e.g. math) within \section causes a problem because \section is having trouble generating pdf bookmarks. \texorpdfstring allows you to specify how you want the section title to appear in the pdf bookmark. For example, I might write
\section{Calculation of \texorpdfstring{$H_2(\mathcal{X})$}{H\_2(X)}}
if I want the section title to be "Calculation of $H_2(\mathcal{X})$" but I want the pdf bookmark to be "Calculation of H_2(X)".
You should probably use xetex/xelatex, as it was created to support Unicode. The change is sometimes not easy for already existing documents, though. (xelatex should be included in texlive; it is just a different executable to call - this is how it is done in Debian.)
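For illustration, a minimal xelatex version (a sketch, not from the original thread: fontspec replaces the CJK package, and the font named here is just an example; any installed font with Japanese glyphs will do):

\documentclass{article}
\usepackage{fontspec}
% assumption: a font with Japanese glyphs is installed under this name
\setmainfont{Noto Serif CJK JP}
\begin{document}
\section{[Japanese Characters here]}
[Japanese Characters here]
\end{document}

Because xelatex reads the source as native Unicode, the characters are ordinary text and can appear directly in \section titles.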
I have managed to get this working now!
Using LaTeX and CJK as before,
\section{[Japanese Text]}
was replaced with
\section{\texorpdfstring{[Japanese Text]}{}}
Now the contents pages and section titles work and update fine.
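One caveat worth noting (not in the original answers): the empty second argument of \texorpdfstring leaves the corresponding PDF bookmark blank, so if bookmarks matter you can supply a romanized title there instead, for example:

\section{\texorpdfstring{[Japanese Text]}{[Romanized Title]}}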