I have a table in PostgreSQL with some text which can contain emoji, and I want to find which emoji is the most used. How can I do that without having to count the texts containing each emoji separately?
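One way to approach this, sketched in Python outside the database (the sample rows and the emoji character class below are illustrative assumptions, not an exhaustive emoji definition), is to extract every emoji from each text and count them all in a single pass; the same idea could be expressed in Postgres with a regexp match, unnest, and GROUP BY:

```python
import re
from collections import Counter

# Hypothetical rows, standing in for the text column fetched from the table.
rows = ["great job 😀", "so fun 😀😀", "love it ❤️", "😀 nice"]

# Rough emoji character class: a common subset of the Unicode emoji ranges
# (not exhaustive -- extend it for your data).
emoji_re = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

# One pass over all rows, counting every emoji occurrence.
counts = Counter(ch for text in rows for ch in emoji_re.findall(text))
print(counts.most_common(1))  # the single most frequent emoji with its count
```

This counts occurrences rather than texts, so an emoji repeated inside one text is counted each time it appears.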
Related
I have a table in my Postgres DB. In this table there is a column city; this column has type character with length 255.
When I try to add a city in this column, for example London, and afterwards try to get this city back, I get a value of length 255.
It looks like [London....................-255] where the dots are empty characters.
I always trim the value when adding it to the db.
I use pg for Node.js.
As the comment says, you don't want to use character(255) as the field type, which is always 255 characters, padded with whitespace.
Instead, you might consider varchar(255), but even so, you probably don't actually want to limit the length here (Postgres doesn't care, storage-wise, whereas MySQL does), so just use text.
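To illustrate what character(255) does to the stored value (a Python sketch of the padding behavior, not actual Postgres code), the value effectively comes back space-padded to the declared length, which is why callers end up trimming:

```python
# What character(255) storage effectively does: pad the value with spaces
# to the declared length (illustrated in Python).
def store_as_char(value: str, length: int = 255) -> str:
    return value.ljust(length)

stored = store_as_char("London")
print(len(stored))         # the value comes back at the full declared length
print(stored.rstrip())     # the client-side trim people reach for
```

Switching the column to text (or varchar) removes the padding at the source, so no trimming is needed at all.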
If I have a string like 'Picá' in Redshift, how can I extract just the 'á'?
Trying to get at the count of foreign characters in a column full of strings.
If you want a count of non-ascii characters you could use something like
select regexp_count('Picá', '[^\u0000-\u007F]');
which returns the value 1.
If you really want a count of Latin or Cyrillic characters specifically, then you'll probably have to resort to a Redshift UDF.
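The non-ASCII count can be checked outside Redshift as well; a Python equivalent of that regexp_count expression, using the same character class:

```python
import re

def non_ascii_count(s: str) -> int:
    # Same character class as the Redshift expression:
    # anything outside U+0000..U+007F.
    return len(re.findall(r"[^\u0000-\u007F]", s))

print(non_ascii_count("Picá"))  # → 1
print(re.findall(r"[^\u0000-\u007F]", "Picá"))  # → ['á'], the extracted character
```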
My FileMaker app needs to display text built up from filtered child rows. It should be displayed in a scrollable text field in the layout of the parent row.
Essentially, I have a tree structure where each node contains a paragraph or two of text.
In the layout of any node, I want to display its own text plus the text of all its descendants.
But since these are text fields which can be one or more paragraphs long, the usual list view doesn't satisfy me: it doesn't expand to show the full text, only one line, and it only shows the direct descendants.
I want to show the full text of all descendants, and pick two text fields from them: a headline (an optional field) and the main text.
I'm new to FileMaker. I tried to Google for an answer to this but could not find anything that fits. Finding the related rows is easy enough, but I can't figure out how to display them in the way I want.
You would need to display your related texts in a portal, since you want to indicate which ones you want to use. Make your portal rows tall enough for your needs and use a scroll bar on the text field if needed. You would need to gather all descendants in one table occurrence to display them in one portal as separate rows.
Alternatively, build up your list in a text field and show this field on the layout. You won't be able to mark any of the original records this way.
I am performing a mail merge and have an issue when trying to correct the percentage format. The problem is that the source column contains both percent values and text. If I map the field, percentages display as decimals in Word. If I use the following, it displays correctly:
{=«Percent»*100 \# 0%}
However, now when the row contains text I receive an error.
Is there another way I can do this?
Here is the formula you need
{={MERGEFIELD XYZ}*100 \# 0.00%}
No, Word has no way to do string manipulation in its fields. Add another field/column to your data source for the text, or format the percentage in Excel before performing the merge.
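The second suggestion, preformatting the mixed column in the data source so Word only ever sees display-ready text, can be sketched as follows (the helper name and the formatting convention are hypothetical, not part of the original answer):

```python
def format_percent_cell(value):
    """Format a data-source cell that may hold a number or text.

    Numbers become percentage strings; text passes through unchanged.
    (Hypothetical helper illustrating the 'format before merging' advice.)
    """
    if isinstance(value, (int, float)):
        return f"{value * 100:.2f}%"
    return value

print(format_percent_cell(0.25))   # → 25.00%
print(format_percent_cell("N/A"))  # → N/A
```

With the column preformatted this way, the merge field can be mapped directly with no \# picture switch at all.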
I don't have much experience with MS SQL Server 2008 R2, but here is the issue; I would appreciate your help:
I have a table with a column/field (type: nvarchar) that stores text. The text is read from a text file and written to the database using a VB.NET application.
The text in the text file contains Turkish characters such as ü, the u with two dots on top (in the future it will be in different languages).
When I open the table, the text in the column is not readable: the Turkish special characters have been converted to unreadable characters.
Is there anyway to make the text readable in the table?
Thank you so much.
SQL Server doesn't change any characters stored in tables; I think the problem is that the text is being displayed in a different character set. Try using the UTF-8 character set.
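A quick way to see the kind of mangling a character-set mismatch produces (a Python sketch; the original app is VB.NET, so this only illustrates the mechanism, not the fix in that code):

```python
# A Turkish string written to a file as UTF-8 bytes...
data = "gül".encode("utf-8")

# ...read back with the wrong single-byte encoding comes out mangled:
print(data.decode("latin-1"))  # → gÃ¼l

# Read with the encoding it was written in, it round-trips cleanly:
print(data.decode("utf-8"))    # → gül
```

If the stored value already looks like the mangled form, the corruption happened when the file was read, before the insert, so the encoding needs to be specified at the point where the application reads the text file.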