I want to configure a GUID field in HAProxy, like the example introduced here. That format has more than 128 bits, but I thought I could do without all those fields; I'm planning to use the %Ts field together with the %rt field. As I understand it, %Ts is a 32-bit integer (the time in seconds), but I don't understand what size the %rt field is. In the link above it's 16 bits, but I thought that would be too small for a request counter.
So my question is: what size is the %rt field?
As described in the doc, it's a numeric type, as shown in Custom log format, and it is formatted as %04X from uniq_id, as you can see in the source:
https://github.com/haproxy/haproxy/blob/master/src/log.c#L2819.
The uniq_id is of type unsigned int: https://github.com/haproxy/haproxy/blob/master/src/log.c#L1909
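So %rt is printed from an unsigned int (typically 32 bits; the %04X is only a minimum width of four hex digits). For illustration, a slimmed-down unique-id-format using just the two fields you mention might look like the sketch below; the underscore separator is my choice, not a requirement, and the documented full example in the HAProxy manual combines more fields (%ci:%cp_%fi:%fp_%Ts_%rt:%pid):

# minimal sketch: %Ts = request start time in seconds, %rt = request counter
# %{+X}o switches the following fields to hexadecimal output
unique-id-format %{+X}o\ %Ts_%rt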
In PostgreSQL, I have a column with people's height in meters. If the height is, say, 1.75 m, it shows properly, but if the height is 1.70 m, it shows as 1.7. I would like to have this already formatted to two decimal places, showing as 1.70, without formatting in each and every SQL call. Can I specify this in the table creation? Or a stored procedure, or something? I've seen a few things about timestamps, but not for real fields. Knowing how to format the decimal separator as a comma (1,70) would be a plus.
Basically, presentation and "cosmetics" are the job of the application, not the database.
Having a default number of decimal places for floats would also create a problem, because the data returned by the database would not be the actual data in the column. So if you did a SELECT and it returned a value of 1.75, then if you searched for this value, you might not find it because the actual value stored was not 1.75 but 1.7499999999 and it was only rounded for display.
Potential solutions:
If you want to store a specified number of digits, use NUMERIC. This will solve the 1.7499999999 problem above. If you use NUMERIC, when doing a SELECT you get the actual contents of the column.
In your app, if you use an ORM, use a Decimal (or similar) type for the column with the appropriate settings so it displays the way you want.
Or create a view with the format applied to the column, but in this case if you want the trailing zero, the type will be text and not float, and it will not be searchable unless you create an extra index on it.
Create a generated column with the number formatted as you want; this may be easier than a view (see the sketch after this list).
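A minimal sketch of the NUMERIC and generated-column options; the table and column names are made up for illustration:

-- NUMERIC(4,2) stores exactly two decimal places, so 1.70 stays 1.70
CREATE TABLE person (
    id     serial PRIMARY KEY,
    height numeric(4,2)
);

INSERT INTO person (height) VALUES (1.70), (1.75);

-- format at query time; replace() swaps the point for a comma
SELECT id,
       to_char(height, 'FM990.00')     AS height_dot,    -- '1.70'
       replace(height::text, '.', ',') AS height_comma   -- '1,70'
FROM person;

-- or a stored generated column (PostgreSQL 12+) with the comma format
ALTER TABLE person
    ADD COLUMN height_txt text
    GENERATED ALWAYS AS (replace(height::text, '.', ',')) STORED;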
I am using the query below to get the SP definition, but the TEXT column comes back as a NULL value in IBM Data Studio, even though I am able to CALL the SP.
SELECT PROCNAME, TEXT FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
Please help.
You have confirmed that the syscat.procedures.language is SQL, and that your query-tool is able to display a substr() of the text.
The workaround depends on the LENGTH(TEXT) of the row of interest:
SELECT PROCNAME, substr(TEXT,1, 1024) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
You may need to adjust the length of the substr extract depending on the length of the text and your configuration, for example substr(TEXT, 1, 2048), or whatever higher length your query tool can cope with.
You can find the length of the text column with LENGTH(TEXT) for the row of interest.
You can also CAST the CLOB to CHAR or VARCHAR, with a length that fits within their limits and whatever query-tool limitations you have.
Another option is to use a different query tool that can work with CLOB.
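A sketch combining these workarounds; VARCHAR(32000) is an assumption that fits under DB2's 32672-byte VARCHAR limit, so adjust it to your configuration:

-- check how long the source is, then pull it as VARCHAR instead of CLOB
SELECT PROCNAME,
       LENGTH(TEXT)                 AS TEXT_LEN,
       CAST(TEXT AS VARCHAR(32000)) AS PROC_SOURCE
FROM   SYSCAT.PROCEDURES
WHERE  PROCNAME LIKE '%USP_ABC%';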
Are you using the latest version of Data Studio with the latest fix? It sounds like you might have an invalid UTF-8 character in your SP, or, as you are using SUBSTR and SUBSTRING, you are breaking a multi-byte character in two.
You could try setting
-Ddb2.jcc.charsetDecoderEncoder=3
in your eclipse.ini to get Java to use a replacement character rather than replace the invalid string with NUL.
See this tech note
https://www-01.ibm.com/support/docview.wss?uid=swg21684365
Otherwise, do raise this with IBM Support.
I'm working on an OBIEE 12c RPD. I have a measure column in my physical table in the DB with a BIGINT data type. In the physical layer of the RPD, I chose numeric as its data type, because the INT data type is too small for my values. Because of the numeric data type, '.00' is added at the end of my values. I tried to remove it with the ROUND function in the BMM layer's expression builder, but it didn't work. I also tried changing numeric to the DOUBLE data type in the physical layer, but I got the same result: I still see values with .00 at the end in my dashboards.
Now I want to remove these zeros in the RPD.
Is it possible? How can I do it?
Thanks
I agree with the answers above. If that doesn't seem to work, you could try changing the format to custom and working with a mask, as explained here: https://docs.oracle.com/cd/E29542_01/bi.1111/e10544/format.htm#BIEUG10831
From the Oracle doc:
JDBC and the Administration Tool do not support this type (BIG INT); therefore, Oracle BI EE does not fully support the BIG INT type. BI Server does offer some support for this type, but BIG INT has not been thoroughly tested with Oracle BI Server. The BIG INT type is intended to be the same as the C int64 data type.
Link: https://docs.oracle.com/cd/E28280_01/bi.1111/e10540/data_types.htm#BIEMG4602
Does making it DOUBLE and sorting out the .00 issue inside Answers solve your problem?
Go to Column Properties, then Data Format; the options are in that window.
That's not how OBI works. The RPD is the number crunching engine. NOT the visualization part.
If you want the decimals to be hidden by default, then you set the data format with zero decimals by default. That's how the tool works. Not in the RPD.
Message => FRM-40831: Truncation occurred: value too long for field PHONE.
I have a table named CLIENT. The table has many fields; one of them is PHONE, with data type and length VARCHAR2(20 BYTE). Using the CLIENT table I created a form in Forms Developer 10g, and it worked fine. But then I changed the column length to VARCHAR2(40 BYTE) and set the Forms property Maximum Length = 40. Now the form saves data smoothly, but when I retrieve data, the message shows: FRM-40831: Truncation occurred: value too long for field PHONE.
N.B.: The message only appears when the value is longer than 20 characters.
How can I solve this problem?
Please help me.
Apparently you are facing the multi-byte character set problem.
Define your column as VARCHAR2(40 CHAR).
Currently the column can contain only 40 bytes, so if, as I assume, you use a multi-byte character set in your database, the truncation occurs.
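A minimal sketch of that change, using the table and column names from the question:

-- switch PHONE from byte-length to character-length semantics
ALTER TABLE client MODIFY (phone VARCHAR2(40 CHAR));

Keep the Forms item's Maximum Length property (40) in sync afterwards.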
So I have a question about Solr's date field types which is pretty straightforward: what's the difference between a 'date' field and a 'tdate' one?
The schema.xml claims that 'For faster range queries, consider the tdate type' and 'A Trie based date field for faster date range queries and date faceting.'
Fair enough... but what's the precisionStep="6" all about? Should I change it? Does it change the way I would create the query if I use tdate? What's the real advantage, and what does Solr do that makes it better?
P.S. I went through Google, the Solr manual, the Solr wiki and the Javadocs without any luck, so I'd appreciate a kind and explanatory answer :)...
Also checked:
http://www.lucidimagination.com/blog/2009/05/13/exploring-lucene-and-solrs-trierange-capabilities/
http://web.archiveorange.com/archive/v/AAfXfqRYyLnDFtskmLRi
Trie fields make range queries faster by precomputing certain range results and storing them as a single record in the index. For clarity, my example will use integers in base ten. The same concept applies to all trie types. This includes dates, since a date can be represented as the number of seconds since, say, 1970.
Let's say we index the number 12345678. We can tokenize this into the following tokens.
12345678
123456xx
1234xxxx
12xxxxxx
The 12345678 token represents the actual integer value. The tokens with the x digits represent ranges. 123456xx represents the range 12345600 to 12345699, and matches all the documents that contain a token in that range.
Notice how each token on the list has successively more x digits. This is controlled by the precision step. In my example, you could say that I was using a precision step of 2, since I trim 2 digits to create each extra token. If I were to use a precision step of 3, I would get these tokens.
12345678
12345xxx
12xxxxxx
A precision step of 4:
12345678
1234xxxx
A precision step of 1:
12345678
1234567x
123456xx
12345xxx
1234xxxx
123xxxxx
12xxxxxx
1xxxxxxx
It's easy to see how a smaller precision step results in more tokens and increases the size of the index. However, it also speeds up range queries.
Without the trie field, if I wanted to query a range from 1250 to 1275, Lucene would have to fetch 25 entries (1250, 1251, 1252, ..., 1275) and combine search results. With a trie field (and precision step of 1), we could get away with fetching 8 entries (125x, 126x, 1270, 1271, 1272, 1273, 1274, 1275), because 125x is a precomputed aggregation of 1250 - 1259. If I were to use a precision step larger than 1, the query would go back to fetching all 25 individual entries.
Note: In reality, the precision step refers to the number of bits trimmed for each token. If you were to write your numbers in hexadecimal, a precision step of 4 would trim one hex digit for each token. A precision step of 8 would trim two hex digits.
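For reference, this is roughly what the tdate declaration looks like in a stock schema.xml of that era (the exact attributes may differ in your version). Your range-query syntax itself does not change; q=timestamp:[NOW-1DAY TO NOW] works the same for date and tdate fields:

<!-- precisionStep="6": smaller values produce more tokens per date,
     a bigger index, and faster range queries; "0" disables the extra tokens -->
<fieldType name="tdate" class="solr.TrieDateField" omitNorms="true"
           precisionStep="6" positionIncrementGap="0"/>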
Basically, trie ranges are faster. Here is one explanation. With precisionStep you configure how much your index can grow to get the performance benefits. To quote from the link you are referring to:
More importantly, it is not dependent on the index size, but instead the precision chosen.
and
the only drawbacks of TrieRange are a little bit larger index sizes, because of the additional terms indexed
Your best bet is to just look at the source code. Some of the things for Solr aren't well documented and the fastest way to get a trustworthy answer is to simply look at the code. If you haven't been in the code yet, that too is to your benefit. At least in the long run.
Here's a link to the TrieTokenizerFactory.
http://www.jarvana.com/jarvana/view/org/apache/solr/solr-core/1.4.1/solr-core-1.4.1-sources.jar!/org/apache/solr/analysis/TrieTokenizerFactory.java?format=ok
The Javadoc in the class at least hints at the purpose of the precisionStep. You could dig further.
EDIT: I dug a bit further for you. It's passed off directly to Lucene's NumericTokenStream class, which uses the value when parsing the token stream. Probably worth closer examination. It seems to deal with granularity, and is probably a tradeoff between size in the index and speed.