Apache Phoenix SqlLine configuration

I'm using Sqlline and need to increase the size of the query thread pool.
I added a "sqlline.properties" file to ~/.sqlline and added the line
phoenix.query.threadPoolSize=256
But the thread pool size is still 128. So, what file do I need to edit and what's the syntax?
Thanks.

Following these links might help resolve it:
https://community.hortonworks.com/questions/224984/phoenix-query-call-from-java-on-larger-data-set-fa.html
https://community.hortonworks.com/content/supportkb/150633/phoenix-query-on-large-table-fails-because-of-quer.html
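In general, Phoenix reads client-side properties like phoenix.query.threadPoolSize from hbase-site.xml on the client's classpath, not from a sqlline.properties file. A minimal sketch, assuming your sqlline launcher picks up the hbase-site.xml that sits next to it (e.g. in Phoenix's bin/ directory):

<configuration>
  <!-- raise the Phoenix client query thread pool from the default 128 -->
  <property>
    <name>phoenix.query.threadPoolSize</name>
    <value>256</value>
  </property>
</configuration>

Restart sqlline afterwards so the new value is picked up.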

Related

How to fix the "maxClauseCount is set to 1024" error in Gravitee.io

I have many APIs registered in Gravitee.io. I tried adding the following to the file elasticsearch.yml:
index.query.bool.max_clause_count: 10240
But it didn't work, and I don't know how to change it in Gravitee.
If you are trying to change the max clause count for queries in Elasticsearch, then the correct setting is the one below, as explained in the Search settings doc:
indices.query.bool.max_clause_count
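For example, in elasticsearch.yml (the value is the one from the question; this is a static setting, so the node has to be restarted for it to take effect):

indices.query.bool.max_clause_count: 10240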

Dataflow: set DataflowPipelineDebugOptions

My pipeline constantly gives OOM errors, so I read the following answer and tried to set --dumpHeapOnOOM and --saveHeapDumpsToGcsPath. But these options do not seem to work. Do I need to change my code or modify something else?
Memory profiling on Google Cloud Dataflow
You will want to check configuring-pipeline-options.
The current way in Apache Beam (2.9.0) to configure a pipeline option on the command line is --<option>=<value>.
In your case, you can set --dumpHeapOnOOM=true --saveHeapDumpsToGcsPath="gs://foo"
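A full invocation might look like the sketch below; the jar name is a placeholder and gs://foo stands for your own bucket:

java -jar my-pipeline.jar \
  --runner=DataflowRunner \
  --dumpHeapOnOOM=true \
  --saveHeapDumpsToGcsPath="gs://foo"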

Play 2.6 EntityStreamSizeException exception

In my application I need to upload quite large files (up to 4GB). I do it using a file form field and save the file to a temporary location for further processing. However, when it comes to files that exceed the content-size limit I get an EntityStreamSizeException:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[EntityStreamSizeException: EntityStreamSizeException: actual entity size (Some(781434380)) exceeded content length limit (8388608 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.]]
I've tried to set both akka.http.[server|client].parsing.max-content-length limits in my application.conf as follows:
akka.http.server.parsing.max-content-length = 4096MB
akka.http.client.parsing.max-content-length = 4096MB
but it still crashes with the same message.
I've also tried to follow the documentation and set play's settings:
play.http.parser.maxMemoryBuffer=512k
play.http.parser.maxDiskBuffer=4096MB
as it is proposed here:
https://www.playframework.com/documentation/2.6.x/ScalaBodyParsers
The last thing I tried was to explicitly override the setting in my POST handler:
def doCreate = checkToken {
  Action(parse.maxLength(400000000, parse.multipartFormData)) { implicit request =>
    ...
  }
}
Nothing seems to work. Can anybody tell me what I'm doing wrong?
Upd: After lurking in the play/akka code and some debugging, I can see that any akka-related settings are simply ignored. The play.http.parser settings are propagated to the context and used; however, the akka settings are not applied, and maxContentLength remains at the default value of 8MB. But according to this document: https://www.playframework.com/documentation/2.6.x/SettingsAkkaHttp
they should be applied. Not sure what to do next. Any help will be really appreciated.
I've found a related pull request: https://github.com/playframework/playframework/pull/7548
So, as I understand it, this setting is not supposed to work; the message in the exception is misleading.
UPD: this pull request is present in the change notes at https://blog.playframework.com/play-2-6-1-released/. After updating to Play 2.6.1 I can see that the akka max-content-length is set to infinite, so only the Play settings count.
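So on Play 2.6.1+, a minimal application.conf sketch for large multipart uploads only needs the Play parser limits (values taken from the question above):

play.http.parser.maxMemoryBuffer = 512k
play.http.parser.maxDiskBuffer = 4096MB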

How do we examine a particular job in GTM?

Just as we have D ^JOBEXAM in InterSystems Caché to examine jobs running in the background or scheduled,
how can we do the same in GT.M?
Is there a command for this? Please advise.
The answer is $zinterrupt, and what triggers it: mupip intrpt. Normally it dumps a file in your GT.M start-up directory containing the process state via ZSHOW "*"; however, you can make $zinterrupt do anything you want.
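A minimal session sketch, assuming the default $zinterrupt handler (IF $ZJOBEXAM()) is in place and that 12345 is the pid of the job you want to examine:

$ mupip intrpt 12345
# the interrupted process writes a dump file (typically named
# GTM_JOBEXAM.ZSHOW_DMP_<pid>_<counter>) containing its ZSHOW "*" state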
$ZINT documentation:
http://tinco.pair.com/bhaskar/gtm/doc/books/pg/UNIX_manual/ch08s35.html
A complex example of using $ZINT:
https://github.com/shabiel/random-vista-utilities/blob/master/ZSY.m
--Sam
Late answer here. In addition to what Sam has said, there is a code set, "^ZJOB", that is used in the VistA world. I could get you a copy of it if you wanted.

How can I import a larger dataset to the Mobile Data service of Bluemix?

I am trying to import my dataset to the Mobile Data service through its import function; however, I am getting this error: {"message":"There is an exception while uploading file:
[org.apache.commons.fileupload.FileUploadBase$SizeLimitExceededException:
the request was rejected because its size (174693558) exceeds the
configured maximum (104857600)]","code":20004}
Are there other ways to import larger datasets?
Thanks
You can do it in the following ways:
1: Try to increase max-file-size & max-request-size.
2: Or, the best approach would be to break the file into chunks and then try. With this approach you never have to bother about max-file-size & max-request-size, and the process is also faster. There are many ways to break files into chunks and join them at the target; see the sketch after the link below.
This link can help you :
https://mobile.ng.bluemix.net/mbaas-api/#!/data/inject_post_0
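As a minimal sketch of the chunking approach (the 50 MB chunk size is an arbitrary choice, safely under the 104857600-byte limit from the error):

$ split -b 50m dataset.json dataset_part_
# upload each dataset_part_* file separately, then recombine on the target,
# e.g. cat dataset_part_* > dataset.json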
You may need to change entries like max-file-size & max-request-size in web.xml.
Below is the reference link for the same:
http://mail-archives.apache.org/mod_mbox/tomcat-users/201102.mbox/%3Cloom.20110203T214804-280#post.gmane.org%3E
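If you do control the servlet container, the Servlet 3.0 way to raise these limits is a <multipart-config> element on the upload servlet in web.xml. A sketch; the servlet name/class and the 200 MB values are assumptions, sized above the rejected 174693558-byte request:

<servlet>
  <servlet-name>uploadServlet</servlet-name>
  <servlet-class>com.example.UploadServlet</servlet-class>
  <multipart-config>
    <!-- both limits raised above the 174693558-byte upload -->
    <max-file-size>209715200</max-file-size>
    <max-request-size>209715200</max-request-size>
  </multipart-config>
</servlet>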
Apart from that, you can use the Node.js SDK framework in Bluemix and upload data:
http://mbaas-gettingstarted.ng.bluemix.net/node#mobile-data
Hope it helps.
This issue seems related to web.xml; below is a similar issue reported on this forum:
FileUploadBase$SizeLimitExceededException apache tomcat