How to fix the "maxClauseCount is set to 1024" error in Gravitee.io - mongodb

I have many APIs registered in Gravitee.io. I tried to add the following:
index.query.bool.max_clause_count: 10240
to the file elasticsearch.yml,
but it didn't work, and I don't know how to change this setting in Gravitee.

If you are trying to change the max clause count for queries in Elasticsearch, then the correct setting is the one below, as explained in the Search settings doc:
indices.query.bool.max_clause_count
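For reference, in elasticsearch.yml it would look like this (the value 10240 is just the one from the question; note the prefix is indices., not index.):
# elasticsearch.yml
indices.query.bool.max_clause_count: 10240
Since this is a static node-level setting, the Elasticsearch node needs a restart for it to take effect.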

Related

After shrinking a LiteDB file the app cannot find the index on <collection>._id. How can I fix it?

I use version 3.x of LiteDB and want to shrink the file, since it keeps growing in size.
I shrunk it using LiteDbExplorer's "shrink" button, but now the app cannot use this db file.
The file still looks fine when opened in viewers such as LiteDbExplorer, LiteDBViewer and LiteDBStudio; I can see the data in all of them.
But when I run the application, it shows a message like:
"LiteDB.LiteException: Index not found on '<collection_name>._id',
at LiteDB.Query.Run(CollectionPage col, IndexService indexer)
at LiteDB.LiteEngine.<>c__DisplayClass14_0.b__0(CollectionPage col) ..."
Did I break the file?
The index on _id (the primary key) and the other indexes still seem to exist; I checked them using "db.collectionname.indexes".
Can anybody help me with this?
It was caused by the restriction on collection names.
If a collection's name does not meet the naming rules, LiteDB cannot manipulate the collection itself, not only its documents.
LiteDBExplorer's shrink function also seems to have a problem: it does not always work, but the LiteDB shell does.
Use "db.shrink" in the shell.
I guess messages like "Invalid format: Collection" in the shell can be fixed by renaming the collection.
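As a minimal sketch of the shell route (the file and collection names below are just placeholders), in the LiteDB shell:
open C:\data\mydata.db
db.collectionname.indexes
db.shrink
The second command just confirms that _id and the other indexes are still listed before shrinking; if db.shrink reports something like "Invalid format: Collection", rename the offending collection first, as described above.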

Play 2.6 EntityStreamSizeException exception

In my application I need to upload quite large files (up to 4GB). I do it using a file form field and save the file to a temporary location for further processing. However, when it comes to files that exceed the content-size limit I get an EntityStreamSizeException:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[EntityStreamSizeException: EntityStreamSizeException: actual entity size (Some(781434380)) exceeded content length limit (8388608 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.]]
I've tried to set both akka.*.[client|server] limits in my application.conf as follows:
akka.http.server.parsing.max-content-length = 4096MB
akka.http.client.parsing.max-content-length = 4096MB
but it still crashes with the same message.
I've also tried to follow the documentation and set play's settings:
play.http.parser.maxMemoryBuffer=512k
play.http.parser.maxDiskBuffer=4096MB
as it is proposed here:
https://www.playframework.com/documentation/2.6.x/ScalaBodyParsers
The last thing I've tried was to explicitly override the setting in my post handler:
def doCreate = checkToken {
  Action(parse.maxLength(400000000, parse.multipartFormData)) { implicit request =>
    ...
  }
}
Nothing seems to work. Can anybody tell me what I'm doing wrong?
Upd: After digging through the Play/Akka code and some debugging, I can see that any Akka-related settings are simply ignored. The play.http.parser settings are propagated to the context and used, however the Akka settings are not applied and maxContentLength remains at the default value of 8MB. But according to this document: https://www.playframework.com/documentation/2.6.x/SettingsAkkaHttp
they should be applied. Not sure what to do next. Any help will be really appreciated.
I've found a related pull request: https://github.com/playframework/playframework/pull/7548
So, as I understand it, these Akka settings are not supposed to work, and the message in the exception is misleading.
UPD: this pull request is listed in the change notes at https://blog.playframework.com/play-2-6-1-released/. After updating to Play 2.6.1 I can see that the Akka max-content-length is set to infinite, so only the Play settings count.
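So on Play 2.6.1 and later, the Play parser settings are the ones that matter; a minimal application.conf sketch for large multipart uploads, using the values from the question:
# application.conf
# max size of bodies buffered in memory (e.g. text, json, form fields)
play.http.parser.maxMemoryBuffer = 512k
# max size of bodies buffered to disk (e.g. multipart file uploads)
play.http.parser.maxDiskBuffer = 4096MB
Per-action limits can still be set with parse.maxLength, as in the doCreate example above.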

Using sails and waterline, where is the showJoins option set when using toObject()?

Using the Model.toObject() call, I saw that all association keys were being stripped. Looking through the documentation, it seems there's a showJoins option as well as a joins[] array of keys to include.
But I haven't found where to pass in or set those options. Does anyone know?

google-cloud-sql - max_allowed_packet

I was moving a database, which previously had a max_allowed_packet of 20M, to Google Cloud SQL.
Currently the Google Cloud SQL default for max_allowed_packet is 1M.
Is there any way to increase this variable to 20M? I have already tried the following:
set max_allowed_packet = 20971520;
Which returns:
Error Code: 1621. SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value
and then:
set global max_allowed_packet = 20971520;
This returns the error:
Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Thank you in advance for your help!
To change your max_allowed_packet on Google Cloud SQL, go to the overview of your instance on the cloud console, click on edit and look for the MySQL Flags section at the bottom of the page. max_allowed_packet is one of the flags you can set there. Set the value you want, and save/confirm.
You can now set it yourself by editing the instance in Developer Console.
All the settable flags are documented here: https://cloud.google.com/sql/docs/mysql-flags
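If you prefer the command line over the console, something along these lines should also work (the instance name is a placeholder; this assumes the gcloud SDK is installed and authenticated):
# note: --database-flags replaces the instance's entire set of flags,
# so include any other flags you already rely on in the same call
gcloud sql instances patch my-instance --database-flags=max_allowed_packet=20971520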
In my case I couldn't update the max_allowed_packet setting because I had the sql_mode=TRADITIONAL flag set, which expects the value to be a multiple of 1024.

Sphinx Multiple indexing

I am working on a project with a single Sphinx config file. Right now I have about 20 sources and indexes in the file, and new ones will be added over time. I search a particular index by specifying its name. My problem is that over time some indexes are no longer in use, but the config content related to them is still in the config file. So what I actually want to do is create a separate file that contains only the source and index definitions, and then reference that content from the main config file. Is there any way to do this?
There is no "include" option currently supported in the Sphinx config file, but there is scripting support (shebang style) which you could use in your Sphinx setup: http://www.ivinco.com/blog/scripting-in-sphinx-config/
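A rough sketch of that shebang approach (paths and file names are just placeholders): start sphinx.conf with a shebang line so Sphinx executes it and parses the printed output as the config, for example:
#!/bin/sh
# Sphinx runs this script and uses whatever it prints as the config,
# so shared settings and the per-project source/index definitions
# can live in separate files and simply be concatenated here.
cat /etc/sphinx/common.conf
cat /etc/sphinx/sources_and_indexes.conf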
Hope this helps.