Implement completion suggester on fields in the elasticsearch server - autocomplete

I am trying to implement the completion suggester on fields in my Elasticsearch server. When I run this curl command:
curl -X POST localhost:9200/anisug/_suggest?pretty -d '{
"test" : {
"text" : "n",
"completion" : {
"field" : "header"
}
}
}'
I get an exception:
ElasticSearchException[Field [header] is not a completion suggest
field].
What am I missing out on?

I think that, while defining the mapping of anisug, you will need to map the header field as a completion field. For example, you can use this:
curl -X PUT localhost:9200/anisug/_mapping -d '{
"test" : {
"properties" : {
"name" : { "type" : "string" },
"header" : { "type" : "completion",
"index_analyzer" : "simple",
"search_analyzer" : "simple",
"payloads" : true
}
}
}
}'
Similarly, while indexing the data, you'll need to send additional completion information. For more information, visit this link
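For example, indexing a document might look like this (a sketch: the field values and document ID here are invented; input lists the strings the suggester should match, and output is the suggestion it returns):

```shell
curl -X PUT 'localhost:9200/anisug/test/1' -d '{
  "name" : "Nirvana",
  "header" : {
    "input" : ["nirvana", "nevermind"],
    "output" : "Nirvana"
  }
}'
```

After indexing documents this way, the suggest query above for the text "n" should return "Nirvana".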

Related

For OpenAPI Specification 3, how should I define the parameters of a request body?

I want to define my API endpoints with Swagger OAS 3.0.0. My API requires quite a few parameters in the request body, and I would like to give a detailed explanation for each request body parameter. The older version of OAS allowed "path" : { "endpoint" : { "in" : "body" }}}, which would be perfect because I can describe each parameter individually. However, for OAS 3.0.0 it is stated that parameters in the request body should be defined using the requestBody field, which does not appear to support a description (i.e. what the parameter refers to) for each parameter. Is there any way for me to describe each request body parameter in OAS 3.0.0?
This way of defining parameters is perfect for me as my clients will be able to see the various parameters clearly.
"parameters" : [{
"name" : "phone_no",
"in" : "body",
"description" : "User mobile no. It should follow International format.",
"required" : true,
"explode" : true,
"schema" : {
"type" : "string",
"example": "+XXXXXX"
}
}, {
"name" : "signature",
"in" : "body",
"description" : "How to get signature ....",
"required" : true,
"explode" : true,
"schema" : {
"type" : "string",
"example" : "XXXXXXXXXXX"
}
} ]
Good readable request body parameters
According to OAS 3.0.0 I should define the parameters in this format which is not ideal as the rendered API documentation will clump the definition of the parameters together which would be less readable for the client.
"requestBody" : {
"description" : "HTTP Request Body",
"content" : {
"application/json" : {
"schema" : {
"properties" : {
"phone_no" : {
"type" : "string",
"required" : true,
"description" : "User mobile no. It should follow International format."
}, "signature" : {
"type" : "string",
"required" : true,
"description" : "How to get signature ...."
}
}
}
}
}
}
Less readable Request Body parameters
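One note on the snippet above (a sketch, not an authoritative answer): in OAS 3.0, description and example are still allowed on each property, and required becomes an array on the object schema rather than a per-property boolean, so a valid form of that requestBody would be:

```json
"requestBody" : {
  "description" : "HTTP Request Body",
  "content" : {
    "application/json" : {
      "schema" : {
        "type" : "object",
        "required" : ["phone_no", "signature"],
        "properties" : {
          "phone_no" : {
            "type" : "string",
            "description" : "User mobile no. It should follow International format.",
            "example" : "+XXXXXX"
          },
          "signature" : {
            "type" : "string",
            "description" : "How to get signature ....",
            "example" : "XXXXXXXXXXX"
          }
        }
      }
    }
  }
}
```

Whether a given renderer displays these per-property descriptions clearly is a separate question from what the spec allows.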

JFrog Artifactory API query for object properties does not return requested detail

I am requesting label properties for a Docker artifact; perhaps the URL is not correct? I get a response object (JSON), but the label properties are not included. Code example:
response = Net::HTTP.get_with_headers("http://myrepo:8081/artifactory/api/storage/dockerv2-local/anonymizer/functional/manifest.json;docker.label.com.company.info.build='*'",
{'Authorization' => 'Bearer <REDACTED>'})
if response.code.to_s == "200"
puts ("Artifactory response "+ response.body)
puts ("response object: "+response.inspect())
else
puts ("Artifactory request returned "+response.code.to_s)
end
Connecting to artifactory
Artifactory response {
"repo" : "dockerv2-local",
"path" : "/anonymizer/functional/manifest.json",
"created" : "2018-03-14T14:52:22.681-07:00",
"createdBy" : "build",
"lastModified" : "2018-03-15T15:52:34.225-07:00",
"modifiedBy" : "build",
"lastUpdated" : "2018-03-15T15:52:34.225-07:00",
"downloadUri" : "http://myrepo:8081/artifactory/dockerv2-local/anonymizer/functional/manifest.json",
"mimeType" : "application/json",
"size" : "1580",
"checksums" : {
"sha1" : "bf2a1f85c7ab8cec14b64d172b7fdaf420804fcb",
"md5" : "9c1bbfc77e2f44d96255f7c1f99d2e8d",
"sha256" : "53e56b21197c57d8ea9838df7cffb3d8f33cd714998d620efd8a34ba5a7e33c0"
},
"originalChecksums" : {
"sha256" : "53e56b21197c57d8ea9838df7cffb3d8f33cd714998d620efd8a34ba5a7e33c0"
},
"uri" : "http://myrepo:8081/artifactory/api/storage/dockerv2-local/anonymizer/functional/manifest.json"
}
response object: #<Net::HTTPOK 200 OK readbody=true>
If I understand you correctly, you want to get the properties of the manifest.json file, "docker.label.com.company.info.build" in particular.
From looking at your command:
response = Net::HTTP.get_with_headers("http://myrepo:8081/artifactory/api/storage/dockerv2-local/anonymizer/functional/manifest.json;docker.label.com.company.info.build='*'",
It seems that you are appending the property with a semicolon, which is not the right way; that matrix-parameter syntax is used when deploying artifacts, not when reading properties. As you can see in this REST API, in order to get properties you should use the ?properties query parameter, so your command should look like:
response = Net::HTTP.get_with_headers("http://myrepo:8081/artifactory/api/storage/dockerv2-local/anonymizer/functional/manifest.json?properties=docker.label.com.company.info.build",
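Note that Net::HTTP.get_with_headers is not part of the Ruby standard library, so as a sketch using only stdlib (the property name and redacted token are taken from the question), the request could be built like this:

```ruby
require 'net/http'
require 'uri'

# The Item Properties API reads properties via a '?properties=<name>' query
# parameter; the ';key=value' matrix form is used when deploying, not reading.
base = "http://myrepo:8081/artifactory/api/storage/dockerv2-local/anonymizer/functional/manifest.json"
uri = URI(base)
uri.query = URI.encode_www_form("properties" => "docker.label.com.company.info.build")

request = Net::HTTP::Get.new(uri)
request['Authorization'] = 'Bearer <REDACTED>'
# response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
```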

How to specify supported HTTP operations for a resource in JSON-LD?

I'm new to JSON-LD and I was wondering if there is any way of specifying the supported operations of a resource in JSON-LD without using Hydra's supportedOperation or supportedProperty.
Is there any way to specify the context, something like this:
{
"@context" : {
"@vocab" : "http://www.schema.org/",
"data" : "object",
"id" : "Number",
"name" : "alternateName",
"full_name" : "name",
"links" : {
"@id" : "URL",
"@type" : "collection"
},
"href" : "URL",
"rel" : "relatedTo",
"operation" : [
{
"href" : "http://example.com/resources/1/anotherresources/2",
"method" : "POST",
"expects" : [parameter list],
"required" : [list of mandatory arguments],
"fixed value" : [list of argument with fixed value for a resource]
}
]
}
}
Any guidance would be of great help.
No, you can't specify it in the context. What you can do, however, is to bind an operation to a property in a Hydra ApiDocumentation (example 10 in the spec) and reference it via an HTTP Link header.
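Concretely, that reference works via an HTTP Link header on the API's responses pointing at the Hydra ApiDocumentation (the URL here is a placeholder):

```
Link: <http://example.com/doc/>; rel="http://www.w3.org/ns/hydra/core#apiDocumentation"
```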

Get Ember Data working with array of objects

I have a simple Ember Data app to list and show various objects.
My /servers.json API (for example) returns this kind of format:
[
{
"hosted_domain" : "example.com",
"status" : 1,
"name" : "srv0443",
"id" : 443
},
{
"id" : 392,
"status" : 1,
"name" : "srv0392",
"hosted_domain" : "example.com"
},
{
"hosted_domain" : "example.com",
"id" : 419,
"name" : "srv0419",
"status" : 1
}
]
But I got the following error:
Assertion Failed: The response from a findAll must be an Array, not undefined
Ember Data expects this kind of format:
{
"servers" : [
{
"name" : "srv0443",
"status" : 1,
"id" : 443,
"hosted_domain" : "example.com"
},
{
"status" : 1,
"name" : "srv0392",
"id" : 392,
"hosted_domain" : "example.com"
},
{
"status" : 1,
"name" : "srv0419",
"hosted_domain" : "example.com",
"id" : 419
}
]
}
I know I can override the payload with the extractArray method of the RESTSerializer.
It works by doing payload = { servers: payload }, but how can I get it working in a generic way?
How can I find the needed key for a given model type?
More generally, what is the correct REST format, by convention?
Thanks.
Ember Data works by having the data follow a certain convention ({servers: payload}). So the data either needs to conform, or you have to extend the serializer as you mentioned (or apply some other customization, like overriding the model's findAll() method). There isn't any way around it if you want to use Ember Data. Of course, you don't have to use Ember Data. Here is a good article about not using it: http://eviltrout.com/2013/03/23/ember-without-data.html
To customize the serializer you can extend it like this:
App.ServerSerializer = DS.RESTSerializer.extend({
  extractArray: function(store, type, payload) {
    // Wrap the raw array under the key Ember Data expects, then continue as usual
    return this._super(store, type, { servers: payload });
  }
});
extractArray is automatically called by Ember after it gets a response from the server. This will put the response in the format Ember Data expects, then pass it on to continue processing as usual. But you will have to do that for each type of model. If you override App.ApplicationSerializer instead, you might be able to use the type parameter to figure out which key should go in the modified payload, so it will work for any model, but I can't check it right now.
Finally found a solution by using primaryType.typeKey and ember-inflector tool on the RESTSerializer:
App.ApplicationSerializer = DS.RESTSerializer.extend
  extractArray: (store, primaryType, payload) ->
    # Reload the payload as { type.pluralize: payload }
    payloadKey = Ember.Inflector.inflector.pluralize primaryType.typeKey
    payloadReloaded = {}
    payloadReloaded[payloadKey] = payload
    @_super store, primaryType, payloadReloaded
In a nutshell:
Get the type key (e.g. server)
Pluralize it (e.g. servers)
Add it as the payload master key (e.g. { servers: payload })
And that's it!
Please feel free to comment this solution if you have a better proposition.

Cannot get any results from ElasticSearch with JDBC River

I cannot figure out how to use this plugin at all.
I am running this curl:
curl -XPUT 'localhost:9200/_river/faycare_kids/_meta' -d '{
"jdbc":{
"driver" : "org.postgresql.Driver",
"url" : "jdbc:postgresql://localhost:5432/faycare",
"user" : "faycare",
"password" : "password",
"strategy" : "simple",
"poll" : "5s",
"scale" : 0,
"autocommit" : true,
"fetchsize" : 10,
"index" : "faycare",
"type" : "kid",
"max_rows" : 0,
"max_retries" : 3,
"max_retries_wait" : "10s",
"sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
}
}'
It returns:
{"ok":true,"_index":"_river","_type":"faycare_kids","_id":"_meta","_version":1}
How do I search/fetch/see my data?
How do I know if anything is indexed?
I tried so many things:
curl -XGET 'localhost:9200/_river/faycare_kids/_search?pretty&q=*'
This gives me info about the _river
curl -XGET 'localhost:9200/faycare/kid/_search?pretty&q=*'
This tells me: "error" : "IndexMissingException[[faycare] missing]"
I am running sudo service elasticsearch start to run it in the background.
For one, I would install elasticsearch-head; it can be super useful for checking on your cluster.
You can get stats for all indices:
curl -XGET 'http://localhost:9200/_all/_status'
You can check if an index exists:
curl -XHEAD 'http://localhost:9200/myindex'
You should be able to search all indices like this:
curl -XGET 'localhost:9200/_all/_search?q=*'
If nothing shows up, your rivers are probably not working; I would check your logs to see if any errors appear.
The problem is in the way you are setting up your river. You are specifying the index and type, where the river should bulk-index records, in the wrong place.
The proper way of doing this would be this:
curl -XPUT 'localhost:9200/_river/faycare_kids/_meta' -d '{
"type" : "jdbc",
"jdbc":{
"driver" : "org.postgresql.Driver",
"url" : "jdbc:postgresql://localhost:5432/faycare",
"user" : "faycare",
"password" : "password",
"strategy" : "simple",
"poll" : "5s",
"scale" : 0,
"autocommit" : true,
"fetchsize" : 10,
"max_rows" : 0,
"max_retries" : 3,
"max_retries_wait" : "10s",
"sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
},
"index":{
"index" : "faycare",
"type" : "kid"
}
}'
I appreciate all of your help. elasticsearch-head did give me some insight. Apparently I just had something wrong with my JSON. For some reason, when I changed it to this, it worked:
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
"type" : "jdbc",
"jdbc" : {
"driver" : "org.postgresql.Driver",
"url" : "jdbc:postgresql://localhost:5432/faycare",
"user" : "faycare",
"password" : "hatpants",
"index" : "jdbc",
"type" : "jdbc",
"sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
}
}'
I am not sure what specifically needed to be changed to make this work, but it does now. I am guessing it was the outer "type" : "jdbc" that needed to be added, and that I can change the inner index and type.
I wrote a quick post on using this plugin; hopefully it can give you a little more insight - the post is located here.