I've just made a Kibana query using its web interface. The query has a WHERE-like part (source:*blah2.log), and a SELECT-like part (showing only 3 fields of each match).
Naturally, Kibana fetches these items by making a REST request to ElasticSearch, which I would like to use programmatically.
How do I get the Kibana search query in cURL / another HTTP format?
PS - I actually asked this two years ago (Representing a Kibana query in a REST, curl form), but the interface has changed and the new Kibana lacks the good old caret that opened the menu featuring that option.
There is a little arrow between the overview graph at the top and the table containing the results of your query. When you click this arrow, the graph representation changes to a kind of debug view where you can either inspect the results or see both the request and the response of the Elasticsearch query.
So, if you want to get the request used for your specific query, just use the corresponding view. But keep in mind that this request contains a bit more than you usually need, because it adds
a time range which is implicitly selected in Kibana
some highlighting rules that you probably don't need
an aggregation which is used for displaying the overview chart
Under the query key you find what you want (combined with the time range). If you want to restrict the keys that are returned (and thus not fetch the full documents), you can use the so-called source filtering as described in the Elasticsearch docs. In Kibana this filtering is done on the client side, which is why you don't see any excludes in the request from Kibana.
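Putting those pieces together, here is a minimal sketch of replaying such a request outside Kibana with Python's requests library; the index pattern logstash-*, the field names and the 15-minute window are hypothetical placeholders for your own setup.

import json
import requests

body = {
    "query": {
        "bool": {
            "must": [
                # the WHERE-like part of the Kibana search
                {"query_string": {"query": "source:*blah2.log"}},
                # the time range Kibana adds implicitly; adjust to your window
                {"range": {"@timestamp": {"gte": "now-15m", "lte": "now"}}},
            ]
        }
    },
    # source filtering: the SELECT-like part, returning only three fields
    "_source": ["@timestamp", "source", "message"],
}

resp = requests.post(
    "http://localhost:9200/logstash-*/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(body),
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])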
Under the chart on the left there is a little arrow:
Click on it and it will display a little dropdown list.
Choose request and you can see the exact request that is sent to ES. You can see the response and other details as well.
Related
I am a total newbie, so this may be a silly question, but I can't find any tutorials on how to query the Overpass API to display things on my own website. Do I install it on my server, or is there code to query it in a script?
What I want to achieve is to have a search bar on one page to search for tags, which would display one random point with that tag on another page with a Leaflet map.
But I am struggling to even display any points on it. Would it actually be better to have a local GeoJSON file with a set list of points in one town, if I want to limit them to just this town anyway?
I will be grateful for any help; it's the first time I am doing something like this and it horribly stresses me out.
You can visually run and try Overpass queries using http://overpass-turbo.eu/.
In order to take load off the Overpass server, it would be a good idea to fetch the data once (and update it regularly) and host it on your own server (also pay attention to the terms of use of the specific APIs; they might limit the number of requests per hour or prohibit using them for autocomplete).
To query the server from an application, issue a GET request to https://overpass-api.de/api/interpreter?data=, followed by your query (the same one you would type into overpass turbo, just without line breaks).
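As an illustration, here is a small sketch of such a request in Python; the amenity tag, the bounding box and the random-point selection are placeholder examples for the search-and-display idea from the question.

import random
import requests

# the same query you would type into overpass turbo
query = """
[out:json][timeout:25];
node["amenity"="cafe"](50.6,7.0,50.8,7.3);
out body;
"""

resp = requests.get(
    "https://overpass-api.de/api/interpreter",
    params={"data": query},
)
elements = resp.json()["elements"]

# pick one random matching point, e.g. to show on a Leaflet map
point = random.choice(elements)
print(point["lat"], point["lon"], point.get("tags", {}))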
It is also possible to host an Overpass instance of your own.
If you need to learn the Overpass Query Syntax first, you can read the docs.
XPages REST service searching is limited because of the full-text index.
In our environment we have some rather large databases. In our grids we use xe:restService to surface data to a Sencha grid framework. This works great... until you need to search the grid. The grid search works unless you get a result with a ton of data and hit the full-text index limit. We've raised the limit on the server to upwards of 100k, only to have it crash during regular operation.
Are there any other options? I am not really sure what they would be, but I thought I would ask.
I am building a web app that pulls data through the Core Reporting API v3 from Google. I am using the PHP client library offered by Google.
I am currently trying to specify a page and retrieve its pageviews for a time range. Everything else seems to be working okay, except that if I specify a filter with ga:pagePath==http://link/uri then I get 0 all the time, no matter the time range.
I think the problem has to do with the value I set for this pagePath. I want to have separate data for the desktop version of the site and the smartphone version, denoted by the s. subdomain.
Can anyone give me some tips or tricks to get the required data?
Example URL:
http://domain.com/user/profile/id/1
http://s.domain.com/user/profile/id/1
Thanks in advance!
For the default implementation of Google Analytics, ga:pagePath doesn't include the scheme or hostname, so in your case you'd actually want to filter using ga:hostname and ga:pagePath together.
I suggest you use the Query Explorer to build your queries and get familiar with what will work. You can also use this tool to at least get a sense for what type of data the ga:pagePath and ga:hostname dimensions return before trying to filter on them. Finally, once you have the query you want, you can easily get the exact Core Reporting API query by clicking on the Query URI button.
Also check out the Combining Filters section of GA API docs.
So if you want to filter on ga:pagePath for domain.com and s.domain.com separately, you could do something like
filters=ga:pagePath==/user/profile/id/1;ga:hostname==domain.com
filters=ga:pagePath==/user/profile/id/1;ga:hostname==s.domain.com
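The question uses the PHP client, but the underlying Core Reporting API v3 call is a plain HTTP GET either way; here is a hedged sketch in Python against the raw endpoint, where the view (profile) ID, the dates and the OAuth access token are placeholders you must supply.

import requests

params = {
    "ids": "ga:12345678",            # placeholder view (profile) ID
    "start-date": "2014-01-01",
    "end-date": "2014-01-31",
    "metrics": "ga:pageviews",
    # combine pagePath and hostname with ';' (logical AND)
    "filters": "ga:pagePath==/user/profile/id/1;ga:hostname==s.domain.com",
    "access_token": "YOUR_OAUTH_ACCESS_TOKEN",
}

resp = requests.get("https://www.googleapis.com/analytics/v3/data/ga", params=params)
print(resp.json().get("rows"))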
I did not realise the power of REST until I started using scaffolds in Rails. It makes life so simple. Now every time I try to develop a web application, I only think of those 6 verbs. But I have a question: how does search relate to REST?
Basically, the search page contains a form for the user to input a search term.
Which verb does this come under? Is it list?
And what do the search results come under? show?
Search is GET on the collection with some fancy attributes:
GET /articles?q=RESTful+Architecture&in_title=1
Something like that.
There are plenty of resources on the subject; check out Handling arbitrary actions on ajaxpatterns, for example.
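To make the idea concrete, here is a tiny sketch of search as a plain GET on the collection (in Python/Flask purely for illustration, since the principle is the same in Rails); the in-memory ARTICLES list stands in for a real model.

from flask import Flask, jsonify, request

app = Flask(__name__)

# stand-in for a real model or database table
ARTICLES = [
    {"id": 1, "title": "RESTful Architecture"},
    {"id": 2, "title": "Rails Scaffolds"},
]

@app.route("/articles")  # GET /articles?q=RESTful
def index():
    q = request.args.get("q", "").lower()
    hits = [a for a in ARTICLES if q in a["title"].lower()]
    return jsonify(hits)  # the search results are just the filtered list

if __name__ == "__main__":
    app.run()

In Rails terms this is simply the index action reading params[:q], which keeps search within the standard verbs rather than inventing a new one.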
If I understand what you are saying properly, the search page wouldn't be a part of the REST service, but would submit to it.
The search results would be a list of whatever first-class object you had defined. The URI would describe the resource that was being displayed.
Retrieving resources is always done with a GET,
e.g. GET /cars?term=hyundai+green
I'm currently using the Jira SOAP interface within a C# application (I suppose the language used here isn't terribly important).
Basically, I'm creating an API and a WinForms app that wraps some of the functionality of the SOAP service so that our devs can programmatically add bugs when something goes wrong in our application.
As part of this, I need to know the custom field IDs that are in use in Jira. Rather than hardcoding them (as they are still prone to the occasional change), I used the GetCustomFields() method in the jira-rpc API and then filtered it, so that all the developer needs to know is the name of the field; the ID is then filled in for them automagically.
This all works fine, but with one quite important proviso: you have to log in to the SOAP/RPC service as a user with administrative privileges.
The Jira documentation indicates that the SOAP/RPC service follows the usual workflows and security schemes; however, I can't find anything anywhere that would remove this restriction on enumerating custom fields (and quite why you would ever HAVE to be an administrator to gain this access, especially as the custom field IDs tend to be in Jira's HTML source, is beyond me).
Does anyone know if I've missed a setting somewhere? Or is there some sort of workaround for this, short of hardcoding the custom field IDs?
Or is this a case of having to delve into Jira's RPC plugin and modify its source in order to get the functionality I require?
Cheers
Edit for the sake of Google/posterity
Wow, all this time on, and it looks like Atlassian still haven't changed this behavior.
Worked around this by creating a custom dictionary that logs in as an administrative user, grabs the custom fields, and then logs out. Not ideal, but it should work until Atlassian change things.
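For reference, the same workaround is sketched below in Python with the suds SOAP client; the JIRA URL and the admin credentials are placeholders, and jirasoapservice-v2 is the legacy JIRA SOAP endpoint this question is about.

from suds.client import Client

client = Client("https://jira.example.com/rpc/soap/jirasoapservice-v2?wsdl")

# log in once as an admin, cache name -> id, then log straight out again
token = client.service.login("admin_user", "admin_password")
try:
    custom_fields = {f.name: f.id for f in client.service.getCustomFields(token)}
finally:
    client.service.logout(token)

print(custom_fields.get("Severity"))  # e.g. "customfield_10012" (hypothetical)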
You're not missing anything - there's no way to get custom fields via the standard SOAP API.
In JIRA Client, we learn about custom fields in two ways:
We download issues via the RSS view of the Issue Navigator, or via the XML representation of a specific issue. If a custom field is set for an issue, the XML will have its id, class and value (or values).
From time to time we inspect the content of the Issue Navigator search page, looking for searchers for the custom fields. Screen-scraping the HTML gives us not only the IDs of the custom fields but also the possible values for enum fields.
This is hackery, of course, and it may go wrong, so a good API would have been a lot better.
In your case, I can suggest two solutions:
Create your own SOAP (or REST) remote API plugin that will give you just the info that's missing from the standard API. Since you're seemingly in control of your JIRA instance, you can install anything there.
Screen-scrape the "New Bug" page for the project and issue type you need to submit. You'll get all the info - fields, options, default values, and which fields are required.