Is there any way to get raw JSON data from Splunk for a given query?

Is there any way to get raw JSON data from Splunk for a given query in a RESTful way?
Consider the following timechart query:
index=* earliest=<from_time> latest=<to_time> | timechart span=1s count
The key parameters in the query are: 1. start/end time, 2. time span (e.g., seconds), and 3. value (e.g., count).
The expected JSON response would be:
{"fields":["_time","count","_span"],
"rows":[ ["2014-12-25T00:00:00.000-06:00","1460981","1"],
...,
["2014-12-25T01:00:00.000-06:00","536889","1"]
]
}
This is what the browser's XHR (Ajax) calls return with output_mode=json_rows, but those calls require session and authentication setup.
I'm looking for a RESTful implementation of the same, with authentication.

You can do something like this using the curl command:
curl -k -u admin:changeme --data-urlencode search="search index=* earliest=<from_time> latest=<to_time> | timechart span=1s count" -d "output_mode=json" https://localhost:8089/servicesNS/admin/search/search/jobs/export
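The same export can be sketched in Python with only the standard library. This is a minimal illustration, not a full client: the host, port, and admin credentials mirror the placeholders in the curl command, and build_export_body is just an illustrative helper name. It uses output_mode=json_rows to get the response shape shown in the question.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- adjust host/port for your deployment.
SPLUNK_URL = "https://localhost:8089/servicesNS/admin/search/search/jobs/export"

def build_export_body(query, from_time, to_time):
    """Build the URL-encoded body, equivalent to curl's --data-urlencode."""
    search = f"search index=* earliest={from_time} latest={to_time} | {query}"
    return urlencode({"search": search, "output_mode": "json_rows"})

# The timechart query from the question:
body = build_export_body("timechart span=1s count", "-1h", "now")
```

POST this body to SPLUNK_URL with HTTP basic auth (for example via urllib.request with an HTTPBasicAuthHandler, or the requests library) and the response streams the json_rows output.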

Related

Is it possible to query a KSQL Table/Materialized view via HTTP?

I have a materialized view created using
CREATE TABLE average_latency AS SELECT DEVICENAME, AVG(LATENCY) AS AVG_LATENCY FROM metrics WINDOW TUMBLING (SIZE 1 MINUTE) GROUP BY DEVICENAME EMIT CHANGES;
I would like to query the table average_latency via a REST API call to get the AVG_LATENCY and DEVICENAME column in the response.
HTTP Client -> KSQL Table/Materialized view
Is this use-case possible? If so, how?
It is possible to query the internal state store used by Kafka streams by exposing an RPC endpoint on the streams application.
Check out the following documentation and examples provided by Confluent.
https://docs.confluent.io/platform/current/streams/developer-guide/interactive-queries.html#streams-developer-guide-interactive-queries-rpc-layer
https://github.com/confluentinc/kafka-streams-examples/blob/7.1.1-post/src/main/java/io/confluent/examples/streams/interactivequeries/kafkamusic/KafkaMusicExample.java
https://github.com/confluentinc/kafka-streams-examples/blob/4.0.x/src/main/java/io/confluent/examples/streams/interactivequeries/kafkamusic/MusicPlaysRestService.java
You can get the result of the query over HTTP. Send a POST request with your query to the ksqlDB server address:
curl -X "POST" "http://localhost:8088/query" \
-H "Accept: application/vnd.ksql.v1+json" \
-d $'{
"ksql": "SELECT * FROM TEST_STREAM EMIT CHANGES;",
"streamsProperties": {}
}'
This is taken from the ksqlDB developer guide:
https://docs.ksqldb.io/en/latest/developer-guide/api/
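The same POST can be sketched in Python using only the standard library; the host/port are the same placeholders as in the curl call, and the query targets the average_latency table from the question. build_ksql_request is an illustrative helper name.

```python
import json
from urllib.request import Request

KSQL_URL = "http://localhost:8088/query"  # adjust to your ksqlDB server

def build_ksql_request(sql):
    """Build a POST request equivalent to the curl call above."""
    payload = json.dumps({"ksql": sql, "streamsProperties": {}}).encode()
    return Request(
        KSQL_URL,
        data=payload,
        headers={
            "Accept": "application/vnd.ksql.v1+json",
            "Content-Type": "application/vnd.ksql.v1+json",
        },
        method="POST",
    )

req = build_ksql_request("SELECT * FROM average_latency;")
# urllib.request.urlopen(req) would then stream the query results.
```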

How to write nested query in druid?

I am new to Druid. I have worked with MySQL databases so far. I want to know how to write the nested MySQL query below as a Druid query.
Select distinct(a.userId) as userIds
from transaction as a
where
a.transaction_type = 1
and a.userId IN (
    select distinct(b.userId) from transaction as b where b.transaction_type = 2
)
I really appreciate your help.
There are a couple of things you might want to know, as you are new to Druid.
Druid supports SQL now. It does not support every fancy and complex feature that full SQL does, but it does support many standard SQL constructs. It also provides a way to submit a SQL query through Druid's JSON API.
Here's more detail on that, with examples:
http://druid.io/docs/latest/querying/sql
Your query is simple enough, so you can use the Druid SQL feature as below:
{
"query" : "<your_sql_query>",
"resultFormat" : "object"
}
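As a sketch, that JSON body can be built from Python before POSTing it to the broker's SQL endpoint. The table and column names come from the question; build_druid_sql_body is an illustrative helper name, not a Druid API.

```python
import json

def build_druid_sql_body(sql):
    """JSON body for Druid's SQL endpoint (POST to <broker>/druid/v2/sql/)."""
    return json.dumps({"query": sql, "resultFormat": "object"})

# The nested query from the question:
sql = (
    "SELECT DISTINCT a.userId FROM transaction a "
    "WHERE a.transaction_type = 1 AND a.userId IN ("
    "SELECT DISTINCT b.userId FROM transaction b WHERE b.transaction_type = 2)"
)
body = build_druid_sql_body(sql)
```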
If you want the native JSON form of the query above and don't want to write the entire big JSON by hand, try this trick:
POST the SQL query to the broker node prefixed with EXPLAIN PLAN FOR, and it will print the native JSON query for you, which you can then modify as necessary. Here's the syntax for that:
curl -X POST '<queryable_host>:<port>/druid/v2/sql/?pretty' -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"query":"EXPLAIN PLAN FOR <druid_sql_query>"}'
In addition to this, you can also use the DruidDry library, which provides support for writing fancy Druid queries in Java.

Hbase data query using rest api

To get data from an HBase table using REST, we can use:
http://ip:port/tablename/base64_encoded_key
My key is byte array of
prefix + customer_id + timestamp
byte[] rowKey = Bytes.add(Bytes.toBytes(prefix),Bytes.toBytes(customer_id),Bytes.toBytes(timestamp));
My sample key
3\x00\x00\x00\x02I9\xB1\x8B\x00\x00\x01a\x91\x88\xEFp
How do I get data from Hbase using rest?
How do I get data from Hbase using customer_id and time range?
You must send an HTTP GET request to retrieve your value. For example, if you are on Linux, you can easily try a GET request for a single value. This example retrieves, from table users, the row with id row1 and column a from column family cf:
curl -vi -X GET \
-H "Accept: text/xml" \
"http://example.com:20550/users/row1/cf:a"
See the HBase REST documentation for more, including how to retrieve data with a timestamp.
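For the composite binary key in the question, one way to reproduce the Java key construction and the base64 encoding in Python is sketched below. It assumes customer_id is a Java int (4 bytes) and the timestamp a Java long (8 bytes), both big-endian as Bytes.toBytes produces; the REST host, port, and table name are placeholders, and both helper names are illustrative.

```python
import base64
import struct

def build_row_key(prefix, customer_id, timestamp_ms):
    """Mimic Bytes.add(Bytes.toBytes(prefix), Bytes.toBytes(customer_id),
    Bytes.toBytes(timestamp)): string bytes + 4-byte int + 8-byte long,
    all big-endian like Java's Bytes.toBytes."""
    return prefix.encode() + struct.pack(">i", customer_id) + struct.pack(">q", timestamp_ms)

def rest_row_url(base, table, row_key):
    """Build the REST URL; a binary key must be URL-safe base64-encoded."""
    encoded = base64.urlsafe_b64encode(row_key).decode()
    return f"{base}/{table}/{encoded}"

key = build_row_key("3", 2, 1518363000000)
url = rest_row_url("http://example.com:20550", "users", key)
```

For the customer_id-plus-time-range case, REST only fetches by exact key or scanner; a server-side scan with a start/stop row built the same way (prefix + customer_id + start/end timestamp) is the usual approach.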

How to combine Graph API field expansions?

When using this syntax to query multiple metrics
https://graph.facebook.com/v2.4/69999732748?fields=insights.metric(page_fan_adds_unique,page_fan_adds,page_fan_adds_by_paid_non_paid_unique)&access_token=XYZ&period=day&since=2015-08-12&until=2015-08-13
I always get data for the last three days regardless of the values of the since and until parameters.
https://graph.facebook.com/v2.4/69999732748/insights/page_fan_adds_unique?period=day&access_token=XYZ&since=2015-09-10&until=2015-09-11
If I ask for a single metric, then the date parameters have effect.
Is there a different syntax for requesting multiple insights metrics which will accept the date parameters?
I think this should work:
curl -G \
-d "access_token=PAGE_TOKEN" \
-d "fields=insights.metric(page_fan_adds_unique,page_fan_adds,page_fan_adds_by_paid_non_paid_unique).since(2015-08-12).until(2015-08-13).period(day)" \
-d "pretty=1" \
"https://graph.facebook.com/v2.4/me"
You are requesting the page's insights as a field. This field contains a list of results in a data array (insights.data), and you want to filter this array.
The way to do that is by chaining parameterizations to the requested insights field like so:
fields=insights
.filter1(params)
.filter2(params)
.filter3(params)
Each .filterX(params) will be applied to the particular field that precedes it.
I've added newlines and indentation for clarity, but in your actual request you'd chain them all in a single line, without spaces.
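The chained parameterization can be sketched as a small helper that assembles the fields string; insights_field is a hypothetical name for illustration, not a Graph API call.

```python
def insights_field(metrics, since, until, period):
    """Assemble the chained field expansion described above."""
    return ("insights"
            f".metric({','.join(metrics)})"
            f".since({since})"
            f".until({until})"
            f".period({period})")

fields = insights_field(
    ["page_fan_adds_unique", "page_fan_adds"],
    "2015-08-12", "2015-08-13", "day")
# Pass `fields` as the fields= query parameter of the Graph API request.
```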

How to limit the -findall result to a certain number in FileMaker's curl call

I know the -findall query parameter for FileMaker Server will return all the rows in that specific database; however, for testing purposes, I just want to show, say, three.
I am aware that SQL has the LIMIT command, but how does a curl command for FileMaker handle this same scenario?
In this case, -find is not an option for me.
I am assuming you are using curl to query a FileMaker server. If so, you are looking for the -max URL parameter.
From the Help PDF for FileMaker Server 13
–max (Maximum records) query parameter
Specifies the maximum number of records you want returned
Value is: a number, or use the value all to return all records. If -max is not specified, all records are returned.
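Putting it together, a sketch of a -findall URL capped at three records: the fmresultset.xml endpoint path follows FileMaker's Custom Web Publishing XML conventions, and the host, database, and layout names are placeholders.

```python
from urllib.parse import urlencode

def findall_url(host, db, layout, max_records=3):
    """Build a FileMaker XML Web Publishing -findall URL capped at max_records."""
    params = urlencode({"-db": db, "-lay": layout, "-max": max_records})
    # -findall is a bare flag, so it is appended without a value.
    return f"http://{host}/fmi/xml/fmresultset.xml?{params}&-findall"

url = findall_url("fmshost", "MyDatabase", "MyLayout", 3)
# Fetch it with curl (with your credentials) to get at most three records.
```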