magento 2 rest api product filters - rest

I am working with the Magento 2 API. I need to fetch products based on the filters below:
store id
search by product name
sorting by name
category id
a limit
I have tried this API, but it does not offer these options:
index.php/rest/V1/categories/{id}/products
Can someone please suggest how to achieve this?
Thanks

You are looking for the (GET) API /rest/V1/products.
The store is detected automatically from the store code you pass in the URL path: if you have a store with code test, the API call will start with GET /rest/test/V1/products/[...].
You can use the like condition type. Ex.: products with "sample" in their name: ?searchCriteria[filter_groups][0][filters][0][field]=name
&searchCriteria[filter_groups][0][filters][0][value]=%sample%
&searchCriteria[filter_groups][0][filters][0][condition_type]=like
You are looking for sortOrders. Ex.: searchCriteria[sortOrders][0][field]=name. You can also set the sort direction, for example DESC, with searchCriteria[sortOrders][0][direction]=DESC.
Use the category_id field and the eq condition type. Ex.: if you want products from category 10: searchCriteria[filter_groups][0][filters][0][field]=category_id&
searchCriteria[filter_groups][0][filters][0][value]=10&
searchCriteria[filter_groups][0][filters][0][condition_type]=eq
Use searchCriteria[pageSize] together with searchCriteria[currentPage]. Ex.: 20 products per page, skipping the first 40 (equivalent in SQL to LIMIT 20 OFFSET 40): &searchCriteria[pageSize]=20&searchCriteria[currentPage]=3
Of course you can perform AND and OR operations with filters: filters within the same filter group are ORed, while separate filter groups are ANDed. For example, the following searchCriteria combines two filter groups with paging and sorting:

{
  "filter_groups": [
    {
      "filters": [
        {
          "field": "type_id",
          "value": "simple",
          "condition_type": "eq"
        }
      ]
    },
    {
      "filters": [
        {
          "field": "category_id",
          "value": "611",
          "condition_type": "eq"
        }
      ]
    }
  ],
  "page_size": 100,
  "current_page": 1,
  "sort_orders": [
    {
      "field": "name",
      "direction": "ASC"
    }
  ]
}
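Putting it all together, here is a minimal sketch of such a request in Python (assuming the requests library; the base URL and token are placeholders you would replace with your own store URL and an admin or integration token):

import requests

# Hypothetical values -- replace with your own store URL and access token.
BASE_URL = "https://example.com/rest/V1/products"
TOKEN = "<admin-or-integration-token>"

params = {
    # Group 0: name LIKE %sample%
    "searchCriteria[filter_groups][0][filters][0][field]": "name",
    "searchCriteria[filter_groups][0][filters][0][value]": "%sample%",
    "searchCriteria[filter_groups][0][filters][0][condition_type]": "like",
    # Group 1: category_id = 10 (separate groups are ANDed together)
    "searchCriteria[filter_groups][1][filters][0][field]": "category_id",
    "searchCriteria[filter_groups][1][filters][0][value]": "10",
    "searchCriteria[filter_groups][1][filters][0][condition_type]": "eq",
    # Sort by name, descending
    "searchCriteria[sortOrders][0][field]": "name",
    "searchCriteria[sortOrders][0][direction]": "DESC",
    # Paging: 20 items per page, third page (LIMIT 20 OFFSET 40)
    "searchCriteria[pageSize]": "20",
    "searchCriteria[currentPage]": "3",
}

response = requests.get(
    BASE_URL,
    params=params,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.json().get("total_count"))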

Related

druid groupBy query - json syntax - intervals

I'm attempting to express this query (which works as I hope)
SELECT userAgent, COUNT(*) FROM page_hour GROUP BY userAgent order by 2 desc limit 10
as JSON. I've tried this:
{
  "queryType": "groupBy",
  "dataSource": "page_hour",
  "granularity": "hour",
  "dimensions": ["userAgent"],
  "aggregations": [
    { "type": "count", "name": "total", "fieldName": "userAgent" }
  ],
  "intervals": [ "2020-02-25T00:00:00.000/2020-03-25T00:00:00.000" ],
  "limitSpec": { "type": "default", "limit": 50, "columns": ["userAgent"] },
  "orderBy": {
    "dimension": "total",
    "direction": "descending"
  }
}
but instead of doing the aggregation over the full range, it appears to pick an arbitrary time span (e.g. 2020-03-19T14:00:00Z).
If you want results from the entire interval to be combined in a single result entry per user agent, set granularity to all in the query.
A few notes on Druid queries:
You can generate a native query by entering a SQL statement in the management console and selecting the explain/plan menu option from the three-dot menu by the run button.
It's worth confirming expectations that the count query-time aggregator will return the number of database rows (not the number of ingested events). This could be the reason the resulting number is smaller than anticipated.
A granularity of all will prevent bucketing results by hour.
A fieldName spec within the count aggregator? I don't know what behavior might be defined for this, so I would remove that property. See the docs: https://druid.apache.org/docs/latest/querying/aggregations.html#count-aggregator
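Putting those notes together, a corrected query might look like the sketch below. This is only a sketch: it assumes Python with the requests library and a Druid broker at a hypothetical localhost address, and it folds the ordering and the LIMIT 10 into limitSpec (dropping the top-level orderBy), which is where groupBy ordering is normally expressed:

import requests

# A sketch only: assumes a Druid broker reachable at this hypothetical address.
DRUID_URL = "http://localhost:8082/druid/v2"

query = {
    "queryType": "groupBy",
    "dataSource": "page_hour",
    # "all" folds the whole interval into a single bucket per userAgent
    "granularity": "all",
    "dimensions": ["userAgent"],
    # count takes no fieldName; it counts rows
    "aggregations": [{"type": "count", "name": "total"}],
    "intervals": ["2020-02-25T00:00:00.000/2020-03-25T00:00:00.000"],
    # groupBy ordering and the LIMIT both live in limitSpec
    "limitSpec": {
        "type": "default",
        "limit": 10,
        "columns": [{"dimension": "total", "direction": "descending"}],
    },
}

resp = requests.post(DRUID_URL, json=query)
resp.raise_for_status()
for row in resp.json():
    print(row["event"]["userAgent"], row["event"]["total"])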

Create a Dataset in BigQuery using API

So forgive my ignorance, but I can't seem to work this out.
I want to create a "table" in BigQuery, from an API call.
I am thinking https://developer.companieshouse.gov.uk/api/docs/search/companies/companysearch.html#here
I want to easily query the Companies House API, without writing oodles of code?
And then cross reference that with other datasets - like Facebook API, LinkedIn API.
eg. I want to input a company ID/ name on Companies house and get a fuzzy list of the people and their likely Social connections (Facebook, LinkedIn and Twitter)
Maybe BigQuery is the wrong tool for this? Should I just code it??
Or
It is the right tool, and how to add a dataset from an API is just not obvious to me - in which case, please enlighten me.
You will not be able to directly use BigQuery and perform the task at hand. BigQuery is a web service that allows you to analyze massive datasets working in conjunction with Google Storage (or any other storage system).
The correct way of going about the situation would be to perform a curl request to collect all the data you require from Companies House and store the data as a spreadsheet (csv). Afterwards you may store the csv within Google Cloud Storage and load the data into BigQuery.
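For the load step, a minimal sketch (assuming Python with the google-cloud-bigquery client library; the bucket, dataset and table names are hypothetical) could look like this:

from google.cloud import bigquery

# Hypothetical names -- replace with your own project, dataset, table and bucket.
table_id = "my-project.companies_house.companies"
gcs_uri = "gs://my-bucket/companies.csv"

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # let BigQuery infer the schema
)

load_job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
load_job.result()  # wait for the load job to finish

print(client.get_table(table_id).num_rows)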
If you simply wish to link clients from Companies House and social media applications such as Facebook or LinkedIn, then you may not even need to use BigQuery. You may construct a structured table using Google Cloud SQL. The fields would consist of the necessary client information, and you may later do comparisons with the Facebook or LinkedIn API responses.
So if you are looking to load data from various sources and run BigQuery operations through the API: yes, there is a way. Adding to the previous answer, BigQuery is meant for analytical queries on big data; if you intend to run thousands of search-style queries joining various tables on big datasets, it will cost you a lot and be slower than a regular search API.
Let's try querying a public dataset through the BigQuery REST API.
To authenticate, you will need to generate an access token using your application default credentials:
gcloud auth print-access-token
Now, using the token generated by the gcloud command, you can make REST API calls.
POST https://www.googleapis.com/bigquery/v2/projects/<project-name>/queries
Authorization: Bearer <Token>
Body:
{
  "query": "SELECT tag, SUM(c) c FROM (SELECT CONCAT('stackoverflow.com/questions/', CAST(b.id AS STRING)), title, c, answer_count, favorite_count, view_count, score, SPLIT(tags, '|') tags FROM `bigquery-public-data.stackoverflow.posts_questions` a JOIN (SELECT CAST(REGEXP_EXTRACT(text,r'stackoverflow.com/questions/([0-9]+)/') AS INT64) id, COUNT(*) c FROM `fh-bigquery.hackernews.comments` WHERE text LIKE '%stackoverflow.com/questions/%' AND EXTRACT(YEAR FROM time_ts)>=@year GROUP BY 1 ORDER BY 2 DESC) b ON a.id=b.id), UNNEST(tags) tag GROUP BY 1 ORDER BY 2 DESC LIMIT @limit",
  "queryParameters": [
    {
      "parameterType": { "type": "INT64" },
      "parameterValue": { "value": "2014" },
      "name": "year"
    },
    {
      "parameterType": { "type": "INT64" },
      "parameterValue": { "value": "5" },
      "name": "limit"
    }
  ],
  "useLegacySql": false,
  "parameterMode": "NAMED"
}
Response:
{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      { "name": "tag", "type": "STRING", "mode": "NULLABLE" },
      { "name": "c", "type": "INTEGER", "mode": "NULLABLE" }
    ]
  },
  "jobReference": {
    "projectId": "<project-id>",
    "jobId": "<job-id>",
    "location": "<location>"
  },
  "totalRows": "5",
  "rows": [
    { "f": [ { "v": "javascript" }, { "v": "102" } ] },
    { "f": [ { "v": "c++" }, { "v": "90" } ] },
    { "f": [ { "v": "java" }, { "v": "57" } ] },
    { "f": [ { "v": "c" }, { "v": "52" } ] },
    { "f": [ { "v": "python" }, { "v": "49" } ] }
  ],
  "totalBytesProcessed": "3848945354",
  "jobComplete": true,
  "cacheHit": false
}
Query - The most popular tags on Stack Overflow questions linked from Hacker News since 2014:
#standardSQL
SELECT tag, SUM(c) c
FROM (
SELECT CONCAT('stackoverflow.com/questions/', CAST(b.id AS STRING)),
title, c, answer_count, favorite_count, view_count, score, SPLIT(tags, '|') tags
FROM `bigquery-public-data.stackoverflow.posts_questions` a
JOIN (
SELECT CAST(REGEXP_EXTRACT(text,
r'stackoverflow.com/questions/([0-9]+)/') AS INT64) id, COUNT(*) c
FROM `fh-bigquery.hackernews.comments`
WHERE text LIKE '%stackoverflow.com/questions/%'
AND EXTRACT(YEAR FROM time_ts)>=2014
GROUP BY 1
ORDER BY 2 DESC
) b
ON a.id=b.id),
UNNEST(tags) tag
GROUP BY 1
ORDER BY 2 DESC
LIMIT 5
Result: the five rows shown in the response above.
So, we run some of our analytical queries through the API to build periodic reports. I'll let you explore the other options in the BigQuery API for creating and loading data.
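For completeness, here is a rough Python sketch of the same jobs.query call (assuming the requests library and a token obtained from gcloud auth print-access-token; the project name is a placeholder):

import subprocess
import requests

# Assumes the gcloud CLI is installed and authenticated.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

project = "<project-name>"  # replace with your GCP project id

# The same request body as above, expressed as a Python dict.
body = {
    "query": (
        "SELECT tag, SUM(c) c FROM ("
        "SELECT CONCAT('stackoverflow.com/questions/', CAST(b.id AS STRING)), "
        "title, c, answer_count, favorite_count, view_count, score, "
        "SPLIT(tags, '|') tags "
        "FROM `bigquery-public-data.stackoverflow.posts_questions` a "
        "JOIN (SELECT CAST(REGEXP_EXTRACT(text, "
        "r'stackoverflow.com/questions/([0-9]+)/') AS INT64) id, COUNT(*) c "
        "FROM `fh-bigquery.hackernews.comments` "
        "WHERE text LIKE '%stackoverflow.com/questions/%' "
        "AND EXTRACT(YEAR FROM time_ts) >= @year "
        "GROUP BY 1 ORDER BY 2 DESC) b ON a.id = b.id), "
        "UNNEST(tags) tag GROUP BY 1 ORDER BY 2 DESC LIMIT @limit"
    ),
    "parameterMode": "NAMED",
    "useLegacySql": False,
    "queryParameters": [
        {"name": "year",
         "parameterType": {"type": "INT64"},
         "parameterValue": {"value": "2014"}},
        {"name": "limit",
         "parameterType": {"type": "INT64"},
         "parameterValue": {"value": "5"}},
    ],
}

resp = requests.post(
    f"https://www.googleapis.com/bigquery/v2/projects/{project}/queries",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print([cell["v"] for cell in row["f"]])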

Validate referential integrity of object arrays with Joi

I'm trying to validate that the data I am returned is sensible. Validating data types is done. Now I want to validate that I've received all of the data needed to perform a task.
Here's a representative example:
{
  "things": [
    {
      "id": "00fb60c7-520e-4228-96c7-13a1f7a82749",
      "name": "Thing 1",
      "url": "https://lolagons.com"
    },
    {
      "id": "709b85a3-98be-4c02-85a5-e3f007ce4bbf",
      "name": "Thing 2",
      "url": "https://lolfacts.com"
    }
  ],
  "layouts": {
    "sections": [
      {
        "id": "34f10988-bb3d-4c38-86ce-ed819cb6daee",
        "name": "Section 1",
        "content": [
          {
            "type": 2,
            "id": "00fb60c7-520e-4228-96c7-13a1f7a82749" // Ref to Thing 1
          }
        ]
      }
    ]
  }
}
So every Section references 0+ Things, and I want to validate that every id value returned in the Content of Sections also exists as an id in Things.
The docs for Object.assert(..) imply that I need a concrete reference. Even if I do the validation within Object.keys or Array.items, I can't resolve the reference at the other end.
Not that it matters, but my context is that I'm validating HTTP responses within IcedFrisby, a Frisby.js fork.
This wasn't really solvable in the way I asked (i.e. with Joi).
I solved this for my context by writing a plugin for icedfrisby (published on npm here) which uses jsonpath to fetch each id in Content and each id in Things. The plugin will then assert that all of the first set exist within the second.
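The plugin itself is JavaScript, but the underlying check is simple. Here is a rough Python sketch of the same idea, with the JSONPath lookups replaced by plain dictionary traversal (the helper names are mine, not from the plugin):

def referenced_ids(response: dict) -> set:
    """Collect every id referenced from layouts.sections[].content[]."""
    ids = set()
    for section in response.get("layouts", {}).get("sections", []):
        for item in section.get("content", []):
            ids.add(item["id"])
    return ids

def defined_ids(response: dict) -> set:
    """Collect every id defined in things[]."""
    return {thing["id"] for thing in response.get("things", [])}

def assert_referential_integrity(response: dict) -> None:
    missing = referenced_ids(response) - defined_ids(response)
    assert not missing, f"Content references unknown thing ids: {missing}"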

Does the OData protocol provide a way to transform an array of objects to an array of raw values?

Is there a way specify in an OData query that instead of certain name/value pairs being returned, a raw array should be returned instead? For example, if I have an OData query that results in the following:
{
  "#odata.context": "http://blah.org/MyService/$metadata#People",
  "value": [
    {
      "Name": "Joe Smith",
      "Age": 55,
      "Employers": [
        { "Name": "Acme", "StartDate": "1/1/1990" },
        { "Name": "Enron", "StartDate": "1/1/1995" },
        { "Name": "Amazon", "StartDate": "1/1/1999" }
      ]
    },
    {
      "Name": "Jane Doe",
      "Age": 30,
      "Employers": [
        { "Name": "Joe's Crab Shack", "StartDate": "1/1/2007" },
        { "Name": "TGI Fridays", "StartDate": "1/1/2010" }
      ]
    }
  ]
}
Is there anything I can add to the query to instead get back:
{
  "#odata.context": "http://blah.org/MyService/$metadata#People",
  "value": [
    {
      "Name": "Joe Smith",
      "Age": 55,
      "Employers": [
        [ "Acme", "1/1/1990" ],
        [ "Enron", "1/1/1995" ],
        [ "Amazon", "1/1/1999" ]
      ]
    },
    {
      "Name": "Jane Doe",
      "Age": 30,
      "Employers": [
        [ "Joe's Crab Shack", "1/1/2007" ],
        [ "TGI Fridays", "1/1/2010" ]
      ]
    }
  ]
}
While I could obviously do the transformation client side, in my use case the field names are very large compared to the data, and I would rather not transmit all those names over the wire nor spend the CPU cycles on the client doing the transformation. Before I come up with my own custom parameters to indicate that the format should be as I desire, I wanted to check if there wasn't already a standardized way to do so.
OData provides several options to control the amount of data and metadata to be included in the response.
In OData v4, you can add odata.metadata=minimal to the Accept header parameters (check the documentation here). This is the default behaviour but even with this, it will still include the field names in the response and for a good reason.
I can see why you want to send only the values without the field names, but keep in mind that this will change the semantic meaning of the response structure. It will make it less intuitive to deal with as a JSON record on the client side.
So to answer your question: no, there is no standardized way to do this.
Other options to minimize the response size:
You can use the $value OData option to get the raw value of a single property.
Check this example:
services.odata.org/OData/OData.svc/Categories(1)/Products(1)/Supplier/Address/City/$value
You can also use the $select option to cherry-pick only the fields you need, by selecting a subset of properties to include in the response.
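For example, a request like this sketch (hypothetical service URL, shown with Python's requests) would return only the Name and Age properties:

import requests

# Hypothetical OData service URL -- substitute your own.
url = "http://blah.org/MyService/People"

resp = requests.get(url, params={"$select": "Name,Age"})
resp.raise_for_status()
print(resp.json()["value"])  # entries contain only Name and Age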

How to remove array elements when array is nested in multiple levels of embedded docs?

Given the following MongoDB example collection ("schools"), how do you remove student "111" from all clubs?
[
  {
    "name": "P.S. 321",
    "structure": {
      "principal": "Fibber McGee",
      "vicePrincipal": "Molly McGee",
      "clubs": [
        {
          "name": "Chess",
          "students": [
            ObjectId("111"),
            ObjectId("222"),
            ObjectId("333")
          ]
        },
        {
          "name": "Cricket",
          "students": [
            ObjectId("111"),
            ObjectId("444")
          ]
        }
      ]
    }
  },
  ...
]
I'm hoping there's some way other than using cursors to loop over every school, then every club, then every student ID in the club...
MongoDB doesn't have great support for arrays within arrays (within arrays ...). The simplest solution I see is to read the whole document into your app, modify it there and then save it back. This way, of course, the operation is not atomic, but for your app it might be ok.
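A rough sketch of that read-modify-write approach, assuming Python with pymongo (connection details are hypothetical, and a syntactically valid ObjectId stands in for the "111" placeholder):

from bson import ObjectId
from pymongo import MongoClient

# Hypothetical connection details -- adjust to your deployment.
client = MongoClient("mongodb://localhost:27017")
schools = client["mydb"]["schools"]

student_id = ObjectId("5f4e6c9e2f8fb814b56fa181")  # the student to remove

# Read-modify-write: not atomic, but simple. Only touch documents that
# actually contain the student somewhere under structure.clubs.students.
for school in schools.find({"structure.clubs.students": student_id}):
    for club in school["structure"]["clubs"]:
        club["students"] = [s for s in club["students"] if s != student_id]
    schools.replace_one({"_id": school["_id"]}, school)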