I wonder how to fetch only a part of a large JSON file.
In my example it's not that large, but in my project the file is sometimes around 7000 lines.
Example JSON: https://statsapi.web.nhl.com/api/v1/schedule?expand=schedule.linescore
How do I fetch only the team name, for example?
Normally, from a network request you only get what the server serves you; you can't fetch just a portion of it. What you can do is process the data after the response arrives, referring to the part you need by its keys. Like this:
response['totalItems']
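For example, in Python with the requests library you could download the document once and then pull out just the team names (a minimal sketch; it assumes the schedule response keeps its usual dates -> games -> teams layout):

import requests

url = "https://statsapi.web.nhl.com/api/v1/schedule?expand=schedule.linescore"
data = requests.get(url).json()   # the full document is still transferred

team_names = []
for date in data.get("dates", []):
    for game in date.get("games", []):
        teams = game.get("teams", {})
        # keep only the home and away team names, ignore everything else
        team_names.append(teams.get("home", {}).get("team", {}).get("name"))
        team_names.append(teams.get("away", {}).get("team", {}).get("name"))

print(team_names)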
I want to run JMeter against a SOAP service and save the request values of both successful and failed requests into separate CSV files. First of all, I would like to know if this is possible. I am using a CSV input file to generate the requests. I could see some posts here, but I don't know how to extract multiple specific values from the SOAP request depending on the status of the response. As I mentioned, I want to do this for both success and failure responses.
I tried adding an XPath2 Extractor and I can see the Debug Sampler printing the values, but I am not sure how to get them into separate CSV files. First of all, is that doable?
Update: I just realized that my original question was wrong. Saving the response won't help much to identify the failed records. My idea is to identify both failed and successful records, and then apply some business logic to the failed ones. Is there any way I can do that? I would like to get all those fields from the CSV file into both the success and failure output files. Thanks in advance.
If you want to save successful and unsuccessful responses into separate files you can use two Simple Data Writer listeners: one configured to record only successful responses, the other only failures.
If the file has to be CSV and you need only the variable values, the Flexible File Writer comes to mind. It doesn't allow you to filter successful and failed responses, however you can add an appropriate column to the CSV and filter it later on.
And finally you can just use the Sample Variables property, and the variable values will be stored in JMeter's .jtl results file. It's in CSV format and contains a column indicating whether each request was successful, so filtering it in Excel or an equivalent tool is quite easy.
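If you go the Sample Variables route, splitting the .jtl afterwards takes only a few lines of scripting. A rough Python sketch, assuming the default CSV output with its "success" column (adjust the file names to your setup):

import csv

with open("results.jtl", newline="") as src, \
     open("success.csv", "w", newline="") as ok, \
     open("failure.csv", "w", newline="") as ko:
    reader = csv.DictReader(src)
    ok_writer = csv.DictWriter(ok, fieldnames=reader.fieldnames)
    ko_writer = csv.DictWriter(ko, fieldnames=reader.fieldnames)
    ok_writer.writeheader()
    ko_writer.writeheader()
    for row in reader:
        # JMeter writes the success flag as the string "true" or "false".
        (ok_writer if row["success"] == "true" else ko_writer).writerow(row)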
I work on an application for fetching and downloading SharePoint data. For every folder in SharePoint I can get the list of all files inside a given folder by using the following SharePoint REST API endpoint:
/_api/web/GetFolderById('<folder_guid>')/Files
The expected size and GUID are provided for every file, so I can use them when I want to download the file. I then use the following SharePoint REST API endpoint to actually get the file content:
/_api/web/GetFileById('<file_guid>')/$value
From time to time when I download a file I get less data than expected: the size of the downloaded data is simply different from the value I obtained when listing the files' properties. However, when I try to get its content again it can be downloaded successfully (the size of the downloaded data equals the expected value), or I can get incomplete data again.
I verified that the first endpoint (the one used to get the properties of all files in the folder) returns the correct file size. The problem is in the call to the second one.
I can see a "Transfer-Encoding: chunked" header in the response. So when my HTTP client performs a chunked download and receives a zero-length chunk at some point, it has by definition reached the end of the body. It therefore looks like in some cases SharePoint either returns incomplete data or sends a zero-length chunk when it should not.
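To illustrate, the check on my side is roughly the following (a simplified Python sketch; expected_size comes from the folder listing above and authentication details are omitted):

import requests

def download_file(session, site_url, file_guid, expected_size, retries=3):
    # Simplified sketch: fetch the file content and compare the received
    # size with the size reported by the folder listing; retry on mismatch.
    url = f"{site_url}/_api/web/GetFileById('{file_guid}')/$value"
    for attempt in range(retries):
        resp = session.get(url, stream=True)
        data = b"".join(resp.iter_content(chunk_size=64 * 1024))
        if len(data) == expected_size:
            return data
        # sometimes the body comes back shorter than expected_size -> try again
    raise IOError("incomplete download after %d attempts" % retries)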
What can be the reason for such strange behavior? Is it a known issue?
We actually see this too, strange behaviour: many of the files are just small .aspx files, about 3-4 KB, and they are consistently smaller by 15% or more than the size shown in the file properties. We're also using the REST API and this is really frustrating. All those strange bugs in SharePoint Online are very annoying.
This is an interesting topic... are those files large, like over 1 GB? It would seem that chunked file download is not a supported approach in SharePoint Online. A better option is to use RPC. Please see these links for examples:
https://sharepoint.stackexchange.com/questions/184789/download-large-files-from-sharepoint-online
https://social.msdn.microsoft.com/Forums/office/en-US/03e55d41-1daf-46a5-b61d-2d80139123f4/download-large-files-using-rest?forum=sharepointdevelopment
https://piyushksingh.com/2016/08/15/download-large-files-from-sharepoint-online/
You could also check whether the MS Graph API works better for this case:
https://learn.microsoft.com/en-us/graph/api/driveitem-get-content?view=graph-rest-1.0&tabs=http
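With Python and requests, the Graph call is roughly the following (a sketch only; the drive id, item id and access token are placeholders you would obtain through your own auth flow):

import requests

def graph_download(token, drive_id, item_id):
    # Download a file's content through Microsoft Graph; all IDs and the
    # bearer token are placeholders.
    url = f"https://graph.microsoft.com/v1.0/drives/{drive_id}/items/{item_id}/content"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.content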
... I hope this is of some help.
We are writing a REST service to query for PDF files. The service consumer wants the metadata for those PDFs, not the actual PDFs. The metadata happens to be stored as an XML document, one XML document for each PDF resource. The resource and the resource's metadata are completely different files.
What should the query response look like?
Typically we use JSON for request/response bodies. Should the response body be a JSON object that contains a collection of URLs, where each URL links to a metadata document? This seems pretty clean, but it causes a lot of unnecessary network traffic because the consumer must send a GET request for each metadata document.
Should the XML of the metadata documents be embedded in the response body's JSON object? (yuck!)
Is there a solution that is both clean and efficient?
Based on some clarifying comments, I'm going to suggest that you don't write a "RESTful" API. You don't need one. You don't have objects that you need to interact with in any complex way. You don't have state that needs to be affected (REST means Representational State Transfer).
You just need an HTTP API. Just return the XML file. You can also provide an endpoint to get multiple XML documents ZIPed, if you want.
So do something like this:
/api/host/123 - download the PDF file (Content-Type: application/pdf) - You didn't say if you already have an endpoint for PDFs, but if you did want one, this is how I would structure it.
/api/host/123/metadata - download the XML metadata (Content-Type: text/xml)
/api/host/bulk_metadata - download a ZIP of the metadata for file IDs listed in a POST parameter (Content-Type: application/zip)
Use Content-Disposition: attachment; filename="{filename}.{pdf|xml|zip}" to tell browsers to download the content to disk rather than displaying it inline.
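A rough sketch of what the metadata and bulk endpoints could look like (Python/Flask is used here purely for illustration; the routes mirror the ones above, while load_metadata_xml and the "ids" POST field are made-up placeholders):

import io
import zipfile
from flask import Flask, Response, request, send_file

app = Flask(__name__)

def load_metadata_xml(file_id):
    # Placeholder: in the real service this would read the stored XML document.
    return f"<metadata><id>{file_id}</id></metadata>"

@app.route("/api/host/<file_id>/metadata")
def metadata(file_id):
    # Return the XML metadata and tell browsers to download it.
    return Response(
        load_metadata_xml(file_id),
        mimetype="text/xml",
        headers={"Content-Disposition": f'attachment; filename="{file_id}.xml"'},
    )

@app.route("/api/host/bulk_metadata", methods=["POST"])
def bulk_metadata():
    # Zip the metadata documents for the file IDs posted in the JSON body.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for file_id in request.json["ids"]:
            zf.writestr(f"{file_id}.xml", load_metadata_xml(file_id))
    buf.seek(0)
    # download_name requires Flask 2.x
    return send_file(buf, mimetype="application/zip",
                     as_attachment=True, download_name="metadata.zip")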
I am trying to utilise Django REST APIs to insert data into the database, instead of writing to it directly. I've been able to read JSON data using the tRESTClient component, but I am not too sure about the insertion/POST. Could someone point me to the components (and how to wire them together) that I should use?
The current job that I have is mostly:
Read data from raw file -> tMap -> DB
and I wish to do something like:
Read data from raw file -> tMap -> (pass on data to REST endpoint via POST)
I used the tRESTClient component after my tMap and I could see the records getting inserted into the DB, but all of them were without any data. Strangely, nowhere was I asked to specify the JSON tree. The number of records getting inserted is equal to the number of rows read from the raw file, so at least something is right. But I couldn't locate the menu/options to specify which data element read from the raw file should map to which JSON element.
How do I specify the data to JSON mapping?
PS: I realise that this might not be the most efficient way to ingest data, but that's what the business wants, since it brings in an additional layer of control.
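For reference, outside Talend the mapping I am after would look roughly like this (a Python sketch; the endpoint URL and field names are made up):

import csv
import requests

with open("raw_input.csv", newline="") as src:
    for row in csv.DictReader(src):
        # Each raw-file column is mapped to a JSON element in the POST body.
        payload = {
            "name": row["name"],
            "quantity": int(row["qty"]),
        }
        resp = requests.post("http://localhost:8000/api/items/", json=payload)
        resp.raise_for_status()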
I am new to Dropwizard and REST.
My sample application is an order viewing system. Currently I am working on a feature where the UI page consists of a set of order search criteria, a search button, and a link to download the search result as CSV. The download link is displayed only after a successful search. The application has to write the search result to a CSV file, and the file location returned will be used to download the file.
I need help in organising the endpoints for this.
Initially, I thought of an endpoint GET /orders - text/JSON, with the search criteria passed in as query params. But since I will actually be creating a CSV file for every GET request, I am wondering if I am violating the REST constraint that a GET should be safe (a resource should not be created). Or, since the actual resource is the order and not the CSV, is it OK to have the endpoint as a GET?
Or do I need multiple endpoints, adhering to REST constraints and conventions and interacting with each other, to produce the required result?
Like:
1. POST /orders/csv - text/json (file name): creates the CSV file of orders and returns the file name as JSON.
2. GET /orders/csv/<file_name>: gets the file to download.
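At the HTTP level the interaction I have in mind would look roughly like this (a Python sketch of the client side only; the URL, criteria and field names are made up):

import requests

base = "http://localhost:8080"
search = {"status": "SHIPPED", "customer": "ACME"}   # made-up search criteria

# 1. POST creates the CSV on the server and returns its file name as JSON.
created = requests.post(f"{base}/orders/csv", json=search).json()
file_name = created["fileName"]

# 2. GET downloads the generated file.
csv_bytes = requests.get(f"{base}/orders/csv/{file_name}").content
with open(file_name, "wb") as out:
    out.write(csv_bytes)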
Many thanks for your help.