Koha RESTful API

I've been looking around the internet for information on the Koha ILS RESTful API, but I haven't found anything concrete. There is this link, which talks about its HTTP API: http://wiki.koha-community.org/wiki/Koha_/svc/_HTTP_API but there are no examples, and I'm quite confused by the MARCXML format required.
What I want to do is use this API to create biblio records in a remote Koha ILS system. If I understand correctly, I can use these services to create records (probably with a JSON-to-MARC conversion tool), but will I also be able to upload PDF files for each record in Base64 encoding? It doesn't look like this is possible with this API, although I'm not really sure.

The HTTP API available in Koha is a well-established protocol, called SRU, for searching library catalogs. This protocol is meant only for searching, not for updating records.
Secondly, even though SRU 2.0 provides an option for transmitting records in JSON format, most implementations do not support it yet.
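For a sense of what SRU searching looks like in practice, here is a minimal Python sketch of a searchRetrieve request returning MARCXML. The host, port, and database path are assumptions; Koha's SRU service is configured per installation:

    # Minimal SRU searchRetrieve request; host, port, and path are assumptions.
    import requests

    params = {
        "version": "1.1",                # SRU protocol version
        "operation": "searchRetrieve",
        "query": "linux",                # CQL query
        "recordSchema": "marcxml",       # ask for the records as MARCXML
        "maximumRecords": "10",
    }
    resp = requests.get("http://koha.example.org:9998/biblios", params=params)
    resp.raise_for_status()
    print(resp.text)                     # SRU envelope containing MARCXML records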
Coming back to your use case: Koha cannot store PDF documents. It is a process automation tool for a library's physical collections, and it deals only with metadata records. For storing digital documents you should look at document management solutions such as DSpace, or the smaller and simpler Omeka. DSpace provides its own REST API for searching, and it also supports the SWORD protocol for uploading documents.
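If you do end up in front of DSpace, a SWORD (v1) deposit is essentially an HTTP POST of a packaged item to a collection URL. A rough sketch, where the service URL, credentials, and packaging format are all assumptions to be checked against your repository's SWORD service document:

    # Rough SWORD v1 deposit sketch; URL, credentials, and packaging are assumptions.
    import requests

    with open("item.zip", "rb") as package:      # zip with the PDF plus METS metadata
        resp = requests.post(
            "https://dspace.example.org/sword/deposit/123456789/1",
            data=package,
            headers={
                "Content-Type": "application/zip",
                "X-Packaging": "http://purl.org/net/sword-types/METSDSpaceSIP",
            },
            auth=("depositor@example.org", "password"),
        )
    print(resp.status_code)                      # 201 Created with an Atom entry on success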

Related

How can I use my own API from other platforms?

I made a JSON API using this tutorial: https://www.django-rest-framework.org/tutorial/quickstart/
All the articles I read teach creating and using an API within its own platform; what I need is to consume the API I publish on the web from other platforms. I made my API, but I have no idea how to call it from other platforms.
So how can I use my own API in my C# Windows Forms application or my Flutter project?
Any link, guide, etc. would be appreciated.
First of all, you should be clear about why you need an API. If you need to transfer data from one system to another, pick a mechanism that you know you can operate on both sides.
JSON and XML are just ways of representing data. First think about what you need and how you can transport that data between systems; after that, the implementation should be clear.
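The consuming side is the same few lines on every platform: send an HTTP request to your API's URL and parse the JSON that comes back. A minimal sketch in Python, where the URL and credentials are placeholders; the same pattern maps directly to C#'s HttpClient or Flutter's http package:

    # Consuming a remote JSON API; the URL and credentials are placeholders.
    import requests

    resp = requests.get(
        "https://myapp.example.com/users/",   # your deployed DRF endpoint
        auth=("admin", "password123"),        # the quickstart uses basic/session auth
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    for user in resp.json()["results"]:       # the quickstart paginates its responses
        print(user["username"], user["email"])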

Creating Metadata Catalog in Marklogic

I am trying to combine data from multiple sources, such as RDBMSs, XML files, and web services, using MarkLogic. Judging from the MarkLogic documentation on the Metadata Catalog (https://www.marklogic.com/solutions/metadata-catalog/), Data Virtualization (https://www.marklogic.com/solutions/data-virtualization/), and Data Unification, this is very much possible. But I have not been able to find any documentation describing how exactly to go about it, or which tools to use to achieve it.
Looking for some pointers.
As the second image in the data-virtualization link shows, you need to ingest all data into MarkLogic databases. MarkLogic can then be put in between to become the single entry point for end user applications that need access to that data.
The first link describes the capabilities of MarkLogic to hold all kinds of data. It partly does so by storing data as-is, partly by extracting text and metadata for searching, and partly by conversion (if your needs go beyond what the original format allows).
MarkLogic provides the general-purpose MarkLogic Content Pump (MLCP) tool for this purpose. It can ingest zipped or unzipped files, applying transformations if necessary. If you first need to retrieve your data from a different database, getting it out may take a bit more work. http://developer.marklogic.com holds tutorials, blogs, and tools that should help you get going. Searching the MarkLogic mailing list through http://marklogic.markmail.org/ can provide answers as well.
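Next to MLCP, the REST API that MarkLogic ships with can also be used for ingestion. A minimal sketch, where the host, port, and credentials are assumptions for illustration:

    # Inserting one JSON document through MarkLogic's /v1/documents endpoint.
    # Host, port, and credentials are assumptions.
    import requests
    from requests.auth import HTTPDigestAuth

    doc = '{"source": "rdbms", "customer": {"id": 42, "name": "Acme"}}'
    resp = requests.put(
        "http://localhost:8000/v1/documents",
        params={"uri": "/ingest/customers/42.json"},  # URI of the document in the database
        data=doc,
        headers={"Content-Type": "application/json"},
        auth=HTTPDigestAuth("admin", "admin"),        # MarkLogic defaults to digest auth
    )
    print(resp.status_code)                           # 201 on create, 204 on update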
HTH!
Combining a lot of data is a very broad topic. Can you describe a couple of the types of data you'd like to integrate, and the services or queries you would like to build on that data?

Document versioning with MarkLogic REST API

We're currently using MarkLogic's dls functions to handle document versioning, and are trying to switch over to use the REST API. The document endpoint doesn't use versioning by default, and I can't figure out a way to get it to. I'm referring to the dls functions for keeping multiple document versions, btw, not the new "content versioning" the REST API documentation mentions. In fact, the only reference to document versions in the REST API docs seems to be a line saying that content versioning isn't the same thing.
The only solution we've been able to come up with is to write a custom endpoint that duplicates everything the existing document endpoint's PUT does, plus document management. I'd rather avoid that if possible, especially when looking at MarkLogic 7's partial document updates. We're using MarkLogic 6 now, if it matters, but it doesn't look like 7 has any new features related to this.
Is there a way to do this using MarkLogic's existing endpoints?
You can write a REST API extension that automates the DLS operations. See http://docs.marklogic.com/guide/rest-dev/extensions. You will still end up duplicating some of the same logic, but it will plug into the existing endpoints.
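Once deployed, an extension like that is exposed under /v1/resources/{name}, so clients keep talking to the same REST server. A hedged sketch of what calling a hypothetical extension named "dls" might look like from a client (the extension itself would wrap the update in dls:document-checkout/dls:document-checkin):

    # Calling a hypothetical REST extension named "dls"; the extension name,
    # port, and credentials are assumptions.
    import requests
    from requests.auth import HTTPDigestAuth

    resp = requests.put(
        "http://localhost:8003/v1/resources/dls",
        params={"rs:uri": "/docs/report.xml"},   # custom parameters use the rs: prefix
        data="<report><title>Q3</title></report>",
        headers={"Content-Type": "application/xml"},
        auth=HTTPDigestAuth("rest-writer", "password"),
    )
    print(resp.status_code)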
Yes, MarkLogic 7 added content versioning to make refreshing of caches easier. Unfortunately, the DLS library hasn't been integrated into the REST API so far. You can file a feature request with support if you like.
In the meantime, the best suggestion I can give is to use a separate route for document updates that go through DLS (your current route, or a limited custom endpoint that supports only the DLS functions you need for updates), and to do everything else, as far as possible, through the existing REST API. You can look at this other Stack Overflow question to see how to limit searches to the latest document versions:
Marklogic REST API search for latest document version
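The gist of that question, if I recall correctly, is that DLS keeps the most recent version of a managed document in the "latest" collection, so reads can stay on the standard endpoints by constraining searches to that collection. A sketch with host and credentials assumed:

    # Searching only the latest DLS versions via the "latest" collection.
    # Host and credentials are assumptions.
    import requests
    from requests.auth import HTTPDigestAuth

    resp = requests.get(
        "http://localhost:8003/v1/search",
        params={"q": "quarterly report", "collection": "latest", "format": "json"},
        auth=HTTPDigestAuth("rest-reader", "password"),
    )
    print(resp.json()["total"], "matching latest versions")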
HTH!
A member of the MarkLogic team has put together a REST extension that provides better DLS support in the REST API. Hopefully that makes working with DLS over the MarkLogic REST API a lot easier:
https://github.com/sanjuthomas/marklogic-dls-rest-extension
HTH!

StockTwits API Streaming and Search Used Like Twitter Streaming

The StockTwits API documentation describes streams in a way that sounds like static search results. For example, streams/symbol:
    This allows an API application to search for a symbol or user. 30 Results will be a combined list of symbols and users.
This seems similar to search/symbols:
    This allows an API application to search for a symbol directly. 30 Results will return only ticker symbols.
Other than the fact that search excludes users, I don't see the difference.
In contrast, the Twitter API provides methods to request a continuous stream of tweets, which I have gotten to provide tens of thousands of tweets in a few days.
Is it possible to have StockTwits pump messages continuously, the way Twitter does?
If so, what is required? Since StockTwits streaming looks like searching to me, the only option I have seen is to submit repeated search requests, but that would exhaust the rate limit.
I prefer C#, but I am glad to study answers in other languages, such as PHP.
This is a static search, for symbols only or for symbols and users combined. It isn't a streaming endpoint for filtering content; it is strictly for finding a symbol or a user in order to go directly to their stream.
We are looking into offering streaming endpoints and search would be part of this offering.
You may be interested in using streamdata.io, which lets you stream any API. We have already implemented a StockTwits demo, which can be found here, and explanations can be found in this blog post.
I think it's quite easy to transpose what has been done on Android to the C# world. All you need is an EventSource library and a JSON-Patch library.
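As a rough sketch of that pattern (in Python for brevity; the proxy URL, token, and the sseclient-py/jsonpatch libraries are assumptions standing in for whatever EventSource and JSON-Patch libraries your platform offers):

    # Consuming an SSE feed in the streamdata.io style: an initial snapshot
    # event followed by JSON-Patch delta events. URL and token are placeholders.
    import json

    import jsonpatch                  # pip install jsonpatch
    import requests
    import sseclient                  # pip install sseclient-py

    url = ("https://streamdata.motwin.net/"
           "https://api.stocktwits.com/api/2/streams/symbol/AAPL.json"
           "?X-Sd-Token=YOUR_TOKEN")
    response = requests.get(url, stream=True, headers={"Accept": "text/event-stream"})
    client = sseclient.SSEClient(response)

    snapshot = None
    for event in client.events():
        if event.event == "data":         # first event carries the full snapshot
            snapshot = json.loads(event.data)
        elif event.event == "patch":      # later events carry JSON-Patch deltas
            snapshot = jsonpatch.apply_patch(snapshot, json.loads(event.data))
        if snapshot:
            print(len(snapshot.get("messages", [])), "messages in view")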

Programmatic export/dump/mass data retrieval (BaaS)

Does anyone have experience with programmatic exports of data from BaaS providers such as parse.com or StackMob?
I am aware that both providers (as far as I can tell from the marketing talk) offer a REST API that allows queries against the database, to be used not only by mobile clients but also by custom web apps, for example.
I am also aware that both providers offer a manual export of data (parse.com via their web interface, StackMob via support).
But let's say I would like to dump all data nightly, so that I can import it into a reporting system, or simply to keep an up-to-date backup.
In this case, I would need a programmatic way to export/replicate the data stored in the backend. Manual exports are not an option for obvious reasons.
The REST APIs offered, however, seem to be designed for specific queries, not for mass reads (performance?). And then there is pricing: I assume none of the providers would be happy about a nightly X-gigabyte data export via their REST API, so there will probably be a price tag attached.
I just couldn't find any specific information on this topic, so I was wondering whether anyone else has already gone through this. Also, any suggestions for StackMob/Parse alternatives are welcome, especially with regard to data export.
Cheers, Alex
Did you see the section of the Parse REST API documentation on batch operations? Batch operations reduce the number of API calls needed to grab data, so you are not spending one call per row you retrieve. Keep in mind that there is still a limit on results per query (the default is 100, but you can raise it to a maximum of 1000), which means you can still pull down at most 1000 rows per API call.
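To dump a whole class, you would page through it with limit and skip. A sketch against the classic Parse REST API, where the application ID, REST key, and class name are placeholders (note that classic Parse capped skip, so very large classes needed createdAt-based paging instead):

    # Paging through a Parse class 1000 rows at a time; keys and class name
    # are placeholders.
    import requests

    headers = {
        "X-Parse-Application-Id": "APP_ID",
        "X-Parse-REST-API-Key": "REST_KEY",
    }
    rows, skip = [], 0
    while True:
        resp = requests.get(
            "https://api.parse.com/1/classes/GameScore",
            headers=headers,
            params={"limit": 1000, "skip": skip, "order": "createdAt"},
        )
        batch = resp.json()["results"]
        rows.extend(batch)
        if len(batch) < 1000:              # a short page means we reached the end
            break
        skip += 1000
    print(len(rows), "rows exported")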
I can't comment on StackMob because I haven't used it. At my present job, we are using Parse and we wrote a C# app which compares the data in a Parse class with a SQL table and pulls down any changes.