How can I content-encrypt FHIR/REST?

We have a requirement to transfer documents (HL7/FHIR and others) over HTTP (REST), but where the network architecture involves multiple hops through proxies that unpack and repack the TLS. So the TLS-encryption doesn't help us much, since we break the law if the proxies in-between can see the data. (This is all within a secure semi-private network, but that doesn't help us because different organizations own the different boxes and we're required to have end-to-end encryption).
So we need payload encryption of the documents transferred over HTTP/REST. The common way to do it here is to use SOAP and encrypt the envelope.
What is the best mechanism for content/payload-encrypting REST-transported data? Is there a standard or open specification for it? In health or some other industry?
I guess one feasible way could be to add a special content type that requests encrypted content (S/MIME-based?) to the HTTP request header. A FHIR/REST-server should then be able to understand from the Accept-header of the HTTP-request that the content must be encrypted before responding. As long as the URL itself isn't sensitive, I guess this should work?
I guess also that maybe even the public key for encrypting the content could be passed in a special HTTP request header, and the server could use this for encryption? Or the keys could be shared in setting up the system?
Is this feasible and an ok approach? Has payload-encryption been discussed in the HL7-FHIR work?

It hasn't been discussed significantly. One mechanism would be to use the Binary resource. Simply zip and encrypt the Bundle/resource you want to transfer and then base-64 encode it as a Binary resource. (I suggest zipping because that makes it easy to set the mime type for the Binary.) That resource would then be the focus of a MessageHeader that would expose the necessary metadata to ensure the content is appropriately delivered across the multiple hops.
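A rough sketch of that flow in Python: zip the Bundle, encrypt it, and base64-encode the ciphertext into a Binary resource. The `encrypt` hook below is a placeholder for a real scheme (e.g. AES-GCM from a crypto library), and the Binary structure is intentionally minimal, not a complete FHIR resource.

```python
import base64
import io
import json
import zipfile

def bundle_to_binary(bundle: dict, encrypt=lambda data: data) -> dict:
    """Zip a FHIR Bundle, encrypt it (placeholder hook), and wrap the
    result in a simplified Binary resource with base64-encoded data."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("bundle.json", json.dumps(bundle))
    ciphertext = encrypt(buf.getvalue())  # plug in real encryption here
    return {
        "resourceType": "Binary",
        "contentType": "application/zip",  # easy mime type, as suggested
        "data": base64.b64encode(ciphertext).decode("ascii"),
    }

binary = bundle_to_binary({"resourceType": "Bundle", "entry": []})
```

The receiving side reverses the steps: base64-decode, decrypt, unzip, parse.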
This might be a good discussion to start up on the HL7 security list server as they will likely have some additional thoughts and recommendations. (And can also ensure that wherever things land in terms of final recommendation, that that gets documented as part of the spec :>)

Related

How to inform clients that the returned representation of a HTTP resource is deprecated?

If I had a resource at a certain URI, like https://api.example.com/things/my-things, and so far this resource may be served in the following representations:
application/xml
application/xhtml+xml
text/xml
text/html
How SHOULD the server inform the clients asking for application/xml, application/xhtml+xml and text/xml that those are going to stop being supported as representations of the resource? Not right now, so a 406 Not Acceptable is not adequate.
I found an Internet-Draft, The Deprecation HTTP Header Field, but it is an Internet-Draft, not an RFC, and I am not sure whether this would be a valid implementation of the specification, or whether it would mean that the resource/URI itself is the one being deprecated.
Does anyone know an authoritative way to express that a representation of a resource is being deprecated and is going to reach its sunset, while the URI itself remains available with a different set of representations?
Ultimately, the information that you want to inform the client of is one of API or application policy. There really isn't any standard way to convey this information via HTTP; at least, not today. Unless your clients are savvy, even if you did provide this information, they'd likely ignore it and you're back to 406 or 415.
The best standard way I can think of to negotiate this would require the client to send HEAD first with Accept or Content-Type, and then the server responds with OK if allowed or the appropriate 406 or 415. HTTP caching and/or other techniques can be used to minimize the number of negotiations, but in the worst-case scenario there are always two requests.
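The server-side decision such a HEAD pre-flight exercises can be sketched as a small pure function; the supported set and the simplified Accept parsing (q-values and wildcard subtypes ignored) are assumptions for illustration:

```python
# Media types the server still supports, per the scenario in the question
SUPPORTED = {"application/xhtml+xml", "text/html"}

def negotiate(accept_header: str) -> int:
    """Status code a HEAD pre-flight would observe: 200 if any acceptable
    media type is supported, otherwise 406 Not Acceptable.
    Simplified: q-values and wildcard subtypes are not handled."""
    offered = {part.split(";")[0].strip() for part in accept_header.split(",")}
    if "*/*" in offered or offered & SUPPORTED:
        return 200
    return 406
```

A client can cache the outcome per media type to avoid repeating the pre-flight on every request.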
The next best way would arguably be through policy enforced with API versioning. Although the differences in version would only be by representation, all facets are clearly separated. If API version 1.0 supports application/xml it should stay that way - forever. This provides:
Stability and predictability for clients
Should allow you to easily identify clients on the old API (and possibly notify them)
Keeps things simple on the server
There are also a few ways to loosely advertise that a particular API version is being deprecated. You could use standard headers such as Pragma or Warning, or you can use something like api-deprecated-versions: 1.0, 1.1. This approach still requires a client to pay attention to these response headers and may not necessarily indicate when the API will transition from deprecated to completely sunset. Most mature server API policies have a deprecation period of 6+ months, but this is by no means a hard and fast rule; you'd have to establish that with your clients. What this approach can do is enable client-owned telemetry to detect that an API (and/or version) they are using is deprecated. This should alert client developers to determine the next course of action; for example, upgrading their client.
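Client-side telemetry for such a header is only a few lines; note that api-deprecated-versions is, as above, a custom non-standard header, so the name is an assumption:

```python
def deprecated_versions(headers: dict) -> set:
    """Parse the custom api-deprecated-versions response header so that
    client-owned telemetry can flag usage of a deprecated version."""
    raw = headers.get("api-deprecated-versions", "")
    return {v.strip() for v in raw.split(",") if v.strip()}

def should_warn(headers: dict, client_version: str) -> bool:
    # Hook for the client to log, emit a metric, or notify developers.
    return client_version in deprecated_versions(headers)
```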
Depending on your versioning semantics, if they even exist, you likely can achieve a similar albeit more optimal approach using OPTIONS. There isn't an Allow-Content-Type complement to Allow, but you could certainly define a custom one. You might also simply report api-supported-versions and api-deprecated-versions this way. This would enable tooling and clients to select or detect an appropriate endpoint and/or media type. A client might use this approach each time its application starts up to detect and record whether the endpoint they are using is still up-to-date.
A final suggestion could be to advertise this information by way of an OpenAPI (formerly Swagger) document. Such a document would indicate the available URLs, parameters, and media types. A client could request the appropriate document to determine whether their API and expected media type are supported.
Hopefully that gives you a few ideas. First, you need to define a policy and decide how that will be conveyed. You'll then need to educate your clients on how to take advantage of that information and capability. If they opt not to honor that information, then caveat emptor - they'll just get 406, 415, or some other appropriate error response.
There is no authoritative (normative) way of doing this now.
When I first sought to answer this question, it was in my head to suggest adding a header, and lo, that's been proposed.
The Deprecation HTTP Header Field you refer to appears to become that normative way of doing that.
It's also the simplest way to inform a client without the added complexity of other options. This way of informing the client means the client can 100% expect the API to behave the way it always has during the deprecation period, which is often critical.
Often resource, representation, and "representation of resource" can mean the same or different things depending on who you talk to. I would say that pragmatically from the client's perspective, they're the same thing, and so a header is a reasonable method of informing about deprecation.

Is there standard way of making multiple API calls combined into one HTTP request?

While designing REST APIs I from time to time face the challenge of batch operations (e.g. deleting or updating many entities at once) to reduce the overhead of many TCP client connections. In each particular situation the problem is usually solved by adding a custom API method for the specific operation (e.g. POST /files/batchDelete, which accepts the ids in the request body), which doesn't look pretty from the point of view of REST API design principles but does the job.
But a general solution to the problem is still desirable to me. Recently I found the Google Cloud Storage JSON API batching documentation, which looks to me like a fairly general solution. I mean a similar format could be used for any HTTP API, not just Google Cloud Storage. So my question is: does anybody know of any kind of general standard (a standard or draft, a guideline, a community effort, or the like) for making multiple API calls combined into one HTTP request?
I'm aware of the capabilities of HTTP/2, which include using a single TCP connection for multiple HTTP requests, but my question is addressed to the application level. In my opinion that still makes sense because, despite the ability to use HTTP/2, solving it at the application level seems like the only way to guarantee the behavior for any client, including HTTP/1 clients, which currently represent the most used version of HTTP.
TL;DR
Neither REST nor HTTP is ideal for batch operations.
Caching, which is one of REST's constraints and is not optional but mandatory, usually prevents batch processing in some form.
It might be beneficial not to expose the data to update or remove in batch as their own resources but as data elements within a single resource, like a data table in an HTML page. There, updating or removing all or part of the entries should be straightforward.
If the system in general is write-intensive, it is probably better to think of other solutions, such as exposing the DB directly to those clients, to spare a further level of indirection and complexity.
Utilization of caching may prevent a lot of workload on the server and even spare unnecessary connections.
To start with, neither REST nor HTTP is ideal for batch operations. As Jim Webber pointed out, the application domain of HTTP is the transfer of documents over the Web. This is what HTTP does and this is what it is good at. However, any business rules we conclude are just a side effect of the document management, and we have to come up with solutions to turn these document-management side effects into something useful.
As REST is just a generalization of the concepts used in the browsable Web, it is no miracle that the same concepts that apply to Web development also apply to REST development in some form. Thereby a question like how something should be done in REST usually boils down to answering how it should be done on the Web.
As mentioned before, HTTP isn't ideal in terms of batch processing actions. Sure, a GET request may retrieve multiple results, though in reality you obtain one response containing links to further resources. The creation of resources has, according to the HTTP specification, to be indicated with a Location header that points to the newly created resource. POST is defined as an all-purpose method that allows performing tasks according to server-specific semantics. So you could basically use it to create multiple resources at once. However, the HTTP spec clearly lacks support for indicating the creation of multiple resources at once, as the Location header may only appear once per response and may only define one URI. So how can a server indicate the creation of multiple resources to the client?
A further indication that HTTP isn't ideal for batch processing is that a URI must reference a single resource. That resource may change over time, though the URI can't ever point to multiple resources at once. The URI itself is, more or less, used as a key by caches which store a cacheable response representation for that URI. As a URI may only ever reference one single resource, a cache will also only ever store the representation of one resource for that URI. A cache will invalidate a stored representation for a URI if an unsafe operation is performed on that URI. In the case of a DELETE operation, which is by nature unsafe, the representation for the URI the DELETE is performed on will be removed.
If you now "redirect" the DELETE operation to remove multiple backing resources at once, how should a cache take notice of that? It only operates on the URI invoked. Hence even when you delete multiple resources in one go via DELETE, a cache might still serve clients with outdated information, as it simply hasn't taken notice of the removal yet and its freshness value would still indicate a fresh-enough state. Unless you disable caching by default, which somehow violates one of REST's constraints, or reduce the time period a representation is considered fresh to a very low value, clients will probably get served outdated information. You could of course perform an unsafe operation on each of these URIs to "clear" the cache, though in that case you could have invoked the DELETE operation on each resource you wanted to batch-delete individually to start with.
It gets a bit easier though if the batch of data you want to remove is not explicitly captured via their own resources but as data of a single resource. Think of a data-table on a Web page where you have certain form-elements, such as a checkbox you can click on to mark an entry as delete candidate and then after invoking the submit button send the respective selected elements to the server which performs the removal of these items. Here only the state of one resource is updated and thus a simple POST, PUT or even PATCH operation can be performed on that resource URI. This also goes well with caching as outlined before as only one resource has to be altered, which through the usage of unsafe operations on that URI will automatically lead to an invalidation of any stored representation for the given URI.
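For illustration, here is roughly what such a form submission might produce on the wire and how the server could recover the marked entries; the action and id field names are hypothetical:

```python
from urllib.parse import urlencode, parse_qs

# Rows the user checked in the hypothetical data table
selected_ids = ["42", "57", "91"]

# Body an HTML form would POST as application/x-www-form-urlencoded
body = urlencode([("action", "delete"), *(("id", i) for i in selected_ids)])

# Server side: a single unsafe operation on one resource URI; the cache
# entry for that URI is invalidated automatically, as outlined above.
form = parse_qs(body)
to_remove = form.get("id", [])
```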
The above-mentioned usage of form elements to mark certain elements for removal depends, however, on the media type issued. In the case of HTML, its forms section specifies the available components and their affordances. An affordance is the knowledge of what you can and should do with certain objects. E.g. a button or link may want to be pushed, a text field may expect numeric or alphanumeric input which further may be length-limited, and so on. Other media types, such as hal-forms, halform or ion, attempt to provide form representations and components for a JSON-based notation; however, support for such media types is still quite limited.
As one of your concerns is the number of client connections to your service, I assume you have a write-intensive scenario, as in read-intensive cases caching would probably take away a good chunk of the load from your server. E.g. the BBC once reported that they could reduce the load on their servers drastically just by introducing a one-minute caching interval for recently requested resources. This mainly affected their start page and the linked articles, as people clicked on the latest news more often than on old news. On receiving a couple of thousand, if not hundreds of thousands of, requests per minute they could, as mentioned before, reduce the number of requests actually reaching the server significantly and therefore take a huge load off their servers.
Write-intensive use cases, however, can't benefit from caching as much as read-intensive ones, as the cache would get invalidated quite often and the actual request would be forwarded to the server for processing. If the API is more or less used to perform CRUD operations, as so many "REST" APIs do in reality, it is questionable whether it wouldn't be preferable to expose the database directly to the clients. Almost all modern database vendors ship with sophisticated user-rights management options and allow creating views that can be exposed to certain users. The "REST API" on top of it basically just adds a further level of indirection and complexity in such a case. By exposing the DB directly, performing batch updates or deletions shouldn't be an issue at all, as support for such operations should already be built into the DB layer through the respective query languages.
In regards to the number of connections clients create: HTTP from 1.0 on allows the reuse of connections via the Connection: keep-alive header directive. In HTTP/1.1 persistent connections are used by default unless explicitly requested to close via the respective Connection: close header directive. HTTP/2 introduced full-duplex connections that allow many streams, and therefore requests, to reuse the same connection at the same time. This is more or less a fix for the connection limitation suggested in RFC 2616, which plenty of Web developers avoided by using CDNs and similar techniques. Currently most implementations use a maximum limit of 100 streams, and therefore simultaneous downloads, via a single connection AFAIK.
Usually opening and closing a connection takes a bit of time and server resources, and the more open connections a server has to deal with, the more the system may suffer. Though open connections with hardly any traffic aren't a big issue for most servers. While connection creation was usually considered the costly part, through the usage of persistent connections that cost has now shifted toward the number of requests issued, hence the desire to send batch requests, which HTTP is not really made for. Again, as mentioned throughout this post, through smart utilization of caching plenty of requests may never reach the server at all, which is probably one of the best optimization strategies to reduce the number of simultaneous requests. Probably the best advice in such a case is to look at what kinds of resources are requested frequently, which requests take up a lot of processing capacity, and which ones can easily be answered by utilizing caching options.
reduce overhead of many tcp client connections
If this is the crux of the issue, the easiest way to solve this is to switch to HTTP/2
In a way, HTTP/2 does exactly what you want. You open 1 connection, and using that connection you can send many HTTP requests in parallel. Unlike batching in a single HTTP request, it's mostly transparent for clients, and responses and requests can be processed out of order.
Ultimately batching multiple operations in a single HTTP request is always a network hack.
HTTP/2 is widely available. If HTTP/1.1 is still the most used version (this might be true, but the gap is closing), this has more to do with servers not yet being set up for it, not clients.

What to use: PATCH or POST?

I had a quite long debate with my colleague about the proper HTTP verb to be used for one of our operations that changes the STATE of a resource.
Suppose we have a resource called WakeUpLan that tries to send an event to a system connected in a network. This is a kind of generic state machine:
{
  id: 1,
  retries: {
    idle: 5,     // after 5 retries it went to FAILED state
    wakeup: 0,
    process: 0,
    shutdown: 0
  },
  status: 'FAILED',
  // other attributes
}
IDLE --> WAKEUP ---> PROCESS ---> SHUTDOWN | ----> [FAILED]
Every state has a retry mechanism, e.g. in the IDLE case it tries x times to transition to WAKEUP, and after x retries it dies out and goes to the FAILED state.
All the FAILED resources can again be manually restarted or retried one more time from some interface.
So, we have a confusion regarding which HTTP verb best suits in this case.
In my opinion, it is just a change in status and resetting retry count to 0, so that our retry mechanism can catch this and try in next iteration.
so it should be a pure PATCH request
PATCH retry/{id}
{state: 'IDLE'}
But my colleague opposes it to be a POST request as this is a pure action and should be treated as POST.
I am not convinced because we are not creating any new resource but just updating an existing resource that our REST server already knows about.
I would like to know and corrected if I am wrong here.
Any suggestions/advice are welcome.
Thanks in advance.
Any suggestions/advice are welcome.
The reference implementation of the REST architectural style is the world wide web. The world wide web is built on a foundation of URI, HTTP, and HTML -- and HTML form processing is limited to GET and POST.
So POST must be an acceptable answer. After all, the web was catastrophically successful.
PATCH, like PUT, allows you to communicate changes to a representation of a resource. The semantics are more specific than POST, which allows the client to better take advantage. So if all you are doing is creating a message that describes local edits to the representation of the resource, then PATCH is a fine choice.
Don't overlook the possibilities of PUT -- if the size of the complete representation of the resource is of roughly the same order as the representation of your PATCH document, then using PUT may be a better choice, because of the idempotent semantics.
I am not convinced because we are not creating any new resource but just updating an existing resource that our REST server already knows about.
POST is much more general than "create a new resource". Historically, there has been a lot of confusion around this point (the language in the early HTTP specifications didn't help).
HTTP Basics
PATCH
What is PATCH actually? PATCH is an HTTP method defined in RFC 5789 that is similar to patching code in software engineering, where a change to one or multiple sources is applied in order to transform the target resource into a desired outcome. Thereby a client calculates a set of instructions which the target system has to apply fully in order to generate the requested outcome. These instructions are usually called a "patch"; in the words of RFC 5789, such a set of instructions is called a "patch document".
RFC 5789 does not define in which representation such a patch document needs to be transferred from one system to the other. For JSON-based representations application/json-patch+json (RFC 6902) can be used, which contains certain instructions like add, replace, move, copy, ... that are more or less clear on what they are doing; the RFC also describes each of the available instructions further.
A further JSON-based, but totally different, take on how to inform a system about how to change a resource (or document) is captured in application/merge-patch+json (RFC 7386). In contrast to json-patch, this media type defines a set of default rules for applying a received JSON-based representation to the actual target resource. Here, a single JSON representation of the modified state is sent to the server that only contains the fields and objects that should be changed. The default rules define that fields to be removed from the target resource need to be nullified in the request, while fields that should change need to contain the new value to apply. Fields that remain unchanged can be left out of the request.
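The default rules of RFC 7386 are compact enough to sketch in full; a minimal Python rendition (null removes a field, nested objects merge recursively, everything else replaces the target value):

```python
def merge_patch(target, patch):
    """Apply an application/merge-patch+json document per RFC 7386."""
    if not isinstance(patch, dict):
        return patch  # non-object patch replaces the target wholesale
    if not isinstance(target, dict):
        target = {}  # patching a non-object starts from an empty object
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means: remove this field
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

For example, patching `{"a": "b", "c": {"d": "e", "f": "g"}}` with `{"a": "z", "c": {"f": null}}` changes `a` and removes `c.f` while leaving `c.d` untouched.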
If you read through RFC 5789, you will find that merge-patch reads more like a hack, though. Compared to json-patch, a merge-patch representation lacks control over the actual sequence in which the instructions are applied, which might not always be necessary, as well as the ability to change multiple, different resources at once.
PATCH itself is not idempotent. For a json-patch patch document it is pretty clear that applying the same instructions multiple times may lead to different results, e.g. if you remove the first field. A merge-patch document here behaves similarly to the "partial PUT" request that so many developers perform out of pragmatism, even though the actual operation still does not guarantee idempotency. In order to avoid applying the same patch to the same resource unintentionally multiple times, e.g. due to network errors while transmitting the patch document, it is recommended to use PATCH alongside conditional requests (RFC 7232). This guarantees that the changes are only applied to a specific version of the resource; if that resource has changed, either through a previous request or by an external source, the request is declined to prevent data loss. This is basically optimistic locking.
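A sketch of that guard on the server side; the function name and the simplified comparison logic (no ETag lists, no `*` wildcard) are illustrative only:

```python
def apply_if_match(resource_etag: str, request_headers: dict):
    """Decide whether a PATCH guarded by If-Match (RFC 7232) may proceed.
    Returns (allowed, status). Simplified: exact ETag comparison only."""
    client_etag = request_headers.get("If-Match")
    if client_etag is None:
        return False, 428  # Precondition Required: force conditional requests
    if client_etag != resource_etag:
        return False, 412  # Precondition Failed: resource changed meanwhile
    return True, 200
```

A client retransmitting the same patch after a network error would then fail with 412, because the first successful application changed the resource's ETag.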
A requirement that all patch documents have to fulfill is that they need to be applied atomically. Either all the changes are applied or none at all. This puts some transaction burden onto the service provider.
POST
The POST method is defined in RFC 7231 as:
requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
This is basically a get-out-of-jail-free card that lets you do anything you want or have to do here. You are free to define the syntax and structure to be received on a certain endpoint. Most of these so-called "REST APIs" consider POST the C in CRUD, which it can be used for, but that is just an oversimplification of what it actually can do for you. HTML basically only supports POST and GET operations, so POST requests are used for sending all kinds of data to the server: to kick off backend processes, create new resources such as blog posts, Q&As, videos, ... but also to delete or update stuff.
The rule of thumb here is: if a new resource is created as an outcome of triggering a POST request on a certain URI, the response code should be 201 Created, containing an HTTP response header Location with a URI as its value that points to the newly created resource. In any other case POST does not map to the C (create) of the CRUD stereotype.
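A toy sketch of that rule of thumb; the /orders/{id} URI shape is purely hypothetical:

```python
def respond_to_post(created_id=None):
    """Return (status, headers) for a POST outcome: 201 plus a Location
    header when a new resource was created, a plain 200 otherwise."""
    if created_id is not None:
        return 201, {"Location": f"/orders/{created_id}"}
    return 200, {}  # POST need not map to 'create' at all
```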
REST-related
REST isn't a protocol but an architectural style. As Robert C. Martin (Uncle Bob) stated, architecture is about intent, and REST's intent is to decouple clients from servers, which allows the latter to evolve freely by minimizing interoperability issues due to changes introduced by the server.
These are very strong benefits if your system should still work in decades to come. However, these benefits are unfortunately not obtained easily. As outlined in Fielding's dissertation, to benefit from REST the mentioned constraints need to be followed strictly, or otherwise couplings will remain, increasing the likelihood of breaking clients due to changes. Fielding later ranted about people who did not read or understand his dissertation and clarified in a nutshell what a REST API has to do.
This rant can be summarized into the following points:
The API should adhere to and not violate the underlying protocol. Although REST is used via HTTP most of the time, it is not restricted to this protocol.
Strong focus on resources and their presentation via media-types.
Clients should not have initial knowledge of or assumptions about the available resources or their returned state ("typed" resources) in an API, but should learn them on the fly via issued requests and responses that teach clients what they can do next. This gives the server freedom over its namespace and lets it move things around as it needs to without negatively impacting clients.
Based on this, REST is about using well-defined standards and adhering to the semantics of the protocols used as transportation facilities. Through the utilization of HATEOAS and stateless communication, the concepts that proved the Web to be scalable and evolution-friendly, the same interaction model that is used by humans in the Web is now used by applications in a REST architecture.
Common media types provide the affordances, i.e. what a system might be able to do with the data received in a payload, while content-type negotiation guarantees that both sender and receiver are able to process and understand the payload correctly. The affordances may differ from media type to media type. A payload received as image/png might be rendered and shown to the user, while an application/vnd.acme-form+json might define a form where a server teaches a client about the elements of a request the server supports, and the client can enter data and issue the request without having to actively know the method to use or the target URI to send the data to, as this is already given by the server. This not only removes the need for out-of-band (external) API documentation but also the need for a client to parse or interpret URIs, as they are all provided by the server, accompanied by link relations that should either be standardized by IANA, follow common conventions such as existing rel values from microformats or ontologies like Dublin Core, or represent extension types as defined in RFC 5988 (Web Linking).
Question-related
With the introductory done, I hope that for a question like
But my colleague opposes it to be a POST request as this is a pure action and should be treated as POST. I am not convinced because we are not creating any new resource but just updating an existing resource that our REST server already knows about
it is clear that there is no definite yes-or-no answer to this question but more of an "it depends".
There are a couple of questions that could be asked, e.g.:
How many (different) clients will use the service? Are they all under your control? If so, you don't need REST, but you can still aim for it.
How is the client taught or instructed on how to perform the update? Will you provide external API documentation? Will you support a media type that supports forms, such as HTML, hal-forms, halo+json, Ion or Hydra?
In general, if you have multiple clients, especially ones that are not under your control, you might not know which capabilities they support. Here content-type negotiation is an important part. If a client supports application/json-patch+json it might also be able to calculate a patch document containing the instructions to apply to the target resource. It is also very likely that it supports PATCH, as RFC 6902 mentions it. In such a case it would make sense to provide a PATCH endpoint the client can send the request to.
If the client supports application/merge-patch+json, one might assume that it supports PATCH as well, as it is primarily intended for use with the HTTP PATCH method, according to RFC 7386. Here the update is rather trivial from the client-side perspective, as the updated document is sent as-is to the server.
In any other case, though, it is less clear in what representation format the changes will be transmitted to the server. Here, POST is probably the way to go. From a REST stance, an update here should probably be similar to an update done to data edited in a Web form in your browser: the current content is loaded into each form element, the client modifies these form elements to its liking, and then submits the changes back to the server, probably in an application/x-www-form-urlencoded (or similar) structure. In such a case, though, PUT would probably be more appropriate, as you'd transmit the whole updated state of the resource back to the service and therefore perform a full update rather than a partial update on the target resource. The actual media type the form will submit is probably defined in the media type of the respective form. Note that this does not mean that you can't process json-patch or merge-patch documents via POST as well.
The rule of thumb here would be, the more media-type formats and HTTP methods you support, the more likely different clients will be able to actually perform their task.
I would say you're in the right, since you are not creating any new resource.
The linked article highlights the part that says to use PUT when you modify the entire existing resource, and PATCH when you are modifying one component of an existing resource.
More here:
https://restfulapi.net/rest-put-vs-post/

Should a REST API wrapper validate inputs before making a request?

Suppose that the server restricts a JSON field to an enumerated set of values.
e.g. a POST request to /user expects an object with a field called gender that should only be "male", "female" or "n/a".
Should a wrapper library make sure that the field is set correctly before making the request?
Pro: Makes it possible for the client to quickly reject input that would otherwise require a roundtrip to the server. In some cases this would allow for a much better UX.
Con: You have to keep the library in sync with the backend, otherwise you could reject some valid input.
With a decent type system you should encode this particular restriction in the library API anyway. I think usually people validate at least basic stuff on the client and let server do further validation, like things that can’t be verified on the client at all.
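Encoding the restriction in the type system, as suggested, might look like this in Python; the payload shape mirrors the hypothetical /user endpoint from the question:

```python
from enum import Enum

class Gender(Enum):
    MALE = "male"
    FEMALE = "female"
    NA = "n/a"

def build_user_payload(gender) -> dict:
    """Reject invalid values locally, before any round trip to the server.
    Accepts a Gender member or its string value."""
    if not isinstance(gender, Gender):
        gender = Gender(gender)  # raises ValueError for anything unknown
    return {"gender": gender.value}
```

The server still validates on its side; this only saves the round trip for inputs the wrapper already knows are invalid.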
This is a design choice - the enum type constraint should be documented in the public API of the server and it's part of its contract.
Clients are forced to obey the contract to make a successful request, but are not required to implement the validation logic. You can safely let the clients fail with "Bad Request" or other 4xx error.
Implementing the validation logic on both sides couples the client and the server - any changes to the validation logic should be implemented on both sides.
If the validation logic is something closer to common sense (e.g. this field should not be empty) it can safely be implemented on both sides.
If the validation logic is something more domain specific, I think it should be kept on the backend side only.
You have to weigh the same trade-offs for a wrapping library (which can be seen as a client of the server API). It depends on the role of the wrapping library: if it is meant to expose the full API contract of the server, then by all means the validation logic can be duplicated in the wrapping lib; otherwise I would keep it on the backend.
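What "let the clients fail with Bad Request" can look like on the server side, as a sketch with a hypothetical handler and error format:

```python
ALLOWED_GENDERS = {"male", "female", "n/a"}

def handle_create_user(payload: dict):
    """Hypothetical server-side handler: enforce the contract and answer
    400 Bad Request with a pointer to the offending field, else 201."""
    gender = payload.get("gender")
    if gender not in ALLOWED_GENDERS:
        return 400, {"error": "gender must be one of 'male', 'female', 'n/a'"}
    return 201, {"created": payload}

status, body = handle_create_user({"name": "Sam", "gender": "unknown"})
# status == 400; the client learns the constraint from the error body
```

With a descriptive error body like this, a client that skipped local validation still gets enough information to correct the request.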
The wrapper library is the actual client of the REST API and hence has to adhere to both the architectural and the protocol-imposed constraints. In his blog post, Fielding explained some of the constraints further. One of them is typed resources, which states that clients shouldn't assume the API returns a specific type, e.g. some user details in JSON. This is what media types and content negotiation are actually for.
The definition of a media type may give clients a hint on how to process the data received, as with the JSON- or XML-based vCard formats. As media types define the actual format of a specific kind of document, they may contain processing rules such as pre-validation requirements or syntax regulations, e.g. through XML Schema or JSON Schema validation.
One of the basic rules of remote computing, though, is to never trust received input, so the server should validate the data regardless of whether the client has pre-validated it. Due to the typed-resource constraint, a truly RESTful client will check whether the spec of the received media type supports pre-validation, and will only apply pre-validation if the spec defines it and describes a mechanism for performing it (e.g. through some schema mechanism).
My personal opinion is that if you try to follow the REST architectural approach, you shouldn't validate the input unless the media type explicitly supports it. A client will learn through error responses which fields and values a certain REST endpoint expects, and the server hopefully validates the input anyway, so I don't see the necessity of validating it on the client side. As performance considerations are often more important than following the rules and recommendations, it is ultimately up to you. Note, however, that client-side validation may couple the client to the server and hence increase the risk of breaking more easily on server changes. As REST is not a protocol but a design suggestion, it is up to you which route you prefer.

iPhone/iPad Encrypting JSON

I want to encrypt some JSON coming from a server and then decrypt it on the iPhone/iPad. What are your thoughts on this? What is the best approach? Should I scrap this idea and just go via SSL?
Save yourself a lot of trouble and just use HTTPS for all server communications.
As stated above, one way is to do everything over HTTPS.
An alternative I can think of is the following:

1. Generate a symmetric encryption key per session/login per client on the server
2. Send that key to the client over HTTPS
3. From there on, encrypt all the data you send to the client with that key
4. The client can then decrypt the encrypted data
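The steps above can be sketched as follows. Note this uses an HMAC-based keystream purely for illustration; real code should use an authenticated cipher such as AES-GCM from a vetted crypto library, and the key exchange here is simulated rather than an actual HTTPS call:

```python
import hmac
import hashlib
import json
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Illustrative symmetric stream cipher: XOR data with an
    HMAC-SHA256 keystream derived from (key, nonce, counter)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Steps 1-2: the server generates a per-session key and (in reality)
# sends it to the client over HTTPS.
session_key = os.urandom(32)

# Step 3: the server encrypts each payload with the session key and a
# fresh nonce transmitted alongside the ciphertext.
nonce = os.urandom(16)
plaintext = json.dumps({"balance": 42}).encode()
ciphertext = keystream_xor(session_key, nonce, plaintext)

# Step 4: the client decrypts with the same key (XOR is symmetric).
decrypted = keystream_xor(session_key, nonce, ciphertext)
```

On iOS, the decryption side would be implemented with the platform's crypto APIs; only the key and nonce handling need to match the server.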
I don't have enough knowledge about HTTPS. I often read that it is heavy on a system's resources, but since I have not run or read any good benchmarks, I can't give you a rigorous argument for or against it.
The implementation I proposed requires a little more coding, but you can tailor it to your encryption needs.
I think ultimately your decision should be based on your usage scenario: if you send very little data, infrequently, to a few client applications, you can't go wrong with HTTPS. If your expected encrypted traffic is high, the alternative solution might make sense.