Using HTTPS and multiple NSURLProtectionSpaces in iOS

I'm creating an iOS app that requires the user to log in at startup and then uses those credentials to query 4-5 different services on a server over the course of the session.
The server (xyz) itself doesn't accept the credentials, but the services it provides do. For example, https://xyz/service1 works, while https://xyz doesn't.
What I'm wondering is whether anything stands in the way of creating 4-5 NSURLProtectionSpace instances at login, one for each service on the server, and then using the corresponding protection space when calling each service.
Or is there a better way of implementing something that could work in this situation?
All help would be appreciated.

Turns out there is nothing that stands in the way of creating multiple NSURLProtectionSpace instances, since each one is created for a separate URL.

How exactly does backend work from a developer perspective?

There's a ton of videos and websites trying to explain backend vs. frontend, but unfortunately none of them explains it in a way that lets you actually develop a backend-driven website (at least I haven't found anything good).
So, I wanted to ensure that I understood it and kindly ask you to confirm or correct me on this topic.
Example:
I want to build a mini Google. I have a database containing 1000 stored websites.
Assumption #1:
Every time I type something into the search bar, the autofill suggestions change. This means that every time I type, another website/API gets called, returning the current autofill suggestions. On the developer side, this means the endpoint is e.g. a Python script which gets called with the currently typed word as a parameter and returns all suggestions as e.g. JSON:
// Client-side script (browser JavaScript)
async function onType(input) {
    // Ask the suggestion endpoint for completions of the current input
    const response = await fetch("https://api.googlemini.com/suggestions?q=" + encodeURIComponent(input));
    const suggestions = await response.json();
    show(suggestions); // render the suggestions below the search bar
}
Assumption #2:
This also means I could manually call the website containing the Python script, providing a random word, and it would always return JSON containing the autofill suggestions for that word.
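For instance (using the hypothetical endpoint above), calling the suggestion API by hand from a small Python script would look something like this:
# Manually calling the (hypothetical) suggestion endpoint outside the search page
import requests

resp = requests.get("https://api.googlemini.com/suggestions", params={"q": "pyth"})
print(resp.json())  # e.g. ["python", "python tutorial", ...] if the API is open to anyone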
Question #1:
If A#1 turns out to be true but A#2 turns out to be false, how could I prevent a user from randomly accessing the "API" while still returning results when it is called by a script?
Assumption #3:
After pressing enter, my website googlemini.com/search?... would be called. Since google.com/search reloads every time you search for a new query (or go to page 2, etc.), I assume that instead of calling an API, the server, when it gets the client request, first searches through its database, sorts the results, and then returns a whole HTML page as a static webpage:
# Server-side script (Python / Flask)
from flask import Flask, request
app = Flask(__name__)

@app.route("/search")
def oncall():
    query = request.args.get("q")    # search term from the URL, e.g. /search?q=cats
    results = searchdatabase(query)  # look the term up in the database
    html = buildhtml(results)        # build a complete HTML page from the results
    return html
Question #2:
Often I hear (or at least understand it this way) that the database and the web server are two separate servers. How would that work? Wouldn't that mean the database server needs to be accessible from the web too (of course it would have security layers etc., but technically it would)? How could I access the database server from the web server?
Question #3:
Are there, on a technical basis, any other ways to build backend services?
That's it. I would also appreciate any recommendations (videos, websites, or other resources) for learning how to technically set up and/or secure backend servers.
Thanks in advance.
For your first question: yes, there is a way to prevent misuse.
What you can do is add an identifier to the API, such as an auth token, to identify each user. Every time a user accesses the API you keep a count on the server, and whenever the count exceeds a limit within a given time span you reject the call. The limit can be set so that it doesn't trouble honest users but punishes abusive ones. There are more complex and effective methods, but this is the basic idea; a minimal sketch of it is shown below.
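A rough sketch of that idea, assuming a Flask endpoint, an in-memory counter keyed by auth token, and a hypothetical get_suggestions helper (a real service would use persistent storage such as Redis and proper token validation):
# Sketch: per-token request counting, rejecting calls above a limit per time window
import time
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
WINDOW_SECONDS = 60
MAX_REQUESTS = 30
counters = {}  # auth token -> (window start time, request count)

@app.route("/suggestions")
def suggestions():
    token = request.headers.get("X-Auth-Token")
    if token is None:
        abort(401)  # no identifier at all: reject outright
    start, count = counters.get(token, (time.time(), 0))
    if time.time() - start > WINDOW_SECONDS:
        start, count = time.time(), 0  # new time window, reset the count
    if count >= MAX_REQUESTS:
        abort(429)  # limit exceeded within the window: reject the call
    counters[token] = (start, count + 1)
    return jsonify(get_suggestions(request.args.get("q", "")))  # get_suggestions is hypothetical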
For question number two, let me explain a simple concept: a database is a very efficient, resourceful, and expensive data storage solution; we never want to use it as a general-purpose variable store or anything like that. We always want to access the database in calls: get the data, process the data, update the data. It's not strictly necessary to run a separate server for the database, but we usually want the database to be accessible to various platforms (Android, iOS, Windows), so it's better to add some abstraction and keep the database as a separate entity. The web server then simply connects to it over the network with the database's own driver and credentials, as sketched below.
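A minimal sketch of the web server talking to a database on a different host, assuming PostgreSQL and the psycopg2 driver; the host name, credentials, and table layout are made up, and the database host would normally only be reachable from the private network:
# Web server code connecting to a database running on a separate host (assumed PostgreSQL)
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",  # placeholder: the database server's private address
    port=5432,
    dbname="minigoogle",
    user="webapp",
    password="secret",
)

def searchdatabase(query):
    with conn.cursor() as cur:
        # Parameterized query: the driver escapes the user's input for us
        cur.execute("SELECT url, title FROM websites WHERE title ILIKE %s", ("%" + query + "%",))
        return cur.fetchall()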
For the last question, I am not sure what you meant by 'other', but I am listing some backend technologies; some of these can be used in isolation, some cannot, and there are other tools as well.
Django
Flask
Django REST
GraphQL
SQL
PHP
Node
Deno

Is the Kaa application modular?

The Kaa demos show lots of examples of applications and clients, but a real solution may need several of those functions together.
So the question is: if one client (e.g. an Android app) accesses multiple functions on the server (e.g. 2 functions: events and notifications), do I need to put all the functions in one application, or should I create 2 applications, with the Android app accessing the different functions separately?
You can use some of the SDK APIs (functions) or all of them at the same time (within a single client application). The Kaa sample applications were made with the intent of being as simple (and thus clear and easy to understand) as possible. Therefore, just for simplicity, most of the samples use just one Kaa feature. This does not mean that the features cannot be used in a single application at the same time.
Please feel free to put them all into a single application.

Routing using OSRM for multiple profiles - does profile in the URL actually do anything?

With OSRM there are 3 profiles for different modes of transport: cycle, foot, and car. These come with OSRM.
According to the following post which was made 1 year ago, OSRM does not support multiple profiles:
OSM routing (OSRM): do I need to duplicate all data for different profiles?
Yet in the official documentation there is a profile argument as part of the URL used for retrieving a route from a running OSRM instance:
http://project-osrm.org/docs/v5.6.4/api/#general-options
The path would look something like this:
http://router.project-osrm.org/route/v1/driving/
Without driving, foot, or cycle in the URL a route won't be retrieved, so one of them is required by the API. Yet if I compile a route for car on the server but then use /foot/ in the URL to retrieve a route, it will still return a car-based route, completely ignoring 'foot'.
Could anybody from OSRM explain why something as useful as multiple-profile support has been withdrawn, and what the point of driving is in the above URL, seeing as it is ignored anyway and the server just appears to use the profile attached to the running OSRM instance?
The solution to the problem of multiple profiles appears to be to host parallel copies of the routing machine, one per profile, and address them at different IPs. So again, what is the point of 'profile' in the URL?
Could anybody from OSRM explain why something as useful as multiple profile support has been withdrawn
The support has never been there. You will need to run a separate OSRM instance for each profile.
The URL option is merely there to make it easier to stick an nginx in front of your OSRM instances and distribute requests to the correct instance based on the profile string; a sketch of that dispatch idea follows below.
We might implement multiple profiles in the same OSRM instance in the future, but this is still far out.
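To illustrate the dispatch described above (normally handled by nginx), here is a minimal Python sketch of a proxy that picks an OSRM instance based on the profile segment of the URL; the port numbers are assumptions, with each port running its own osrm-routed instance built for one profile:
# Sketch: route each request to the OSRM instance matching its profile string
import requests
from flask import Flask, request, Response

app = Flask(__name__)
BACKENDS = {
    "driving": "http://localhost:5000",  # instance built with the car profile
    "foot": "http://localhost:5001",     # instance built with the foot profile
    "cycle": "http://localhost:5002",    # instance built with the bicycle profile
}

@app.route("/route/v1/<profile>/<path:rest>")
def dispatch(profile, rest):
    backend = BACKENDS[profile]  # pick the instance that actually has this profile
    upstream = requests.get(f"{backend}/route/v1/{profile}/{rest}", params=request.args)
    return Response(upstream.content, status=upstream.status_code,
                    mimetype=upstream.headers.get("Content-Type"))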

Keeping iPhone application in sync with GWT application

I'm working on an iPhone application that should work in offline and online modes.
In its online mode it's supposed to feed all the information the user enters to a web service backed by GWT/GAE.
In its offline mode it's supposed to store the information locally and, when a connection is available, sync it up to the web service.
Currently my plan is as follows:
Provide a connection between the app and the web service, using protocol buffers for efficient over-the-wire communication
Work with local DB using Core Data
Poll the network status and, when it is available, sync the database and keep some sort of local-DB-to-remote-DB key synchronization.
The question is: am I headed in the right direction? Are there standard patterns for implementing this? Maybe someone can point me to an open-source application that works in a similar fashion?
I am really new to iPhone coding, and would be very glad to hear any suggestions.
Thanks
I think you're blurring the questions together.
If you've got a question about making a GWT web interface, that's one question.
Questions about how to sync an iPhone to a web service are a different question. For that, you don't want to use GWT's RPCs for syncing, as you'd have to fake out the 'browser-side' of the serialization system in your iPhone code, which GWT normally provides for you.
About the system design direction:
First, if there is no REAL need, do not create 2 different apps (one GWT and one iPhone);
create one well-written GWT app. It will work offline no problem and will manage your data using the HTML offline application cache feature.
If you must create 2 separate apps,
then at least save yourself the effort and do not write the server twice. If you go with the standard GWT approach you will almost certainly fail to talk to the server from a standalone app (it is zipped JSON over HTTP with some tricky headers...), or you will end up writing things twice. So look into the Restlet library; it is well supported on GAE.
About keeping things in sync with offline/online switching:
There are several approaches to consider, and none of them are perfect. So when you pick yours, think of what the user expects... Do not be Microsoft Word; do not try to outsmart the user.
If there is at least one scenario in the use cases that demands user intervention to merge changes (and there will be, take it to the bank), then you will have to implement a UI for it, and then there is a good reason to use it often: the user will get used to it. That is better than the user only seeing it long after they started using the app, because the need for it is rare thanks to some super-duper merging logic that asks the user only in very special cases... Don't do that.
Balance the effort, because the mess a bug in such code creates for the user is far more painful than all the benefit put together.
So, the HOW:
One way is the do/undo way.
While offline, keep a log of the actions the user performed on the data, in the order they performed them.
As soon as you are connected, send them to the server and execute them. Do the same from server to client.
This will work fine in most cases, as long as you are not writing Photoshop-style software with huge amounts of data per operation. It is also referred to as the Command (Action) pattern by the Gang of Four. A minimal sketch of such an action log follows below.
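A language-agnostic illustration of that action log, sketched here in Python; the /sync endpoint, action names, and payloads are assumptions, not part of any existing API:
# Sketch: record actions while offline, replay them in order once the server is reachable
import json
import requests

class ActionLog:
    def __init__(self):
        self.pending = []  # actions recorded while offline, oldest first

    def record(self, action, payload):
        self.pending.append({"action": action, "payload": payload})

    def replay(self, base_url):
        # Send each queued action to the server in order; stop and keep the rest on failure
        while self.pending:
            entry = self.pending[0]
            resp = requests.post(base_url + "/sync", data=json.dumps(entry),
                                 headers={"Content-Type": "application/json"})
            if resp.status_code != 200:
                break  # still offline or server error; try again later
            self.pending.pop(0)

log = ActionLog()
log.record("update_note", {"id": 42, "text": "hello"})
# later, when the network comes back:
# log.replay("https://example.com/api")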
Another way is the source-control way: versions, and maybe even locks. It is very application dependent; DBMSes sometimes use it internally to implement transactions.
And there is always the option of being read-only when offline :-)
I wonder if you have considered using a sync framework to manage the synchronization. If that interests you, take a look at the open-source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. You can have your app offline when the network connection is down; it will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced using push notifications, so the data is always current even though it is stored locally.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp

C# ASMX web service semi-permanent storage requirement

I'm writing a mock of a third-party web service to allow us to develop and test our application.
I have a requirement to emulate functionality that allows the user to submit data, and then at some point in the future retrieve the results of processing on the service. What I need to do is persist the submitted data somewhere, and retrieve it later (not in the same session). What I'd like to do is persist the data to a database (simplest solution), but the environment that will host the mock service doesn't allow for that.
I tried using IsolatedStorage (application-scoped), but this doesn't seem to work in my case. I'm using the following to get the store:
IsolatedStorageFile.GetStore(
    IsolatedStorageScope.Application | IsolatedStorageScope.Assembly,
    null, null);
I guess my question is (bearing in mind the fact that I understand the limitations of IsolatedStorage) how would I go about getting this to work? If there is no consistent way to do it, I guess I'll have to fall back to persisting to a specific file location on the filesystem, and all the pain of permission setting that entails in our environment.
Self-answer.
For the purposes of dev and test, I realised it would be easiest to limit the lifetime of the persisted objects, and use
HttpRuntime.Cache
to store the objects. This has just enough flexibility to cope with my situation.