UISearchBar query in iPhone

I am making a map application and am already able to call a web service that shows shop locations on the map with annotations. My web service contains more than 200 shops in Australia. I have a UISearchBar: when the user types "Syd...", an autofill table view with addresses like "Sydney, Australia" should open. How can I achieve this? Do I need to call the web service URL again for the address-autofill table view, should I insert the addresses manually, or is there some other method?
Edited:
In my application I search for shop locations within a radius of 5, 10, or 15 km. If the user wants another location instead of the current one, he can enter any location to find shop information around it. So when the user types a location, the autofill address table view should open.

One possible solution is to keep the predefined values in a database (Core Data) and then, in the searchBar's textDidChange delegate method, query the database for the list of entries matching the current search keyword. The results can then be shown in a table view with an animation effect to give the feel of auto-suggest.
Calling the web service on every textDidChange would block the user interface and is not a good option. Though that approach is more prevalent on the web, on the device I find the first choice more practical than the second.
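A minimal sketch of that idea in Swift, assuming a Core Data entity named "Place" with a string attribute "name" has already been seeded with the addresses (class and property names here are illustrative):

import UIKit
import CoreData

final class PlaceSearchController: NSObject, UISearchBarDelegate {
    let context: NSManagedObjectContext
    var suggestions: [NSManagedObject] = []
    var onSuggestionsChanged: (() -> Void)?   // e.g. show/animate the suggestion table

    init(context: NSManagedObjectContext) {
        self.context = context
        super.init()
    }

    func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
        guard !searchText.isEmpty else {
            suggestions = []
            onSuggestionsChanged?()
            return
        }
        let request = NSFetchRequest<NSManagedObject>(entityName: "Place")
        // CONTAINS[cd] = case- and diacritic-insensitive substring match
        request.predicate = NSPredicate(format: "name CONTAINS[cd] %@", searchText)
        request.fetchLimit = 10
        suggestions = (try? context.fetch(request)) ?? []
        onSuggestionsChanged?()
    }
}

The onSuggestionsChanged hook is where you would reload the table view and animate it in.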
EDIT: Answer to the problem described in the edited part of the question.
You need to implement auto-suggest.
Fetch the data from the server (using the web service).
To provide address suggestions such as "Syd" turning into "Sydney, Australia", my answer above shows how to put these static addresses in a database and then offer the user auto-suggest options. For the second part, you can save the lat/long of the places in the database, and once the user finalizes his selection, query the web service to get the data.
The steps can be summarized as follows:
1. The user types "Syd".
2. You query the database for places matching "Syd", using a query such as place LIKE '%syd%'.
3. Populate the auto-suggest table with the matching place names, e.g. "Sydney, Australia".
4. The user selects a place; you pick up the corresponding lat/long (fetched along with the names in the query above) and call your web service with:
   Place = Sydney, Australia (not really required)
   Latitude = some value for Sydney
   Longitude = some value for Sydney
   Radius = 5, 10, or 15, depending on your application logic
5. The server then fetches all shops within the given range of that lat/long.
The more calculation-intensive work should be done on the server's end, and the client should make as few web service calls as possible to avoid latency.
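A sketch of steps 4 and 5 from the client's side, assuming the shop service takes the coordinates and radius as query parameters (the URL and parameter names are placeholders for the real service):

import Foundation

func fetchShops(latitude: Double, longitude: Double, radiusKm: Int,
                completion: @escaping (Data?) -> Void) {
    var components = URLComponents(string: "https://example.com/shops")!
    components.queryItems = [
        URLQueryItem(name: "lat", value: String(latitude)),
        URLQueryItem(name: "lng", value: String(longitude)),
        URLQueryItem(name: "radius", value: String(radiusKm))
    ]
    // The server does the range filtering; the client only passes the
    // selected place's coordinates and the chosen radius.
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        completion(data)   // parse the shop list and drop annotations on the map
    }.resume()
}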

I had a similar app that showed people's addresses. I used a web service that gave me all the addresses, kept them in an NSMutableArray, and checked the array for a matching string in the textDidChange method of UISearchBar; if any string matched the prefix, I showed that address in the UISearchBar. But in this case, if you type only 'S' it will return more than one record, so you need to decide, according to the client's requirements, whether to show a list or just any matching record. If a list is expected, show the table view, and when the user picks a value, hide it.
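A sketch of that in-memory filtering, assuming allAddresses was filled once from the web service:

func matches(in allAddresses: [String], forPrefix prefix: String) -> [String] {
    guard !prefix.isEmpty else { return [] }
    // Case-insensitive prefix match. Returning every hit lets the UI decide:
    // fill the search bar directly for a single match, or show a table view
    // for several and hide it again once the user picks a value.
    let needle = prefix.lowercased()
    return allAddresses.filter { $0.lowercased().hasPrefix(needle) }
}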

Related

How to combine pagination with getting next / previous element of a REST collection

I am trying to design a REST API in which there is a paginated collection. In my current design I use a page based approach:
GET /entities?page=2&pageSize=25
Retrieving a single entity is trivial:
GET /entities/4
A user story related to this API requires that when viewing a single entity in a screen, two buttons "Previous" and "Next" enable switching to said entities. An example:
GET /entities?page=2&pageSize=25
returns:
[{id: 2}, {id: 4}, {id: 17}]
Now, when viewing entity with id 4, the named buttons would navigate to the entities with id 2 or id 17.
My assumption is that a client (a web frontend in my case) should be able to "remember" the pagination information and use it when fetching the previous or next entity. The same would apply to any filters I might add to the endpoint later.
A basic idea of how to implement this would be to fetch the current page and, for the edge cases, the previous and the next page as well (required if currently viewing the first / last resource of a page). But that seems like an inefficient solution.
So my question is: is my chosen pagination method even compatible with what I am trying to achieve? If it is, how would clients use the API to achieve the next / previous feature?
In this case I ended up with a frontend-based solution.
Given that my constraint was that I could not change the type of pagination used, this is a crude solution:
The frontend makes the initial GET request on the paginated data. This request may contain not only pagination parameters (page + size) but also filters (color = red). This is the search context.
The user accesses one of the items and some other view is displayed. Important: the search context is kept in local storage or similar.
If the user wants to browse to the next / previous item, the search is executed again with the same search context.
In the search result, search for the ID of the currently displayed item.
If the currently displayed item is found, show the next / previous item (see the sketch after this list).
If it is not found, do some fallback. (In my case I chose to simply display the initial search UI.)
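A sketch of the neighbor lookup, written here in Swift for brevity; the logic is the same in any frontend language, and SearchContext simply mirrors whatever is kept in local storage:

struct SearchContext {
    var page: Int
    var pageSize: Int
    var filters: [String: String]   // e.g. ["color": "red"]
}

// Given the IDs returned when re-running the stored search context,
// find the neighbors of the currently displayed item.
func neighbors(of currentId: Int, in pageIds: [Int]) -> (previous: Int?, next: Int?) {
    guard let index = pageIds.firstIndex(of: currentId) else {
        return (nil, nil)   // fallback: item no longer in the result set
    }
    // At the page edges a second request for page - 1 or page + 1 is needed,
    // which is exactly the drawback described below.
    let previous = index > 0 ? pageIds[index - 1] : nil
    let next = index < pageIds.count - 1 ? pageIds[index + 1] : nil
    return (previous, next)
}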
I am not happy with this implementation, as it has some major drawbacks IMO:
Not efficient: for every browsing click, a search request is made. When not using an efficient search database like Elasticsearch, this will probably put a lot of stress on the DB.
Not perfect: if the currently displayed item is not in the search result, there is no sensible way to get the next / previous item.
If the currently displayed item is the first / last item in the search result and you want to browse back / forward, the search has to be executed twice: once on the current page and once on the previous / next one.
As stated, I think this is not a good solution; I am still looking for a clever, efficient way to do this.

How to efficiently check database objects based on location/proximity to user's location?

I am constructing an app (in Xcode) which, in a general sense, displays information to users. The information is stored as individual objects in a database (it happens to be a Parse server hosted on Heroku). The user can elect to "see" information that has been created within a set distance from their current location. (The information, when saved to the DB, is saved along with its lat and long based on the location of the user who initiated the save.)
I know I can filter the pieces of information by comparing their lat and long to the viewing user's current lat and long and only display those which are close enough. Roughly/generally:
let currentUserLat = latitude          // latitude of the user's current location
let infoSet: [Info] = fetchAllInfo()   // all info objects pulled from the DB
for info in infoSet {
    // abs() so that objects far in the other direction don't pass the check
    if abs(info.lat - currentUserLat) < 3 {   // arbitrary cutoff, in degrees
        // display the info
    } else {
        // don't display it
    }
}
This is set up decently enough, and it works fine. The reason it works fine, though, is the small number of entries in the DB at the current time (the app is in development). Under practical usage (i.e. many users) the DB may be full of information objects (let's say a thousand). In my opinion, individually pulling the latitude of each and every DB entry and comparing it to the current user's latitude would take too long.
I know there must be a way to do it in a timely manner (think Tinder: they only display profiles of people in the near vicinity, and it doesn't take long despite millions of profiles), but I do not know what is most efficient. I thought of creating separate sections for different geographical regions in the DB and then only searching the particular section that matches the user's current location, but this seems unsophisticated and would still pull large amounts of info. What is the best way to do this?
Per "Deploying a Parse Server to Heroku", you can install a MongoDB add-on (or another of the data stores in the add-on category), which gives you geospatial indexes and queries that are specifically intended for this sort of application.
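For the Parse SDK side, this is roughly the query such an index enables, assuming each stored object has a PFGeoPoint column named "location" (class and variable names are illustrative):

import Parse

let userPoint = PFGeoPoint(latitude: -33.87, longitude: 151.21)   // user's current position
let query = PFQuery(className: "Info")
// Let the server do the proximity filtering; only nearby objects come back.
query.whereKey("location", nearGeoPoint: userPoint, withinKilometers: 5.0)
query.limit = 50
query.findObjectsInBackground { objects, error in
    // display the returned objects
}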
Is there a reason you need to do that sort of checking on the client side? I would suggest sending your coordinates to your server and then having the server query the database with those coordinates and figure out which items to pull. The server can then return to the client side whichever items were "close" to that user.
EDIT: reworded

MongoDB database design

I am developing an app and have chosen MongoDB as the database, mainly because of its flexibility and its ability to query geospatial data. But I tend to be a bit old school concerning the design (read: 'relational databases') and I'd like a few hints on how to design my database so it best fits my needs.
I have a User model and, let's say, an Object item. Each user has a location (which can change rapidly over time). The Object items also have a location and belong to a User.
For now I have designed my database much like I would in MySQL:
* A User table with an array of Object IDs
* An Object table with a reference to the owner's (user's) ID
Since I will need to query the location of each model frequently and run range queries (which objects are closer than 100 m to this user, etc.), is this a good design? My main concern is the location query. I know I can put an index on the location, but I did not want to put two location indexes, on the User and on the user's Object array, in the same table.
Another consideration is that I will surely be doing some sharding on my database, and from what I have read about MongoDB, I think I'll shard on the location index (mostly the user's).
Does that make sense, or should I actually just go with the one-size-fits-all approach? Or do you have another design in mind that would be better?
Thanks.
In my opinion:
Step 1: Use three different collections: the first for users, the second for locations, and the last for items (along with the user ID).
Step 2: In your locations collection, save the coordinates of both kinds of location, distinguished by two different types. For example, for a user location set the type to "USER_LOC", and for an object (item) location set the type to "ITEM_LOC", along with the user ID and item ID.
Step 3: Since both items and users move around, save both sets of coordinates (user and item) in your locations collection.
Step 4: Save the last coordinates of items and users in their respective collections as well.
Step 5: As per your query, if you want to fetch the items within X meters of the user, filter the locations collection for the ITEM type; since you have the user's last coordinates in the users collection itself, you can find the list of items near the user's current location within a specific distance.
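As a sketch of the document shapes these steps imply, written here as Swift Codable types (all field names are illustrative; in MongoDB itself, a 2dsphere index on coords is what makes the "items within X meters" query via $near efficient):

import Foundation

struct User: Codable {
    let userId: String
    let lastCoords: [Double]    // last known position, GeoJSON order: [longitude, latitude]
}

struct Item: Codable {
    let itemId: String
    let userId: String          // owner
    let lastCoords: [Double]
}

// One document per location fix, shared by users and items and
// distinguished by type ("USER_LOC" or "ITEM_LOC").
struct LocationFix: Codable {
    let type: String
    let userId: String
    let itemId: String?         // nil for USER_LOC entries
    let coords: [Double]        // the field to index with a 2dsphere index
    let recordedAt: Date
}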
This is just my opinion. Thanks.

List of cities in HTML page

I want the user to select a city while filling in his profile. What is the best way to do this in a web application?
Is there any other way than just storing a list of cities in my DB? Maybe some public API?
The best way to do this is with a big database, unless you want to allow the user to type in a city name.
Fortunately for you, you don't have to go out and make the database yourself. Here's a directory of free databases: http://www.sqldumpster.com/databases/geographic/
I'd just go with a simple text field with an auto-completer. You can get a list of cities but you'll have to keep it up to date and you'll have to worry about nonsense like the difference between "Saint John", "St John", and "St. John".
Sending an entire list of cities to the client will just be a user interface nightmare, a selection list would have thousands and thousands of entries and you'd have to send a lot of data to the client; there's no reason to hate your visitors that much.
The auto-completer can use the currently chosen cities to provide suggestions for new cities. If you have city names in several places, then just keep a master city list somewhere for the auto-completer and update it with new entries every day. You will end up with a list of cities, but the list will build itself.
A simple text input will work the same everywhere and just about everyone can type out the name of their city pretty easily.
You could store them in a text file, have a copy in your server (for validation), and load the cities via AJAX.
That approach will break, however, for users without JS.
And, to be snobby, can you define best? Best in what sense? Fastest? Lightest? Awesomest? Most Pythonic? I'm not sure what you mean by that.

How to Implement a Reliable Web Page Counter?

What's a good way to implement a Web Page counter?
On the surface this is a simple problem, but it gets problematic when dealing with search-engine crawlers and robots, multiple clicks by the same user, and refresh clicks.
Specifically what is a good way to ensure links aren't just 'clicked up' by user by repeatedly clicking? IP address? Cookies? Both of these have a few drawbacks (IP Addresses aren't necessarily unique, cookies can be turned off).
Also, what is the best way to store the data? Increment a counter directly, or store each click as a record in a log table and then summarize occasionally?
Any live experience would be helpful.
+++ Rick ---
Use IP Addresses in conjunction with Sessions. Count every new session for an IP address as one hit against your counter. You can store this data in a log database if you think you'll ever need to look through it. This can be useful for calculating when your site gets the most traffic, how much traffic per day, per IP, etc.
So I played around with this a bit based on the comments here. What I came up with is incrementing a counter in a simple field. In my app I have code-snippet entities with a Views property.
When a snippet is viewed, a method filters out (via a white list) what should hopefully only be browsers:
public bool LogSnippetView(string snippetId, string ipAddress, string userAgent)
{
    // only count requests whose user agent looks like a real browser (white list)
    if (string.IsNullOrEmpty(userAgent))
        return false;

    userAgent = userAgent.ToLower();
    if (!(userAgent.Contains("mozilla") || userAgent.StartsWith("safari") ||
          userAgent.StartsWith("blackberry") || userAgent.StartsWith("t-mobile") ||
          userAgent.StartsWith("htc") || userAgent.StartsWith("opera")))
        return false;

    this.Context.LogSnippetClick(snippetId, ipAddress);
    return true;
}
The stored procedure then uses a separate table to temporarily hold the latest views, storing the snippet ID, the entered date, and the IP address. Each view is logged, and when a new view comes in, it's checked to see whether the same IP address has accessed this snippet within the last 2 minutes. If so, nothing is logged.
If it's a new view, the view is logged (again SnippetId, IP, Entered) and the actual Views field is updated on the Snippets table.
If it's not a new view, the table is cleaned up, removing any logged views that are older than 4 minutes. This should result in a minimal number of entries in the view log table at any time.
Here's the stored proc:
ALTER PROCEDURE [dbo].[LogSnippetClick]
    -- Add the parameters for the stored procedure here
    @SnippetId AS VARCHAR(MAX),
    @IpAddress AS VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    -- don't allow updating if this IP address has already
    -- clicked on this snippet in the last 2 minutes
    SELECT Id FROM SnippetClicks
    WHERE SnippetId = @SnippetId AND IpAddress = @IpAddress AND
          DATEDIFF(minute, Entered, GETDATE()) < 2

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO SnippetClicks
            (SnippetId, IpAddress, Entered) VALUES
            (@SnippetId, @IpAddress, GETDATE())
        UPDATE CodeSnippets SET Views = Views + 1
        WHERE Id = @SnippetId
    END
    ELSE
    BEGIN
        -- clean up
        DELETE FROM SnippetClicks WHERE DATEDIFF(minute, Entered, GETDATE()) > 4
    END
END
This seems to work fairly well. As others mentioned, it isn't perfect, but it looks like it's good enough in initial testing.
If you get to use PHP, you may use sessions to track activity from particular users. In conjunction with a database, you may track activity from particular IP addresses, which you may assume are the same user.
Use timestamps to limit hits (assume no more than 1 hit per 5 seconds, for example), and to tell when new "visits" to the site occur (if the last hit was over 10 minutes ago, for example).
You may find $_SERVER[] properties that aid you in detecting bots or visitor trends (such as browser usage).
edit:
I've tracked hits and visits before, counting a page view as a hit, and +1 to visits when a new session is created. It was fairly reliable (more than reliable enough for the purposes I used it for). Browsers that don't support cookies (and thus don't support sessions) and users that disable cookies are fairly uncommon nowadays, so I wouldn't worry about it unless there is reason to be excessively accurate.
If I were you, I'd give up on my counter being accurate in the first place. Every solution (e.g. cookies, IP addresses, etc.), like you said, tends to be unreliable. So, I think your best bet is to use redundancy in your system: use cookies, "Flash-cookies" (shared objects), IP addresses (perhaps in conjunction with user-agents), and user IDs for people who are logged in.
You could implement some sort of scheme where any unknown client is given a unique ID, which gets stored (hopefully) on the client's machine and re-transmitted with every request. Then you could tie an IP address, user agent, and/or user ID (plus anything else you can think of) to every unique ID and vice-versa. The timestamp and unique ID of every click could be logged in a database table somewhere, and each click (at least, each click to your website) could be let through or denied depending on how recent the last click was for the same unique ID. This is probably reliable enough for short term click-bursts, and long-term it wouldn't matter much anyway (for the click-up problem, not the page counter).
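A sketch of that allow/deny check, with the threshold as an illustrative value and an in-memory map standing in for the database table:

import Foundation

var lastClick: [String: Date] = [:]   // unique ID -> time of last counted click

func shouldCount(uniqueId: String, now: Date = Date(),
                 threshold: TimeInterval = 120) -> Bool {
    if let last = lastClick[uniqueId], now.timeIntervalSince(last) < threshold {
        return false    // too recent; deny this click
    }
    lastClick[uniqueId] = now   // count it and remember the timestamp
    return true
}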
Friendly robots should have their user agent set appropriately, and can be checked against a list of known robot user agents (I found one here after a simple Google search) in order to be properly identified and dealt with separately from real people.