Ways to reduce the number of activities - android-activity

I'm looking for a solution that avoids opening a new activity for every page in the app, because the number of activities is getting too large.

Proper state management architecture to implement read/unread of items

Context: We are implementing a news app. For now, you can assume the news is the same across all users and that it maintains an order based on parameters we set (trends and date).
Problem: We are not sure of the best way to keep track of which posts a user has read and which they haven't. We want a configurable way to track what users have and haven't read.
Assumption: You can assume that the posts in the database are in descending order by time.
So the ideal scenario is: the app fetches posts A, B, C, D, E from the server and the user reads A and B. The user should then only see C, D, E when they check for the next posts. If they go back, they see the previous posts in the order B -> A.
Furthermore, when P and Q are added to the database, the user must see the next posts in the order P -> Q -> C -> D -> E, and so on.
Example: Let us assume there are 20 news posts in our app right now, and Gavin picks up his phone and starts reading. Partway through, he gets occupied with other work and quits the app after reading 5 posts.
The challenge for us is to figure out the best way to make sure Gavin doesn't have to re-read the 5 posts he has already seen.
One way we thought we could solve this is with an index. Since the ordering of posts is uniform, as mentioned in the context, we could use an index to track where Gavin last was in the order of news and show him posts based on that index.
However, one problem with that approach is that there could easily be 5 new posts by the time Gavin picks up his phone again. If the news is ordered by date, that indexing approach means we would skip the 5 new unread posts instead of the 5 old read ones.
We've also thought of maintaining three lists: Read, Unread, and New, so that we only fetch posts that are not already in our lists. In my initial example, A-B-C-D-E is in Unread at first. After the user reads A and B, Read becomes A-B. Meanwhile, when P-Q is added to the database, it is added to the Unread list, which becomes P-Q-C-D-E.
How would you solve this problem? Any suggestions are welcome, as we feel we're not thinking outside the box here. Thank you! :)
When I first read the problem, the solution that came to mind was also to keep two lists, read and unread, where new posts are appended to the unread list and the unread list is shown in reverse order so the most recent posts are on top. Is it the most efficient way? Debatable. For example, if the number of new posts grows a lot, it becomes memory inefficient, but I assume small numbers in general.
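To make the list idea concrete, here is a minimal sketch in TypeScript (used only for illustration; the ReadTracker name, string post IDs, and in-memory storage are placeholders, and a real Android client would persist this state locally). It keeps the unread list newest-first instead of reversing it at display time:

```typescript
// Minimal sketch: track read/unread posts by ID, newest first.
type PostId = string;

class ReadTracker {
  private readIds = new Set<PostId>(); // posts the user has opened
  private unread: PostId[] = [];       // newest first, matching server order

  // Merge a fetch from the server (newest first); unseen posts go to the front.
  merge(serverIds: PostId[]): void {
    const fresh = serverIds.filter(
      id => !this.readIds.has(id) && !this.unread.includes(id)
    );
    this.unread = fresh.concat(this.unread.filter(id => !this.readIds.has(id)));
  }

  markRead(id: PostId): void {
    this.readIds.add(id);
    this.unread = this.unread.filter(u => u !== id);
  }

  nextUnread(count: number): PostId[] {
    return this.unread.slice(0, count);
  }
}

// Usage, matching the question's example (posts are newest first):
const tracker = new ReadTracker();
tracker.merge(["A", "B", "C", "D", "E"]);
tracker.markRead("A");
tracker.markRead("B");
tracker.merge(["P", "Q", "A", "B", "C", "D", "E"]); // P and Q were added
console.log(tracker.nextUnread(5)); // ["P", "Q", "C", "D", "E"]
```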

Find the page load time of external URLs

Is there a way to find the page load time for external URLs?
My task is to compile a table of page load times for a list of URLs. I have read about the Navigation Timing API, but could not find a way to get the page load time for an external URL that I specify in code. For example, something like http://tools.pingdom.com/fpt/, which analyzes the page load time for the URL we enter (I'm not sure how accurate it is).
http://www.webpagetest.org/ is a great tool for this.
It measures Page Load Time and Speed Index (probably a better measure of speed), and can even show you videos and waterfall diagrams!
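If you need a scriptable number rather than a full browser measurement, one rough option is to time how long the main HTML document takes to download, as in the TypeScript sketch below (the URL list is a placeholder; this assumes Node 18+ or a browser context where CORS allows the request). Note that this is only a proxy for page load time: it ignores images, scripts, and rendering, which is exactly what WebPageTest does measure.

```typescript
// Rough timing of the main document download for a list of URLs.
// Not a true "page load time": sub-resources and rendering are excluded.
const urls = [
  "https://www.example.com/",
  "https://www.example.org/",
];

async function timeDocumentDownload(url: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(url);
  await response.text();            // wait for the full body
  return performance.now() - start; // elapsed milliseconds
}

async function main(): Promise<void> {
  for (const url of urls) {
    const ms = await timeDocumentDownload(url);
    console.log(`${url}\t${ms.toFixed(0)} ms`);
  }
}

main().catch(console.error);
```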

Trying to prevent multiple database calls with a very large call

So we run a downline report that gathers everyone in the downline of the person who is logged in. Some clients run this with no problem, as it returns fewer than 100 records.
For other clients, however, it returns 4,000 - 6,000 rows, which comes out to about 8 MB of data. I actually had to raise the buffer limit on my development machine to handle the large response.
What are some of the best ways to store this large piece of data and help prevent it from being run multiple times consecutively?
Can it be stored in a cookie?
Session is out of the question, as this would eat up way too much memory on the server.
I'm open to pretty much anything at this point; I'm trying to streamline the old process into something much quicker and more efficient.
Right now, it loads the entire recordset and loops through it, building the data out into return_value cells.
Would it be better to turn this into a jQuery/AJAX call?
The only requirements are:
classic ASP
jQuery/JavaScript
T-SQL
Why not change the report to be paged? Phase 1: run the entire query, but have the page display only the right set of rows based on the selected page; now your response buffer problem is fixed. Phase 2: move the paging into the query using ROW_NUMBER(); now your database usage problem is fixed. Phase 3: offer the user an option of "display to screen" (using the above) or "export to CSV", where you can most likely export all the data, since CSV is nice and compact.
Using a cookie seems unwise, given the responses to the question What is the maximum size of a web browser's cookie's key?.
I would suggest using ASP to create a file on the web server and writing the data to that file. When the user requests the report, you can then determine whether "enough time" has passed for it to be worth running the report again, or whether the cached version is sufficient. The user's login details could presumably be used for naming the file, or the Session.SessionID, or you could store something new in the user's session. The advantage of using their login is that your cached report can outlive a single session.
Taking Brian's answer further: first query the page count, which would be records returned / items per page, rounded up. Then join the results of every page query on the client side. Each page starts at an offset provided through the query. Now you have the full result set on the client without overflowing your buffer, and it can be tailored to the interface and user options (display x per page).
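To make the client-side stitching concrete, here is a rough TypeScript/jQuery sketch (the report_count.asp and report_page.asp endpoints, their parameters, and the JSON shapes are all assumptions; the classic ASP pages behind them would run the ROW_NUMBER()-style paged query described above):

```typescript
declare const $: any; // jQuery, already loaded on the page

// Fetch the report one page at a time and stitch the rows together client-side.
const PAGE_SIZE = 500;

interface ReportRow { [column: string]: string | number; }

async function fetchReport(): Promise<ReportRow[]> {
  // 1. Ask the server for the total row count (hypothetical endpoint).
  const { totalRows } = await $.getJSON("report_count.asp");
  const pageCount = Math.ceil(totalRows / PAGE_SIZE);

  // 2. Fetch each page; offset tells the server where to start.
  const rows: ReportRow[] = [];
  for (let page = 0; page < pageCount; page++) {
    const pageRows: ReportRow[] = await $.getJSON("report_page.asp", {
      offset: page * PAGE_SIZE,
      limit: PAGE_SIZE,
    });
    rows.push(...pageRows);
  }
  return rows;
}

fetchReport().then(rows => {
  console.log(`Loaded ${rows.length} rows in pages of ${PAGE_SIZE}`);
});
```

Each page request stays well under the buffer limit, and the server only ever materializes one page of rows at a time.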

Beginner database troubles

I have an iOS app that presents content in a tableView. I've added a 'like/dislike' feature that interacts with my database (I use Parse.com). Every time someone likes/dislikes a piece of content, the specifics are sent to the Parse database. For each piece of content, I'd like to calculate and display the percentage of 'likes' over 'likes' + 'dislikes'. This is pretty simple math, but I can't wrap my head around the best way of designing my database table and the most efficient way to calculate the 'liked' percentage for each piece of content before the tableView physically appears.
As it is, I already have a loop in my tableView's viewDidLoad which compares the content from another database table to the 'like/dislike' table to restore the 'like/dislike' button state of the user (if they already liked/disliked a piece of content).
At first, I thought of creating an array in the initial viewDidLoad loop. However, using the whereKey: equalTo: type of query for each piece of content simply to find the number of likes/dislikes takes forever. As predicted, it is very slow in cellForRowAtIndexPath as well.
Worst case, I can make these calculations server-side and just pull the 'liked' percentage. However, I'd like to implement this in the app somehow. I'm a complete beginner, so I may be going about this all wrong.
Here is the basis of my database table:
Edit: I've managed to build a server-side program that calculates the percentage of users who 'like' each piece of content. My app pulls this percentage from the database at runtime. To make the percentage change feel more responsive when the user 'likes' something, I locally calculate an updated percentage. The problem is that when the user exits the app and reopens it, the data reloads, and if the server-side program had not run recently, the app will display an outdated 'liked' percentage (the most up-to-date figure would not have been calculated yet). The two solutions I see to fix this are:
Run the server-side program every 1-3 min
Post more data to the database when someone likes content (this would involve additional database queries for every single 'like').
I think both of these options are way too expensive for what I'm trying to accomplish.
I'd suggest leaving the calculations to the server side and having it respond with the information the app needs. This will save you from processing and parsing the incoming results on the device.
You have greater processing power on a server than on a device.
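One way to do that with Parse.com was to keep running counters on the content row itself and update them in Cloud Code whenever a vote is saved, so the percentage is always current without a periodic batch job. A rough sketch, assuming hypothetical Vote and Content classes with contentId, isLike, likeCount, and dislikeCount fields:

```typescript
declare const Parse: any; // Parse JavaScript SDK, available in Cloud Code

// Whenever a Vote is saved, bump the matching counter on the Content object.
Parse.Cloud.afterSave("Vote", function (request: any) {
  const vote = request.object;
  const query = new Parse.Query("Content");

  query.get(vote.get("contentId")).then(function (content: any) {
    // increment() is applied atomically on the server, so concurrent votes
    // don't overwrite each other the way a read-modify-write from the app would.
    content.increment(vote.get("isLike") ? "likeCount" : "dislikeCount");
    return content.save();
  });
});

// The app then shows likeCount / (likeCount + dislikeCount) straight from the
// Content object it already fetched for the table view - no extra queries per row.
```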

Mobile : one single request or multiple smaller requests?

In an iPhone app (or mobile in general) that constantly needs to send requests to a web service, is it better to use one single request that fetches a large amount of data, or multiple (possibly simultaneous) requests that each fetch a smaller amount of data for one element?
Example
I want to load a list of elements in a node, and I have the node's ID. The two ways I can fetch the elements are the following:
send a single request with the node ID and get all the information about the first n elements in the node in a single response;
send a first request with the node ID to get the IDs of the first n elements in the node, then send another request for each one, getting one response per element.
I'm torn between the two:
the heavyweight single response may cause more lag and timeouts because of unstable and slow mobile connections;
the phone may have trouble handling too many responses at the same time.
What's your opinion?
Since there is overhead for every request, one large request is generally faster than several small ones of the same total size. This applies to high-speed networks too, but in mobile networks the ratio between transfer speed and latency is even bigger.
I don't think the phone will have any problem handling the responses, so the multiple-requests approach seems better for large requests/responses. However, depending on the size of your requests and responses, it may actually be faster to do it in a single request, in order to reduce the delay associated with multiple round trips. The single-request approach will also transfer slightly less data than the multiple-request one.
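As a rough illustration of the trade-off, here is a TypeScript sketch of the two approaches from the question (the endpoint paths and the node/element API shape are invented):

```typescript
interface Item { id: string; title: string; body: string; }

// Option 1: one request returns the first n elements of the node.
async function fetchBatched(nodeId: string, n: number): Promise<Item[]> {
  const res = await fetch(`/api/nodes/${nodeId}/elements?limit=${n}`);
  return res.json(); // one round trip, one larger payload
}

// Option 2: one request for the IDs, then one request per element.
async function fetchIndividually(nodeId: string, n: number): Promise<Item[]> {
  const idsRes = await fetch(`/api/nodes/${nodeId}/element-ids?limit=${n}`);
  const ids: string[] = await idsRes.json();
  // n further requests; on a high-latency mobile link each one pays the
  // per-request overhead again, even when they run in parallel.
  return Promise.all(ids.map(async id => {
    const res = await fetch(`/api/elements/${id}`);
    return res.json();
  }));
}
```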
Every call has its own overhead (i.e. network load), and the number of simultaneous connections might also be limited.
You might or might not be able to update your user interface during the download, depending on how often your callbacks are called - you may be able to process partial data as it arrives.
If your data compresses well (typically text data), then using a single call might reduce your total network usage even further.
If the chunks of data are large, I'd go with several individual requests. This will also make things easier in case of network errors. The bottom line for me is to get the right balance - make the requests reasonably sized and don't flood the server.
This depends on the situation. If you don't want to make your users wait repeatedly throughout the app, you can use a single request to load all the data at once.
If you don't mind letting the user wait, you can use multiple requests on demand. For example, if you just want to show titles in a table view and details when the user taps a title, you can first fetch only the titles and then, when the user taps one, fetch the details for that title by ID. That would be a pretty good way to request data on demand only.
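A small sketch of that on-demand pattern in TypeScript (the endpoints and field names are placeholders):

```typescript
interface TitleRow { id: string; title: string; }
interface Detail extends TitleRow { body: string; }

// Loaded once, up front: a cheap, small payload for the table view.
async function loadTitles(): Promise<TitleRow[]> {
  const res = await fetch("/api/posts?fields=id,title");
  return res.json();
}

// Loaded lazily, only when the user taps a row.
async function loadDetail(id: string): Promise<Detail> {
  const res = await fetch(`/api/posts/${id}`);
  return res.json();
}
```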
Sometimes the situation merits using a single request for, say, a certain category. Say you have a Twitter app and the tweets are separated out into categories. Someone who has the app but only cares about sports may only look at the sports section, which could be a single AJAX call. Another user may only be interested in two categories out of 15. This means the user doesn't have to load unnecessary data. The important thing you need to determine is this:
Does all of the data need to be loaded at once for the app to work correctly, and are your users generally going to want all that data in the first place?