Respected Seniors,
I need proper guidance on Addressables, please.
If someone is willing to share his or her experience, I would be very thankful. I have watched so many videos and still cannot understand it; I am so confused.
I need to upload several different spritesheets (roughly 5 to 10) with different labels. Should I make a separate group for each spritesheet, or place them all in a single group? Which is the better way in terms of performance, etc.?
And I am confused about the Addressables cache and data.
I want the spritesheets to be downloaded from the server only once, and to be downloaded again on the user's device only when we update the data on the server.
Will the Addressables cache and data stay available until the user uninstalls the application, or will they be deleted after 150 days? (Someone told me they are deleted automatically after 150 days.)
Please guide me about the settings and code. I have also done some work myself, but Addressables is a very confusing topic, so I am not sure if I am doing it correctly or not.
I am hoping for a positive response. Thank you for your precious time.
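For reference, the behaviour being asked about (download once, re-download only when the server content changes) boils down to a version check against the remote content; in Unity this is what the Addressables catalog flow (Addressables.CheckForCatalogUpdates / Addressables.UpdateCatalogs / Addressables.DownloadDependenciesAsync) handles in C#, and if I remember correctly the 150-day figure comes from Unity's Caching.expirationDelay default, which controls how long unused cached bundles are kept. Below is a minimal sketch of the underlying check-then-download pattern in TypeScript (Node 18+); the URLs and file names are placeholders, not real Addressables endpoints:

```typescript
// Sketch of the "download once, refresh only when the server changes"
// pattern. VERSION_URL and BUNDLE_URL are hypothetical placeholders.
import * as fs from "fs";

const VERSION_URL = "https://example.com/catalog.version"; // hypothetical
const BUNDLE_URL = "https://example.com/spritesheets.bundle"; // hypothetical
const LOCAL_VERSION_FILE = "cache/version.txt";
const LOCAL_BUNDLE_FILE = "cache/spritesheets.bundle";

async function syncSpritesheets(): Promise<void> {
  const remoteVersion = await (await fetch(VERSION_URL)).text();
  const localVersion = fs.existsSync(LOCAL_VERSION_FILE)
    ? fs.readFileSync(LOCAL_VERSION_FILE, "utf8")
    : "";

  // Only download when the server-side content has actually changed.
  if (remoteVersion !== localVersion) {
    const bundle = Buffer.from(await (await fetch(BUNDLE_URL)).arrayBuffer());
    fs.mkdirSync("cache", { recursive: true });
    fs.writeFileSync(LOCAL_BUNDLE_FILE, bundle);
    fs.writeFileSync(LOCAL_VERSION_FILE, remoteVersion);
  }
  // Otherwise the cached copy already on the device is used as-is.
}
```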
I have 2 closed-source applications that must share the same data at some point. Both use REST APIs.
A concrete example is helpdesk tickets: they can be created in both applications, and I need to update the data in one application when the user adds or closes a ticket in the other application, and vice versa.
Since they are closed-source, I can't really modify the code.
I was thinking I could create a third application that, every 5 minutes or so, lists both applications' tickets, compares them against the previous call, and if the data differs from the previous call, updates the other application too.
Is there a better way of doing this?
With closed-source applications it's nearly impossible to get something out of them, unless they have some plugin-based setup that you can hook into.
The most efficient way in terms of cost would be to have the first application publish a message on a queue, or call a webhook that you set, whenever the event is triggered. But as I mentioned, the application needs to support that.
So yeah, your solution is pretty much all you can do for now, but keep in mind the challenges you may encounter over time (there's a sketch of the polling loop after these points):
What if the results of both APIs are too large to be compared directly? Maybe you need to think about paging the results.
What if your app crashes and you lose the previous state? You need to somehow back it up to an external source.
How often should you poll the API to make sure you're getting the updates you need, while keeping good performance for the existing traffic?
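A minimal sketch of that poll-and-diff bridge, assuming made-up endpoints and a made-up Ticket shape. Only the A-to-B direction is shown; the B-to-A direction would mirror it:

```typescript
// Poll one application's tickets, diff against the previous snapshot,
// and push any new or changed tickets to the other application.
interface Ticket { id: string; status: string; }

const APP_A = "https://app-a.example.com/api/tickets"; // hypothetical
const APP_B = "https://app-b.example.com/api/tickets"; // hypothetical

let previousA = new Map<string, Ticket>();

async function fetchTickets(url: string): Promise<Map<string, Ticket>> {
  const tickets: Ticket[] = await (await fetch(url)).json();
  return new Map(tickets.map((t): [string, Ticket] => [t.id, t]));
}

async function pollOnce(): Promise<void> {
  const currentA = await fetchTickets(APP_A);

  for (const [id, ticket] of currentA) {
    const before = previousA.get(id);
    // New ticket, or status changed since the previous poll: push to app B.
    if (!before || before.status !== ticket.status) {
      await fetch(APP_B, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(ticket),
      });
    }
  }
  // In a real setup, persist this snapshot externally, so a crash
  // does not lose the previous state (see the caveats above).
  previousA = currentA;
}

setInterval(() => { void pollOnce(); }, 5 * 60 * 1000); // every 5 minutes
```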
I am about to work on an app which handles extremely valuable data. Any loss of this data for the user would be very costly, so I'm interested in finding out more about the best architecture design for our needs.
The user will be inputting this data in their iPhone each day. The alternative to using this app is carrying around a piece of paper with this sensitive information on it. So while I know we can be more secure than a piece of paper, I want to make sure we also cover the user stories like "I flushed my phone down the toilet" or "my son deleted the app, where's my data?"
A service like Dropbox comes to mind, but I wouldn't want to require our users to have a Dropbox account; the syncing architecture must be transparent to the user. iCloud is out because web and Android versions may follow.
Can anyone suggest either some good reading on this subject, or some good frameworks to look at? I expect to use a node.js backend, and while we are targeting iPhone first, Android will follow.
The data itself consists of 2 tables, each with a small number of fields, with a many-to-many relationship. A few new rows will be created by the user each day, but the data will be small and highly compressible.
Turns out this is an extremely difficult issue. In data assurance (this isn't yet a security-type situation, although it could become one because of the assurance aspect) there is ALWAYS a time element. As a simple example: what happens if your user has locally updated some piece of data, and just before you have the ability to fully push the data to some cloud service, etc., he/she dumps it in the toilet? Even if there was a good signal for transmitting the data, there is time spent transferring it and time needed for the cloud server to respond saying the data got there properly.
Generally in data assurance, you really have to do the best you can. You will NEVER be able to solve all issues, as there is no data center, nor link to a data center, etc., that is perfect. There is always a chance of data loss. Truly the best you can do is SYNC as fast as data changes, and if there is a loss of connection, as soon as the connection comes alive again.
Now, for security. Security by itself does not create assurance. If the data itself is something that the customer does not want to lose, and that is his only requirement, then security is unnecessary. If he/she is also worried about others getting their hands on the data, then you have to worry about data in transit (both up and down during syncing) and about data on the device itself.

For the best potential security, encrypt the data locally on the device prior to pushing it over the cloud. There are many known attacks that can get at the data even when SSL or other services are used. If you wish, locally encrypt a file, and then you could, for SOME added security, still use SSL (at this point you will have doubly encrypted the data). You also want to sign the data so that there is little chance of it being manipulated in transit, or by the cloud server itself (if a hacker hacked the cloud server).

Generally, to protect the data while on the device, you may choose to have the user input a password, put some fairly strict rules around how passwords are formed, and limit how many tries you allow before you disallow attempts for 30 minutes or so.
You may also wish to store the data locally in encrypted form. This way, if someone gets the device, they will still need the password before they can get the data (unless of course they can crack the algorithm you use to generate the symmetric key from the password).
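As a concrete illustration of the encrypt-locally-then-sync idea, here is a minimal sketch using Node's built-in crypto module with a password-derived symmetric key. AES-GCM also authenticates the ciphertext, which covers part of the signing concern for a single-key setup; the blob layout and field sizes here are my own choices, not a prescribed scheme:

```typescript
// Encrypt data on the device before it ever leaves it.
import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from "crypto";

function encrypt(plaintext: string, password: string): Buffer {
  const salt = randomBytes(16);
  const key = scryptSync(password, salt, 32); // derive key from the password
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store salt, IV, and auth tag alongside the ciphertext.
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]);
}

function decrypt(blob: Buffer, password: string): string {
  const salt = blob.subarray(0, 16);
  const iv = blob.subarray(16, 28);
  const tag = blob.subarray(28, 44);
  const ciphertext = blob.subarray(44);
  const key = scryptSync(password, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampered data will fail decipher.final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```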
In terms of an online data service, you could use iCloud, etc. I am actually NOT a fan of anything cloud. I think it is SO counter to enterprise/proprietary data concerns, it isn't even funny. I find it almost laughable that so many of these phone/device manufacturers are going SOOOOO cloud-based. I think they are abandoning the big companies, as NO big company I know of wants to place its proprietary data on a cloud server that THEY DON'T CONTROL. In any case, I would argue that as long as you have a good local encryption scheme prior to sending out the data, you should be OK. From an assurance perspective, however, I would look at where the servers are located. The reason is that if assurance of data is of prime concern, most larger IT setups like to have replicated data centers on opposing sides of the country/world, etc. That way, if an earthquake takes down the data center on one side of the country, it most likely will NOT take down the one on the other side simultaneously. If the data centers for iCloud or whatever you can find are essentially in one locale, then you might consider syncing with one data center on the west coast, and choosing a completely different data center (in this case, company) to sync with that is centered on the east coast.
This is all very high level; how you would implement this on an iPhone specifically we could also talk about, but I hope this at least begins to pave a path.
While developing Facebook applications, I have faced this problem many times: if I delete an image, it still appears in the application while testing. Even if I delete the whole file, the application still executes successfully. So I want to know: does Facebook have its own cache from which files are served?
If so, is there any solution to this problem?
If not, why is this happening?
Best Regards & Thanks in advance
Not sure about image files (they reside in a CDN), but Facebook uses memcached servers to cache their stuff.
It's not that it has a cache, but that its main backing store doesn't provide any more coherency than is strictly necessary. Coherency has a cost, so if you don't need it, it makes sense not to pay the cost.
When operations have no enforced order between them, they may complete as if they were executed in either order. If your retrieval and your delete have no enforced order, then they may complete as if they were executed in either order. This applies even if one operation receives its response before the other operation was sent.
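A toy illustration of that point (made-up delays, nothing to do with Facebook's actual storage internals): two concurrently issued operations are free to complete in either order from run to run.

```typescript
// Two unordered operations may finish in either order; run this a few
// times and the log order will vary.
const randomDelay = () => new Promise((r) => setTimeout(r, Math.random() * 100));

async function deleteImage(): Promise<void> {
  await randomDelay();
  console.log("delete completed");
}

async function fetchImage(): Promise<void> {
  await randomDelay();
  console.log("fetch completed (may still observe the old image)");
}

// Fired concurrently: no enforced order between them.
void Promise.all([deleteImage(), fetchImage()]);
```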
My understanding was that there was a cache. Especially for images and styles.
I have frequently made changes to CSS and updated images, only to be left wondering why I cannot see these updates.
I always change my CSS URL to be something like styles/styles.css?time=, which remedies everything.
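That trick is just cache busting: a changing query parameter makes the browser (or any proxy cache in between) treat the URL as new. A tiny sketch of the idea:

```typescript
// Append a changing query parameter so caches treat the URL as new.
function bustCache(url: string): string {
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}time=${Date.now()}`;
}

// e.g. <link rel="stylesheet" href="styles/styles.css?time=1700000000000">
console.log(bustCache("styles/styles.css"));
```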
As for images, right-click on the image in the application and view it in the browser. Refresh to get the updated image, and then go back to your application.
I have a small app which allows users to view information on beers, and the beers they have tried, for a local bar's beer club.
I have 4 Views. Beer Menu, All Stats, My Stats and Settings.
Originally, I thought to pull all of the data via a web service and return xml at initial load of the app, and use it throughout.
OR...
I could just pull what I need, when I need it. This would mean pulling only the data I need, which would be faster per request, but it would result in more requests. What would be better:
a) pull all data, store globally, build views as needed.
b) pull only the data I need, when I need it. This means if they click on a beer, I would make a request for that beer's info. If they clicked on 10 different beers, that would be 10 different requests.
What is better? Or does it even matter?
Yeah, I think on mobile devices these kinds of decisions do matter.
With these kinds of concerns I think sometimes there is no right answer, but here are a few pointers:
Use JSON, not XML (if you can)
It's less verbose and, depending on the data, could make a difference to the speed.
Do not block the UI thread
This is really a general guide to all app development, in my opinion. The worst thing you can do is block the UI thread.
Coding for a progressive UI that loads data separately will always be more fiddly than just doing a batch load and then returning everything. But the extra work will really make your user experience a lot more pleasant.
Be clever about your requests
This kind of carries on from the last point. I'm not saying do a million requests, but do try to find a balance between fewer requests and loading data as needed (which would suggest more requests).
Try to really think about how the user is going to use your app, and see if you can do some clever pre-fetching based on what you THINK the user is most likely to need in a given view.
i.e. What is the most likely view to be used next? Can you pre-fetch the data for that? (There's a small sketch of this after the list.)
This last part is really the fine tuning, and will result in a lot of trial and error. But the end result will hopefully be a really great app that just feels fast, and feels right.
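A minimal sketch of that kind of view-based pre-fetching, with a made-up endpoint. The idea is just to start the request for the likely next view early and reuse the in-flight promise if the user actually goes there:

```typescript
// Start the request for the likely next view early; reuse it on demand.
const prefetched = new Map<string, Promise<unknown>>();

function prefetch(url: string): void {
  if (!prefetched.has(url)) {
    prefetched.set(url, fetch(url).then((r) => r.json()));
  }
}

async function getViewData(url: string): Promise<unknown> {
  // Use the pre-fetched promise if one exists, otherwise fetch on demand.
  return prefetched.get(url) ?? fetch(url).then((r) => r.json());
}

// e.g. while the user is on the Beer Menu, warm up the All Stats view:
prefetch("https://example.com/api/stats"); // hypothetical endpoint
```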
I'd go with loading cached data on launch (if it exists) and then loading fresh data in the background as needed. This keeps your app as responsive as possible. It's a balance between draining battery life on requests vs. responsiveness and data availability. I think the balance is caching information with a timestamp (if the data changes; if it doesn't, even better) and then updating as needed.
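Something like the following sketch, where the endpoint and Beer shape are invented for illustration, and an in-memory variable stands in for on-device storage:

```typescript
// Serve cached data immediately; refresh in the background when stale.
interface Beer { id: number; name: string; }
interface CachedList { fetchedAt: number; beers: Beer[]; }

const MAX_AGE_MS = 10 * 60 * 1000; // treat cached data as fresh for 10 min
let cache: CachedList | null = null; // stands in for on-device storage

async function getBeers(): Promise<Beer[]> {
  if (cache) {
    if (Date.now() - cache.fetchedAt > MAX_AGE_MS) {
      void refresh(); // stale: refresh in the background, don't block the UI
    }
    return cache.beers; // show something immediately
  }
  return refresh(); // first launch: nothing cached yet
}

async function refresh(): Promise<Beer[]> {
  const beers: Beer[] = await (await fetch("https://example.com/api/beers")).json();
  cache = { fetchedAt: Date.now(), beers };
  return beers;
}
```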
The reason I ask is that Stack Overflow has been Slashdotted, and Redditted.
First, what kinds of effect does this have on the servers that power a website? Second, what can be done by system administrators to ensure that their sites remain up and running as best as possible?
Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience.
Scalability is your first immediate concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are.
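As a small example of the "cache it instead of re-querying" point, a tiny TTL cache in front of an arbitrary query function (queryDatabase here is a stand-in, not a real API):

```typescript
// Avoid hitting the database on every request: cache results briefly.
const TTL_MS = 30 * 1000;
const cache = new Map<string, { expires: number; value: unknown }>();

async function cachedQuery(
  sql: string,
  queryDatabase: (sql: string) => Promise<unknown>, // stand-in for your db client
): Promise<unknown> {
  const hit = cache.get(sql);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // served from memory: no database round-trip
  }
  const value = await queryDatabase(sql);
  cache.set(sql, { expires: Date.now() + TTL_MS, value });
  return value;
}
```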
For absurdly high traffic, consider the ability to switch some dynamic pages over to static pages.
Having a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer).
Your next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it becomes more important with high usage.
Firstly, ask if you really want to spend weeks and thousands of dollars planning for something that might not even happen and, if it does happen, lasts about 5 hours.
The easiest solution is to have a good way to switch to a page that simply allows a signup. People will sign up, and you can email them when the storm has passed.
More elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a db on another server, can you do load balancing?). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe Slicehost. With both services you can easily start new instances ("Let's move the database to a different server") and expand your instances ("Let's upgrade the db server to 4GB RAM").
If you keep all data in the db (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the highest resources available, but only because I haven't worked with db replication and it used to be quite hard to do, at least with MySQL. Things might have improved.
The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing the workload across multiple systems). The IT guy needs to work out how best to support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers served by multiple providers. You can also look at geographic caching services and apps such as CacheFly.
You want to reduce your bottlenecks as much as possible. You also want to design the environment such that it can be scaled out as needed without much work. Do the design work up front and it'll mean less headaches when you do get dugg.
Some ideas (of what I used in the past and current projects):
For boosting performance (if needed) you can put a reverse-proxying, caching Squid in front of your server. Of course, that only works if you don't have session keys and if the pages are somewhat static (meaning they change only once an hour or so) and not personalised.
With Squid you can speed up a bloated and slow CMS like TYPO3, getting the performance of static websites with the comfort of a CMS.
You can outsource large files to external services like Amazon S3, saving your server's bandwidth.
And if you are able to spend some (three figures per month) bucks, you can also use a Content Delivery Network. With that in place you automatically get scaling, high availability, and low latencies for your users. Of course, your pages must be cachable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics, videos, and static stuff.
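Making pages cachable for a Squid or CDN layer mostly comes down to the Cache-Control header. A minimal sketch with Node's built-in http module; the /static/ path split between shared and personalised content is just an example, not a rule:

```typescript
// Mark shared content as cachable for proxies; opt personalised pages out.
import { createServer } from "http";

createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    // Safe to cache: same bytes for every user, for up to an hour.
    res.setHeader("Cache-Control", "public, max-age=3600");
  } else {
    // Personalised or session-bound: tell proxies not to cache it.
    res.setHeader("Cache-Control", "private, no-store");
  }
  res.end("hello");
}).listen(8080);
```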
The load goes up, as other answers have mentioned.
You'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel.
Even a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of the vandalism.