Does PokeAPI contain data for every Pokemon generation? - pokeapi

I am trying to implement PokeAPI's data in a webapp, but I cannot seem to find all the data I need. When requesting data for, say, butterfree, I would call (https://pokeapi.co/api/v2/pokemon/butterfree/). Under "stats" it has all 6 of butterfree's stats; however, it only has the stats from the most recent generation. Is there a way to specify a particular generation somewhere in the API request URL that would show the stats for a generation before the most recent one?

Their website says that not all information is guaranteed to be completely up to date and accurate.
However, you can update this information yourself or file an issue to report missing/incorrect data in the pokedex GitHub repo.
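For reference, here is a minimal Python sketch of the call from the question (the requests library is my choice; as far as I can tell the /pokemon/ endpoint has no generation parameter, so the stats list only reflects the current base stats):

import requests

# Fetch butterfree's data from the /pokemon/ endpoint mentioned in the question.
resp = requests.get("https://pokeapi.co/api/v2/pokemon/butterfree/")
resp.raise_for_status()
data = resp.json()

# "stats" is a list of six entries; each one carries the stat name and its base value.
for entry in data["stats"]:
    print(entry["stat"]["name"], entry["base_stat"])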

Related

Get all running activities from Google Fit via REST API

Is there a way to get all activities from the Google fitness store via the REST API?
My current assumption is that other apps store their activities in sessions and I can retrieve them using Users.sessions.list. However, the information there does not really include all the information that was stored or that I would like to see: when I manually add a short run via the Fit Android app, I expect this information to be somehow accessible via the sessions API. This should at least include the information I have provided, such as distance or time.
Looking at the same information via the app or the web interface, I can see all the details I have previously entered plus the approximate number of steps and calories.
How do I get this information via the API?
I am currently mainly interested in activities of type running or jogging (8, 56-58) and would like to read the distance in addition to the time information already provided in the session.
Not sure if this is the right way, but I get all the information I need if I follow these steps:
Find the correct session via Users.sessions.list
Query all data via Users.datasets.aggregate:
Set startTimeMillis and endTimeMillis to the values from the session in question
Set bucketBySession to group results by sessions.
I explicitly query all data sources: for every data source id I add a `{ "dataSourceId": <id> }` to the `aggregateBy` array. Not sure if this is necessary.
The resulting bucket has all information related to the session. For my use case I need to clean up overloaded data: some data sources return the distance as steps (derived) while I need the physical length in meters.
This seems to work for my Fit data with the additional cleaning, but I will need to check whether it works for other users' data too. A rough sketch of such an aggregate request is below.
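For what it's worth, here is a rough Python sketch of the aggregate call from step 2. The requests library, the placeholder timestamps and the chosen data types are my assumptions; you need a valid OAuth 2.0 access token with the appropriate fitness scopes.

import requests

ACCESS_TOKEN = "..."  # OAuth 2.0 token with the relevant fitness scopes (assumed to exist)

# Times taken from the session found via Users.sessions.list (step 1); values are placeholders.
session = {"startTimeMillis": "1408045800000", "endTimeMillis": "1408046700000"}

body = {
    # Aggregate the data types of interest; alternatively add one
    # {"dataSourceId": "<id>"} entry per data source, as described above.
    "aggregateBy": [
        {"dataTypeName": "com.google.distance.delta"},
        {"dataTypeName": "com.google.step_count.delta"},
    ],
    # Group results by session so each bucket corresponds to one activity.
    "bucketBySession": {"minDurationMillis": 60000},
    "startTimeMillis": session["startTimeMillis"],
    "endTimeMillis": session["endTimeMillis"],
}

resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=body,
)
resp.raise_for_status()

# Each bucket holds the aggregated datasets for one session; distance and step
# values may still need the kind of cleanup described above.
for bucket in resp.json().get("bucket", []):
    print(bucket.get("session", {}).get("name"), bucket.get("dataset"))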

How to get tweets of a year for data mining using twitter API

I would like to retrieve 2014-2015 tweets based on some search key for data mining.
I am using twitter4j (Java) and calling the GET search/tweets API, but I am only getting tweets from the last week. Can anyone please suggest a solution?
You have to pay for data older than about a week.
Via Gnip, you can order a one-time data pull (called a Historical PowerTrack One-Time Job). You provide the rule, and a data file will be sent back to you with download directions.
You can learn more about Historical PowerTrack here and here

Wordpress: Save custom plugin options from backend

I'm developing a plugin that will pull data from a third-party API. The user inputs a number of options in a normal settings form for the plugin (built with the Redux Framework, which uses the WP Settings API).
The user provided options will then be used to generate a request to the third party API.
Now to my problem/question: How can I store the data that's returned from that API? Is there a built-in way to do this in WordPress, or will I have to create a database table of my own? That seems a bit overkill... Is there any way to "hack" into the Settings API and set custom settings without having to display them in a form on the front end?
Thank you - and happy holidays to everyone!
It sounds like what you want to do is actually just store the data from the remote API request, rather than "options". If you don't want to create a table for them, I can think of three simple approaches.
Transients API
Save the data returned from the API as transients, i.e. temporary cached data. This is generally good for data that's going to expire anyway and thus will need to be refreshed. Set an expiry time! Even if you want to hang onto the data "for ever", set an expiry time or the data will be autoloaded on every page load and thus consume memory even if you don't need them. You can then easily retrieve them with get_transient; if expired, you'll get false and that is your trigger to make your API call again.
NB: on hosts with memcached or other object caches, there's a good chance that your transients will be pushed out of the object cache sooner than you intend, thus forcing your plugin to retrieve the data again from the API. Transients really are about caching, not "data storage" per se.
Options
Save your data as custom options using add_option -- and specify autoload="no" so that they don't fill up script memory when they aren't needed! Beware that update_option will add the data with autoload="yes" if the option doesn't already exist, so I recommend you delete and then add rather than update. You can then retrieve your data easily.
Custom Post Type
You can easily store your data in the wp_posts table by registering a custom post type, and then use wp_insert_post to save records and the usual WordPress post queries to retrieve them. Great for long-term data that you want to hang onto. You can make use of the post_title, post_content, post_excerpt and other standard post fields to store some of your data, and if you need more, you can add post meta fields.

How to programmatically create a new version of a CQ5 page?

Is it possible to programmatically create a new version of a CQ5 page that has a start time some time in the future?
As an example, let's say we have a page that displays tax rates. We have a component that allows the author to upload a new rates table (in the form of a csv file), and it creates the rates page content. We would like to allow the author to upload rates that will become effective on the first of next month.
I know the JCR supports multiple versions of nodes, but it's unclear how (or whether) this relates to CQ5 page versioning, and further, whether a new version can be activated in the future.
Given the requirements as you've described them, I would probably accomplish the task in a slightly different way...
Instead of storing my rates table information directly within the page's jcr:content node (or a sub-node thereof), I'd probably abstract it out to somewhere else in the repository. You could then, if you so desired, create some sort of admin interface to allow content authors to upload their csv file of new rates and ingest it into the repository as needed. Alternatively, assuming that data comes from some sort of database, you could probably just write a job to ingest it automatically on a scheduled basis using a JDBC connection from CQ. Once the data is in the repository, you could then write the display component to read the data from the repository, instead of it being embedded directly inside the page.
This approach has the advantage of making that data re-usable within CQ, to be shown on multiple pages, multiple sites, even in many different display formats if need be. In addition, you can design your JCR structure to support whatever requirements you have around updates to the data (daily, weekly, monthly, yearly, etc.); obviously this will depend on your specific requirements.
The one downside is that, since there is a separation between the data and the page(s) where it is displayed, you may need to find a way to ensure the cache is properly cleared whenever the data changes.
Update (based on your comment):
The problem I foresee with versioning the page (granted, I've not tried this, so maybe it will work) is that there can only ever be one active version at a time. Therefore, once the next month's data is uploaded, you need to maintain the old data (active) and the new data (not yet active) at the same time. What happens if you require a separate content change during that window? From a business process perspective that just seems messy to me.
Back to the cache clearing issue: if you know the affected pages, especially if they are all in one subtree, you could write a custom workflow process that uses the replicator service to clear the cache for the affected pages, then set up a launcher to run the workflow on node changes for the data.
The other option, and this one is less defined in my head so some experimentation is required, would be to use CQ's built-in "Activate Later" and "Deactivate Later" functionality.
Maybe create a specific template for the rates data, with the implicit requirement that only one page using that template is ever active at one time. Your display components could use a query to find the currently active rates data.
I have not personally tried this, but...
I assume that you can use the PageManager service's createRevision method, and then, if that returns without throwing an exception, call page.getContentResource().adaptTo(Node.class) and edit the JCR properties for your tax rates component on the node that is returned.
See PageManager
You could write a workflow that includes a publish step that is triggered by the arrival of a calendar date. The version of the page with the new tax rates remains in the workflow pipeline in draft form and is only published/activated when the date arrives. (So you'd need some sort of process that wakes up once a day to check the calendar.)
Each time a page is modified, CQ creates a version of the page.
The modified page's modification time is stored in the page's jcr:lastModified property.
This property could be manipulated to store a future date and activate the page on that date, though that is not the preferred way.
You can store the future date as a property in the page.
Later, as suggested by @David, you can create a workflow or a scheduled job which activates pages with a future date.

How to Sync iPhone Core Data with web server, and then push to other devices? [closed]

I have been working on a method to sync core data stored in an iPhone application between multiple devices, such as an iPad or a Mac. There are not many (if any at all) sync frameworks for use with Core Data on iOS. However, I have been thinking about the following concept:
A change is made to the local core data store, and the change is saved. (a) If the device is online, it tries to send the changeset to the server, including the device ID of the device which sent the changeset. (b) If the changeset does not reach the server, or if the device is not online, the app will add the change set to a queue to send when it does come online.
The server, sitting in the cloud, merges the specific change sets it receives with its master database.
After a change set (or a queue of change sets) is merged on the cloud server, the server pushes all of those change sets to the other devices registered with the server using some sort of polling system. (I thought to use Apple's Push services, but apparently according to the comments this is not a workable system.)
Is there anything fancy that I need to be thinking about? I have looked at REST frameworks such as ObjectiveResource, Core Resource, and RestfulCoreData. Of course, these are all working with Ruby on Rails, which I am not tied to, but it's a place to start. The main requirements I have for my solution are:
Any changes should be sent in the background without pausing the main thread.
It should use as little bandwidth as possible.
I have thought about a number of the challenges:
Making sure that the object IDs for the different data stores on different devices are tied together on the server. That is to say, I will have a table of object IDs and device IDs, which are tied via a reference to the object stored in the database. I will have a record (DatabaseId [unique to this table], ObjectId [unique to the item in the whole database], Datafield1, Datafield2); the ObjectId field will reference another table, AllObjects: (ObjectId, DeviceId, DeviceObjectId). Then, when a device pushes up a change set, it will pass along the device ID and the objectId from the Core Data object in the local data store. My cloud server will then check the objectId and device ID against the AllObjects table and find the record to change in the initial table (a rough sketch of this mapping follows after this list).
All changes should be timestamped, so that they can be merged.
The device will have to poll the server, without using up too much battery.
The local devices will also need to update anything held in memory if/when changes are received from the server.
Is there anything else I am missing here? What kinds of frameworks should I look at to make this possible?
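To illustrate that mapping, here is a rough SQLite sketch in Python. The AllObjects table and its columns are as described above; the name Records for the "initial table" and the helper function are just assumptions for the example.

import sqlite3

# Illustrative only: the two tables from the first point above, in SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE AllObjects (ObjectId INTEGER PRIMARY KEY, DeviceId TEXT, DeviceObjectId TEXT);
    CREATE TABLE Records   (DatabaseId INTEGER PRIMARY KEY,
                            ObjectId INTEGER REFERENCES AllObjects(ObjectId),
                            Datafield1 TEXT, Datafield2 TEXT);
""")

def apply_change(device_id, device_object_id, field1, field2):
    """Resolve the device's local object id to the global ObjectId, then update the record."""
    row = con.execute(
        "SELECT ObjectId FROM AllObjects WHERE DeviceId = ? AND DeviceObjectId = ?",
        (device_id, device_object_id),
    ).fetchone()
    if row is None:
        # First time this device sends the object: register it, then insert the record.
        object_id = con.execute(
            "INSERT INTO AllObjects (DeviceId, DeviceObjectId) VALUES (?, ?)",
            (device_id, device_object_id),
        ).lastrowid
        con.execute("INSERT INTO Records (ObjectId, Datafield1, Datafield2) VALUES (?, ?, ?)",
                    (object_id, field1, field2))
    else:
        con.execute("UPDATE Records SET Datafield1 = ?, Datafield2 = ? WHERE ObjectId = ?",
                    (field1, field2, row[0]))
    con.commit()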
I've done something similar to what you're trying to do. Let me tell you what I've learned and how I did it.
I assume you have a one-to-one relationship between your Core Data object and the model (or db schema) on the server. You simply want to keep the server contents in sync with the clients, but clients can also modify and add data. If I got that right, then keep reading.
I added four fields to assist with synchronization:
sync_status - Add this field to your core data model only. It's used by the app to determine if you have a pending change on the item. I use the following codes: 0 means no changes, 1 means it's queued to be synchronized to the server, and 2 means it's a temporary object and can be purged.
is_deleted - Add this to the server and core data model. A delete event shouldn't actually delete a row from the database or from your client model, because that leaves you with nothing to synchronize back. By having this simple boolean flag, you can set is_deleted to 1, synchronize it, and everyone will be happy. You must also modify the code on the server and client to query non-deleted items with "is_deleted=0".
last_modified - Add this to the server and core data model. This field should automatically be updated with the current date and time by the server whenever anything changes on that record. It should never be modified by the client.
guid - Add a globally unique id (see http://en.wikipedia.org/wiki/Globally_unique_identifier) field to the server and core data model. This field becomes the primary key and becomes important when creating new records on the client. Normally your primary key is an incrementing integer on the server, but we have to keep in mind that content could be created offline and synchronized later. The GUID allows us to create a key while being offline.
On the client, add code to set sync_status to 1 on your model object whenever something changes and needs to be synchronized to the server. New model objects must generate a GUID.
Synchronization is a single request. The request contains:
The MAX last_modified time stamp of your model objects. This tells the server you only want changes after this time stamp.
A JSON array containing all items with sync_status=1.
The server gets the request and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. The last_modified field is automatically updated.
The server returns a JSON array containing all objects with a last_modified time stamp greater than the time stamp sent in the request. This will include the objects it just received, which serves as an acknowledgment that the record was successfully synchronized to the server.
The app receives the response and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. Each record gets its sync_status set to 0. (A rough sketch of this round trip follows below.)
I used the word record and model interchangeably, but I think you get the idea.
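To make that round trip concrete, here is a rough server-side sketch in Python. It is only an illustration of the scheme described above: the in-memory dict, the function name and the request shape are all assumptions; a real server would sit behind an HTTP endpoint and use a proper database.

import time

# Illustrative in-memory store keyed by guid; a real server would use a database.
records = {}   # guid -> {"guid": ..., "is_deleted": 0/1, "last_modified": float, ...fields}

def handle_sync(request):
    """request = {"max_last_modified": <client's newest timestamp>,
                  "changes": [<records with sync_status=1 on the client>]}"""
    now = time.time()

    # 1. Apply the client's changes; the server owns last_modified.
    for incoming in request["changes"]:
        record = dict(incoming)
        record.pop("sync_status", None)      # client-only bookkeeping field
        record["last_modified"] = now
        records[record["guid"]] = record     # insert or update by guid

    # 2. Return everything newer than the client's high-water mark, which also
    #    acknowledges the records it just sent.
    return [r for r in records.values()
            if r["last_modified"] > request["max_last_modified"]]

On the next sync the client sends the newest last_modified it has seen, so only genuinely newer changes come back.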
I suggest carefully reading and implementing the sync strategy discussed by Dan Grover at iPhone 2009 conference, available here as a pdf document.
This is a viable solution and is not that difficult to implement (Dan implemented it in several of his applications), and it overlaps with the solution described by Chris. For an in-depth, theoretical discussion of syncing, see the paper by Russ Cox (MIT) and William Josephson (Princeton):
File Synchronization with Vector Time Pairs
which applies equally well to core data with some obvious modifications. This provides an overall much more robust and reliable sync strategy, but requires more effort to be implemented correctly.
EDIT:
It seems that Grover's pdf file is no longer available (broken link, March 2015). UPDATE: the link is available through the Wayback Machine here.
The Objective-C framework called ZSync and developed by Marcus Zarra has been deprecated, given that iCloud finally seems to support correct core data synchronization.
If you are still looking for a way to go, look into Couchbase Mobile. It basically does all you want. (http://www.couchbase.com/nosql-databases/couchbase-mobile)
Like @Chris, I've implemented a class for synchronization between client and server and solved all the known problems so far (sending/receiving data to/from the server, merging conflicts based on timestamps, removing duplicate entries in unreliable network conditions, synchronizing nested data and files, etc.).
You just tell the class which entity and which columns it should sync and where your server is.
M3Synchronization *syncEntity = [[M3Synchronization alloc] initForClass: @"Car"
                                 andContext: context
                                 andServerUrl: kWebsiteUrl
                                 andServerReceiverScriptName: kServerReceiverScript
                                 andServerFetcherScriptName: kServerFetcherScript
                                 ansSyncedTableFields: @[@"licenceNumber", @"manufacturer", @"model"]
                                 andUniqueTableFields: @[@"licenceNumber"]];
syncEntity.delegate = self; // delegate should implement onComplete and onError methods
syncEntity.additionalPostParamsDictionary = ... // add some POST params to authenticate current user
[syncEntity sync];
You can find source, working example and more instructions here: github.com/knagode/M3Synchronization.
Notify the user to update data via push notifications.
Use a background thread in the app to compare the local data with the data on the cloud server; when a change happens on the server, update the local data, and vice versa.
So I think the most difficult part is determining which side's data is stale.
Hope this can help you.
I have just posted the first version of my new Core Data Cloud Syncing API, known as SynCloud.
SynCloud differs from iCloud in that it provides a multi-user sync interface. It is also different from other syncing APIs because it supports multi-table, relational data.
Please find out more at http://www.syncloudapi.com
Built with the iOS 6 SDK, it is very much up to date as of 9/27/2012.
I think a good solution to the GUID issue is "distributed ID system". I'm not sure what the correct term is, but I think that's what MS SQL server docs used to call it (SQL uses/used this method for distributed/sync'ed databases). It's pretty simple:
The server assigns all IDs. Each time a sync is done, the first thing that is checked are "How many IDs do I have left on this client?" If the client is running low, it asks the server for a new block of IDs. The client then uses IDs in that range for new records. This works great for most needs, if you can assign a block large enough that it should "never" run out before the next sync, but not so large that the server runs out over time. If the client ever does run out, the handling can be pretty simple, just tell the user "sorry you cannot add more items until you sync"... if they are adding that many items, shouldn't they sync to avoid stale data issues anyway?
I think this is superior to using random GUIDs because random GUIDs are not 100% safe, and usually need to be much longer than a standard ID (128-bits vs 32-bits). You usually have indexes by ID and often keep ID numbers in memory, so it is important to keep them small.
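Here is a minimal Python sketch of that block-allocation idea (class names and the block size are illustrative, not taken from SQL Server or any particular product):

class IdBlockAllocator:
    """Server side: hands out contiguous blocks of IDs so each client can
    create records offline without colliding with other clients."""
    def __init__(self, block_size=10_000):
        self.block_size = block_size
        self.next_id = 1

    def allocate_block(self):
        start = self.next_id
        self.next_id += self.block_size
        return start, self.next_id - 1        # inclusive range reserved for one client


class ClientIdSource:
    """Client side: consumes its reserved block and tops up at sync time."""
    def __init__(self, first, last):
        self.current, self.last = first, last

    def remaining(self):
        return self.last - self.current + 1

    def refill(self, first, last):            # called during sync when running low
        self.current, self.last = first, last

    def next_record_id(self):
        if self.current > self.last:
            # As described above: tell the user to sync before adding more items.
            raise RuntimeError("Out of local IDs - please sync first")
        rid, self.current = self.current, self.current + 1
        return rid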
I didn't really want to post this as an answer, but I don't know that anyone would see it as a comment, and I think it's important to this topic and not covered in the other answers.
First you should rethink how much data, how many tables and how many relations you will have. In my solution I've implemented syncing through Dropbox files. I observe changes in the main MOC and save these data to files (each row is saved as gzipped JSON). If there is a working internet connection, I check whether there are any changes on Dropbox (Dropbox gives me delta changes), download and merge them (latest wins), and finally upload the changed files. Before syncing I put a lock file on Dropbox to prevent other clients from syncing incomplete data. When downloading changes it is safe if only partial data is downloaded (e.g. a lost internet connection). When downloading is finished (fully or partially) it starts to load files into Core Data. When there are unresolved relations (not all files are downloaded) it stops loading files and tries to finish downloading later. Relations are stored only as GUIDs, so I can easily check which files to load to have full data integrity.
Syncing starts after changes to Core Data are made. If there are no changes, it checks for changes on Dropbox every few minutes and on app startup. Additionally, when changes are sent to the server I send a broadcast to other devices to inform them about the changes, so they can sync faster.
Each synced entity has a GUID property (the GUID is also used as the filename for exchange files). I also have a Sync database where I store the Dropbox revision of each file (so I can compare it when the Dropbox delta resets its state). Files also contain the entity name, state (deleted/not deleted), GUID (same as the filename), database revision (to detect data migrations or to avoid syncing with newer app versions) and of course the data (if the row is not deleted).
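As a rough illustration of that exchange-file format, here is a Python sketch; the field names and layout are my guesses based on the description above, not the actual implementation:

import gzip, json, uuid

def write_exchange_file(directory, entity_name, attributes, deleted=False, db_revision=1):
    """Serialize one row as a gzipped JSON file named after its GUID,
    roughly in the spirit of the format described above (field names are made up)."""
    guid = attributes.get("guid") or str(uuid.uuid4())
    payload = {
        "entity": entity_name,
        "guid": guid,
        "deleted": deleted,
        "databaseRevision": db_revision,
        "data": None if deleted else attributes,   # omit the row data for deletions
    }
    path = f"{directory}/{guid}.json.gz"
    with gzip.open(path, "wt", encoding="utf-8") as fh:
        json.dump(payload, fh)
    return path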
This solution works for thousands of files and about 30 entities. Instead of Dropbox I could use a key/value store exposed as a REST web service, which I want to do later, but have no time for this :) For now, in my opinion, my solution is more reliable than iCloud and, which is very important, I have full control over how it works (mainly because it's my own code).
Another solution is to save MOC changes as transactions. There would be far fewer files exchanged with the server, but it's harder to do the initial load into empty Core Data in the proper order. iCloud works this way, and other syncing solutions take a similar approach, e.g. TICoreDataSync.
--
UPDATE
After a while, I migrated to Ensembles - I recommend this solution over reinventing the wheel.