Access base stats from older generations - PokeAPI

I am working on some stat calculations for Generation 3, and in doing so I noticed that the base stats PokeAPI returns are from the most recent generation. For example, Pidgeot's base Speed stat is reported as 101, but it was 91 in Generation 3.
Is there any way to access the base stats of a Pokemon as they were in older generations?
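For illustration, here is a minimal Java sketch of the behavior described (it uses crude regex extraction rather than a proper JSON library, and assumes the compact JSON layout PokeAPI currently serves). The /pokemon endpoint appears to expose a single stats array with no generation parameter, so Pidgeot's speed comes back as the current 101 rather than the Gen 3 value of 91:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PidgeotStats {
    public static void main(String[] args) throws Exception {
        // Fetch Pidgeot; the stats in this payload are current-generation only.
        String body = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://pokeapi.co/api/v2/pokemon/pidgeot")).build(),
                HttpResponse.BodyHandlers.ofString()).body();
        // Each "stats" entry pairs a "base_stat" with the stat's "name".
        Matcher m = Pattern.compile("\"base_stat\":(\\d+).*?\"name\":\"([a-z-]+)\"", Pattern.DOTALL)
                .matcher(body);
        while (m.find()) {
            System.out.println(m.group(2) + " = " + m.group(1)); // prints speed = 101, not 91
        }
    }
}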

Related

AnyLogic Source Block Creating Multiple Types of Agents with Different Interarrival Times

I am working in AnyLogic to create a model. I have a Source block that creates 17 different agent types, each with its own interarrival time. I would like all 17 agent types to arrive in parallel, each according to its own interarrival time.
My database looks like this:
part_name    iat    processing_time
part_1       2.3    4.3
part_2       3.5    3.9
...
I have searched and tried everything I could find online.
The AnyLogic documentation suggests creating an agent population, but I do not think that feature is supported anymore.
Any help would be appreciated.
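Conceptually, what is being asked for is 17 independent arrival streams merged in time order. Here is a plain-Java sketch of that idea (this is NOT the AnyLogic API; the two mean interarrival times are the ones from the table above):

import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Random;

public class ParallelArrivals {
    record Arrival(double time, String part, double meanIat) {}

    // Exponential interarrival draw with the given mean.
    static double expSample(Random rng, double mean) {
        return -mean * Math.log(1.0 - rng.nextDouble());
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] meanIat = {2.3, 3.5 /*, ... one entry per part, 17 in total */};
        PriorityQueue<Arrival> next = new PriorityQueue<>(Comparator.comparingDouble(Arrival::time));
        for (int i = 0; i < meanIat.length; i++) {   // seed one pending arrival per stream
            next.add(new Arrival(expSample(rng, meanIat[i]), "part_" + (i + 1), meanIat[i]));
        }
        for (int n = 0; n < 10; n++) {               // pop arrivals in time order
            Arrival a = next.poll();
            System.out.printf("t=%6.2f  %s arrives%n", a.time(), a.part());
            // Each stream immediately schedules its own next arrival.
            next.add(new Arrival(a.time() + expSample(rng, a.meanIat()), a.part(), a.meanIat()));
        }
    }
}

In AnyLogic itself the equivalent is commonly one Source block per part type (each reading its own iat from the database), or a single Source whose arrivals are driven by a rate or schedule; I'm not certain which of these your version still supports.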

How to use Drools decision tables with MongoDB or any other rule engine

I have to store a large number of rules in my database, nearly 100k.
These rules can be added, modified, and deleted dynamically. My solution is also HA (the same application runs in multiple instances), so instead of using a Drools decision table spreadsheet (CSV or XLS) I have to use MongoDB.
So, is there any adapter for Drools to work with MongoDB?
Is there a Drools working-memory limitation related to the number of rules?
Will Drools support the kind of decision checks below?
OSType     DeviceType   Make      Year      Country   Value
(attr1)    (attr2)      (attr3)   (attr4)   (attr5)   (result)
BADA       Galaxy       Samsung   2014      IN        100
Android    -            Samsung   2015      China     150
-          J7           Samsung   2018      IN        80
Android    Note10       Samsung   2019      -         500
IOS        I7           APPLE     -         USA       1100
IOS        -            -         2019      -         1000
Now, if the data applied to the rules is
OSType=Android, DeviceType=Note10, Make=Samsung, Year=2019, Country=IN
then I have to return 500: Country is blank on that rule row (a dash in the table above), so it must be ignored during the rule check.
And if the data is
OSType=IOS, DeviceType=I11, Make=APPLE, Year=2029, Country=US
then I have to return 1000.
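Here is a plain-Java sketch of the matching semantics described (this is NOT Drools): a blank rule attribute (null here, a dash in the table above) is simply skipped, so it never disqualifies a row.

import java.util.List;

public class RuleMatchSketch {
    // A rule row: null means "ignore this attribute".
    record Rule(String os, String device, String make, String year, String country, int value) {}

    static Integer match(List<Rule> rules, String os, String device, String make, String year, String country) {
        for (Rule r : rules) {
            if ((r.os() == null || r.os().equals(os))
                    && (r.device() == null || r.device().equals(device))
                    && (r.make() == null || r.make().equals(make))
                    && (r.year() == null || r.year().equals(year))
                    && (r.country() == null || r.country().equals(country))) {
                return r.value(); // first matching row wins in this sketch
            }
        }
        return null; // no rule matched
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule("BADA", "Galaxy", "Samsung", "2014", "IN", 100),
                new Rule("Android", null, "Samsung", "2015", "China", 150),
                new Rule(null, "J7", "Samsung", "2018", "IN", 80),
                new Rule("Android", "Note10", "Samsung", "2019", null, 500),
                new Rule("IOS", "I7", "APPLE", null, "USA", 1100),
                new Rule("IOS", null, null, "2019", null, 1000));
        // Country is blank on the matching row, so it is ignored -> prints 500
        System.out.println(match(rules, "Android", "Note10", "Samsung", "2019", "IN"));
    }
}

Drools decision tables behave the same way as far as I know: an empty cell produces no constraint for that column. I can't speak to the 100k-rule scale or a MongoDB adapter.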

MS Application Insights - SQL dependencies error code 208

What does error 208 mean? The query:
dependencies
| where type == "SQL" and success == "False"
| summarize count() by resultCode
is giving me 4500+ items in the last hour alone, and I can't seem to find any solid documentation about this.
Details:
The frequency of the error rises as concurrency rises, meaning 1000 concurrent requests will generate more errors than 1000 sequential ones.
My application is ASP.NET MVC 4 on .NET Framework 4.6, using the latest Entity Framework.
The error is intermittent; performing a given operation won't reliably reproduce it.
I don't think this error means "Invalid Object Name" (as suggested in other threads), because I can see EF auto-retrying, and eventually the query goes through and the whole request returns successfully (otherwise I would have A LOT of missed phone calls...).
The error occurs on both async and sync requests.
I got in touch with MS support, and according to them this is caused by Entity Framework. Apparently EF keeps looking for two tables (__MigrationHistory and EdmMetadata) that I deliberately deleted. Although that makes sense, I don't know why the error does not show up in our in-house tests (the tables are not present in the in-house dev environment either...).
The above answer is correct; however, I'd like to add some additional information:
You need to have the __MigrationHistory table, and it has to be populated correctly. EdmMetadata is an old table that was replaced by __MigrationHistory, so there is no need to worry about that one.
Just adding the __MigrationHistory table did not solve the issue completely (I was down to 3 error-208 exceptions from 5).
However, keep in mind that populating the __MigrationHistory table will leave your DbContext out of sync if the latest migration is not inserted into __MigrationHistory!
The best way to get this right is to issue the
Update-Database -Script
command and copy the CREATE/INSERT/UPDATE statements from the generated script.
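For what it's worth, SQL Server's error 208 is "Invalid object name", which is consistent with EF probing for the deleted tables. A minimal JDBC sketch (it assumes Microsoft's SQL Server JDBC driver on the classpath; the connection string is made up) showing how the code surfaces:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class Error208Demo {
    public static void main(String[] args) {
        // Hypothetical connection string - substitute your own server/credentials.
        String url = "jdbc:sqlserver://localhost;databaseName=MyDb;user=sa;password=...";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {
            st.executeQuery("SELECT * FROM dbo.__MigrationHistory"); // table deleted -> error 208
        } catch (SQLException e) {
            // For a missing table, SQL Server reports error code 208 ("Invalid object name").
            System.out.println(e.getErrorCode() + ": " + e.getMessage());
        }
    }
}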

Titan DB ignoring index

I have a graph with a couple of indices. They are two composite indices with label constraints (both are exactly the same, just on different properties/labels).
One definitely seems to work, but the other doesn't. I've run the following profile() calls to double-check.
The first is called KeyOnNode: property uid, label node:
gremlin> g.V().hasLabel("node").has("uid", "xxxxxxxx").profile().cap(...)
==>Traversal Metrics
Step Count Traversers Time (ms) % Dur
=============================================================================================================
TitanGraphStep([~label.eq(node), uid.eq(dammit_... 1 1 2.565 96.84
optimization 1.383
backend-query 1 0.231
SideEffectCapStep([~metrics]) 1 1 0.083 3.16
>TOTAL - - 2.648 -
The above is perfectly acceptable and works well; I'm assuming the magic line is backend-query.
The other index is called NameOnSuperNode: property name, label supernode:
gremlin> g.V().hasLabel("supernode").has("name", "xxxxxxxx").profile().cap(...)
==>Traversal Metrics
Step Count Traversers Time (ms) % Dur
=============================================================================================================
TitanGraphStep([~label.eq(supernode), name.eq(n... 1 1 5763.163 100.00
optimization 2.261
scan 0.000
SideEffectCapStep([~metrics]) 1 1 0.073 0.00
>TOTAL - - 5763.236 -
Here the query takes an outrageous amount of time and we get a scan line instead. I originally wondered whether the index hadn't been committed through the management system, but the following seems to work just fine:
gremlin> m = graphT.openManagement();
==>com.thinkaurelius.titan.graphdb.database.management.ManagementSystem#73c1c105
gremlin> index = m.getGraphIndex("NameOnSuperNode")
==>NameOnSuperNode
gremlin> index.getFieldKeys()
==>name
gremlin> import static com.thinkaurelius.titan.graphdb.types.TypeDefinitionCategory.*
==>null
gremlin> sv = m.getSchemaVertex(index)
==>NameOnSuperNode
gremlin> rel = sv.getRelated(INDEX_SCHEMA_CONSTRAINT, Direction.OUT)
==>com.thinkaurelius.titan.graphdb.types.SchemaSource$Entry#26b2b8e2
gremlin> sse = rel.iterator().next()
==>com.thinkaurelius.titan.graphdb.types.SchemaSource$Entry#2d39a135
gremlin> sse.getSchemaType()
==>supernode
I can't just reset the database at this point. Any help pinpointing what the issue could be would be amazing; I'm hitting a wall here.
Is this a sign that I need to reindex?
INFO: Titan DB 1.1 (TP 3.1.1)
Cheers
UPDATE: I've found that the index in question is not in the REGISTERED state:
gremlin> :> m = graphT.openManagement(); index = m.getGraphIndex("NameOnSuperNode"); pkey = index.getFieldKeys()[0]; index.getIndexStatus(pkey)
==>INSTALLED
How do I get it to register? I've tried m.updateIndex(index, SchemaAction.REGISTER_INDEX).get(); m.commit(); graphT.tx().commit(); but it doesn't seem to do anything
UPDATE 2: I've tried registering the index in order to reindex, with the following:
gremlin> m = graphT.openManagement();
index = m.getGraphIndex("NameOnSuperNode") ;
import static com.thinkaurelius.titan.graphdb.types.TypeDefinitionCategory.*;
import com.thinkaurelius.titan.graphdb.database.management.ManagementSystem;
m.updateIndex(index, SchemaAction.REGISTER_INDEX).get();
ManagementSystem.awaitGraphIndexStatus(graphT, "NameOnSuperNode").status(SchemaStatus.REGISTERED).timeout(20, java.time.temporal.ChronoUnit.MINUTES).call();
m.commit();
graphT.tx().commit()
But this isn't working. I still have my index in the INSTALLED status and I'm still getting a timeout. I've checked that there were no open transactions. Anyone have an idea? FYI the graph is running on a single server and has ~100K vertices and ~130k edges.
So there are a few things that can be happening here:
If the two indices you describe were not created in the same transaction (and the problem index was created after the name propertyKey was already defined), then you should issue a reindex, as per the Titan docs:
The name of a graph index must be unique. Graph indexes built against newly defined property keys, i.e. property keys that are defined in the same management transaction as the index, are immediately available. Graph indexes built against property keys that are already in use require the execution of a reindex procedure to ensure that the index contains all previously added elements. Until the reindex procedure has completed, the index will not be available. It is encouraged to define graph indexes in the same transaction as the initial schema.
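For reference, once the index has reached REGISTERED, the reindex itself looks roughly like this from the console (a sketch using the standard Titan management calls; .get() blocks until the reindex job finishes):

m = graphT.openManagement()
m.updateIndex(m.getGraphIndex("NameOnSuperNode"), SchemaAction.REINDEX).get()  // run the reindex job and wait
m.commit()
graphT.tx().commit()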
The move from INSTALLED to REGISTERED may simply be timing out, in which case you want to use mgmt.awaitGraphIndexStatus(). You can even specify the amount of time you are willing to wait there.
Make sure there are no open transactions on your graph or the index status will indeed not change, as described here.
This is clearly not the case for you, but there is a bug in Titan (fixed in JanusGraph via this PR) such that if you create an index against a newly created propertyKey as well as a previously used propertyKey, the index will get stuck in the REGISTERED state
Indexes will not move to REGISTERED unless every Titan/JanusGraph node in the cluster acknowledges the index creation. If your index is stuck in the INSTALLED state, there is a chance that the other nodes in the system are not acknowledging the index's existence. This can be due to issues with another server in the cluster, backfill in the messaging queue Titan/JanusGraph nodes use to talk to each other, or, most unexpectedly, the existence of phantom instances. Phantom instances can occur whenever a server is killed through a non-normal JVM shutdown process, e.g. kill -9 on a server stuck in stop-the-world garbage collection. If you suspect backfill is the problem, the comments in this class offer good insight into customizable configuration options that may help. To check for the existence of phantom nodes, use this function, and then this function to kill the phantom instances.
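The instance checks referred to are, I believe, the ManagementSystem instance APIs; roughly, from the console (the instance id below is made up, and you must not close the instance flagged "(current)"):

m = graphT.openManagement()
m.getOpenInstances()                        // lists every instance id the cluster knows about
m.forceCloseInstance("0a0b0c0d1234-host1")  // hypothetical phantom instance id
m.commit()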
I think you are missing some configuration for your graph.
If your backend is Cassandra, you must configure it with Elasticsearch.
If your backend is HBase, you must configure it with caching.
Read more at the link below:
https://docs.janusgraph.org/0.2.0/configuration.html

Cube/Mongo: Custom metric resolutions (step)

According to the documentation, square/cube supports 5 metric resolutions (steps), the lowest being 10 seconds. I understand this is required in order to allow pyramidal reducers. Will cube work correctly (though less efficiently) with an arbitrary step value, or are there other problems? If it is just an efficiency issue, how bad would it be? Even with the built-in step values, it takes time for the cache to fill for all the options.
I faced a similar situation when creating horizon charts of stock data. Some stocks are not traded at all moments during the day.
In this situation, I "backfilled" the intermediate values and created a uniform distribution. Essentially, I took the latest data point and repeated it at each newer timestamp until new data was available.
For example, if I had the following prices for minute-by-minute data:
11:15 AM -> 112.0
11:18 AM -> 115.0
my program created the following "imaginary" intervals:
11:15 AM -> 112.0
11:16 AM -> 112.0
11:17 AM -> 112.0
11:18 AM -> 115.0
My program used a JSON data source, so manipulating these values was really easy. I have never used cube/mongo, so I don't know how easy it would be to do the same there. A rough sketch of the idea is below.
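Here is a minimal Java sketch of that forward-fill (the minutes-since-midnight encoding and the names are mine): walk the timeline one step at a time and carry the latest known value forward.

import java.util.NavigableMap;
import java.util.TreeMap;

public class ForwardFill {
    public static void main(String[] args) {
        NavigableMap<Integer, Double> prices = new TreeMap<>();
        prices.put(675, 112.0); // 11:15 AM, as minutes since midnight
        prices.put(678, 115.0); // 11:18 AM
        for (int t = 675; t <= 678; t++) {
            double v = prices.floorEntry(t).getValue(); // latest observation at or before t
            System.out.printf("%d:%02d AM -> %.1f%n", t / 60, t % 60, v);
        }
    }
}

This prints exactly the four intervals shown above, with 112.0 repeated until the 11:18 AM data point appears.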
Does this answer your question?