NetLogo LevelSpace Extension for Passing Agents Between Models

I'm using NetLogo's LevelSpace extension to pass agents between two models.
Specifically, I want agents to leave one model and appear, with all of their attributes, in another model, and vice versa.
For example, suppose one model is a work environment where an agent spends 8 hours, and the other model is the agent's home. That same agent would have all of the same attributes, just in a different environment, and its attributes would need to be kept up to date in whichever model it currently occupies.
I currently have a controller set up to control both model environments and, hopefully, will be able to record and plot data received from both models, but I need to be able to pass the agents back and forth first. Does anyone know if this is actually possible and how to achieve it? The LevelSpace dictionary doesn't really have a solution and I can't find a tutorial.
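For what it's worth, this is possible, and the usual pattern is to do the transfer from the controller: LevelSpace has no single "move this agent" primitive, so you read the agent's state out of the source model with ls:report, remove the agent there, and recreate it with the same attributes in the target model via ls:let and ls:ask. Below is a minimal sketch of that pattern; the file names work.nlogo/home.nlogo and the energy/mood attributes are placeholders for whatever your models actually declare in turtles-own (both child models must declare the shared attributes).

    extensions [ ls ]

    globals [ work-model home-model ]

    to setup
      ls:reset
      ;; file names are assumptions -- point these at your own models
      (ls:create-models 1 "work.nlogo" [ id -> set work-model id ])
      (ls:create-models 1 "home.nlogo" [ id -> set home-model id ])
      ls:ask work-model [ setup ]
      ls:ask home-model [ setup ]
    end

    ;; Move one agent (here: turtle 0) from work to home, carrying its
    ;; state with it. energy and mood are hypothetical turtles-own
    ;; variables that both child models must declare.
    to send-agent-home
      ;; 1. read the agent's state out of the source model
      let state ls:report work-model [ [ (list color energy mood) ] of turtle 0 ]
      ;; 2. remove the agent from the source model
      ls:ask work-model [ ask turtle 0 [ die ] ]
      ;; 3. recreate the agent, attributes and all, in the target model
      ls:let agent-state state
      ls:ask home-model [
        create-turtles 1 [
          set color  item 0 agent-state
          set energy item 1 agent-state
          set mood   item 2 agent-state
        ]
      ]
    end

Sending an agent back to work is the same procedure with the two model ids swapped, so both directions can live in the controller that also does your recording and plotting.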

Related

How to exclude (ignore) a GIS route programmatically in AnyLogic?

While working with AnyLogic, the user can open the properties of some GIS route (or some other object) and tick the checkbox "Ignore". The GIS route will be excluded from the model, and after running the model the user won't see the object because it won't exist.
Is it possible to exclude some GIS routes in interactive mode? For example, say I drew a big GIS network with many GIS routes in AnyLogic, and after running the model (!) I want to be able to choose routes which should not be included in the network. E.g. this could be implemented in the Simulation window.
I was looking for suitable Java code but I did not find anything except the Visible property.
You cannot programmatically "ignore" something; this is not how Java works.
In your case, the only way is to create the GIS network programmatically as well, at the start of the model. Then you have full control, because you can decide to create only part of your network, depending on user choice.
Check the AnyLogic API on how to create GIS routes programmatically.
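Something along these lines in the model's startup code, where RouteDef, loadRouteDefinitions and createGISRoute are hypothetical placeholders (the actual route-creation calls come from the AnyLogic markup API and depend on your AnyLogic version):

    // Sketch: keep your own list of route definitions and instantiate
    // only the ones the user has not excluded, e.g. via a checkbox
    // list in the Simulation window.
    List<RouteDef> routeDefs = loadRouteDefinitions();      // your own data
    List<GISRoute> activeRoutes = new ArrayList<GISRoute>();

    for (RouteDef def : routeDefs) {
        if (!userExcludedRoutes.contains(def.getName())) {  // user choice
            activeRoutes.add(createGISRoute(def));          // hypothetical helper
        }
    }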
cheers

Set GeoServer to access a PostgreSQL database, Simple or Snapshot schema, populated by Osmosis

I have a PostgreSQL database which I keep updated using Osmosis. Osmosis can write to two different database schemas, named Simple and Snapshot. They are not that different from the database GeoServer uses, but I can't make GeoServer use them properly.
The main problem seems to be the way tags are stored in those databases. I can add the nodes layer and display it with the default Points style, but as soon as I use an "ogc:Filter" in my style to filter the nodes by their "place" tag, the WMS breaks and stops responding (it says: The requested Style can not be used with this layer. The style specifies an attribute of place and the layer is: TestDB:nodes).
Is there any way to make GeoServer understand one of those schemas, or to make Osmosis update the database GeoServer knows?
This is a case for using TRIGGERs to manage the integration. The two programs use two different schemas. You can CREATE TRIGGERs in the database which ensure that data written by one application is made available to the other. Another option is to have one or both applications use VIEWs populated in part by the other application. In PostgreSQL, a VIEW can have triggers attached, so these two options are not really mutually exclusive.
This is, in any case, a potentially large project so rather than offering sample code, I will offer a general outline of what sorts of things you need to think about.
Are these generally applicable? If so do you want to start an open source integration project?
Are both of these read-only workloads? Does the data ever update? In general, if you are going to use views, updates pose the most concerns, so if there are updates you want to run the views on the side that is not doing them.
What is the write model of both sides? Insert/Update? Append only? Static data? What data do you have to "replicate" between the schemas?
Once you have those answers it should be relatively straightforward to get started and ask for help (either as an open source project or here) where you get stuck.
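For the concrete symptom above (the broken ogc:Filter on the "place" tag), one small variant of the VIEW approach is to flatten the tag GeoServer needs into a real column. A minimal sketch, assuming the Osmosis snapshot schema, where tags live in an hstore column on nodes (the simple schema keeps tags in a separate node_tags table, so there you would join instead):

    -- Expose the "place" tag as an ordinary column so SLD filters
    -- can reference it as a plain attribute.
    CREATE VIEW nodes_with_place AS
    SELECT id,
           geom,
           tags -> 'place' AS place
    FROM nodes;

Publish nodes_with_place instead of nodes as the GeoServer layer, and a style filtering on place should then validate against the layer's attributes.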

play framework wizard dynamic fields

I am trying to follow the Wizard pattern in Forms sample provided with Play (https://github.com/playframework/Play20/blob/master/samples/java/forms/app/views/wizard/form1.scala.html).
This approach looks okay when the number of fields is static. But how do I deal with it when the fields are dynamic? E.g. if there are going to be multiple profiles that a user can create in step 2, how do I represent that on this page?
Also, does it mean that every page of my wizard will have to know about all the controls on the rest of the pages, and make those hidden? There must be a better approach to this problem. Can you please help?
I had a similar problem while working with wizards. I solved it by decoupling my DB models from my UI models. E.g. at the DB level, I have one model that represents a whole car. At the UI layer, I created multiple models that represent parts of the car, e.g. wheels, seats, doors etc.
In the UI wizard, I use the UI models, so at any given step the wizard only needs to know about the parts it operates on. I can apply validation constraints such as @Required etc. on these models. If form validation on a part succeeds, I update the DB model with that information. HTH.
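A minimal sketch of one such UI-layer model in Play 2 Java (the class and field names are made up for illustration):

    import play.data.validation.Constraints;

    // UI-layer form model for a single wizard step; it mirrors only
    // the fields this step edits, not the whole DB-level Car entity.
    public class WheelsStep {

        @Constraints.Required
        public String wheelType;

        @Constraints.Min(2)
        public Integer wheelCount;
    }

In the controller for that step you bind and validate just this class, e.g. Form.form(WheelsStep.class).bindFromRequest(), and copy its values onto the persistent Car entity only once hasErrors() returns false.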

How to share a piece of Core Data model across iPhone apps?

What's a good pattern, if any, to share a piece of Core Data model across iPhone apps the same way I would share code, images and other resources?
I am thinking of developing my model in one app and then just including it in the other app as a resource, but I can't wrap my head around how to do that. Is it sufficient to just include the generated model code files, which I could include as source? That does not feel right; I think I need the actual data model file too, which is an opaque resource. And what if both apps also have other Core Data model objects that I don't want to share between them? (If I wanted to share everything, I guess I could share the xcdatamodeld files, but I specifically want to share only an isolated part of the graph.)
To give a concrete example: app 1 has model objects A and B that are related, and C and D that are related; A-B, though, are not related to C-D. App 2 has C-D and E-F. I would like to share C-D (the two model objects and their relation) between the apps, the goal being that schema updates stay in sync between the apps. (Sharing only the model, not the data.)
Since stores are created by merging the models on hand in any particular app, you can mix models however you wish. However, once you create a store with a particular set of model files, you must have those model files always available in the future.
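Concretely, that suggests a sketch like the following, assuming you compile the shared C-D entities into their own SharedModel.xcdatamodeld and ship the resulting SharedModel.momd resource in both apps (the resource names are assumptions): load the shared and app-specific models separately, then merge them before building the stack.

    #import <CoreData/CoreData.h>

    // Load the shared model and the app's own model, then merge them
    // into the single model the persistent store coordinator will use.
    NSURL *sharedURL = [[NSBundle mainBundle] URLForResource:@"SharedModel"
                                               withExtension:@"momd"];
    NSURL *appURL    = [[NSBundle mainBundle] URLForResource:@"AppModel"
                                               withExtension:@"momd"];

    NSManagedObjectModel *shared = [[NSManagedObjectModel alloc] initWithContentsOfURL:sharedURL];
    NSManagedObjectModel *app    = [[NSManagedObjectModel alloc] initWithContentsOfURL:appURL];

    NSManagedObjectModel *merged =
        [NSManagedObjectModel modelByMergingModels:@[ shared, app ]];

    // Hand `merged` to the NSPersistentStoreCoordinator as usual.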

Is there any reason why someone would want to create a Core Data model programmatically?

I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all this stuff.
I'm that kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode Data Modeler tool.
And since data models are stuck to a given state (except when you want to do some ugly migration operations where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time.
Did I miss something?
I don't think you missed anything. The only reason I can see to create your model programmatically would be if the objects you are modeling are themselves dynamic: you could, for instance, build a Core Data entity (or a graph of entities) in response to a web service which changed over time, or was selected by the user. However, I think if you had that or a similar use case, you wouldn't need to ask this question (and you'd probably solve it a different way anyway).
So, if your application is dealing with resources that are dynamic, as @Andiih mentioned, then this programmatic approach is the only way to do it. I don't know what my Core Data entities are until runtime; I don't know what the attributes are, or what the data looks like. So I ask the server to give me the kinds of resources I should support and what their attributes look like, and I build the model, the entities, the properties, the relationships, at runtime. I still want to use Core Data because I'm dealing with a lot of data and I need the benefit of efficient memory management with NSFetchedResultsController, etc. I can only do this programmatically.
The trouble is how to handle migration, to try to preserve as much of the persistent store as possible and to reduce the size of the networked data payload after the model changes. Right now I blast the whole model and the persistent store if there's a conflict. I haven't yet figured out a way to create an .xcdatamodel from a programmatically generated model, so I can't yet create a version mapping to do the migration.
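For reference, the runtime construction itself is the easy part; a minimal sketch, with the entity and attribute names standing in for whatever the server describes:

    #import <CoreData/CoreData.h>

    // Build a managed object model entirely in code.
    NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];

    NSEntityDescription *person = [[NSEntityDescription alloc] init];
    person.name = @"Person";
    person.managedObjectClassName = @"NSManagedObject";

    NSAttributeDescription *name = [[NSAttributeDescription alloc] init];
    name.name = @"name";
    name.attributeType = NSStringAttributeType;
    name.optional = YES;

    NSAttributeDescription *age = [[NSAttributeDescription alloc] init];
    age.name = @"age";
    age.attributeType = NSInteger32AttributeType;
    age.optional = YES;

    person.properties = @[ name, age ];
    model.entities = @[ person ];

    // `model` can now back an NSPersistentStoreCoordinator, exactly
    // like a model loaded from a compiled .momd.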
Everything is a trade off. Basically, I think IB and the visual Core Data modeler are the right tool if you're building a simple application. You'll need to make the determination when your application becomes large/complex enough that you prefer to have direct control over all aspects of the code.
Regarding Interface Builder, if you have an application with a variety of complex interactions between view controllers, and multiple custom controls, I find code more appropriate.
For Core Data, the question is pretty much the same. Does your project have a defined scope? Can you foresee everything in that scope being done within the visual modeler? If so, it's probably fine. For other projects, where you may be asked to add features on an ongoing basis, perhaps it's better to spend a little more time writing it out so you have more flexibility later.
One other thing to consider, which doesn't get mentioned much, is that it's MUCH harder to ask for help with IB or any hybrid visual design/code system. When something does go wrong, or you need help, it's way easier to post your code than to try to explain what's going on in a visual modeler.
In general, there's no reason to build the managed object model in code. There's nothing you can do in code that can't be done in the model editor. There are some fancy tricks you can do in code, however, to work with multiple models. For example, you can merge two models, establishing cross-model relationships between entities in those models at load time (see Cross-model relationships in NSManagedObjectModel from merged models?).
Regarding whether it's a good idea to code or use the graphical editor, I think the balance tips heavily towards the graphical editor in this case. Being able to verify the model by visual inspection instead of (rather convoluted) code is a win. The model can still be verified by unit test, if you desire.
I have one use case that might be valid: what if you load some data from the internet, whether it is XML from an RSS feed or a WSDL response, flatten those responses into tabular form, generating in-memory data tables, and finally mash it all up into a single coherent data model? Then you can create entities for those in-memory data tables and set up master/detail relationships. That's one case where a programmatically generated Core Data model could be a handy and powerful feature.
I've changed models programmatically in unit tests. For example, I wrote a class that is designed to work with Core Data models that have a particular protocol attached. Instead of testing against a random implementation, I mutated the default model programmatically, just in the unit tests, and tested against that test-only model.
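The mechanics of that are roughly as follows; a model stays editable until a persistent store coordinator uses it, so a test can copy the app model and graft on a test-only entity (the entity name here is hypothetical):

    // In a unit test: copy the app's compiled model and add an entity
    // that exists only for this test.
    NSManagedObjectModel *base =
        [NSManagedObjectModel mergedModelFromBundles:nil];
    NSManagedObjectModel *testModel = [base copy];

    NSEntityDescription *stub = [[NSEntityDescription alloc] init];
    stub.name = @"TestOnlyEntity";

    testModel.entities = [testModel.entities arrayByAddingObject:stub];
    // Build an in-memory store with testModel and run the assertions
    // against that.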