Set GeoServer to access a PostgreSQL database (Simple or Snapshot schema) populated by Osmosis

I have a PostgreSQL database which I keep updated using Osmosis. Osmosis can write to two different database schemas, named Simple and Snapshot. They are not that different from the schema GeoServer expects, but I can't get GeoServer to use them properly.
The main problem seems to be the way tags are stored in those schemas. I can add the nodes layer and display it with the default point style, but as soon as I use an ogc:Filter in my style to filter the nodes by their "place" tag, the WMS breaks and does not respond (it says: The requested Style can not be used with this layer. The style specifies an attribute of place and the layer is: TestDB:nodes).
Is there any way to make GeoServer understand one of those schemas, or to make Osmosis write to a database layout GeoServer already knows?

This is a case for using TRIGGERs to manage the integration. The two programs use two different schemas. You can CREATE TRIGGERs in the database which ensure that data written by one application is made available to the other. Another option is to have one or both applications read from VIEWs populated in part by the other application's data. In PostgreSQL, a VIEW can have triggers attached, so these two options are not mutually exclusive (a small illustration of the view option is at the end of this answer).
This is, in any case, a potentially large project, so rather than offering sample code, I will offer a general outline of the sorts of things you need to think about.
Are these needs generally applicable? If so, do you want to start an open source integration project?
Are both of these read-only workloads? Does the data ever update? In general, if you are going to use views, updates pose the most concerns, so in that case you want to run the views on the side that is not doing the updates.
What is the write model on both sides? Insert/update? Append-only? Static data? What data do you have to "replicate" between the schemas?
Once you have those answers it should be relatively straightforward to get started and ask for help (either as an open source project or here) where you get stuck.
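For the original question specifically, a view may already get GeoServer past the tag problem. Here is a purely illustrative sketch, assuming the Snapshot schema, where tags live in a single hstore column; the view name, connection details, and the JDBC wrapper are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch: expose the Osmosis "snapshot" nodes table to GeoServer
// through a view that flattens one hstore tag into a plain column.
public class CreateGeoServerView {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/osm", "osm", "secret");
             Statement stmt = conn.createStatement()) {
            // In the snapshot schema, tags live in a single hstore column;
            // tags -> 'place' extracts that tag as ordinary text, which
            // GeoServer can then reference as an attribute named "place".
            stmt.execute(
                "CREATE OR REPLACE VIEW geoserver_nodes AS "
                + "SELECT id, geom, tags -> 'place' AS place "
                + "FROM nodes");
        }
    }
}
```

Pointing the GeoServer layer at geoserver_nodes instead of nodes gives the ogc:Filter a real place attribute to work with; if view performance becomes a problem, a trigger-maintained table of the same shape is the alternative.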

Related

Updating Application data in a Qlik Sense Application through an Extension

Is it possible to change the application data from an extension?
I was creating a visual extension (a table) in which, if I change the value of a cell, I should be able to change the value at the application level (not at the database level). How can I achieve this?
Changing the value in Qhypercube.[].qDataPages.qDataPages... only changes the value at the extension level.
I think the problem here is data persistence, since Qlik Sense itself is not a data warehouse or a true "data store" in the traditional sense. When you load data from a database into an app and it goes through the app's load script, it's then cached to the underlying QVF file for the app. Updating the data would need to happen at either the source level (the database in this case), an intermediary store like a QVD, or "on the fly" via variables and chart scripting. Those first two options are persistent and that third one is not.
That's why if you look at other similar Qlik extensions that enable users to input data, they are "writeback" solutions, as they update the underlying database that the app is pulling from. You can find a few examples of those here, here, and here.
A few existing ones also take the approach of outputting to QVDs, which could be your best bet if you want to avoid updating a database. See this one as an example, as well as their implementation docs here.
You could probably achieve all of this with a combination of:
Getting the hypercube of your (updated) table (more info)
Creating a session app (more info)
Writing to a new or existing QVD (more info)
(Partially) reloading the current app (more info)
This would all depend on the Update rights of the users of the app, though.

Transition from legacy database to new one that works with legacy application

I have a problem concerning a legacy application that can't be changed in any way (a single executable file with no DLLs) which is connected to a database that can be changed. It is a Visual Basic 6 application connecting to the database using ADO.NET. The database engine is SQL Server 2008. The goal is to create a new, correct database that will work with the legacy application.
It is coupled so tightly that it does not even work with views instead of tables, as suggested here. So the present situation looks like this: current situation diagram
Currently I am trying to research the problem and find my options. I have an idea that might work:
Since the approach of changing tables to views does not work, I think one possibility is to intercept the communication between the app and the legacy DB, read each command that is sent, redirect it somewhere else, and not let the legacy DB respond to the request.
Each command is either a CRUD operation or a procedure execution, and we know which commands can possibly be sent. Let's suppose that a new database is set up and has views corresponding to the legacy one. Commands are redirected to my own application, which filters everything and manipulates it (somehow) to work with the new schema.
Diagram of intercepted communication
This is my general idea of what I want to do to avoid rewriting the legacy application which is tightly coupled. Someone already asked a question similar to mine.
They discuss approaches for either digging the commands out of SQL dump files or intercepting the communication.
The interception itself doesn't seem to be a problem, as discussed here. But I wonder how the mirror can reply.
The same goes for port mirroring using [TCP packet hijacking](https://reverseengineering.stackexchange.com/a/1816).
To sum up, my questions are as follows:
Is this a feasible approach to achieve a smooth transition from a legacy solution (modifiable only at the database) to a new one?
If my idea is doable, how can I listen for DB requests and have the responses come from a different application rather than the original DB? (A bare-bones sketch of the listening part follows below.)
Is there a better way to achieve my goal, which is to create a new database behind a database abstraction layer so that the old legacy application remains functional?
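To make the "listen to DB requests" part concrete, here is a minimal pass-through sketch, assuming the legacy app's connection can be repointed at a different port (host names and ports are placeholders). It only relays and observes the raw TDS traffic; parsing the protocol and synthesizing replies, the genuinely hard part, is not shown:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal pass-through proxy sketch: the legacy app connects to port 14333,
// and every byte is relayed to the real SQL Server on 1433. The inspect()
// hook is where captured TDS packets could be logged, rewritten, or answered
// by your own application instead of being forwarded.
public class TdsProxy {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(14333)) {
            while (true) {
                Socket client = server.accept();
                Socket db = new Socket("legacy-db-host", 1433); // placeholder host
                pump(client.getInputStream(), db.getOutputStream(), true);
                pump(db.getInputStream(), client.getOutputStream(), false);
            }
        }
    }

    private static void pump(InputStream in, OutputStream out, boolean fromClient) {
        new Thread(() -> {
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    if (fromClient) inspect(buf, n); // hook: examine the request here
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (Exception ignored) {
                // connection closed; a real proxy would also close the peer socket
            }
        }).start();
    }

    private static void inspect(byte[] packet, int len) {
        // TDS parsing would go here; for now just note the packet size.
        System.out.println("client -> server: " + len + " bytes");
    }
}
```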

Keeping track of changed properties in JPA

Currently, I'm working on a Java EE project with some non-trivial requirements regarding persistence management. Changes to entities by users first need to be applied to some working copy before being validated, after which they are applied to the "live data". Any changes to that live data also need to be recorded, to allow auditing.
The entities are managed via JPA, and Hibernate will be used as provider. That is a given, so we don't shy away from Hibernate-specific stuff. For the first requirement, two persistence units are used. One maps the entities to the "live data" tables, the other to the "working copy" tables. For the second requirement, we're going to use Hibernate Envers, a good fit for our use-case.
So far so good. Now, when users view the data on the (web-based) front-end, it would be very useful to be able to indicate which fields were changed in the working copy compared to the live data. A different colour would suffice. For this, we need some way of knowing which properties were altered. My question is, what would be a good way to go about this?
Using the JavaBeans API, a PropertyChangeListener could suffice for being notified of any changes to an entity of the working copy, keeping a set of the changed properties. But that set would also need to be persisted, since the application could be restarted and changes can be long-lived before they're validated and applied to the live data. And re-deriving the working copy from the live data every time it is needed isn't feasible (hence the two persistence units).
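For illustration, a minimal sketch of that listener idea (class and property names are hypothetical); the dirty set is exactly the piece that would need persisting:

```java
import java.beans.PropertyChangeSupport;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a working-copy entity that tracks its own dirty
// property names. The "dirty" set is what would have to be persisted so
// it survives application restarts.
public class WorkingCopyEntity {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private final Set<String> dirty = ConcurrentHashMap.newKeySet();
    private String name;

    public WorkingCopyEntity() {
        // Record the name of every property that fires a change event.
        pcs.addPropertyChangeListener(evt -> dirty.add(evt.getPropertyName()));
    }

    public void setName(String name) {
        // No event fires (and nothing is marked dirty) if the value is unchanged.
        pcs.firePropertyChange("name", this.name, name);
        this.name = name;
    }

    public Set<String> dirtyProperties() {
        return dirty;
    }
}
```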
We could also compare the working copy to the live data and find fields that are different. Some introspection and reflection code would suffice, but again that seems rather processing-intensive, not to mention the live data would need to be fetched.
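The comparison route is similarly small in code terms. A rough sketch using bean introspection (no JPA specifics; it assumes plain getter-based properties), which indeed requires loading both versions of the entity:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch: diff two versions of the same entity and return the
// names of the properties whose values differ.
public final class DirtyFieldDiff {
    public static Set<String> changedProperties(Object live, Object workingCopy)
            throws Exception {
        Set<String> changed = new HashSet<>();
        for (PropertyDescriptor pd : Introspector
                .getBeanInfo(live.getClass(), Object.class)
                .getPropertyDescriptors()) {
            if (pd.getReadMethod() == null) continue; // skip write-only properties
            Object a = pd.getReadMethod().invoke(live);
            Object b = pd.getReadMethod().invoke(workingCopy);
            if (!Objects.equals(a, b)) changed.add(pd.getName());
        }
        return changed;
    }
}
```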
Maybe I'm missing something simple, or someone knows of a wonderful JPA/Hibernate feature I can use. Even if I can't avoid creating (a) separate database table(s) for storing such information until it is applied to the live data, some best practices or real-life experience with this scenario would be very useful.
I realize it's a semi-open question but surely other people must have encountered a requirement like this. Any good suggestion is appreciated, and any pointer to a ready-made solution would be a good candidate as accepted answer.
Maybe you can use the Hibernate flush entity event listener. The dirty properties are calculated before the flush. You can store them somewhere in your database.
Below is some sample code using the dirty-properties feature of Hibernate, which may give you an idea.
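(A sketch against the Hibernate 4+ event SPI; the listener class, the println, and the registration helper are illustrative, and details vary by Hibernate version.)

```java
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.event.spi.FlushEntityEvent;
import org.hibernate.event.spi.FlushEntityEventListener;

// Sketch of a listener that reads the dirty-property indexes computed by
// Hibernate's default flush-entity listener and resolves them to names.
public class DirtyPropertiesListener implements FlushEntityEventListener {

    @Override
    public void onFlushEntity(FlushEntityEvent event) {
        int[] dirty = event.getDirtyProperties();
        if (dirty == null) {
            return; // nothing changed for this entity
        }
        String[] names = event.getEntityEntry().getPersister().getPropertyNames();
        for (int i : dirty) {
            // Instead of printing, this is where you would write the change
            // record to your own audit/working-copy table.
            System.out.println(event.getEntity().getClass().getSimpleName()
                    + "." + names[i] + " was modified");
        }
    }

    // Registration: append after the default listener so the dirty properties
    // have already been calculated by the time this listener runs.
    public static void register(SessionFactoryImplementor sessionFactory) {
        sessionFactory.getServiceRegistry()
                .getService(EventListenerRegistry.class)
                .appendListeners(EventType.FLUSH_ENTITY, new DirtyPropertiesListener());
    }
}
```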

Entity Framework without database

I like working with Entity Framework for many reasons: the ease of use of the entity designer, the power of LINQ, and the ease of binding.
Occasionally I want to build a simple app that doesn't need a database but still needs to work with data and display it on screen, in grids etc., so I'd like to just create a quick EF model and use it for this. But EF doesn't seem to work very well when used only for local data.
My question is: is there a correct usage of EF for working with local data, perhaps serializing/deserializing the whole context to a file? Or is this just too much effort to make work properly? I used to use DataSets in this way, along with LINQ to DataSet, and it works well... So perhaps those are still the better way to go for this scenario?
Yes, you can use Entity Framework with local data and access the data that is currently in memory; read the details at the link below:
http://msdn.microsoft.com/en-us/data/jj592872.aspx
I don't know what you mean by "local data" exactly (it sounds like it's not a database), but I think the DataSets vs. EF portion of your post is (for me) the real question.
EF is great when you need to model robust business logic, are implementing a Domain Model pattern, are using Domain-Driven Design, etc.: basically any scenario where a Table Module or Active Record pattern is inappropriate.
When you just need to display some grids of data, and the business logic is very simple, DataSets are definitely the way to go (in my experience).

How do we share data between two different services

I am currently working on a web service which is periodically polled. It does not store its state and is instantiated every time it is queried. Essentially, it retrieves the state of other external entities, e.g. databases, and delivers it back to the requester.
Recently, the need to store state has arisen, in that:
There is the need to continuously collect data from a particular source and store the bits that are important/relevant
There is the need to collect the aggregate of a particular data source over a period of time
I came up with the following idea:
My main concern here is the fact that I am using a static class (essentially a global) to share data between the two services. Is there a better way of doing this?
edit: Thanks for the responses thus far. Apologies for the vagueness of this question: I'm just trying to work out the best way to share data across different services and am unsure as to the specifics (i.e. what is required). The platform I am developing on is the .NET Framework, and both services are simply WCF services hosted as a Windows service.
The database route sounds like the most conventional way to go. However, I am reluctant to go down that path for now (mainly because of deployment/setup issues: it introduces the need to create new tables etc. in addition to simply installing the software), since at this point only relatively small amounts of data are transferred. This may of course change in the future, and the database route might be the way to go at that point.
Is there any other way besides adding a database persistence layer?
If you need to collect and aggregate data, you might want to consider using a database between the two layers. Or have I misunderstood something?
You should consider enhancing your question with more requirements: pretty much all options are open here.
Sure, how about data binding? I don't have a lot of information to go on here about your platform, but most sufficiently advanced systems offer it in some form.
You could replace your static shared data with some database representation, with a caching layer (like memcached) between the database and the webservice, so that most of the time the data is available very quickly from the cache, but can be retrieved from the database as needed.
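As a flavor of that layering, here is a minimal cache-aside sketch; an in-memory map stands in for a real memcached client, and the database methods are placeholders:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: read from the cache first, fall back to the database,
// and populate the cache on a miss. A ConcurrentHashMap stands in here for
// a memcached client; the database calls are placeholders.
public class CachedAggregateStore {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = loadFromDatabase(key); // slow path, only on a cache miss
            if (value != null) {
                cache.put(key, value);     // warm the cache for the next poll
            }
        }
        return value;
    }

    public void put(String key, String value) {
        storeInDatabase(key, value); // write through so the DB stays canonical
        cache.put(key, value);
    }

    private String loadFromDatabase(String key) { return null; /* placeholder */ }
    private void storeInDatabase(String key, String value) { /* placeholder */ }
}
```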
I appreciate that you want to keep the architecture simple. Depending on the number of items you have to look up and their permanency, you might just consider leveraging your file system or a message queue. It sounds like you want a file system, because that sounds like the least impact on your design.
If you start dealing with tens of thousands of small files, your directories can get hard to navigate and slow to do file lookups on. I typically shoot for about 1000-10000 files per directory and concoct a routine that can generate a path to the file depending on the file name pattern (see the sketch below). Keeping the number of subdirectories under control is important; some file systems have a limit on the number of subdirectories in a parent directory.
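That path-generating routine can be as simple as hashing the file name and using a couple of hash-derived levels as subdirectories; a sketch (root path and file name are examples):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a fan-out routine: derive two directory levels from the file
// name's hash so no single directory accumulates too many entries.
public class FanOutPath {
    public static Path pathFor(Path root, String fileName) {
        int h = fileName.hashCode();
        // Two levels of 256 buckets each gives at most 65536 directories,
        // keeping each one to a manageable number of files.
        String level1 = String.format("%02x", h & 0xFF);
        String level2 = String.format("%02x", (h >>> 8) & 0xFF);
        return root.resolve(level1).resolve(level2).resolve(fileName);
    }

    public static void main(String[] args) {
        System.out.println(pathFor(Paths.get("/data/store"), "reading-2013-07-04.dat"));
        // prints something like /data/store/9c/5e/reading-2013-07-04.dat (hash-dependent)
    }
}
```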