Apache Atlas Entities POST takes a very long time - apache-atlas

I'm currently working on importing GCP data into Apache Atlas, and I defined typedefs with parent-child relationships as follows:
gcp_bigquery_dataset has array of gcp_bigquery_tables as children
gcp_bigquery_table has single gcp_bigquery_dataset as parent and array of gcp_bigquery_column as children.
gcp_bigquery_column has a single gcp_bigquery_table as parent
I explored how I can import entities at the dataset level and created the necessary entity JSON, which I can POST to the V2 API of Apache Atlas.
I tried two ways:
Import API via Zip on Server:
The total zip size was 78 MB; it contained 57 individual entities and about 80,000 entities in total, including relationships and referred entities.
The server processes 2 of the entities, stops at the 3rd, and does not respond back. After a while, ZooKeeper marks the session as expired, and the import call never returns any status; it just keeps waiting.
Using POST on the V2 REST API
The issue is that the JSON is created at the dataset level, which results in a payload of more than 20 MB, and the server takes too long to respond. I was able to successfully import about 45,000 entities, but only with timeouts on the client side, and I still have about 60,000 entities left to import.
The timeouts are a problem because I'm trying to make synchronous calls to the Atlas server.
Is there any better way of performing bulk imports? Thanks in advance!
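For illustration, here is a minimal sketch of POSTing the entities in smaller chunks to the V2 bulk endpoint (/api/atlas/v2/entity/bulk) instead of one dataset-level payload. The host, port, credentials, timeout, and the batches/ directory of pre-split AtlasEntitiesWithExtInfo JSON files are all assumptions, not my actual setup:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.util.Base64;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class AtlasBulkLoader {

    // Host, port and credentials are placeholders.
    private static final String BULK_URL = "http://atlas-host:21000/api/atlas/v2/entity/bulk";
    private static final String AUTH = "Basic "
            + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();

        // Assumes the dataset-level JSON has already been split into smaller
        // AtlasEntitiesWithExtInfo documents, one file per batch, under ./batches/.
        List<Path> batchFiles;
        try (Stream<Path> paths = Files.list(Path.of("batches"))) {
            batchFiles = paths.sorted().collect(Collectors.toList());
        }

        for (Path batchFile : batchFiles) {
            String batchJson = Files.readString(batchFile);
            HttpRequest request = HttpRequest.newBuilder(URI.create(BULK_URL))
                    .timeout(Duration.ofMinutes(5))
                    .header("Content-Type", "application/json")
                    .header("Authorization", AUTH)
                    .POST(HttpRequest.BodyPublishers.ofString(batchJson))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(batchFile.getFileName() + " -> HTTP " + response.statusCode());
        }
    }
}

Keeping each request down to a few hundred entities should keep individual calls inside the client timeout; the right batch size presumably depends on the server.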

Related

Timeout issue for http connector and web activity on adf

We have tried loading data through a Copy Activity using the REST API with JSON data, but columns that have no data in their first row are getting skipped. We have also tried the REST API with CSV data, but it throws an error. We have tried using a Web Activity, but its payload limit is 4 MB, so it fails with a timeout. We have tried using the HTTP endpoint, but its payload limit is 0.5 MB, so it also fails with a timeout.
In the Mapping settings, toggle on the advanced editor and give the respective value as the collection reference to cross-apply the nested JSON data. Below is the approach.
The REST connector is used in the source dataset, with the source JSON API as the source.
Then a sink dataset is created for the Azure SQL database. Once the pipeline is run, a few columns are not copied to the database.
Therefore, in the Mapping settings of the Copy Activity:
1. The schema is imported
2. The advanced editor is turned on
3. The collection reference is given.
When the pipeline is run after the above changes, all columns are copied into the SQL database.

Play Framework database-related endpoints hang after some uptime

I have situation which is almost identical to the one described here: Play framework resource starvation after a few days
My application is simple: Play 2.6 + PostgreSQL + Slick 3.
Also, DB retrieval operations are Slick only and simple.
The usage scenario is that data comes in through one endpoint, gets stored in the DB (there are some actors storing data asynchronously, which can fail with the default strategy), and is served through REST endpoints.
So far so good.
After a few days, every endpoint that has anything to do with the database stops responding. The application is served on a single t3.medium instance connected to an RDS instance. The connection count on RDS is always the same and stable, mostly idling.
What I have also noticed is that the database actually gets called and the query gets executed, but the request never ends or returns any data.
The simplest endpoint (POST) is for posting feedback, basically a one-liner:
feedbackService.storeFeedback(feedback.deviceId, feedback.message).map(_ => Success)
This Success is a wrapper around Ok("something"), so no magic there.
The feedback service stores one record in the DB in the Slick-preferred way, nothing crazy there either.
Once the feedback POST is called, I can see in the psql client that the INSERT query has been executed and the data really ends up in the database, but the HTTP request never ends and no success response is returned. In parallel, calls to non-DB-related endpoints that do return values, such as the status endpoint, go through without problems.
Production logs don't show anything and restarting helps for a day or two.
I suppose some kind of resource starvation is happening, but which and where is currently beyond me.

Same data read by PCF instances for spring batch application

I am working on a Spring Batch application which reads data from a database using JdbcCursorItemReader. This application works as expected when I run a single instance.
I deployed this application to PCF and used the autoscale feature, but multiple instances are retrieving the same records from the database.
How can I prevent duplicate data reads across the instances?
This is normally handled by applying the processed indicator pattern. In this pattern, you have an additional field on each row that you mark with a status as each record is processed. You then use your query to filter only the records that match the status you care about. In this case, the status could be node-specific so that each node only selects the records it has tagged.
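A minimal sketch of what such a reader could look like, assuming a records table with status and node_id columns and using PCF's CF_INSTANCE_INDEX as the node tag (the table, column names and node tag are all assumptions):

import java.util.Map;
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;
import org.springframework.jdbc.core.ColumnMapRowMapper;

public class ProcessedIndicatorReaderConfig {

    // PCF sets CF_INSTANCE_INDEX per application instance; using it as the node tag is an assumption.
    private static final String NODE_ID = System.getenv("CF_INSTANCE_INDEX");

    public JdbcCursorItemReader<Map<String, Object>> reader(DataSource dataSource) {
        return new JdbcCursorItemReaderBuilder<Map<String, Object>>()
                .name("recordReader")
                .dataSource(dataSource)
                // Only read rows this node has claimed and that have not been processed yet.
                .sql("SELECT id, payload FROM records WHERE status = 'NEW' AND node_id = ?")
                .preparedStatementSetter(ps -> ps.setString(1, NODE_ID))
                .rowMapper(new ColumnMapRowMapper())
                .build();
    }
}

A preceding step would claim a slice of NEW rows for the node (an UPDATE setting node_id), and the writer would mark each row as PROCESSED after writing, so no other instance picks it up again.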

Is querying MongoDB faster than Redis?

I have some data stored in a database (MongoDB) and in a distributed cache (Redis).
When querying the repository, I use a lazy-loading approach: first look for the data in the cache if it's available; if not, find it in the database and update the cache as well, so that the next time it is requested it is found in the cache.
Sample Model Used:
Person ( id, name, age, address (Reference))
Address (id, place)
PersonCacheModel extends Person with addressId.
I am not storing the parent object together with the child object in the cache; that is why I created PersonCacheModel with an addressId and store that object in the cache. When getting the data, the PersonCacheModel is converted to a Person, and a call is made through the address repo to the address cache to fill in the address details of the Person object.
As far as I understand:
personRepository.findPersonByName(NAME + randomNumber);
Access Data from Cache = network time + cache access time + deserialize time
Access Data from database = network time + database query time + object mapping time
When I ran the above approach for 1,000 rows, accessing the data from the database was faster than accessing it from the cache. I believe the cache access time should be smaller than querying MongoDB.
Please let me know if there's an issue with the approach, or if this is the expected scenario.
To have a valid benchmark, we need to consider the hardware side and the data-processing side:
hardware: do we have the same configuration, RAM, CPU count, OS, etc.?
process: how the data is transformed (single thread, multi-thread, per object, per request)
Performing a load test on your data set will give you a good overview of which process is faster in a particular use-case scenario.
It is hard to judge what it should be until the points mentioned above are known.
The other thing is to have more than one test scenario and stress it for, let's say, 10 seconds, a minute, 5 minutes, an hour, so you have numbers that will tell you the truth.
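For illustration, a rough micro-benchmark sketch along those lines, assuming local Redis and MongoDB instances, a person:<name> key convention, and the Jedis and MongoDB sync drivers (all assumptions, not the original setup):

import static com.mongodb.client.model.Filters.eq;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import redis.clients.jedis.Jedis;

public class CacheVsDbBenchmark {

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379);
             MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {

            MongoCollection<Document> people = mongo.getDatabase("test").getCollection("person");

            int iterations = 1000;
            long cacheNanos = 0, dbNanos = 0;

            for (int i = 0; i < iterations; i++) {
                String name = "NAME" + i;

                long t0 = System.nanoTime();
                jedis.get("person:" + name);             // cache read: network + GET (+ deserialization in the caller)
                cacheNanos += System.nanoTime() - t0;

                long t1 = System.nanoTime();
                people.find(eq("name", name)).first();   // DB read: network + query + document mapping
                dbNanos += System.nanoTime() - t1;
            }

            System.out.printf("avg cache read: %d us, avg db read: %d us%n",
                    cacheNanos / iterations / 1000, dbNanos / iterations / 1000);
        }
    }
}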

How does DSpace process a query in JSPUI?

How is any query processed in DSpace, and how is data managed between the front end and PostgreSQL?
Like every other webapp running in a Servlet Container like Tomcat, the file WEB-INF/web.xml controls how a query is processed. In case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen by authenticated users or even by authenticated administrators only, and to handle content negotiation.
The listeners ensure that DSpace has started correctly. During its start, DSpace loads the configuration, opens the database connections that it uses in a connection pool, lets Spring do its IoC magic, and so on.
To begin with, the most important parts for seeing how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, while all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and a smaller part in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information about whether and which user is logged in, an initialized and connected database connection (if all went well), and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database take a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. All DSpaceObjects contain a TableRow object which holds the information about the object as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database by using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API or search for "SELECT" in the DSpace source to get an example.
In JSPUI it is important to use Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete(), the transaction will be committed, the database connection will be freed, and the context is marked as finished. After you have called Context.complete(), the context cannot be used for a database connection any more.
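A minimal sketch of that Context/DatabaseManager pattern, using the pre-DSpace-6 API; the SQL, the handle value and the column name are illustrative assumptions:

import org.dspace.core.Context;
import org.dspace.storage.rdbms.DatabaseManager;
import org.dspace.storage.rdbms.TableRow;

public class ContextExample {

    public static void main(String[] args) throws Exception {
        Context context = new Context();
        try {
            // querySingle returns a single TableRow (or null) for the given SQL.
            TableRow row = DatabaseManager.querySingle(context,
                    "SELECT * FROM handle WHERE handle = ?", "123456789/1");
            if (row != null) {
                System.out.println("resource_id = " + row.getIntColumn("resource_id"));
            }
            // complete() commits the transaction and frees the connection;
            // without commit()/complete() any changes would be rolled back.
            context.complete();
        } finally {
            if (context.isValid()) {
                context.abort();
            }
        }
    }
}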
DSpace is quite a huge project, and a lot more could be written about its ORM, the initialization of the database, and so on. But this should already help you start developing for DSpace. I would recommend that you read the "Architecture" section of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions, you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr) based search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI uses Apache Lucene directly. In these older versions, DSpace called Lucene directly for all indexing and searching. The Lucene based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.
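For illustration, a rough sketch of a Discovery (Solr) query in DSpace 4.x/5.x; exact class names and method signatures vary between versions, and the query string is an assumption:

import org.dspace.core.Context;
import org.dspace.discovery.DiscoverQuery;
import org.dspace.discovery.DiscoverResult;
import org.dspace.discovery.SearchService;
import org.dspace.discovery.SearchUtils;

public class DiscoveryQueryExample {

    public static void main(String[] args) throws Exception {
        Context context = new Context();
        try {
            DiscoverQuery query = new DiscoverQuery();
            query.setQuery("dc.title:test*");   // Solr query string; the field and value are illustrative
            query.setMaxResults(10);

            // SearchUtils hands back the Discovery search service, which talks to Solr.
            SearchService searchService = SearchUtils.getSearchService();
            DiscoverResult result = searchService.search(context, query);
            System.out.println("total hits: " + result.getTotalSearchResults());
        } finally {
            context.abort();
        }
    }
}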