OrientDB "No edge created" Exception - orientdb

I am currently getting a "No edge created" exception while using OrientDB 2.1.
As per the CREATE EDGE documentation (http://orientdb.com/docs/2.1/SQL-Create-Edge.html):
Beginning with version 2.1, when no edges are created OrientDB throws an OCommandExecutionException error. This makes it easier to integrate edge creation in transactions. In such cases, if the source or target vertices don't exist, it rolls back the transaction.
I was wondering if there is some way to log or print out information about the vertices it is trying to create an edge between. I am using a JSON file to query for updates from a DB, with a transformer inside that JSON that creates the edges using the IDs from the query results as parameters. Thanks

I solved my issue of getting ID information of the vertices by adding "log": "debug" to the JSON file.
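For anyone hitting the same thing, here is a rough sketch of where that setting goes in an OrientDB ETL configuration (the JDBC details, query, class names and field names below are placeholders, not my actual file):

{
  "config": { "log": "debug" },
  "extractor": {
    "jdbc": {
      "driver": "com.mysql.jdbc.Driver",
      "url": "jdbc:mysql://localhost/sourcedb",
      "userName": "user",
      "userPassword": "password",
      "query": "select id, friend_id from people_updates"
    }
  },
  "transformers": [
    { "vertex": { "class": "Person" } },
    { "edge": { "class": "Knows", "joinFieldName": "friend_id", "lookup": "Person.id" } }
  ],
  "loader": {
    "orientdb": { "dbURL": "plocal:/tmp/graphdb", "dbType": "graph" }
  }
}

With the log level set to debug, the ETL output includes the rows and lookups it processes, which is what surfaced the IDs of the vertices the edge transformer was failing on.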

Related

Gremlin: text searching predicates not working on OrientDB

I have the below query on OrientDB:
g.V().hasLabel('people').has('firstName',startingWith('V')).values('ID')
And I am getting a "Failed executing Gremlin query" response. I know there are 'people' with first names that start with "V", but even if there weren't, it should return an empty result. Any ideas why this could be happening?
You are probably running OrientDB version 3.0.x, which is aligned with Gremlin 3.3.x.
The new text predicates, like startingWith, were added in Gremlin 3.4.x and are available in OrientDB version 3.1.x, which is currently in milestone preview.
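Until you can upgrade to 3.1.x, one possible workaround on Gremlin 3.3.x is to emulate the prefix match with a lexicographic range instead of a text predicate (a sketch, assuming plain string ordering of the property values):

g.V().hasLabel('people').has('firstName', between('V', 'W')).values('ID')

between('V', 'W') matches values greater than or equal to 'V' and strictly less than 'W', which is equivalent to startingWith('V') for a single-character prefix; for longer prefixes you would need to compute the upper bound yourself.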

How to force a pipeline's status to Failed

I'm using Copy Data.
When there is some data error, I would like to export the bad rows to a blob.
But in this case, the pipeline's status is still Succeeded. I want to set it to Failed instead. Is that possible?
When there is some data error.
It depends on what kind of error you mean here.
1. If you mean a common incompatibility or mismatch error, ADF has a built-in feature named fault tolerance in the Copy Activity, which supports the following three scenarios:
Incompatibility between the source data type and the sink native type.
Mismatch in the number of columns between the source and the sink.
Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB.
If you configure it to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.windows.net/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.
If you want to abort the job as soon as any error occurs, you can configure that in the same fault tolerance settings (see the sketch after point 2 below).
Please see this case: Fault tolerance and log the incompatible rows in Azure Blob storage
2. If you are talking about your own logic for the data error, maybe some business logic, I'm afraid ADF can't detect that for you, though I think it's also a common requirement. However, you could follow this case (How to control data failures in Azure Data Factory Pipelines?) as a workaround. The main idea is to use a custom activity to divert the bad rows before the copy activity runs. In the custom activity, you can upload the bad rows to Azure Blob Storage with the .NET SDK as you want.
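For reference, a rough sketch of how the fault tolerance settings from point 1 might look inside the copy activity's typeProperties (the linked service reference and path are placeholders; leave enableSkipIncompatibleRow unset, or set it to false, if you instead want the job to abort on the first bad row):

"typeProperties": {
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "path": "redirectcontainer/erroroutput"
    }
}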
Update:
Since you want to log all incompatible rows and force the job to fail at the same time, I'm afraid that cannot be implemented in the copy activity directly.
However, I came up with an idea: you could use an If Condition activity after the Copy Activity to check whether the output contains rowsSkipped. If so, the condition outputs False, and you will know there is some skipped data that you can then check in blob storage.
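A sketch of the expression such an If Condition could use (the activity name 'Copy Data1' is a placeholder for your own copy activity):

@not(contains(string(activity('Copy Data1').output), 'rowsSkipped'))

string() serializes the copy activity's output so contains() can simply test for the rowsSkipped property; the expression evaluates to False when rows were skipped, so the follow-up steps for that situation go in the False branch.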

In which case does this error occur: "Restricted dimension(s): ga:userAgeBracket, ga:userGender can only be queried under certain conditions"?

I'm using Google Analytics Core Reporting API v4. When I query using the dimensions: ga:userAgeBracket & ga:userGender, I get the following error:
Restricted dimension(s): ga:userAgeBracket, ga:userGender can only be queried under certain conditions
Can someone tell me why this error occurs?
Not all dimensions and metrics can be queried together. This can be for several reasons: it may not make sense to have them mixed, or a relation between them may not exist.
My guess would be that there is no relation between ga:userAgeBracket and ga:userGender. Gender comes from the DoubleClick cookie.

How to query a Domain in Jaspersoft with dynamic parameters

I am new to Jaspersoft reporting. I am currently designing and developing reports with the following requirements.
I want to create template-based reports where all the dynamic parameters are passed into the SQL query.
While going through the Jaspersoft documentation, I found that we can create join views and cache data by creating Domains, so that it reduces hits at the DB level.
While creating a report, I found that I can't execute SQL scripts against Domain objects.
Please advise whether I am on the right track or not.
Basically, I want to query cached data, such as a Domain view, instead of hitting the DB directly.
Please suggest if any workaround is available for this problem.
Please note, although JasperReports Server manages a cache for Ad Hoc Views and Ad Hoc Reports running on Domains, running a JRXML report (e.g. designed in Jaspersoft Studio) on a Domain does not guarantee hitting that cache.
You also have the option of using a layer that provides caching between JasperReports Server and your database. For example, support has been recently added for TIBCO Data Virtualization (not a free product) in v.7, see https://www.jaspersoft.com/introducing-jaspersoft-7.
In any case, Domains are not relational databases and therefore do not support straight SQL.
You can use the "Domain query language" though, which offers a subset of the features of SQL. The easiest way to write a query is using Jaspersoft Studio and selecting "domain" in the Language dropdown (top-left corner of the Dataset and Query dialog in Studio 6.4.0).
For example, a query design built that way (using the Supermart Domain provided with the sample data) will generate the query below and the required "dynamic" parameter as you requested – in this case a Collection, since the filter is 'Is One Of', which can take multiple values:
<query>
    <queryFields>
        <queryField id="sales_fact_ALL.sales__product.sales__product__product_name"/>
        <queryField id="sales_fact_ALL.sales_fact_ALL__store_sales_2013"/>
    </queryFields>
    <queryFilterString>sales_fact_ALL.sales__store.sales__store__region.sales__store__region__sales_country in sales__store__region__sales_country_0</queryFilterString>
</query>
See here for another example of a query (current version of docs based on 7.1.0 release), in this case for use with the REST API: https://community.jaspersoft.com/documentation/tibco-jasperreports-server-rest-api-reference/v710/queryexecutor-service
The queryFilterString tag follows the DomEL syntax as documented here (also for 7.1.0): https://community.jaspersoft.com/documentation/tibco-jasperreports-server-user-guide/v71/domel-syntax
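For the "dynamic" parameter itself, the JRXML would then declare something along these lines (the name is taken from the generated filter string above; declaring it as a java.util.Collection is an assumption based on the multi-valued 'Is One Of' filter):

<parameter name="sales__store__region__sales_country_0" class="java.util.Collection"/>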

Get URI of RSM Diagram Elements

I would like to be able to programmatically retrieve the same URI that is available through BIRT (getURI). I am developing a Rational Software Modeler pluglet and need to get the unique identifier for the diagram elements. This would enable the elements to be recognized in later database ETL processes.
I have found a URI available through EObject.eResource().getURI(), except it only returns half of what is returned in the BIRT reports. BIRT reports something like "platform:/resource/Common/S.efx#_c0KLYFImEd-iIqDctBy_JQ", while EObject.eResource().getURI() only returns "platform:/resource/Common/S.efx".
Any help would be appreciated.
You should be able to get the whole URI with the EcoreUtil.getURI(EObject) method; it includes the fragment part as well.
EObject.eResource().getURI() returns the URI of the resource where the object is located, so it will not include the object's own unique ID.
The fragment after the hash is the EObject's XMI ID, which can be retrieved with EcoreUtil.getID(EObject) if needed. But EcoreUtil.getURI(EObject) should be just fine.
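A minimal sketch of the two calls side by side (the helper class and method names are just for illustration):

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class DiagramElementUris {

    // Full URI including the fragment, e.g. "platform:/resource/Common/S.efx#_c0KLYFImEd-iIqDctBy_JQ"
    public static String fullUri(EObject element) {
        URI uri = EcoreUtil.getURI(element); // resource URI plus this object's own fragment
        return uri.toString();
    }

    // Resource URI only, e.g. "platform:/resource/Common/S.efx" (no per-object fragment)
    public static String resourceUri(EObject element) {
        return element.eResource().getURI().toString();
    }

    // Just the XMI ID, if the containing resource is an XML resource that assigns IDs
    public static String xmiId(EObject element) {
        return EcoreUtil.getID(element);
    }
}

Calling fullUri(...) on a diagram element from the pluglet should give the same value that BIRT reports for getURI.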