Unable to access UDF from external view Redshift - amazon-redshift

I have created a datashare and shared a VIEW with the consumer cluster. The view references a UDF. I have added the view to an external schema, and now when I try to query it on the consumer cluster I get the error below.
[Amazon](500310) Invalid operation: External view contains unsupported objects;
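For reference, the setup is roughly like the sketch below; the namespaces and object names are placeholders, not the real ones.
-- Producer cluster
CREATE DATASHARE my_share;
ALTER DATASHARE my_share ADD SCHEMA public;
ALTER DATASHARE my_share ADD TABLE public.my_view;   -- the view that calls the UDF
GRANT USAGE ON DATASHARE my_share TO NAMESPACE '<consumer-namespace>';
-- Consumer cluster
CREATE DATABASE shared_db FROM DATASHARE my_share OF NAMESPACE '<producer-namespace>';
CREATE EXTERNAL SCHEMA shared_schema FROM REDSHIFT DATABASE 'shared_db' SCHEMA 'public';
SELECT * FROM shared_schema.my_view;   -- fails with the error above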
Can someone help me with this?
Thanks in advance

Related

Kubernetes Custom Operator

I have a table in a database that gets updated frequently. The table holds host/port information for egress traffic. I want to create a custom operator or custom controller to sync this information over to Kubernetes, specifically to Istio ServiceEntry resources.
Is a custom operator the way to go about this? And how do I keep the table and Kubernetes in sync? Through constant polling?

MongoDB Atlas scheduled trigger service returning undefined

I’m trying to create a scheduled trigger to clear a collection weekly, but I am unable to get the service…
const collection = context.services.get('mongodb-atlas'); returns undefined when I log it to the console, and when I try to use it I just get Cannot access member 'db' of undefined. I've also tried setting the service name to Cluster0 and mongodb-datalake, neither of which worked.
If someone could lend a hand on what I’m doing wrong and how I’m meant to do this, that would be awesome. Thanks.
You need the following syntax to get to your collection:
const collection = context.services.get(<SERVICE_NAME>).db("db_name").collection("coll_name");
You can find SERVICE_NAME as follows:
Go into Realm tab
In the left navigation, click on Linked Data Sources
Copy the Realm Service Name
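Putting it together, a minimal sketch of the whole scheduled trigger function, assuming the linked service is named mongodb-atlas and using placeholder database and collection names:
exports = async function() {
  // <SERVICE_NAME> copied from Linked Data Sources
  const collection = context.services.get("mongodb-atlas").db("db_name").collection("coll_name");
  // Clear out the collection on each weekly run
  const result = await collection.deleteMany({});
  console.log(`Deleted ${result.deletedCount} documents`);
  return result;
};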

Mapping Data Flows Error: The stream is either not connected or column is unavailable

I have a meta-data driven pipeline and a mapping data flow to load my data. When I try to run this pipeline, I get the following error.
{"message":"at Derive 'TargetSATKey'(Line 42/Col 26): Column 'PersonVID' not found. The stream is either not connected or column is unavailable. Details:at Derive 'TargetSATKey'(Line 42/Col 26): Column 'PersonVID' not found. The stream is either not connected or column is unavailable","failureType":"UserError","target":"Data Vault Load","errorCode":"DFExecutorUserError"}
When I debug the mapping data flow, all the components in the data flow work as intended.
I suspect that my source connection parameters aren't flowing through properly. Below is an image of my source connection settings.
Please let me know if you have any thoughts and questions
I found a resolution to my problem. The value being passed in was a string, but when the variable was unpacked its value did not have quotes around it. Adding the quotes fixed it.
For example
'BusinessEntityID'
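If the column name comes from a pipeline parameter or variable, one way to add the surrounding quotes is with a dynamic expression (the parameter name here is hypothetical):
@concat('''', pipeline().parameters.TargetColumn, '''')
In ADF expressions a single quote inside a string literal is escaped by doubling it, so '''' is just the ' character and the result is a quoted value like 'BusinessEntityID'.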
Please let me know if there are any questions

How to force a pipeline's status to Failed

I'm using Copy Data.
When there is some data error, I export the bad rows to a blob.
But in this case the pipeline's status is still Succeeded. I want to set it to Failed. Is that possible?
When there is some data error.
It depends on what kind of error you mean here.
1. If you mean a common incompatibility or mismatch error, ADF has a built-in Fault tolerance feature in the Copy activity that covers these three scenarios:
Incompatibility between the source data type and the sink native type.
Mismatch in the number of columns between the source and the sink.
Primary key violation when writing to SQL Server / Azure SQL Database / Azure Cosmos DB.
If you configure it to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.windows.net/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.
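In JSON terms, the relevant Copy activity type properties look roughly like this; the linked service reference and path are placeholders:
"enableSkipIncompatibleRow": true,
"redirectIncompatibleRowSettings": {
    "linkedServiceName": {
        "referenceName": "AzureBlobStorageLinkedService",
        "type": "LinkedServiceReference"
    },
    "path": "errorlogs"
}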
If you want to abort the job as soon as any error occurs, you can instead leave fault tolerance off so the Copy activity aborts on the first incompatible row.
Please see this case: Fault tolerance and log the incompatible rows in Azure Blob storage
2. If you are talking about your own logic for the data error, such as some business rule, I'm afraid ADF can't detect that for you, though it is a fairly common requirement. However, you could follow this case (How to control data failures in Azure Data Factory Pipelines?) as a workaround. The main idea is to use a Custom activity to divert the bad rows before the Copy activity executes; in the Custom activity you can upload the bad rows to Azure Blob Storage with the .NET SDK.
Update:
Since you want to log all incompatible rows and force the job to fail at the same time, I'm afraid that cannot be done in the Copy activity directly.
However, you could use an If Condition activity after the Copy activity to check whether the output contains rowsSkipped. If it does, you know some rows were skipped, so you can check them in Blob storage (and fail the pipeline from that branch if you want).
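For the If Condition expression, something along these lines should work; the activity name is assumed to match your Copy activity, and coalesce covers runs where rowsSkipped is absent because nothing was skipped:
@greater(coalesce(activity('Copy Data').output.rowsSkipped, 0), 0)
If it evaluates to true, the True branch can surface the problem, for example by raising a deliberate failure so the pipeline ends up Failed.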

Crash while mapping output to Core Data in RestKit

I have three tables (userID, profile, and searchId): userID has a to-one relationship to profile, and searchId has a to-one relationship to profile. I set the delete rule to Nullify for both relationships. I have two view controllers in which I map the JSON data from the server using RestKit.
In View Controller 1:
In this view, data is fetched from the server and mapped into the DB using RestKit (userID -> profile).
In View Controller 2:
Data is retrieved from the server and mapped into the DB (searchID -> profile).
There is a refresh button in View Controller 1 which fetches the data from the server again and maps (updates) the DB.
Problem: after View Controller 1 has loaded its data from the server, I search in View Controller 2, which also loads data from the server. I display those results in View Controller 1's UI (because the View Controller 1 results and View Controller 2 results have a similar format). No problem so far.
But when I click the refresh button again, the app crashes with the following error, and I can't figure out the problem.
*** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x8b4b950 <x-coredata://EE00CF63-BECD-40FC-B531-1424930D75D6/USERID/p42>''
*** First throw call stack:
(0x2b78022 0x2d28cd6 0x230b506 0x230b0a7 0x230ac86 0x2316db9 0x2316c26 0x231a38e 0x234a5f6 0x2338df7 0x233791e 0x233765d 0x2336f0a 0x1d59d91 0x1d59895 0x1d3f33e 0x231af3f 0x231a449 0x234a5f6 0x2338df7 0x23379ec 0x233765d 0x2336f0a 0x1d59d91 0x1d59895 0x1d3f33e 0x231af3f 0x231a449 0x234a5f6 0x2338df7 0x2338d64 0x1d70d50 0x273ebd 0x274727 0x1d7463e 0x1d6d1e7 0x1d6ceea 0x29c725 0x2a78b0 0x29e0f7 0x29e773 0x29ea5a 0x28ed330 0x28eef0c 0x28eecb4 0x28ee402 0x97dd7b24 0x97dd96fe)
terminate called throwing an exception
Any help is much appreciated
Are all your managed object contexts saved before you hit the refresh button? The "CoreData could not fulfill a fault" error means that Core Data was trying to find a row with a particular ID, but it could not find the row. This can happen when you pass managed objects across threads (which you should never do, you should pass their object IDs) or pass their IDs across threads without saving the context in which the object was created.
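A minimal sketch of that pattern in Objective-C, with placeholder context and object names:
// Save the context that created or updated the object first,
// so the row actually exists in the persistent store.
NSError *saveError = nil;
[mainContext save:&saveError];

// Pass only the objectID across threads, never the managed object itself.
NSManagedObjectID *profileID = profile.objectID;

[backgroundContext performBlock:^{
    NSError *fetchError = nil;
    // Re-fetch the object in the receiving context; existingObjectWithID:error:
    // returns nil instead of throwing if the row can no longer be found.
    NSManagedObject *localProfile = [backgroundContext existingObjectWithID:profileID error:&fetchError];
    if (localProfile) {
        // Safe to use localProfile on this context's queue.
    }
}];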