Boto3 Write to DynamoDB from AWS Glue, Unhashable Type: 'dict' - pyspark

I am attempting to use the boto3 module in PySpark (AWS Glue ETL job) to load a dataframe into DynamoDB but am being met with the error:
Parameter validation failed: Invalid type for parameter Item, value: set(["{u'ecosystemName': u'animals', u'regionalCenter': u'center', u'dataExtractedTimestamp': u'20190502T13:12:11.111Z'}"]), type: <type 'set'>, valid types: <type 'dict'> Could not write to the Audit Table for regional center:
Can anyone please help me resolve this issue and explain why it is happening? I'd like to understand the root of the issue.
Here is my current code. Please note that there will only ever be a single row in this data frame. The SQL used below generates fake data for testing only.
try:
    timestamp_sql = """select
        'animals' as ecosystemName,
        '20190502T13:12:11.111Z' as dataExtractedTimestamp,
        'center' as regionalCenter
    """
    # Make a data frame from SQL
    audit_data_frame = spark.sql(timestamp_sql)
    # Write the joined dynamic frame out to a datasink
    dynamodb = boto3.resource('dynamodb', 'us-west-2')
    table = dynamodb.Table(args['DynamoDBTable2Name'])
    audit_data_frame_prep = json.loads(audit_data_frame.toJSON().first())
    print(audit_data_frame_prep)
    table.put_item( Item = { "{}".format(audit_data_frame_prep) } )
    print('Audit trail successfully written for ' + regional_center + '.')
except Exception as e:
    print(e)
    print('Could not write to the Audit Table for regional center: ' + regional_center + '.')
    pass

Dang, 5 minute rule strikes again...
It was in my put_item call. The extra braces wrapped the item in a set containing the dict's string representation instead of passing the dict itself, which is exactly what the validation error complains about. This change solves my issue:
table.put_item( Item = audit_data_frame_prep )
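For reference, a minimal sketch of the working flow, using the same variable and table names as above (note that the DynamoDB Table resource expects a plain dict for Item, and any numeric attribute values would need to be decimal.Decimal rather than float):
row_json = audit_data_frame.toJSON().first()  # single-row DataFrame -> JSON string
item = json.loads(row_json)                   # JSON string -> Python dict
table.put_item(Item=item)                     # pass the dict directly, no surrounding braces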

Related

How to return data from azure databricks notebook in Azure Data Factory

I have a requirement where I need to transform data in Azure Databricks and then return the transformed data. Below is sample notebook code where I am trying to return some JSON.
from pyspark.sql.functions import *
from pyspark.sql.types import *
import json
import pandas as pd

# Define a dictionary containing ICC rankings
rankings = {'test': ['India', 'South Africa', 'England',
                     'New Zealand', 'Australia'],
            'odi': ['England', 'India', 'New Zealand',
                    'South Africa', 'Pakistan'],
            't20': ['Pakistan', 'India', 'Australia',
                    'England', 'New Zealand']}

# Convert the dictionary into DataFrame
rankings_pd = pd.DataFrame(rankings)

# Rename the columns
rankings_pd.rename(columns = {'test':'TEST'}, inplace = True)
rankings_pd.rename(columns = {'odi':'ODI'}, inplace = True)
rankings_pd.rename(columns = {'t20':'twenty-20'}, inplace = True)

#print(rankings_pd.to_json())
dbutils.notebook.exit(rankings_pd.to_json())
To achieve this, I created a job on a cluster for this notebook and also had to create a custom connector, following this article: https://medium.com/#poojaanilshinde/create-azure-logic-apps-custom-connector-for-azure-databricks-e51f4524ab27. Using the connector with the API endpoint '/2.1/jobs/run-now' and then '/2.1/jobs/runs/get-output' in Azure Logic App, I am able to get the return value, but sometimes I just get the status as 'running' with no output even though the job eventually completes successfully. I need to get the output once the job has executed the transformation successfully.
Please suggest a better way to do this if I am missing anything.
It looks like dbutils.notebook.exit() only accepts a string, so you can return the value as a JSON string and convert it to a JSON object in Data Factory or Logic App: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/databricks-utils#--notebook-utility-dbutilsnotebook
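A minimal sketch of that approach on the notebook side (assuming the rankings_pd DataFrame from the question; dbutils is only available inside a Databricks notebook):
import json

# Serialize the result to a JSON string and return it as the notebook's exit value.
# The Logic App / Data Factory side reads that string from the
# /2.1/jobs/runs/get-output response and parses it back into a JSON object there.
result = {"status": "ok", "data": json.loads(rankings_pd.to_json())}
dbutils.notebook.exit(json.dumps(result))  # exit value must be a string
On the Logic App side, keep polling /2.1/jobs/runs/get-output until the run reports a terminal state before reading the output; reading it while the run is still in the 'running' state is what produces the empty responses you are seeing.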

AWS Glue PySpark replace NULLs

I am running an AWS Glue job to load a pipe delimited file on S3 into an RDS Postgres instance, using the auto-generated PySpark script from Glue.
Initially, it complained about NULL values in some columns:
pyspark.sql.utils.IllegalArgumentException: u"Can't get JDBC type for null"
After some googling and reading on SO, I tried to replace the NULLs in my file by converting the AWS Glue DynamicFrame to a Spark DataFrame, running fillna(), and converting back to a DynamicFrame.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "xyz_catalog",
    table_name = "xyz_staging_files", transformation_ctx = "datasource0")
custom_df = datasource0.toDF()
custom_df2 = custom_df.fillna(-1)
custom_df3 = custom_df2.fromDF()
applymapping1 = ApplyMapping.apply(frame = custom_df3, mappings = [("id",
    "string", "id", "int"),........more code
References:
https://github.com/awslabs/aws-glue-samples/blob/master/FAQ_and_How_to.md#3-there-are-some-transforms-that-i-cannot-figure-out
How to replace all Null values of a dataframe in Pyspark
http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.fillna
Now, when I run my job, it throws the following error:
Log Contents:
Traceback (most recent call last):
File "script_2017-12-20-22-02-13.py", line 23, in <module>
custom_df3 = custom_df2.fromDF()
AttributeError: 'DataFrame' object has no attribute 'fromDF'
End of LogType:stdout
I am new to Python and Spark and have tried a lot, but can't make sense of this. Appreciate some expert help on this.
I tried changing my reconvert command to this:
custom_df3 = glueContext.create_dynamic_frame.fromDF(frame = custom_df2)
But still got the error:
AttributeError: 'DynamicFrameReader' object has no attribute 'fromDF'
UPDATE:
I suspect this is not about NULL values. The message "Can't get JDBC type for null" seems not to refer to a NULL value, but some data/type that JDBC is unable to decipher.
I created a file with only 1 record, no NULL values, changed all Boolean types to INT (and replaced values with 0 and 1), but still get the same error:
pyspark.sql.utils.IllegalArgumentException: u"Can't get JDBC type for null"
UPDATE:
Make sure DynamicFrame is imported (from awsglue.dynamicframe import DynamicFrame), since fromDF / toDF are part of DynamicFrame.
Refer to https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame.html
You are calling .fromDF on the wrong class. It should look like this:
from awsglue.dynamicframe import DynamicFrame
DynamicFrame.fromDF(custom_df2, glueContext, 'label')
For the error pyspark.sql.utils.IllegalArgumentException: u"Can't get JDBC type for null",
you should drop the null fields.
I was getting similar errors while loading to Redshift tables. After using the command below, the issue was resolved:
Loading = DropNullFields.apply(frame = resolvechoice3, transformation_ctx = "Loading")
In pandas, DataFrame.fillna() is used to fill null values with a specified value. DropNullFields, by contrast, drops all fields in a DynamicFrame whose type is NullType, i.e. fields whose value is missing or null in every record of the data set.
In your specific situation, you need to make sure you are using the right class for the data set you have.
Here is the edited version of your code:
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "xyz_catalog",
    table_name = "xyz_staging_files", transformation_ctx = "datasource0")
custom_df = datasource0.toDF()
custom_df2 = custom_df.fillna(-1)
custom_df3 = DynamicFrame.fromDF(custom_df2, glueContext, 'your_label')
applymapping1 = ApplyMapping.apply(frame = custom_df3, mappings = [("id",
    "string", "id", "int"),........more code
This is what you are doing: 1. Read the file into a DynamicFrame, 2. Convert it to a DataFrame, 3. Fill the null values, 4. Convert back to a DynamicFrame, and 5. ApplyMapping. You were getting the error because your step 4 was wrong: you were feeding a DataFrame to ApplyMapping, which does not work. ApplyMapping is designed for DynamicFrames.
I would suggest reading your data into a DynamicFrame and sticking to the same data type. It would look like this (one way to do it):
from awsglue.dynamicframe import DynamicFrame
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "xyz_catalog",
    table_name = "xyz_staging_files", transformation_ctx = "datasource0")
custom_df = DropNullFields.apply(frame=datasource0)
applymapping1 = ApplyMapping.apply(frame = custom_df, mappings = [("id",
    "string", "id", "int"),........more code

SparkSQL performance issue with collect method

We are currently facing a performance issue with Spark SQL code written in Scala. The application flow is described below.
1. The Spark application reads a text file from an input HDFS directory.
2. It creates a data frame on top of the file by programmatically specifying the schema. This dataframe is an exact in-memory replica of the input file and has around 18 columns.
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema)
3. It creates a filtered dataframe from the data frame constructed in step 2; this dataframe contains only the unique account numbers, obtained with the distinct keyword.
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
4. Using the two dataframes constructed in steps 2 and 3, we fetch all the records that belong to one account number and run some JSON parsing logic on top of the filtered data.
var filtrEqpDF =
eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
5. Finally, the JSON-parsed data is written to an HBase table.
We are facing performance issues when calling the collect method on these data frames, because collect fetches all the data to a single node and processes it there, losing the benefit of parallel processing.
Also, in the real scenario we can expect around 10 billion records, so collecting all of those records onto the driver node might crash the program due to memory or disk space limitations.
I don't think the take method, which fetches a limited number of records at a time, can be used in our case: we have to get all the unique account numbers from the whole data set, so I am not sure it suits our requirements.
I would appreciate any help on avoiding the collect calls and any other best practices to follow. Code snippets, suggestions, or Git links will be very helpful if anyone has faced similar issues.
Code snippet
val eqpSchemaString = "acoountnumber ....."
val eqpSchema = StructType(eqpSchemaString.split(" ").map(fieldName =>
StructField(fieldName, StringType, true)));
val eqpRdd = sc.textFile(inputPath)
val eqpRowRdd = eqpRdd.map(_.split(",")).map(eqpRow => Row(eqpRow(0).trim, eqpRow(1).trim, ....)
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema);
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
distAccNrsDF.foreach { data =>
    var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
    var result = new JSONObject()
    result.put("jsonSchemaVersion", "1.0")
    val firstRowAcc = filtrEqpDF(0)
    //Json parsing logic
    {
        .....
        .....
    }
}
The approach usually taken in this kind of situation is:
Instead of collect, invoke foreachPartition: foreachPartition applies a function to each partition (represented by an Iterator[Row]) of the underlying DataFrame separately (the partition being the atomic unit of parallelism of Spark)
the function will open a connection to HBase (thus making it one per partition) and send all the contained values through this connection
This means that every executor opens a connection (which is not serializable, but lives within the boundaries of the function and thus does not need to be sent across the network) and independently sends its contents to HBase, without any need to collect all the data on the driver (or any one node, for that matter).
It looks like you are reading a CSV file, so probably something like the following will do the trick:
spark.read.csv(inputPath). // Using DataFrameReader but your way works too
    foreachPartition { rows =>
        val conn = ???                 // Create HBase connection
        for (row <- rows) {            // Loop over the iterator
            val data = parseJson(row)  // Your parsing logic
            ???                        // Use 'conn' to save 'data'
        }
    }
You can avoid collect in your code if you have a large data set.
collect returns all the elements of the dataset as an array to the driver program. This is usually only useful after a filter or other operation that returns a sufficiently small subset of the data.
It can also cause the driver to run out of memory, because collect() fetches the entire RDD/DataFrame to a single machine.
I have just edited your code, which should work for you.
var distAccNrsDF = eqpDF.select("accountnumber").distinct()
distAccNrsDF.foreach { data =>
    var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'")
    var result = new JSONObject()
    result.put("jsonSchemaVersion", "1.0")
    val firstRowAcc = filtrEqpDF(0)
    //Json parsing logic
    {
        .....
        .....
    }
}

Deploying Keras model to Google Cloud ML for serving predictions

I need to understand how to deploy models on Google Cloud ML. My first task is to deploy a very simple text classifier on the service. I do it in the following steps (could perhaps be shortened to fewer steps, if so, feel free to let me know):
1. Define the model using Keras and export it to YAML
2. Load the YAML and export it as a TensorFlow SavedModel
3. Upload the model to Google Cloud Storage
4. Deploy the model from storage to Google Cloud ML
5. Set the uploaded model version as the default on the models website
6. Run the model with a sample input
I've finally made steps 1-5 work, but now I get the strange error below when running the model. Can anyone help? Details on the steps are below; hopefully they can also help others who are stuck on one of the earlier steps. My model works fine locally.
I've seen Deploying Keras Models via Google Cloud ML and Export a basic Tensorflow model to Google Cloud ML, but they seem to be stuck on other steps of the process.
Error
Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="In[0] is not a matrix
[[Node: MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[-1,64]], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Mean, softmax_W/read)]]")
Step 1
# import necessary classes from Keras..
model_input = Input(shape=(maxlen,), dtype='int32')
embed = Embedding(input_dim=nb_tokens,
                  output_dim=256,
                  mask_zero=False,
                  input_length=maxlen,
                  name='embedding')
x = embed(model_input)
x = GlobalAveragePooling1D()(x)
outputs = [Dense(nb_classes, activation='softmax', name='softmax')(x)]
model = Model(input=[model_input], output=outputs, name="fasttext")
# export to YAML..
Step 2
from __future__ import print_function
import sys
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import keras
from keras import backend as K
from keras.models import model_from_config, model_from_yaml
from optparse import OptionParser
EXPORT_VERSION = 1 # for us to keep track of different model versions (integer)
def export_model(model_def, model_weights, export_path):
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        K.set_learning_phase(0)  # all new operations will be in test mode from now on
        yaml_file = open(model_def, 'r')
        yaml_string = yaml_file.read()
        yaml_file.close()
        model = model_from_yaml(yaml_string)
        # force initialization
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam')
        Wsave = model.get_weights()
        model.set_weights(Wsave)
        # weights are not loaded as I'm just testing, not really deploying
        # model.load_weights(model_weights)
        print(model.input)
        print(model.output)
        pred_node_names = output_node_names = 'Softmax:0'
        num_output = 1
        export_path_base = export_path
        export_path = os.path.join(
            tf.compat.as_bytes(export_path_base),
            tf.compat.as_bytes('initial'))
        builder = tf.saved_model.builder.SavedModelBuilder(export_path)
        # Build the signature_def_map.
        x = model.input
        y = model.output
        values, indices = tf.nn.top_k(y, 5)
        table = tf.contrib.lookup.index_to_string_table_from_tensor(tf.constant([str(i) for i in xrange(5)]))
        prediction_classes = table.lookup(tf.to_int64(indices))
        classification_inputs = tf.saved_model.utils.build_tensor_info(model.input)
        classification_outputs_classes = tf.saved_model.utils.build_tensor_info(prediction_classes)
        classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
        classification_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={tf.saved_model.signature_constants.CLASSIFY_INPUTS: classification_inputs},
                outputs={tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES: classification_outputs_classes,
                         tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES: classification_outputs_scores},
                method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
        tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
        tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
        prediction_signature = (tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'images': tensor_info_x},
            outputs={'scores': tensor_info_y},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
        legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict_images': prediction_signature,
                               tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: classification_signature},
            legacy_init_op=legacy_init_op)
        builder.save()
        print('Done exporting!')
        raise SystemExit

if __name__ == '__main__':
    usage = "usage: %prog [options] arg"
    parser = OptionParser(usage)
    (options, args) = parser.parse_args()
    if len(args) < 3:
        raise ValueError("Too few arguments!")
    model_def = args[0]
    model_weights = args[1]
    export_path = args[2]
    export_model(model_def, model_weights, export_path)
Step 3
gsutil cp -r fasttext_cloud/ gs://quiet-notch-xyz.appspot.com
Step 4
from __future__ import print_function
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import time
projectID = 'projects/{}'.format('quiet-notch-xyz')
modelName = 'fasttext'
modelID = '{}/models/{}'.format(projectID, modelName)
versionName = 'Initial'
versionDescription = 'Initial release.'
trainedModelLocation = 'gs://quiet-notch-xyz.appspot.com/fasttext/'
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)
# Create a dictionary with the fields from the request body.
requestDict = {'name': modelName, 'description': 'Online predictions.'}
# Create a request to call projects.models.create.
request = ml.projects().models().create(parent=projectID, body=requestDict)
# Make the call.
try:
    response = request.execute()
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the model.' +
          ' Check the details:')
    print(err._get_reason())
    # Clear the response for next time.
    response = None
    raise

time.sleep(10)

requestDict = {'name': versionName,
               'description': versionDescription,
               'deploymentUri': trainedModelLocation}

# Create a request to call projects.models.versions.create
request = ml.projects().models().versions().create(parent=modelID,
                                                   body=requestDict)

# Make the call.
try:
    print("Creating model setup..", end=' ')
    response = request.execute()
    # Get the operation name.
    operationID = response['name']
    print('Done.')
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the version.' +
          ' Check the details:')
    print(err._get_reason())
    raise

done = False
request = ml.projects().operations().get(name=operationID)
print("Adding model from storage..", end=' ')
while not done:
    response = None
    # Wait for 10000 milliseconds.
    time.sleep(10)
    # Make the next call.
    try:
        response = request.execute()
        # Check for finish.
        done = True  # response.get('done', False)
    except errors.HttpError as err:
        # Something went wrong, print out some information.
        print('There was an error getting the operation.' +
              ' Check the details:')
        print(err._get_reason())
        done = True
        raise
print("Done.")
Step 5
Use website.
Step 6
def predict_json(instances, project='quiet-notch-xyz', model='fasttext', version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)

    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']
Then run the function with a test input: predict_json({'inputs':[[18, 87, 13, 589, 0]]})
There is now a sample demonstrating the use of Keras on CloudML engine, including prediction. You can find the sample here:
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/keras
I would suggest comparing your code to that code.
Some additional suggestions that will still be relevant:
CloudML Engine currently only supports using a single signature (the default signature). Looking at your code, I think prediction_signature is more likely to lead to success, but you haven't made that the default signature. I suggest the following:
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
    legacy_init_op=legacy_init_op)
If you are deploying to the service, then you would invoke prediction like so:
predict_json({'images':[[18, 87, 13, 589, 0]]})
If you are testing locally using gcloud ml-engine local predict --json-instances, the input data is slightly different (it matches that of the batch prediction service). Each newline-separated line looks like this (showing a file with two lines):
{"images": [[18, 87, 13, 589, 0]]}
{"images": [[21, 85, 13, 100, 1]]}
I don't actually know enough about the shape of model.x to ensure the data being sent is correct for your model.
By way of explanation, it may be insightful to consider the difference between the Classification and Prediction methods in SavedModel. One difference is that, when using tensorflow_serving (which is based on gRPC and therefore strongly typed), Classification provides a strongly-typed signature that most classifiers can use, so you can reuse the same client with any classifier.
That's not overly useful when using JSON since JSON isn't strongly typed.
One other difference is that, when using tensorflow_serving, Prediction accepts column-based inputs (a map from feature name to every value for that feature in the whole batch) whereas Classification accepts row based inputs (each input instance/example is a row).
CloudML abstracts that away a bit and always requires row-based inputs (a list of instances). Even though we only officially support Prediction, Classification should work as well.
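To make the row-based format concrete, here is a small sketch using the predict_json helper from Step 6 (the 'images' key comes from prediction_signature in Step 2; the exact nesting of each value is an assumption, since, as noted above, it depends on the shape the model expects):
# Row-based input: 'instances' is a list with one entry per example, each
# keyed by the input name from the serving signature ('images' here).
instances = [
    {'images': [18, 87, 13, 589, 0]},
    {'images': [21, 85, 13, 100, 1]},
]
predictions = predict_json(instances, project='quiet-notch-xyz',
                           model='fasttext', version='Initial')  # version name from Step 4; omit to use the default
print(predictions)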

Tridion Workflow Error on EDA_ITEMS_UPDATE Stored Procedure When Restarting WF Activities

I'm hitting an error whenever 2 or more instances of a workflow are restarted. We had suspended workflows during a network incident in our office(s); it was a script timeout that disrupted the process. I couldn't reproduce the timeout anymore, so I attempted to restart the workflow activities to complete the process. However, when I restart more than 1 suspended activity, I get this error (the full log is further below):
Cannot insert the value NULL into column 'ITEM_ID', table 'tridion_cm.dbo.ITEM_ASSOCIATIONS'; column does not allow nulls. INSERT fails.
The statement has been terminated.
Restarting the activities and letting each finish one by one causes no problem.
Full Event Log:
An error occurred while executing the Workflow script.
The Script Engine returned the following information:
SOURCE:
Line = 0
Column = 0
Number = -2147220673
Source = Component.Save
Description =
<?xml version="1.0" standalone="yes"?>
<tcm:Error ErrorCode="8004033F" Category="4" Source="Kernel" Severity="1" xmlns:tcm="http://www.tridion.com/ContentManager/5.0">
<tcm:Line ErrorCode="8004033F" Cause="false" MessageID="16137"><![CDATA[Unable to save Component (tcm:5-32795).]]>
<tcm:Token>RESID_4574</tcm:Token>
<tcm:Token>RESID_4418</tcm:Token>
<tcm:Token>tcm:5-32795</tcm:Token>
</tcm:Line>
<tcm:Line ErrorCode="8004033F" Cause="true">CDATA[No data found. ETA_ITEMS, U Cannot insert the value NULL into column 'ITEM_ID', table 'tridion_cm.dbo.ITEM_ASSOCIATIONS'; column does not allow nulls. INSERT fails. The statement has been terminated.
</tcm:Line>
<tcm:Line ErrorCode="8004033F" Cause="false"><![CDATA[A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATE".]]>
<tcm:Token>EDA_ITEMS_UPDATE</tcm:Token>
</tcm:Line>
<tcm:Details>
<tcm:CallStack>
<tcm:Location>System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(</tcm:Location>
<tcm:Location>System.Data.SqlClient.TdsParser.Run(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,DbAsyncResult)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult,String,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.ExecuteNonQuery()</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.DatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.IdentifiableObjectDataMapper.Tridion.ContentManager.Data.IIdentifiableObjectDataMapper.Update(IdentifiableObjectData)</tcm:Location>
<tcm:Location>Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)</tcm:Location>
<tcm:Location>Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)</tcm:Location>
<tcm:Location>Tridion.ContentManager.ContentManagement.VersionedItem.Save()</tcm:Location>
<tcm:Location>Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)</tcm:Location>
<tcm:Location>XMLState.Save</tcm:Location>
<tcm:Location>Component.Save</tcm:Location>
</tcm:CallStack>
</tcm:Details>
</tcm:Error>
HelpFile =
HelpContext = 1000440
caused by: Component.Save
and description:
<?xml version="1.0" standalone="yes"?>
<tcm:Error ErrorCode="8004033F" Category="4" Source="Kernel" Severity="1" xmlns:tcm="http://www.tridion.com/ContentManager/5.0">
<tcm:Line ErrorCode="8004033F" Cause="false" MessageID="16137"><![CDATA[Unable to save Component (tcm:5-32795).]]>
<tcm:Token>RESID_4574</tcm:Token>
<tcm:Token>RESID_4418</tcm:Token>
<tcm:Token>tcm:5-32795</tcm:Token>
</tcm:Line>
<tcm:Line ErrorCode="8004033F" Cause="true">CDATA[No data found. ETA_ITEMS, U Cannot insert the value NULL into column 'ITEM_ID', table 'tridion_cm.dbo.ITEM_ASSOCIATIONS'; column does not allow nulls. INSERT fails. The statement has been terminated.
</tcm:Line>
<tcm:Line ErrorCode="8004033F" Cause="false"><![CDATA[A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATE".]]>
<tcm:Token>EDA_ITEMS_UPDATE</tcm:Token>
</tcm:Line>
<tcm:Details>
<tcm:CallStack>
<tcm:Location>System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(</tcm:Location>
<tcm:Location>System.Data.SqlClient.TdsParser.Run(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,DbAsyncResult)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult,String,Boolean)</tcm:Location>
<tcm:Location>System.Data.SqlClient.SqlCommand.ExecuteNonQuery()</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.DatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.ExecuteNonQuery(StoredProcedureInvocation)</tcm:Location>
<tcm:Location>Tridion.ContentManager.Data.AdoNet.IdentifiableObjectDataMapper.Tridion.ContentManager.Data.IIdentifiableObjectDataMapper.Update(IdentifiableObjectData)</tcm:Location>
<tcm:Location>Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)</tcm:Location>
<tcm:Location>Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)</tcm:Location>
<tcm:Location>Tridion.ContentManager.ContentManagement.VersionedItem.Save()</tcm:Location>
<tcm:Location>Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)</tcm:Location>
<tcm:Location>XMLState.Save</tcm:Location>
<tcm:Location>Component.Save</tcm:Location>
</tcm:CallStack>
</tcm:Details>
</tcm:Error>
Source:
LogScriptError
Workflow Info
The components are newly created by another application through BusinessConnector. It performs revision saves before finishing the activity. The objective of the WF is to
assign the necessary component links based on text values from the other system;
create a page;
and publish it to a target.
I know this could have been done better by other means, but it is how it is right now; please bear with the implementation.
Model / Schema
Product
- modelName (text)
- categoryName (text)
- categoryCL (component link)
- statusName (text)
- statusCL (component link)
- blah1 (text)
- blah2 (text)
- blah3 (text)
- blah4 (text)
Automatic Activity Script:
' COMPONENT WORKFLOW for PRODUCT SCHEMA (START)
' Load common functions
ExecuteGlobal oTDSE.GetObject("/webdav/200%20Design/Building%20Blocks/Library/Design/Workflow/Common/Workflow%20Functions.tbbs", 1).Content
' Initialize
Set objComp = CurrentWorkItem.GetItem()
Set oContentPub = objComp.Publication
Set oWebsitePub = oTDSE.GetObject(GetStagingLangPub("EN", "website1"), 1)
modelName = objComp.Fields.Item("modelName").Value.Item(1)
' Search and Assign component links based on some text fields from BC client
Set compListRowFilter = oTDSE.CreateListRowFilter()
Call compListRowFilter.SetCondition("ShowNewItems", TRUE)
Call compListRowFilter.SetCondition("ItemType", 16)
' Assign the first component link (categoryCL)
categoryName = objComp.Fields.Item("categoryName").Value.Item(1)
Set categoryComp = GetObjectFromFolder(categoryName, oTDSE.GetObject(categoryFolderWebDav, 1), compListRowFilter)
If Not categoryComp is Nothing Then
objComp.Fields.Item("categoryCL").Value.RemoveAll
objComp.Fields.Item("categoryCL").Value.Add(categoryComp)
Call objComp.Save(True)
Else
Call SendNoObjectEmail("Category", categoryName, "categoryCL", GetEmailsFromGroup(oContentPub, categoryOwner), "")
End if
' Assign the second component link (statusCL)
statusName = objComp.Fields.Item("statusName").Value.Item(1)
Set statusComp = GetObjectFromFolder(statusName, oTDSE.GetObject(statusFolderWebDav, 1), compListRowFilter)
If Not statusComp is Nothing Then
objComp.Fields.Item("statusCL").Value.RemoveAll
objComp.Fields.Item("statusCL").Value.Add(statusComp)
Call objComp.Save(True)
Else
Call SendNoObjectEmail("Status", statusName, "statusCL", GetEmailsFromGroup(oContentPub, statusOwner), "")
End if
' Create a page with the component in WF
Set oPage = CreateDefaultPage(modelName, oWebSitePub, SaveToFolderWebDav, PageTemplateWebDav)
Call oPage.ComponentPresentations.Add(oWebSitePub, oTDSE.GetObject(ComponentTemplateWebDav, 1))
oPage.Save(True)
' Publish the page
....
' Send email
....
' Finish the Activity
Call CurrentWorkItem.ActivityInstance.FinishActivity("someMsg")
' COMPONENT WORKFLOW for PRODUCT SCHEMA (END)
I haven't zoomed in on where exactly it happens, but I would assume it is either the categoryCL or the statusCL objComp.Save(True) line.
I notice that you are calling Call objComp.Save(True). The "True" parameter forces a check-in, as it means "Done Editing".
It looks like this can occur twice (i.e. if there is both a statusCL and a categoryCL), which may be causing the problem. I would try changing the behavior so that you only call the Save() method once at the end, and only if a change has been made, rather than saving twice.
From what I see, the problem is that you are trying to assign nulls as component links. Are you sure that you are not trying to link these 2 components to one another? It might be that the link you are trying to create points to a new component that is still in workflow (it has no checked-in version yet), hence you can't create a link to it. Try adding some logging to your script, so we can see where exactly it fails and what the parameters are.
And I agree with Frank that this exception is worth a defect report.