Field does not exist on transformations to extract key with Debezium - apache-kafka

I am trying to create a Debezium MySQL connector with a transformation to extract the key.
Before key transformations:
create source connector mysql with(
"connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
"database.hostname" = 'mysql',
"tasks.max" = '1',
"database.port" = '3306',
"database.user" = 'debezium',
"database.password" = 'dbz',
"database.server.id" = '42',
"database.server.name" = 'before',
"table.whitelist" = 'deepprices.deepprices',
"database.history.kafka.bootstrap.servers" = 'kafka:29092',
"database.history.kafka.topic" = 'dbz.deepprices',
"include.schema.changes" = 'true',
"transforms" = 'unwrap',
"transforms.unwrap.type" = 'io.debezium.transforms.UnwrapFromEnvelope');
The topic contents are:
> rowtime: 2020/05/20 16:47:23.354 Z, key: [St#5778462697648631933/8247607644536792125], value: {"id": "P195910", "price": "1511.64"}
When key.converter is set to JSON, the key becomes {"id": "P195910"}.
I want to extract id from the key and use it as a plain string key.
Expected result:
rowtime: 2020/05/20 16:47:23.354 Z,
key: 'P195910',
value: {"id": "P195910", "price": "1511.64"}
While trying to use a transformation with ExtractField or ValueToKey, I get:
DataException: Field does not exist: id
My attempt with a ValueToKey transform:
create source connector mysql with(
"connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
"database.hostname" = 'mysql',
"tasks.max" = '1',
"database.port" = '3306',
"database.user" = 'debezium',
"database.password" = 'dbz',
"database.server.id" = '42',
"database.server.name" = 'after',
"table.whitelist" = 'deepprices.deepprices',
"database.history.kafka.bootstrap.servers" = 'kafka:29092',
"database.history.kafka.topic" = 'dbz.deepprices',
"include.schema.changes" = 'true',
"key.converter" = 'org.apache.kafka.connect.json.JsonConverter',
"key.converter.schemas.enable" = 'TRUE',
"value.converter" = 'org.apache.kafka.connect.json.JsonConverter',
"value.converter.schemas.enable" = 'TRUE',
"transforms" = 'unwrap,createkey',
"transforms.unwrap.type" = 'io.debezium.transforms.UnwrapFromEnvelope',
"transforms.createkey.type" = 'org.apache.kafka.connect.transforms.ValueToKey',
"transforms.createkey.fields" = 'id'
);
This causes the following error in my Kafka Connect log:
Caused by: org.apache.kafka.connect.errors.DataException: Field does not exist: id
at org.apache.kafka.connect.transforms.ValueToKey.applyWithSchema(ValueToKey.java:89)
at org.apache.kafka.connect.transforms.ValueToKey.apply(ValueToKey.java:67)

Changing the transformation type from UnwrapFromEnvelope to ExtractNewRecordState solved the issue on the Debezium MySQL CDC connector, version 1.1.0:
"transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState'
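For reference, a minimal sketch of the full transform chain that yields a plain string key (property names follow the configs above; ExtractField$Key is added here on the assumption that you still want the key reduced from {"id": ...} to just the id value):
"transforms" = 'unwrap,createkey,extractkey',
"transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState',
"transforms.createkey.type" = 'org.apache.kafka.connect.transforms.ValueToKey',
"transforms.createkey.fields" = 'id',
"transforms.extractkey.type" = 'org.apache.kafka.connect.transforms.ExtractField$Key',
"transforms.extractkey.field" = 'id'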

Since you're using ksqlDB here, you'll want to set your source connector to write the key as a string:
key.converter=org.apache.kafka.connect.storage.StringConverter
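In the connector definition above that is one extra property alongside the transforms, roughly:
"key.converter" = 'org.apache.kafka.connect.storage.StringConverter'
Combined with ExtractField$Key pulling out id, the key should then be written as the plain string P195910, matching the expected result.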

Related

PyFlink 1.11 not connecting to Confluent Cloud (Kafka) cluster

I am configuring PyFlink to connect to a Confluent Cloud Kafka cluster. I am using SASL/PLAIN. Below is the code snippet:
""" CREATE TABLE {0} (
`transaction_amt` BIGINT NOT NULL,
`event_id` VARCHAR(64) NOT NULL,
`event_time` TIMESTAMP(6) NOT NULL
)
WITH (
'connector' = 'kafka',
'topic' = '{1}',
'properties.bootstrap.servers' = '{2}',
'properties.group.id' = 'testGroupTFI',
'format' = 'json',
'json.timestamp-format.standard' = 'ISO-8601',
'properties.security.protocol' = 'SASL_SSL',
'properties.sasl.mechanism' = 'PLAIN',
'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username=\"{3}\" password=\"{4}\";'
) """.format(table_name, stream_name, broker, user, secret)
I am getting this error:
{
"applicationARN": "arn:aws:kinesisanalytics:us-east-2:xxxxxxxxxxx:application/sentiment",
"applicationVersionId": "13",
"locationInformation": "org.apache.flink.runtime.taskmanager.Task.transitionState(Task.java:973)",
"logger": "org.apache.flink.runtime.taskmanager.Task",
"message": "Source: TableSourceScan(table=[[default_catalog, default_database, input_table]], fields=[transaction_amt, event_id, event_time]) -> Sink: Sink(table=[default_catalog.default_database.output_table_msk], fields=[transaction_amt, event_id, event_time]) (3/12) (25a905455865731943be6aa60927a49c) switched from RUNNING to FAILED.",
"messageSchemaVersion": "1",
"messageType": "WARN",
"threadName": "Source: TableSourceScan(table=[[default_catalog, default_database, input_table]], fields=[transaction_amt, event_id, event_time]) -> Sink: Sink(table=[default_catalog.default_database.output_table_msk], fields=[transaction_amt, event_id, event_time]) (3/12)",
"throwableInformation": "org.apache.flink.kafka.shaded.org.apache.kafka.common.KafkaException: Failed to construct kafka producer\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)\n\tat org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:78)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.createProducer(FlinkKafkaProducer.java:1141)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initProducer(FlinkKafkaProducer.java:1242)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initNonTransactionalProducer(FlinkKafkaProducer.java:1238)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.beginTransaction(FlinkKafkaProducer.java:940)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.beginTransaction(FlinkKafkaProducer.java:99)\n\tat org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.beginTransactionInternal(TwoPhaseCommitSinkFunction.java:398)\n\tat org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.initializeState(TwoPhaseCommitSinkFunction.java:389)\n\tat org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initializeState(FlinkKafkaProducer.java:1111)\n\tat org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:185)\n\tat org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:167)\n\tat org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)\n\tat org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)\n\tat org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)\n\tat org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)\n\tat org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:474)\n\tat org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)\n\tat org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:470)\n\tat org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:529)\n\tat org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:724)\n\tat org.apache.flink.runtime.taskmanager.Task.run(Task.java:549)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: org.apache.flink.kafka.shaded.org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: No LoginModule found for org.apache.kafka.common.security.plain.PlainLoginModule\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:158)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)\n\tat 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:450)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:421)\n\t... 23 more\nCaused by: javax.security.auth.login.LoginException: No LoginModule found for org.apache.kafka.common.security.plain.PlainLoginModule\n\tat java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:731)\n\tat java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:672)\n\tat java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:670)\n\tat java.base/java.security.AccessController.doPrivileged(Native Method)\n\tat java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:670)\n\tat java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:581)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:62)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:105)\n\tat org.apache.flink.kafka.shaded.org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:147)\n\t... 28 more\n"
}
I suspect that SASL is not supported by the PyFlink SQL connector for 1.11 or 1.13. Is this correct? Is there a workaround?
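One thing worth checking: the stack trace goes through the shaded Kafka client (org.apache.flink.kafka.shaded...), and when the client is relocated like that, the login module named in sasl.jaas.config generally has to use the shaded class name as well. Assuming the runtime bundles the shaded flink-sql-connector-kafka jar, the property would look roughly like this:
'properties.sasl.jaas.config' = 'org.apache.flink.kafka.shaded.org.apache.kafka.common.security.plain.PlainLoginModule required username=\"{3}\" password=\"{4}\";'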

Fivetran Connector - connection_type not getting persisted

I'm trying to use the Fivetran API to dynamically create Connectors. One problem I've run into is that when I set connection_type = PrivateLink, it doesn't work.
Specifically, when I try to set connection_type on the Connector itself, I get an error:
error: fivetran:index/connector:Connector resource 'my-connector' has a problem: Computed attribute cannot be set. Examine values at 'Connector.Config.ConnectionType'.
When I set the connection_type on the Destination which the Connectors belong to... nothing happens! I look at the Connectors in the UI and they all have their connection_type set to Direct Connection, even though the Destination is supposed to be set to PrivateLink.
Is there some trick to get this working?
I'm interacting with Fivetran via Pulumi, not direct REST calls. The config for a Connector looks something like this:
destination = Destination(resource_name = destination_name,
    opts = resource_options,
    group_id = fivetran_group.id,
    region = "AWS_US_EAST_1",
    service = "big_query",
    config = {
        "project_id": "SomeProjectId",
        "connection_type": "PrivateLink",
        "data_set_location": "AWS_US_EAST_1"
    },
    run_setup_tests = False,
    time_zone_offset = "-5")
# ...
connector = Connector(resource_name = db + "_data_src",
    opts = resource_options,
    group_id = fivetran_group.id,
    service = "postgres",
    paused = True,
    pause_after_trial = True,
    sync_frequency = 60,
    destination_schema = {
        "prefix": "xyz_" + db.lower()
    },
    config = {
        "host": "very.long.aws.hostname.com",
        "port": 5432,
        "database": "XYZ_" + db.upper(),
        # "connection_type": "PrivateLink",
        "user": "someUser",
        "password": "somePassword",
        "update_method": "XMIN"
    })

Error With RowKey Definition on Confluent BigTable Sink Connector

I'm trying to use the BigTable Sink Connector from Confluent to read data from Kafka and write it into my Bigtable instance, but I'm receiving the following error message:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Error with RowKey definition: Row key definition was defined, but received, deserialized kafka key is not a struct. Unable to construct a row key.
at io.confluent.connect.bigtable.client.RowKeyExtractor.getRowKey(RowKeyExtractor.java:69)
at io.confluent.connect.bigtable.client.BufferedWriter.addWriteToBatch(BufferedWriter.java:84)
at io.confluent.connect.bigtable.client.InsertWriter.write(InsertWriter.java:47)
at io.confluent.connect.bigtable.BaseBigtableSinkTask.put(BaseBigtableSinkTask.java:99)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
... 10 more
Due to some technical limitations, the message producer is not able to produce the messages with a key, so I'm using some transforms to take information from the payload and set it as the message key.
Here's my connector payload:
{
  "name": "DATALAKE.BIGTABLE.SINK.QUEUEING.ZTXXD",
  "config": {
    "connector.class": "io.confluent.connect.gcp.bigtable.BigtableSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "topics": "APP-DATALAKE-QUEUEING-ZTXXD_DATALAKE-V1",
    "transforms": "HoistField,AddKeys,ExtractKey",
    "gcp.bigtable.project.id": "bigtable-project-id",
    "gcp.bigtable.instance.id": "bigtable-instance-id",
    "gcp.bigtable.credentials.json": "XXXXX",
    "transforms.ExtractKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.HoistField.field": "raw_data_cf",
    "transforms.ExtractKey.field": "KEY1,ATT1",
    "transforms.HoistField.type": "org.apache.kafka.connect.transforms.HoistField$Value",
    "transforms.AddKeys.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.AddKeys.fields": "KEY1,ATT1",
    "row.key.definition": "KEY1,ATT1",
    "table.name.format": "raw_ZTXXD_DATALAKE",
    "consumer.override.group.id": "svc-datalake-KAFKA_2_BIGTABLE",
    "confluent.topic.bootstrap.servers": "xxxxxx:9092",
    "input.data.format": "JSON",
    "confluent.topic": "_dsp-confluent-license",
    "input.key.format": "STRING",
    "key.converter.schemas.enable": "false",
    "confluent.topic.security.protocol": "SASL_SSL",
    "row.key.delimiter": "/",
    "confluent.topic.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"XXXXX\" password=\"XXXXXX\";",
    "value.converter.schemas.enable": "false",
    "auto.create.tables": "true",
    "auto.create.column.families": "true",
    "confluent.topic.sasl.mechanism": "PLAIN"
  }
}
And here's my message produced to Kafka:
{
  "MANDT": "110",
  "KEY1": "1",
  "KEY2": null,
  "ATT1": "1M",
  "ATT2": "0000000000",
  "TABLE_NAME": "ZTXXD_DATALAKE",
  "IUUC_OPERATION": "I",
  "CREATETIMESTAMP": "2022-01-24T20:26:45.247Z"
}
In my transforms I'm doing three operations:
HoistField puts my payload inside a two-level structure (the Connect docs for Bigtable say that Connect expects a two-level structure in order to be able to infer the column families).
AddKeys adds the columns that I consider the key to the message key.
ExtractKey removes the field names from the key added above, leaving only the values themselves.
I've been reading the documentation for this connector for Bigtable and it's not clear to me if the connector works well with the JSON format. Could you let me know?
JSON should work, but...
deserialized kafka key is not a struct
This is because you have set the schemas.enable=false property on the value converter, so when you do ValueToKey the result is not a Connect Struct type; with schemas disabled, HoistField produces a Java Map instead.
If you're not able to use the Schema Registry and switch the serialization format, then you'll need to find a way to get the REST Proxy to infer the schema of the JSON message before it produces the data (I don't think it can). Otherwise, your records need to include schema and payload fields, and you need to enable schemas on the converters.
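For illustration, with schemas enabled the JSON converter expects each record to carry its schema next to the payload, roughly like this (showing just two of the fields from the message above):
{
  "schema": {
    "type": "struct",
    "optional": false,
    "fields": [
      { "field": "KEY1", "type": "string", "optional": true },
      { "field": "ATT1", "type": "string", "optional": true }
    ]
  },
  "payload": {
    "KEY1": "1",
    "ATT1": "1M"
  }
}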
Another option: there may be a transform project out there that sets the schema of the record, but it's not built in (it's not something SetSchemaMetadata covers).

Timestamp conversion from AWS Glue while transferring data to Redshift

I have a file in S3 that we are importing into Redshift using Glue.
The crawler part is done.
One column holds datetime data, but it is not properly formatted, so the crawler was not able to identify it and marked it as a string.
I have now created the table in Redshift with that column's data type set to timestamp. While creating the job, where and what do I need to change in the script so that the string is converted to a Redshift timestamp?
The date format in the S3 file is 'yyyy.mm.dd HH:mi:ss'.
The script is below.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## #params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## #type: DataSource
## #args: [database = "", table_name = "", transformation_ctx = "datasource0"]
## #return: datasource0
## #inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "", table_name = "", transformation_ctx = "datasource0")
## #type: ApplyMapping
## #args: [mapping = [("mrp", "long", "mrp", "decimal(10,2)"), ("mop", "double", "mop", "decimal(10,2)"), ("mop_update_timestamp", "string", "mop_update_timestamp", "timestamp"), ("special_price", "long", "special_price", "decimal(10,2)"), ("promotion_identifier", "string", "promotion_identifier", "string"), ("is_percentage_promotion", "string", "is_percentage_promotion", "string"), ("promotion_value", "string", "promotion_value", "decimal(10,2)"), ("max_discount", "long", "max_discount", "decimal(10,2)"), ("promotion_start_date", "string", "promotion_start_date", "timestamp"), ("promotion_end_date", "string", "promotion_end_date", "timestamp")], transformation_ctx = "applymapping1"]
## #return: applymapping1
## #inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [ ("mrp", "long", "mrp", "decimal(10,2)"), ("mop", "double", "mop", "decimal(10,2)"), ("mop_update_timestamp", "string", "mop_update_timestamp", "timestamp"), ("special_price", "long", "special_price", "decimal(10,2)"), ("promotion_identifier", "string", "promotion_identifier", "string"), ("is_percentage_promotion", "string", "is_percentage_promotion", "string"), ("promotion_value", "string", "promotion_value", "decimal(10,2)"), ("max_discount", "long", "max_discount", "decimal(10,2)"), ("promotion_start_date", "string", "promotion_start_date", "timestamp"), ("promotion_end_date", "string", "promotion_end_date", "timestamp")], transformation_ctx = "applymapping1")
## #type: ResolveChoice
## #args: [choice = "make_cols", transformation_ctx = "resolvechoice2"]
## #return: resolvechoice2
## #inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_cols", transformation_ctx = "resolvechoice2")
## #type: DropNullFields
## #args: [transformation_ctx = "dropnullfields3"]
## #return: dropnullfields3
## #inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## #type: DataSink
## #args: [catalog_connection = "", connection_options = {"dbtable": "", "database": ""}, redshift_tmp_dir = TempDir, transformation_ctx = "datasink4"]
## #return: datasink4
## #inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields3, catalog_connection = "", connection_options = {"dbtable": "", "database": ""}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink4")
job.commit()
Have you tried converting it into a DataFrame and then parsing the string into a timestamp, since you have it in 'yyyy.mm.dd HH:mi:ss' format? Something like this:
## Add these imports in order to use DynamicFrame.fromDF and the Spark SQL functions
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import to_timestamp, col
## Make a DataFrame out of the DynamicFrame
df_datasource0 = datasource0.toDF()
## Add a column mop_update_timestamp_ts, parsing mop_update_timestamp with the 'yyyy.mm.dd HH:mi:ss' source format
df_datasource0 = df_datasource0.withColumn('mop_update_timestamp_ts',
    to_timestamp(col('mop_update_timestamp'), 'yyyy.MM.dd HH:mm:ss'))
## Transform the DataFrame back to a DynamicFrame again
datasource0 = DynamicFrame.fromDF(df_datasource0, glueContext, "datasource0")
## Use the mop_update_timestamp_ts column instead, as below.
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [ ("mrp", "long", "mrp", "decimal(10,2)"), ("mop", "double", "mop", "decimal(10,2)"), ("mop_update_timestamp_ts", "timestamp", "mop_update_timestamp", "timestamp"), ("special_price", "long", "special_price", "decimal(10,2)"), ("promotion_identifier", "string", "promotion_identifier", "string"), ("is_percentage_promotion", "string", "is_percentage_promotion", "string"), ("promotion_value", "string", "promotion_value", "decimal(10,2)"), ("max_discount", "long", "max_discount", "decimal(10,2)"), ("promotion_start_date", "string", "promotion_start_date", "timestamp"), ("promotion_end_date", "string", "promotion_end_date", "timestamp")], transformation_ctx = "applymapping1")
Let me know if it works for you
I had a similar problem converting a string to timestamp with PySpark.
The way I did it, so that it appears in Athena with type timestamp, was to import the Spark functions with an alias to avoid name clashes, create a new column with the timestamp data type, convert the values, and finally write that column to S3, where it is picked up by Athena.
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import functions as spark_f # avoid name clash
dyf = glue_context.create_dynamic_frame.from_options...
df = dyf.toDF()
# Parse the string column into a new timestamp column
modified_df = df.withColumn("value_timestamp",
    spark_f.to_timestamp(spark_f.col("value_string"), 'yyyy/MM/dd HH:mm'))
# Convert back to a DynamicFrame so Glue can write it out
modified_dyf = DynamicFrame.fromDF(modified_df, glue_context, "modified_dyf")
Transform = ApplyMapping.apply(frame=modified_dyf,
    mappings=[
        ('value_timestamp', 'timestamp', 'column-name', 'timestamp'),
    ])

PostgresOperator in Airflow getting error while passing parameter

I have a DAG which queries the Postgres database, and I am using PostgresOperator;
however, when passing the parameter I am getting the error below.
psycopg2.ProgrammingError: column "132" does not exist
LINE 1: ...d,derived_tstamp FROM atomic.events WHERE event_name = "132"
A snapshot of my DAG is below:
default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": airflow.utils.dates.days_ago(1),
    "email": ["airflow@airflow.com"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}
dag = DAG("PostgresTest", default_args=default_args, schedule_interval='3,33 * * * *', template_searchpath=['/root/airflow/sql/'])
dailyOperator = PostgresOperator(
    task_id='Refresh_DailyScore',
    postgres_conn_id='postgress_sophi',
    params={"e_name": '"132"'},
    sql='atomTest.sql',
    dag=dag)
Snapshot of atomTest.sql
SELECT domain_userid,derived_tstamp FROM atomic.events WHERE event_name = {{ params.e_name }}
I have been hitting my head all day trying to understand why Airflow is treating the value 132 as a column.
Please suggest.
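For what it's worth, the rendered statement explains the error: in Postgres, double quotes denote identifiers (column names) and single quotes denote string literals, so "132" is parsed as a column reference. A hedged sketch of one way around it is to quote inside the SQL template and pass the bare value as the parameter:
SELECT domain_userid,derived_tstamp FROM atomic.events WHERE event_name = '{{ params.e_name }}'
with the operator passing params={"e_name": "132"}.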