Golang Postgres pq failed scanning to *string - postgresql

I'm trying to scan a PostgreSQL array column into a slice of strings. However, I'm getting the error below:
Failed creating education: sql: Scan error on column index 14, name "descriptions": unsupported Scan, storing driver.Value type string into type *[]*string
It looks like I need to customize the scanner somehow, but how do I do that with squirrel? Thanks.
Here's how I'm building the query:
squirrel.StatementBuilder.PlaceholderFormat(squirrel.Dollar).RunWith(db).Insert("educations").
    Columns("id", "school", "city", "state", "degree", "month_start", "year_start", "month_end", "year_end", "\"order\"", "logo_url", "created_at", "updated_at", "style", "descriptions").
    Values(
        uuid.Must(uuid.NewV4()).String(),
        education.School,
        education.City,
        education.State,
        education.Degree,
        education.MonthStart,
        education.YearStart,
        education.MonthEnd,
        education.YearEnd,
        education.Order,
        education.LogoURL,
        currentTime,
        currentTime,
        savedStyle.ID,
        pq.Array(education.Descriptions),
    ).
    Suffix("RETURNING *").
    Scan(
        &savedEducation.ID,
        &savedEducation.School,
        &savedEducation.City,
        &savedEducation.State,
        &savedEducation.Degree,
        &savedEducation.MonthStart,
        &savedEducation.YearStart,
        &savedEducation.MonthEnd,
        &savedEducation.YearEnd,
        &savedEducation.Order,
        &savedEducation.LogoURL,
        &savedEducation.CreatedAt,
        &savedEducation.UpdatedAt,
        &ignored,
        &savedEducation.Descriptions,
    )
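For what it's worth, pq.Array works in both directions: when given a pointer it returns a value that implements sql.Scanner, just as it implements driver.Valuer for inserts. So a likely fix (a sketch, assuming github.com/lib/pq and leaving the rest of the chain unchanged) is to wrap the scan destination the same way the inserted value is wrapped:

    Scan(
        // ...the other destinations stay exactly as above...
        &ignored,
        // pq.Array decodes the Postgres array literal into the slice
        // instead of handing database/sql the raw string.
        pq.Array(&savedEducation.Descriptions),
    )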

Related

DB2 select JSON_ARRAYAGG

Using JSON_ARRAYAGG does not work for me. I get two errors:
1) [Code: -104, SQL State: 42601] An unexpected token "ORDER" was found following "CITY)
". Expected tokens may include: ")".. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.28.11
2) [Code: -727, SQL State: 56098] An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-104", SQLSTATE "42601" and message tokens "ORDER|CITY)
|)".. SQLCODE=-727, SQLSTATE=56098, DRIVER=4.28.11
I want an output like this:
{"id": 901, "name": "Hansi", "addresses":
  [
    {"address": "A", "city": "B"},
    {"address": "C", "city": "D"}
  ]
}
I am using IBM DB2 11.1 for Linux, UNIX and Windows. Here is my query:
values (
  json_array(
    select json_object('ID' value ID,
                       'NAME' value NAME,
                       'ADDRESSES' value json_arrayagg(
                           json_object('ADDRESS' value ADDRESS,
                                       'CITY' value CITY)
                           order by ADDRESS)
                      )
    from CUSTOMER
    join CUSTOMER_ADDRESS on ADDRESS_CUSTOMER_ID = ID
    group by ID, NAME
    format json
));
The tables used are:
CUSTOMER: ID (INT), NAME (VARCHAR(64))
CUSTOMER_ADDRESS: ADDRESS (VARCHAR(64)), CITY (VARCHAR(64))
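Both error messages stop at the same token: the parser rejects ORDER inside JSON_ARRAYAGG, which suggests this particular Db2 11.1 fix pack does not accept an ORDER BY clause in that position. A workaround sketch that gives up the explicit element order (key names lowercased here to match the desired output):

values (
  json_array(
    select json_object('id' value ID,
                       'name' value NAME,
                       'addresses' value json_arrayagg(
                           json_object('address' value ADDRESS,
                                       'city' value CITY))
                      )
    from CUSTOMER
    join CUSTOMER_ADDRESS on ADDRESS_CUSTOMER_ID = ID
    group by ID, NAME
    format json
));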

How to use result of Lookup Activity in next Lookup of Azure Data Factory?

I have Lookup "Fetch Customers" with SQL statement:
Select Count(CustomerId) As 'Row_count', Min(sales_amount) as 'Min_Sales' From [sales].[Customers]
It returns the values:
10, 5000
Next I have a Lookup "Update Min Sales" with this SQL statement, but I get an error:
Update Sales_Min_Sales
SET Row_Count = @activity('Fetch Customers').output.Row_count,
Min_Sales = @activity('Fetch Customers').output.Min_Sales
Select 1
The same error occurs even if I set the Lookup to
Select @activity('Fetch Customers').output.Row_count
Error:
A database operation failed with the following error: 'Must declare the scalar variable
"@activity".',Source=,''Type=System.Data.SqlClient.SqlException,Message=Must declare the
scalar variable "@activity".,Source=.Net SqlClient Data
Provider,SqlErrorNumber=137,Class=15,ErrorCode=-2146232060,State=2,Errors=
[{Class=15,Number=137,State=2,Message=Must declare the scalar variable "@activity".,},],'
I have a similar setup to yours: two Lookup activities.
The first lookup returns the min and max IDs, as shown:
{
    "count": 1,
    "value": [
        {
            "Min": 1,
            "Max": 30118
        }
    ],
    "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (East US)",
    "billingReference": {
        "activityType": "PipelineActivity",
        "billableDuration": [
            {
                "meterType": "AzureIR",
                "duration": 0.016666666666666666,
                "unit": "DIUHours"
            }
        ]
    },
    "durationInQueue": {
        "integrationRuntimeQueue": 22
    }
}
In my second lookup I use the expression below:
Update saleslt.customer set somecol=someval where CustomerID=@{activity('Lookup1').output.Value[0]['Min']}
Select 1 as dummy
The key is to access the lookup output through the value array using indices, as shown, and to place the activity expression inside @{...} so that it is interpolated into the query text.
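A related convenience, assuming "First row only" is enabled on the first Lookup: its output is then exposed as a firstRow object instead of a value array, so the same query can be written without indices:

Update saleslt.customer set somecol=someval where CustomerID=@{activity('Lookup1').output.firstRow.Min}
Select 1 as dummy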

How to insert 'NULL' values for 'int' column types in an Aurora PostgreSQL db using the Python boto3 client

I have a CSV file (an MS SQL Server table export) and I would like to import it into an Aurora Serverless PostgreSQL database table. I did a basic preprocessing pass over the CSV file to replace all of the NULL values in it (i.e. '') with the string "NULL". The file looks like this:
CSV file:
ID,DRAW_WORKS
10000002,NULL
10000005,NULL
10000004,FLEXRIG3
10000003,FLEXRIG3
The PostgreSQL table has the following schema:
CREATE TABLE T_RIG_ACTIVITY_STATUS_DATE (
    ID varchar(20) NOT NULL,
    DRAW_WORKS_RATING int NULL
)
The code I am using to read and insert the CSV file is the following:
import boto3
import csv

rds_client = boto3.client('rds-data')
...

def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None):
    parameters = {
        'secretArn': db_credentials_secrets_store_arn,
        'database': database_name,
        'resourceArn': db_cluster_arn,
        'sql': sql,
        'parameterSets': sql_parameter_sets
    }
    if transaction_id is not None:
        parameters['transactionId'] = transaction_id
    response = rds_client.batch_execute_statement(**parameters)
    return response

transaction = rds_client.begin_transaction(
    secretArn=db_credentials_secrets_store_arn,
    resourceArn=db_cluster_arn,
    database=database_name)

sql = 'INSERT INTO T_RIG_ACTIVITY_STATUS_DATE VALUES (:ID, :DRAW_WORKS);'

parameter_set = []
with open('test.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')
    for row in reader:
        entry = [
            {'name': 'ID', 'value': {'stringValue': row['ID']}},
            {'name': 'DRAW_WORKS', 'value': {'longValue': row['DRAW_WORKS']}}
        ]
        parameter_set.append(entry)

response = batch_execute_statement(
    sql, parameter_set, transaction['transactionId'])
However, the returned error suggests a type mismatch:
Invalid type for parameter parameterSets[0][5].value.longValue,
value: NULL, type: <class 'str'>, valid types: <class 'int'>
Is there a way to configure Aurora to accept NULL values for types such as int?
Reading the boto3 documentation more carefully, I found that we can set the isNull value to True when a field is NULL. The code snippet below shows how to insert a null value into the database:
...
entry = [
    {'name': 'ID', 'value': {'stringValue': row['ID']}}
]
if row['DRAW_WORKS'] == 'NULL':
    entry.append({'name': 'DRAW_WORKS', 'value': {'isNull': True}})
else:
    # The parameter name must match the :DRAW_WORKS placeholder in the SQL.
    entry.append({'name': 'DRAW_WORKS', 'value': {'longValue': int(row['DRAW_WORKS'])}})
parameter_set.append(entry)
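A side note: once isNull is in play, the preprocessing pass that rewrites '' to "NULL" is not strictly needed. A sketch that handles both spellings directly, using the same row dict as above:

value = ({'isNull': True}
         if row['DRAW_WORKS'] in ('', 'NULL')
         else {'longValue': int(row['DRAW_WORKS'])})
entry.append({'name': 'DRAW_WORKS', 'value': value})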

Elixir ecto: check if postgresql map column has key

To use the json/jsonb data types, Ecto suggests using fragments.
In my case, I have to use the PostgreSQL ? operator to check whether the map has a given key. However, that ends up as something like:
where(events, [e], e.type == 1 and not fragment("???", e.qualifiers, "?", "2"))
but of course fragment reads the PostgreSQL ? as a placeholder. How can I check whether the map has such a key?
You need to escape the middle ? and pass a total of three arguments to fragment:
fragment("? \\? ?", e.qualifiers, "2")
Demo:
iex(1)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{price: 1}}
iex(2)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{}}
iex(3)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "price"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'price') [] OK query=8.0ms
[%MyApp.Food{__meta__: #Ecto.Schema.Metadata<:loaded>, id: 1,
inserted_at: #Ecto.DateTime<2016-06-19T03:51:40Z>, meta: %{"price" => 1},
name: "Foo", updated_at: #Ecto.DateTime<2016-06-19T03:51:40Z>}]
iex(4)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "a"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'a') [] OK query=0.8ms
[]
I'm not sure if this is documented anywhere, but I found the method from this test.
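An alternative that sidesteps the escaping entirely, assuming the column is jsonb: PostgreSQL exposes the ? operator as the function jsonb_exists, which fragment can call with ordinary placeholders:

iex(5)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("jsonb_exists(?, ?)", f.meta, "price"))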

While saving a collection, MongoDB creates an index name that is too long and exceeds the 127-byte limit. How do I solve this? Can I disable indexing?

com.mongodb.CommandFailureException: { "serverUsed" : "localhost:27017" , "createdCollectionAutomatically" : true , "numIndexesBefore" : 1 , "ok" : 0.0 , "errmsg" : "namespace name generated from index name \"NDS.ABCD_pre_import.$importabilityEvaluations.perNameResults.straightImportResults.resultPolContent_NOT_IN_CURRENT_USE.officialPolResultNameContentId\" is too long (127 byte max)" , "code" : 67}
at com.mongodb.CommandResult.getException(CommandResult.java:76)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:131)
at com.mongodb.DBCollectionImpl.createIndex(DBCollectionImpl.java:362)
at com.mongodb.DBCollection.createIndex(DBCollection.java:563)
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:136)
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForAndCreateIndexes(MongoPersistentEntityIndexCreator.java:129)
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForIndexes(MongoPersistentEntityIndexCreator.java:121)
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.onApplicationEvent(MongoPersistentEntityIndexCreator.java:105)
at org.springframework.data.mongodb.core.index.MongoMappingEventPublisher.publishEvent(MongoMappingEventPublisher.java:60)
at org.springframework.data.mapping.context.AbstractMappingContext.addPersistentEntity(AbstractMappingContext.java:306)
at org.springframework.data.mapping.context.AbstractMappingContext.getPersistentEntity(AbstractMappingContext.java:180)
at org.springframework.data.mapping.context.AbstractMappingContext.getPersistentEntity(AbstractMappingContext.java:140)
at org.springframework.data.mapping.context.AbstractMappingContext.getPersistentEntity(AbstractMappingContext.java:67)
at org.springframework.data.mongodb.core.MongoTemplate.determineCollectionName(MongoTemplate.java:1881)
at org.springframework.data.mongodb.core.MongoTemplate.determineEntityCollectionName(MongoTemplate.java:1868)
at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:825)
You can pass an index name as a parameter to ensureIndex:
db.collection.ensureIndex({"birds.parrots.macaw.blue.id": 1}, {name: "myIndex1"});
db.collection.ensureIndex({"birds.parrots.macaw.blue.id": 1, "field2": 1}, {name: "myIndex2"});
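Since the stack trace shows Spring Data's MongoPersistentEntityIndexCreator building the index from mapping annotations, the same idea can be applied at the annotation level: give the index an explicit short name. A sketch with a hypothetical entity; the field name is taken from the error message:

import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class ImportabilityEvaluation {
    // Without an explicit name, Spring Data derives the index name from the
    // dotted field path, which is what overflows the 127-byte namespace limit.
    @Indexed(name = "officialPolNameIdx")
    private String officialPolResultNameContentId;
}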
Recording my case here for others' reference.
My code:
indexKeyList = [
    ("shortLink", pymongo.ASCENDING),
    ("parsedLink.isParseOk", pymongo.ASCENDING),
    ("parsedLink.errType", pymongo.ASCENDING),
    ("parsedGame.realGameName", pymongo.ASCENDING),
    ("parsedGame.gameTheme", pymongo.ASCENDING),
]
mongoCollectionShortlink.create_index(indexKeyList)
The reported error:
Exception occurred: OperationFailure
namespace name generated from index name "shortLink.gameShortLink.$shortLink_1_parsedLink.isParseOk_1_parsedLink.errType_1_parsedGame.realGameName_1_parsedGame.gameTheme_1" is too long (127 byte max), full error: {'ok': 0.0, 'errmsg': 'namespace name generated from index name "shortLink.gameShortLink.$shortLink_1_parsedLink.isParseOk_1_parsedLink.errType_1_parsedGame.realGameName_1_parsedGame.gameTheme_1" is too long (127 byte max)', 'code': 67, 'codeName': 'CannotCreateIndex'}
Root cause: passing a list of keys to create_index creates a single compound index, and the auto-generated name joins all of the key names together, making the namespace exceed the 127-byte limit.
Solution: use create_indexes with explicitly named IndexModel objects. (Note that this builds five separate single-field indexes; if a single compound index is what you actually want, passing a short explicit name to create_index avoids the limit as well.)
import pymongo
from pymongo import IndexModel

indexShortLink = IndexModel([("shortLink", pymongo.ASCENDING)], name="shortLink")
indexIsParseOk = IndexModel([("parsedLink.isParseOk", pymongo.ASCENDING)], name="parsedLink_isParseOk")
indexErrType = IndexModel([("parsedLink.errType", pymongo.ASCENDING)], name="parsedLink_errType")
indexRealGameName = IndexModel([("parsedGame.realGameName", pymongo.ASCENDING)], name="parsedGame_realGameName")
indexGameTheme = IndexModel([("parsedGame.gameTheme", pymongo.ASCENDING)], name="parsedGame_gameTheme")

indexModelList = [
    indexShortLink,
    indexIsParseOk,
    indexErrType,
    indexRealGameName,
    indexGameTheme,
]
mongoCollectionShortlink.create_indexes(indexModelList)
You cannot disable indexing, as MongoDB always creates an index on _id. Shorten your collection name instead; it saves you some typing too.
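One more data point, with the caveat that it depends on the server version: the 127-byte restriction here is the old index namespace length limit, and MongoDB 4.2 (with featureCompatibilityVersion 4.2 or greater) removed it, so upgrading also makes this error go away.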