Active Record generating a wrong query - PostgreSQL

I'm trying to do an extremely simple Active Record query:
@row = Blazer.where(username: 'test').first
Checking the SQL that gets generated:
Blazer.where(username: 'test').to_sql
outputs:
SELECT "blazer".* FROM "blazer" WHERE "username"."value" = 'test'
which causes an error:
ERROR -- : PG::UndefinedTable: ERROR: missing FROM-clause entry for table "username"
I expected the following PostgreSQL result:
SELECT "blazer".* FROM "blazer" WHERE "username" = 'test'
How can I fix this?
I'm using Active Record with Sinatra, and the app runs on Heroku.
My Blazer class is the following:
class Blazer < ActiveRecord::Base
# phone:string username:string location:string
end

Maybe this is not the correct answer, but I was getting the same error.
I was looking for a Venue with a certain name:
venue_name = params.require(:season).permit([:venue_name])
venue = Venue.find_by_name(venue_name)
I also tried with
venue = Venue.where(name: venue_name).first
Both failed because it was trying this query:
SELECT "venues".* FROM "venues" WHERE "name"."venue_name" = 'venue name' LIMIT 1
When I tried this in the console everything looked OK. But when I checked what I was actually searching for, the problem was quite obvious:
params.require(:season).permit([:venue_name])
This returns a Hash with venue_name as its only key, so instead of searching for a value I was searching for a Hash.
I switched to params.require(:season).permit([:venue_name])[:venue_name] and the issue was fixed.
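The same diagnosis likely applies to the original question: when the value handed to where is itself a Hash, Active Record treats the key as a table name and the nested key as a column, which is exactly what produces "username"."value". A minimal sketch of the pattern (the nested-hash shape is my reconstruction, not confirmed by the asker):
# Passing a Hash as the value makes Active Record build a table.column condition:
Blazer.where(username: { value: 'test' }).to_sql
# => SELECT "blazer".* FROM "blazer" WHERE "username"."value" = 'test'
# Extract the scalar first, then query:
Blazer.where(username: 'test').to_sql
# => SELECT "blazer".* FROM "blazer" WHERE "blazer"."username" = 'test'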

Related

How to select by ID, with a different column name, in TypeORM?

How can I get a camp's name back as "campName" (an alias, rather than the actual column name in the PostgreSQL table) using TypeORM? Right now I am using a plain SQL query like the one below:
SELECT
camp.name as "campName"
FROM
campaigns camp
WHERE
camp.id = 13
but when I use TypeORM's findOne() function, I always get the camp name as "name" (the actual column name).
TypeORM function:
const user = await this.campRepository.findOne({
  where: { id: 13 }
});
I tried the TypeORM function above, and I get all the data, but for example: name: camp one.
Is there a way to get it as campName: camp one?
EXAMPLE TABLE: camp
id | name     | place
13 | camp one | LA
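One approach that should return the alias (a sketch, not from the thread; it assumes the repository setup shown above) is to go through the query builder, since raw results preserve SELECT aliases while find/findOne always map back to the entity's property names:
// Select camp.name under the alias "campName" and read the raw row,
// which keeps the alias instead of remapping to entity properties.
const camp = await this.campRepository
  .createQueryBuilder('camp')
  .select('camp.name', 'campName')
  .where('camp.id = :id', { id: 13 })
  .getRawOne();
console.log(camp); // { campName: 'camp one' }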

Mongo's bulkWrite with updateOne + upsert works the first time but gives duplicate key error subsequent times

I'm using the Mongo-php-library to insert many documents into my collection (using bulkWrite). I want each document to be updated if it already exists or inserted if it doesn't, so I'm using "upsert = true".
The code works fine the first time I run it (it inserts the documents), but the second time I run it, it gives me this error:
Fatal error: Uncaught MongoDB\Driver\Exception\BulkWriteException: E11000 duplicate key error collection: accounts.posts index: postid dup key: { id: "2338...
I can't see anything wrong with my code. I have already gone through all the related SO posts, but none helped.
This is my code:
// I prepare the array $posts_operations with all the updateOne operations,
// where $data is an object that contains all the document elements I want to insert
$posts_operations = array();
foreach ($this->posts as $id => $data) {
    array_push($posts_operations, array('updateOne' => [['id' => $id], ['$set' => $data], ['upsert' => true]]));
}
// Then I execute the method bulkWrite to run all the updateOne operations
$insertPosts = $account_posts->bulkWrite($posts_operations);
It works the first time (when it inserts), but then it doesn't the second time (when it should update).
I have a unique index set up in the collection for 'id'.
Thanks so much for your help.
OK, I was able to fix it. I believe this might be a bug and I've already reported it in the GitHub repo.
The problem occurred only when "id" was a string of numbers. Once I converted "id" (the field I was indexing) to an integer, it worked perfectly.
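A sketch of that change in the original loop (the cast is the only difference; my reading is that when the stored "id" is an integer and the filter value is a string, the filter matches nothing, so the upsert attempts a fresh insert and collides with the unique index):
// Cast the key to int so the updateOne filter matches the stored value
$posts_operations = array();
foreach ($this->posts as $id => $data) {
    array_push($posts_operations, array('updateOne' => [['id' => (int) $id], ['$set' => $data], ['upsert' => true]]));
}
$insertPosts = $account_posts->bulkWrite($posts_operations);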

Can't insert string to Delta Table using Update in Pyspark

I have encountered an issue where it will not allow me to insert a string using update, and returns an error. I'm running Databricks Runtime 6.5 (includes Apache Spark 2.4.5, Scala 2.11), and it is not working on the 6.4 runtime either.
I have a Delta table with the following columns, partitioned by the created date:
ID string
, addressLineOne string
, addressLineTwo string
, addressLineThree string
, addressLineFour string
, matchName string
, createdDate
And I'm running a process that hits an API and updates the matchName column.
Using PySpark, if I do this, just to test writing:
deltaTable.update(col("ID") == "ABC123", {"matchName ": "example text"})
I get the following error:
Py4JJavaError: An error occurred while calling o1285.update.
: org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to dataType on unresolved object, tree: 'example
If I try this, changing the string to 123, it updates without an issue:
deltaTable.update(col("ID") == "ABC123", {"matchName ": "123"})
Yet if I use SQL and do
UPDATE myTable SET matchName = "Some text" WHERE ID = "ABC123"
it updates fine. I've searched and can't find a similar issue. Any suggestions? Have I missed something obvious?
Looks like you have an extra space after matchName in your Python code.
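For reference, a corrected call might look like this (a sketch; note that DeltaTable.update parses plain-string values in the set map as SQL expressions, so wrapping the text in lit() keeps it a string literal — that detail is my assumption, not something stated in the thread):
from delta.tables import DeltaTable
from pyspark.sql.functions import col, lit

deltaTable = DeltaTable.forPath(spark, "/path/to/table")  # path is illustrative
# Key without the trailing space; lit() keeps the value a string literal
deltaTable.update(col("ID") == "ABC123", {"matchName": lit("example text")})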

Sequelize update function expects all fields to be provided

I'm using Sequelize 5.8.5 and trying to update a model, but it doesn't seem to be possible unless all fields are provided. For example, when performing Project.update(args), args could sometimes have a name field with a changed value and sometimes not include it at all. If the name field doesn't need to be updated, I get an error such as err: { SequelizeValidationError: notNull Violation: projects.name cannot be null ... }.
This kind of partial update is possible in Sequelize. Here's a test I ran:
let u1 = {username : 'Test User', title_id: 4};
let u2 = {title_id: 4};
User.update(u1, {where : {id : 3}});
User.update(u2, {where : {id : 5}});
Here is the generated SQL:
Executing (default): UPDATE `muser` SET `username`='Test User',`title_id`=4 WHERE `id` = 3
Executing (default): UPDATE `muser` SET `title_id`=4 WHERE `id` = 5
How are you creating your args object? The error suggests something like this: let args = {field1: null} where Project.field1 does not allow null values.
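If that is the case, one workaround (a sketch; the field names are illustrative) is to drop null/undefined keys before calling update, so only the fields that actually changed are sent:
// Strip keys whose value is null or undefined
const args = { name: null, title_id: 4 };   // illustrative input
const changes = Object.fromEntries(
  Object.entries(args).filter(([, v]) => v != null)
);
await Project.update(changes, { where: { id: 3 } });
// Generated SQL only sets `title_id`, leaving `name` untouched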

How to update an embedded column's property in OrientDB

I am facing an issue while updating an embedded field's property in OrientDB.
Below are the steps to reproduce the issue:
CREATE VERTEX Foo set value = { 'abc-def-hgi':"blah blah", '1ab-2cd-3ef': "aaaaa", '345-jkl-mno':'ppppp' }, id = 1
CREATE VERTEX Foo set value = { 'abc-def-hgi':"mmmmm", '1ab-2cd-3ef': "nmnmnmn", '345-jkl-mno':'qqqq' }, id = 2
CREATE VERTEX Foo set value = { 'abc-def-hgi':"lorem ipsum", '1ab-2cd-3ef': "mmmmm", '345-jkl-mno':'llll' }, id = 3
Property "value" has been declared as of type "Embedded".
Now, I want to update record with id "1" for "abc-def-hgi" property in column "value".
I have tried with below queries, but neither of them worked:
update Foo set value["abc-def-hgi"] = "new new" where id = 1
update Foo set value.abc-def-hgi = "new new" where id = 1
It seems that it is having a problem with the hyphen ("-") in the property name.
I am using OrientDB version 2.2.11.
Note: I have looked through the issues in the OrientDB GitHub repo, where I found this. Not sure whether it is related to my issue or not, but it's not working on my end.
Any help would be greatly appreciated.
As you said yourself, the problem is the -.
If you try a field without -, the following query works.
Example
update Foo set value.prop = "myprop1" where id=1
If you try to create a field with -, you get an exception.
UPDATE
To create a property with a hyphen you can use this command:
create property foo.`abc-def-hgi` string
Hope it helps
Regarding the usage of '-' in property names: you can use the backquotes as Alessandro says, or disable the "Strict" option on the database (in that case you are rolling back to the old parser, which was a little less rigid).
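Putting those together, the failing statement from the question should work with the property name backquoted (a sketch in OrientDB SQL; I have not verified it against 2.2.11):
update Foo set value.`abc-def-hgi` = "new new" where id = 1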