Azure Cosmos DB error: pymongo.errors.OperationFailure Bad Request (400) when sorting on a column - azure-cosmosdb-mongoapi

I have data in Azure Cosmos DB. While sorting on a column with the following query:
db.getCollection('xyz').find({}).sort({'created_at':-1,'_id':-1}).limit(10)
I get the following error (the ActivityId is masked):
pymongo.errors.OperationFailure: Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId:xyz; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId:xyz; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: xyz; Reason: (Message: {"Errors":["The index path corresponding to the specified order-by item is excluded."]}

As the error states, the reason behind the Bad Request is: "The index path corresponding to the specified order-by item is excluded." Hence, you have to add a matching index, or a composite index, for the sort query being attempted.
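For example, a compound index covering both sort keys can be created with pymongo. This is only a sketch: the connection string and database name are placeholders, and only the collection and field names are taken from the question.

from pymongo import MongoClient, DESCENDING

# Placeholder connection string for the Cosmos DB account.
client = MongoClient("mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/?ssl=true")
collection = client["mydb"]["xyz"]  # "mydb" is a placeholder database name

# Create a compound index matching the sort keys of the failing query.
collection.create_index([("created_at", DESCENDING), ("_id", DESCENDING)])

# The original sort should now be served by the index.
docs = list(collection.find({}).sort([("created_at", -1), ("_id", -1)]).limit(10))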
Azure Cosmos DB does not host the MongoDB engine; it implements the MongoDB wire protocol (versions 4.0 and 3.6, with legacy support for version 3.2). That is why errors like this one are specific to Azure Cosmos DB's API for MongoDB.
For more detailed information, please refer to "Common errors and solutions" in the Azure Cosmos DB documentation.

Related

WSO2 API-Manager with Postgres database is not working properly

I have switched the default H2 database to PostgreSQL for WSO2 API Manager by following this documentation: https://apim.docs.wso2.com/en/latest/install-and-setup/setup/setting-up-databases/changing-default-databases/changing-to-postgresql/
Creating a new API now throws:
"Something went wrong while getting the Revisions!"
On server found this error
ERROR - ApiMgtDAO Failed to get API Revision deployment mapping details for api uuid: a96f7266-c340-49b6-bbe1-cb252b49860e
org.postgresql.util.PSQLException: ERROR: UNION types integer and boolean cannot be matched
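For context, PostgreSQL raises this error whenever a UNION combines an integer column with a boolean one. A minimal reproduction, as a sketch assuming a psycopg2 connection to any PostgreSQL database (connection parameters are placeholders):

import psycopg2

# Placeholder connection parameters; any PostgreSQL database will do.
conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres", password="postgres")
cur = conn.cursor()

# One branch yields an integer, the other a boolean, mirroring the failing pattern.
# PostgreSQL responds with: ERROR: UNION types integer and boolean cannot be matched
cur.execute("SELECT 1 AS flag UNION SELECT true AS flag")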
Any help would be greatly appreciated... Thanks...

MongoDB - Robo3t: Failed to do query, no good nodes, Field 'cursor' must be a nested object

When viewing documents of a collection and trying to move between pages using the "left" and "right" arrow buttons, I suddenly started to get the following error:
Failed to load documents.
Error:
Failed to do query, no good nodes in MyCluster-shard-0, last error: can't query replica set node mycluster-shard-xx-xx.xxx.xxx.net:27017 :: caused by :: Field 'cursor' must be a nested object in: { conversationId: 7, done: false, payload: BinData(0, 723D424753514D4F432F494C776E73765A7263356774622F42564B695A62746F45523832456A5244475473346E30616B4B597938686352413D3D2C733D6F52614C316438586F...), ok: 1 }
Any idea why this is happening?
I am using Robo3T 1.4 on Ubuntu and Windows 10 - the same thing happens on both OSes.

MongoDB Error when querying a capped collection

I need some help interpreting/resolving this error:
OperationFailure: Executor error during find command :: caused by :: errmsg: "CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(225404776)"
which occurs when I run this command:
mongodb_connection["databaseName"]["cappedCollectionName"].find(query)
The MongoDB instance is a single (standalone) instance, and we are querying a capped collection. The query is looking at recent data, which should still be in the DB (and not yet overwritten via the cap).
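For reference, a self-contained version of the failing call looks like this; the connection string, names, and query below are placeholders standing in for the ones in the post:

from datetime import datetime, timedelta
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
collection = client["databaseName"]["cappedCollectionName"]

# Placeholder "recent data" filter.
query = {"created_at": {"$gte": datetime.utcnow() - timedelta(hours=1)}}

try:
    docs = list(collection.find(query))
except OperationFailure as exc:
    # The CollectionScan error from the post surfaces here when the capped
    # collection wraps past the cursor's position during the scan.
    print(exc.details)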
Thanks!

com.mongodb.MongoQueryException: Query failed with error code 13

We are getting com.mongodb.MongoQueryException:
> Query failed with error code 13 while connecting to MongoDB through
> spring-data.
MongoDB version 3.x
Spring 4.1.6, mongo-java-driver - 3.0.2, spring-data-commons - 1.10.0.RELEASE, spring-data-mongodb - 1.7.0.RELEASE
Unable to run the find query on a collection.
I am able to view the collection in a GUI using the same credentials.
Any help would be appreciated.
Here is the full exception:
> org.springframework.data.mongodb.UncategorizedMongoDbException: Query
> failed with error code 13 and error message 'not authorized for query
> on <db.table>' on server xxx; nested exception is
> com.mongodb.MongoQueryException: Query failed with error code 13 and
> error message 'not authorized for query on db.table on server xxx
> at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:96)
> at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2002)
> at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1885)
Check the mongo XSD for the 2.6 vs. 3.0 Java drivers - they are different. It seems you are still using the old way to authenticate:
<mongo:db-factory dbname="${mongo.database}" username="${mongo.user}"
password="${mongo.pwd}" mongo-ref="mongo"/>
This works with the 2.6 Java driver only, not with the 3.0 Java driver.
Use the mongo-client option with the credentials attribute instead:
<mongo:mongo-client replica-set="${mongo.replica-set}" credentials="${mongo.user}:${mongo.pwd}@${mongo.database}"/>
The credentials attribute takes a comma-delimited list of username:password@database entries to use for authentication. Appending ?uri.authMechanism allows you to specify the authentication challenge mechanism. If the credential you're trying to pass contains a comma itself, quote it with single quotes: '…'.

S3ServiceException when using AWS RedshiftBasicEmitter

I am using the sample AWS Kinesis/Redshift code from GitHub. I ran the code in an EC2 instance and ran into the following exception. Note that emitting from Kinesis to S3 actually succeeded, but emitting from S3 to Redshift failed. As both emitters in the same program used the same credentials, I am very puzzled why only one of them failed.
I understand that most people getting the "The AWS Access Key Id you provided does not exist in our records" exception probably have issues with setting up the S3 key pair properly. But that does not seem to be the case here, since emitting to S3 succeeded. If the credentials did not have read access, it should throw an authorization error instead.
Please comment if you have any insight.
Mar 16, 2014 4:32:49 AM com.amazonaws.services.kinesis.connectors.s3.S3Emitter emit
INFO: Successfully emitted 31 records to S3 in s3://mybucket/495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter executeStatement
SEVERE: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mfpredshift prefix=49536256597873342637951299872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter emit
SEVERE: java.io.IOException: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mybucket prefix=495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------
I encountered the same errors. I'm using an IAM role to get credentials. In my case, it was solved by modifying RedshiftBasicEmitter to add ;token=TOKEN to the CREDENTIALS parameter (in the end I created my own IEmitter).
See http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
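A minimal sketch of the idea (not the connector's actual code; table and bucket names are placeholders): fetch the temporary credentials of the instance's IAM role and include the session token in the COPY command's CREDENTIALS string.

import boto3

# Temporary credentials from the IAM role attached to the EC2 instance.
creds = boto3.Session().get_credentials().get_frozen_credentials()

# COPY with a session token; without ";token=..." Redshift rejects temporary keys
# with "The AWS Access Key Id you provided does not exist in our records".
copy_sql = (
    "COPY my_table "                   # placeholder table name
    "FROM 's3://mybucket/my-prefix' "  # placeholder S3 location
    "CREDENTIALS 'aws_access_key_id={};aws_secret_access_key={};token={}' "
    "DELIMITER '|'"
).format(creds.access_key, creds.secret_key, creds.token)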