"App init setup failed: a project already exists" MongoDB Realm App - mongodb

I have an error message from the MongoDB Realm CLI that I do not know how to fix.
https://docs.mongodb.com/realm/cli/realm-cli-apps-create/
When I write the following command in Terminal:
realm-cli apps init -n "test"
I get the error message "app init setup failed: a project already exists"
I previously had a project named "test", but I deleted it (by simply deleting the folder, which might have been the mistake), and I still get the error message. The error now occurs every time, regardless of the name or the path/folder.
If realm-cli push is used, it seems to pick up the old "test" application, since the name is already filled in when going through the [options].
https://docs.mongodb.com/realm/cli/realm-cli-push/
If I push, it deploys the test application, and if that is deleted through either the CLI or the GUI, I am back at the problem mentioned at the start.
Where do I go from here? Is the application somehow stored as a draft, making it impossible to create another one before it is discarded, or am I missing something?
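A minimal check that narrows this down, assuming the "a project already exists" error is driven by a leftover project config file (for example realm_config.json from a previous init or export) rather than by anything stored server-side:
# Look for a stray Realm project config in the working directory and its parents
# (realm_config.json is an assumption about the file the CLI checks for):
ls -la realm_config.json ../realm_config.json ../../realm_config.json 2>/dev/null
# If one turns up and the old "test" app is really gone, remove it and retry:
# rm realm_config.json
realm-cli apps init -n "test"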

Related

Amplify gets an appClientId from nowhere and now can't update the stack

I'm developing an application using Amplify.
Everything was going fine. I made some changes in my dev environment to add social login, and it was working fine locally.
Then, when I tried to deploy using Amplify Console CD, it failed; after digging into it, I found a solution using a custom script for the Amplify simple push.
Just to put this in context.
After getting everything working again, I was happy to push my changes to staging.
So I changed my branch, checked out the staging environment, and tried to push.
And then I got stuck on an error saying that it can't find the AppClientID:
Resource Name: XXXXXXXXXXX (AWS::Cognito::UserPoolClient)
Event Type: update
Reason: User pool client does not exist. (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: YYYYYYYYYYYYYYYYYY
URL: https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/xxxxxxxxxxx
The URL leads to a "The page you are looking for does not exist" page.
The client ID, true, doesn't exist, and I have no idea why it is trying to update it.
So I looked at both
amplify/#current-cloud-backend/amplify-meta.json
and
amplify/backend/amplify-meta.json
Both contain a line like this (in the auth -> output section):
"AppClientID": "XXXXXXXXXX"
The #current-cloud-backend folder is supposed to come from the cloud, so I'm not supposed to touch it, but I have no idea how it got that ID; the dev app client ID is not this value either.
So, I tried changing the value (in the amplify/backend/amplify-meta.json file) to:
"AppClientID": "MY-VALID-ID"
and then pushing again.
But the error continues, and amplify/backend/amplify-meta.json gets updated with the wrong ID again.
Any idea what might be causing it and how to fix it?
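A quick way to see the mismatch described above (a sketch; it assumes jq is available and that the auth resource sits under auth -> <resource> -> output, as in the snippets above):
# Print the AppClientID Amplify thinks is in the cloud vs. the local backend value:
jq '.auth[].output.AppClientID' 'amplify/#current-cloud-backend/amplify-meta.json'
jq '.auth[].output.AppClientID' 'amplify/backend/amplify-meta.json'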

Invalid permissions after setting gcloud caching use_kaniko?

I encountered a strange permissions error while building Docker images on the cloud. I switched to another machine, installed gcloud, ran gcloud init, and everything worked again.
However, I noticed that building images took much longer, because I hadn't enabled the Kaniko cache (which I figured out from this post: gcloud rebuilds complete container but Dockerfile is the same, only the script has changed).
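For reference, the cache in question is the one controlled by the gcloud property in the title; a minimal sketch of how it is turned on (property names from the Cloud Build docs):
# Enable the Kaniko layer cache for gcloud builds submit:
gcloud config set builds/use_kaniko True
# Optionally control how long cached layers are kept, in hours:
gcloud config set builds/kaniko_cache_ttl 6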
After enabling this feature, I tried to rebuild my last image and bam, the same error message:
Status: Downloaded newer image for gcr.io/kaniko-project/executor:latest
gcr.io/kaniko-project/executor:latest
error checking push permissions --
make sure you entered the correct tag name, and that you are authenticated correctly, and try again:
checking push permission for "eu.gcr.io/pipeline/tree-par": creating push check transport for eu.gcr.io failed:
GET https://eu.gcr.io/v2/token?scope=repository%3pipeline%2Ftree-par%3Apush%2Cpull&service=eu.gcr.io:
UNAUTHORIZED: You don't have the needed permissions to perform this operation, and you may have invalid credentials.
To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
ERROR
ERROR: build step 0 "gcr.io/kaniko-project/executor:latest" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build bad4a9a4-054d-4ad7-991d-e5aeae039b7c completed with status "FAILURE"
Does anyone have any idea why this failed once the Kaniko cache was enabled? I'd hate not to use it, because while it still worked it really decreased the time it took to create Docker images.
It seems that the issue comes from Kaniko's end.
Three days ago, on version v0.21.0, they added this fix:
Fix: GCR credential helper check does not respect DOCKER_CONFIG environment variable
Even after this release, one day later, an issue was reported where users saw a very similar error message:
"[...] You don't have the needed permissions to perform this operation, and you may have invalid credentials[...] "
This was already fixed yesterday with the release of v0.22.0. The suggested workaround is to reference the pinned image:
gcr.io/kaniko-project/executor:v0.22.0
I would suggest using that image instead of executor:latest to force the use of the v0.22.0 version.
I hope this is helpful! :)
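One way to pin that executor version when using gcloud builds submit is to drive the build from an explicit cloudbuild.yaml (a sketch, not from the original answer; the destination tag is the one from the error message above and --cache=true re-enables layer caching):
# Hypothetical cloudbuild.yaml pinning the executor to v0.22.0:
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/kaniko-project/executor:v0.22.0'
  args:
  - --destination=eu.gcr.io/pipeline/tree-par
  - --cache=true
EOF
gcloud builds submit --config cloudbuild.yaml .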

Azure batch Application package not getting copied to Working Directory of Task

I have created an Azure Batch pool with a Linux machine and specified an application package for the pool.
My command line is:
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
and the task fails with:
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the application package files are present there.
How do I make sure that the files from the application package are available in the working directory, or how can I invoke/execute files from the application package on the command line?
Make sure that your async operations have proper awaits in place before you start using the package in your code.
Also, please share your design / pseudo-code scenario and how you are approaching it as a design.
Further to add:
It seems this one is a pool-level package.
The error looks like the application environment variable is either used incorrectly or there is some other user-level issue. Please check out the link below, especially the section where the use of the environment variable is described.
This seems like a user-level issue because, if there were an error downloading the package resource, it would be visible to you via an exception handler, at the tool level if you are using Batch Explorer / BatchLabs, or through code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason / rationale:
If the pool-level or task-level application has an error, an error list will come back; if there was an error in the application package, it will be returned as a UserError or an AppPackageError, which will be visible in the code's exception handler.
Key point: you can always remote into your node and check the package availability; more information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people out, so that resource might help you see how it is used.
Hope the rest helps.
On Linux, the application package with version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
where {0} is the application name and {1} is the version.
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: just saw this part of the question: "how can I invoke/execute files from the application package on the command line?"
Yes you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node you will see the environment variables that have been set.
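A short sketch of those checks on the node (the application id scriptv1_1 and the script path are taken from the question):
# Confirm the package environment variable exists and see what was unzipped there:
env | grep AZ_BATCH_APP_PACKAGE
ls -R "$AZ_BATCH_APP_PACKAGE_scriptv1_1"
# If tasks/XXX/get_XXXXX_data.py shows up in that listing, the original
# command line should resolve, e.g.:
python3 "$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py"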

Coldfusion FarCry CMS error on start up after server reboot

We're using FarCry CMS, which runs on top of ColdFusion. The site was running fine, but we are getting this error message after a web server reboot:
"Failed to initialise core type: dmHTML.cfc"
"Parameter 1 of function IsDefined, which is now application.stcoapi.dmHTML.stWebskins.Copy of displayPageCalculatorSelector.displayname, must be a syntactically valid variable name."
I'm really not sure where to start; could anyone suggest a strategy for troubleshooting this type of error?
Looks like you have a file called "Copy of displayPageCalculatorSelector.cfm" in your dmHTML webskin folder.
Removing this file is the best option.
Or rename it to remove the spaces, e.g. "Copy_of_displayPageCalculatorSelector.cfm".
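For example, from the dmHTML webskin directory (the exact path is an assumption; adjust to your FarCry install):
# Simplest fix: remove the accidental copy
rm "Copy of displayPageCalculatorSelector.cfm"
# Or, to keep it, rename it so the filename contains no spaces:
# mv "Copy of displayPageCalculatorSelector.cfm" Copy_of_displayPageCalculatorSelector.cfm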

Salesforce Refresh Error Entity of type 'WorkflowFieldUpdate' Not found

Refresh error: Entity of type 'WorkflowFieldUpdate' named 'Case.ISHERE' not found Project/src package.xml line 1 1248190153153 205
Any ideas what this means?
It's stopping us from deploying, as workflows don't appear on the deployment candidate list in Eclipse.
The fieldUpdates definitely do exist in the sandbox. When it says "not found", does it mean on the server or in the src\workflows\Case.workflow file?
It means that your package.xml file references a workflow field update on Case called "ISHERE", but there is no corresponding file on your local filesystem. You should either remove the reference in package.xml if you don't care about that being deployed or you should do a refresh from server to get the latest metadata (watch out for any local changes you've made!).
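A quick way to confirm which side the reference lives on (a sketch; the paths assume the standard Force.com IDE project layout mentioned in the question):
# Find where Case.ISHERE is referenced locally:
grep -n "ISHERE" src/package.xml src/workflows/Case.workflow
# If it only shows up in package.xml (as a <members>Case.ISHERE</members> entry
# under the WorkflowFieldUpdate type), either delete that entry or refresh the
# workflow metadata from the server before deploying.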